Real-World Challenges with Deploying ML into Production

Jul 7, 2021, 1:30 PM to 2:30 PM EST


Key Discussion Takeaways

The field of machine learning operations is growing exponentially — but with all the progress comes unpredictability. Models can vary wildly because of a few discrepancies in the data, and as the programs learn and develop, adjustments need to be made to keep them running correctly. For many early adopters, there are plenty of unanswered questions about how to make these adjustments and how to apply them in production.

Fortunately, Wallaroo has been at the forefront of ML programs and has the experience to answer some of the industry's toughest questions. Its platform is designed to be simple and fast, running live ML models against production data. Founder Vid Jain saw firsthand what it takes: his previous team relied on 25 data scientists and 625 other people to deploy their models and learn how to operationalize them. Now, they share their hard-won expertise with you.

Greg Irwin hosts Vid Jain, the Founder and CEO of Wallaroo, and Aaron Friedman, the Vice President of Operations at Wallaroo, for a discussion on machine learning in production. They field questions from an array of industry professionals, including "How do you organize the product management function?" and "What kinds of data should I focus on collecting?" The attendees also detail the successes and roadblocks of their companies' different ML programs and how they improve their models. Stay tuned!

Here’s a glimpse of what you’ll learn:

 

  • Vid Jain and Aaron Friedman talk about what it took 625 people to operationalize the models built by 25 data scientists
  • How do you organize product management within an AI team?
  • Holding employees accountable throughout the workflow
  • How successful are current AI ML programs?
  • The different kinds of data that businesses should collect
  • Some of the main breakthroughs and drawbacks of ML models
  • Having an experimental mindset to make the most out of your programs
  • How to transition from legacy systems to modern platforms
  • What’s the best way to handle change management?

Event Partners

Wallaroo

Wallaroo is an enterprise ML | AI platform that turns your data into business results faster, easier, and with a far lower investment. We streamline the ML lifecycle and give data scientists the freedom to use the tools they love.

Guest Speakers

Vid Jain

CEO & Founder at Wallaroo

Vid Jain is the Founder and CEO of Wallaroo, an enterprise platform for production AI. Wallaroo is engineered to help data scientists quickly and efficiently deploy ML models against live data. Vid has also founded and co-founded many other businesses, including Sendence Solutions and Petal Computing. Before his time as an entrepreneur, Vid worked in postdoctoral research in the field of theoretical physics.

Aaron Friedman

VP, Operations & Delivery at Wallaroo

Aaron Friedman is the Vice President of Operations at Wallaroo. He joined as the company's Vice President of Sales before moving into his current position. Before his time at Wallaroo, Aaron served in many leadership positions, including Global Director of Business Development for Qubole and Manager of Business Development for SADA Systems, among others. His areas of expertise extend to building and leading teams for burgeoning markets.

Event Moderator

Greg Irwin LinkedIn

Co-Founder, Co-CEO at BWG Strategy LLC

BWG Strategy is a research platform that provides market intelligence through Event Services, Business Development initiatives, and Market Research services. BWG hosts over 1,800 interactive executive strategy sessions (conference calls and in-person forums) annually that allow senior industry professionals across all sectors to debate fundamental business topics with peers, build brand awareness, gather market intelligence, network with customers/suppliers/partners, and pursue business development opportunities.


Request the Full Recording

Please enter your information to request a copy of the post-event written summary or recording!


Discussion Transcription

Greg Irwin 0:19

Without doing the sales pitch, just please explain to the group. What is Wallaroo?

Vid Jain 0:26

Yeah, I think probably the way to put some context here is that I actually spent 10 years in the high frequency business at Merrill Lynch. My group built that business from no revenue to about a billion dollars a year in revenue, and that was all about machine learning. We called them quants back then, but nowadays we call them data scientists. We had about 25 data scientists that were developing a variety of different models: trading models, surveillance models, risk management models. And we had 625 other people, right? 25 data scientists, 625 other people that were operationalizing these models and supporting the infrastructure, and we had about $100 million a year in infrastructure costs running this thing. And so a lot of what we're doing here is really born of that experience. It turns out, at least for us, and also what we're seeing in our customers, that it's actually a lot harder, a lot more complex, and requires a lot more time to take data science models into a production environment. And then once they're in production, getting them to have the business impact that you were hoping for or expecting is also more work, right. And so as a company we are very much focused on simplifying that whole process, making it simple, making it rapid, making it low cost to take data science models into a production environment, and then giving data scientists and the business the tools to improve those models, iterate on those models, and run experiments to get to business impact. That's our focus.

Greg Irwin 2:14

Aaron, do us a favor, give a quick intro to you.

Aaron Friedman 2:18

Sure, so Aaron Friedman, VP of Operations here. I have a background in working with companies and startups like Qubole, and also doing large cloud and data operations, migrations, and deployments. And Elaine, I actually spent five years at Verizon in their IT outsourcing division and got the joy of running lowes.com and JetBlue. So I have a lot of history there.

Greg Irwin 2:44

Vid, let me ask you: 625 people deploying the models of 25 data scientists. What are those 625 people actually doing?

Vid Jain 2:57

That's a good question. So some of them are building code, right? If you think about all the pieces, the reason most organizations find it too hard to get models into production is that you actually need like 15 or 20 different things to work together. You have to orchestrate the infrastructure, you have to orchestrate the process, you have to keep audit logs. Like in the case of trading, if something goes wrong with the trading and you've got a million dollar trade stuck somewhere, you have to know what's going on, right? So you have to audit everything. You have to create data lakes to be able to develop new models, because you want to collect all that data and use it later for the next model. But you also need to collect data for compliance reasons. So if I go through all the different things that you have to do, in any enterprise you end up having like 15 or 16 different technology components that you have to build and maintain, and they all have to work in sync. So part of the 625 people: there's a group of 30 that are building, you know, a trade repository; there's a group of 15 that's rewriting models to work in C++ so they can go super fast; there's a group of, you know, 40 people that are scaling and managing the infrastructure when new models come in; there's another group. So once you add it up, it actually ends up being quite significant. And I think this is the issue if you're doing something at scale.

Greg Irwin 4:33

Got it. Alright. I'm going to start with a customer story, but I'd like to also layer in and respond to the questions that are starting in the chat. Big shout out, high five to Tasos and Elaine for asking some questions and getting it going. So let's cover Elaine's first, on product management within the AI team. How have you seen a company at scale organize a product management function so that it's effective?

Looks like Deepak wants that. Alright, let's go Deepak.

Deepak 5:19

Yeah, so hi, everyone. So currently I am a product manager with a healthcare firm in Pittsburgh. And the product here, in short, I would say, is to flag the risky profiles, as in which profiles are more risky in terms of their healthcare journey: who might have a hip surgery six to eight months down the line, who might have diabetes six to eight months down the line. So we are building a product to predict the healthcare journeys of our existing insured members, and that is through building machine learning models. Now, where does product management come into the picture, and how does it integrate? I think having the knowledge of machine learning and how these algorithms work is very much essential for a product manager. Otherwise, unlike any other product manager role for software development, it is not going to work here in exactly that way, because of the iterative nature of the model building process. It's not that in three tries you will get a perfect product; you just continue building and iterating the product until you get the best one, or you might get the best one in the first round, the first iteration. So you need to have an understanding of machine learning technologies in a little depth to really understand the engineering team and how they are building the model, when you are coming at it from a product management perspective. And that's where it helped me a little that I took some courses online on Coursera, just to understand the mathematics behind the machine learning. I mean, of course, I'm not coding, and I'm not using all that mathematical understanding directly, but it really helps me in understanding the engineers when they talk about: okay, this is the precision, this is the recall, and this is the F score we got. So it really helps us in translating their technical language into business language and communicating it to the leadership. So it's really helpful if you have some machine learning technical experience, or at least if you have done some courses, so that you can understand; for product management it's very essential.
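For reference, the metrics Deepak mentions relate to each other in a standard way. Here is a minimal illustration of how precision, recall, and the F1 score are computed; the counts are invented purely for the example.

```python
# Illustrative only: how precision, recall, and the F1 score relate.
# The counts below are made up for the example.
true_positives = 80
false_positives = 20
false_negatives = 40

precision = true_positives / (true_positives + false_positives)  # 0.800
recall = true_positives / (true_positives + false_negatives)     # 0.667
f1 = 2 * precision * recall / (precision + recall)               # 0.727

print(f"precision={precision:.3f}, recall={recall:.3f}, F1={f1:.3f}")
```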

Greg Irwin 7:41

Deepak, I'm going to pause there. First of all, thank you very much. Have you measured the improvements, or the actual outcome, of putting that kind of function in to help with the iteration of the models, the improvement of the models? Is it working?

Deepak 8:01

Yeah. So the metrics which we have used are the F score and precision, and how well the model is performing compared to the previous models. And it has improved over a period of time by two things: by improving the data and also by doing some feature engineering on the existing features. We have found the improvement. And yeah, that's what we have felt.

Greg Irwin 8:26

Excellent, before we move on, Elaine, does that cover the core of your question?

Elaine 8:33

Yeah, that's definitely helpful. Thanks for sharing, Deepak. I think, if time permits, I would like to learn more about how to define the success of this role. And how do you guys balance efforts versus outputs, and how do you hold the entire team accountable to deliver results?

Greg Irwin 8:57

Yeah, that's a tough one. Everyone in the chain holds one piece of responsibility, right? So yeah, how do you hold everybody accountable to the success of the whole workflow?

Aaron Friedman 9:09

We actually have a really good statement around this: we refer to it as the P&L of ML, which a lot of people don't take into account, the cost and the business return. So, Elaine, first, to dive a little deeper: it also depends on how your organization is set up, like, you know, who owns what, and where do those folks actually live within that org? Because what you're seeing is kind of this battle between data engineering, platform engineering, IT administration, and data science. And so part of being able to hold people accountable is just the visibility: all right, did I get corrupted data? Was the model corrupt? What's the insight into the environment, you know, did I lose uptime on the environment? But to start out, when you actually design that business use case of, you know, dynamic pricing or real-time inventory or anomaly detection for inventory, you have to first understand: all right, is this a model that's actually going to drive revenue capture for the company, or is it cost avoidance? And then you use very traditional financial math: all right, if it's cost avoidance, if I were able to detect X amount of anomalies in manufacturing, that would save the company $10 million; it cost us $3 million to actually put this in place and run the infrastructure and everything, so there's a net add of, let's see, yeah, $7 million, sorry, math in head. And that kind of business value isn't necessarily done as part of the business use case upfront, and then basically tying that back to all the different teams that have a hand in it, so they can all share in the success of it.
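To make the arithmetic Aaron is doing in his head explicit, here is a tiny sketch of that cost-avoidance P&L calculation, using the hypothetical figures from his example.

```python
# Back-of-the-envelope "P&L of ML" for a cost-avoidance use case,
# using the hypothetical figures from the anomaly-detection example above.
annual_savings = 10_000_000  # value of the anomalies the model would catch
annual_cost = 3_000_000      # infrastructure, deployment, and team cost

net_value = annual_savings - annual_cost
print(f"Net annual value: ${net_value:,}")  # Net annual value: $7,000,000
```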

Vid Jain 11:00

Okay, can I just add one thing to that. I think the product role is actually the key role for the success of AI, because that role is the glue, right, between the data teams, the engineering teams, the compliance groups, the business folks. And I think a key thing, and I don't know, again, if this is the role that you're entertaining or the role you're in, but you have to have power and a mandate. Somebody at a senior level has to have given that role real power to work across these different organizational lines and align them towards the business goal. And if you can't do that in that role, then it's going to be very hard to execute and actually deliver at the end of the day. So keeping those things in mind, I think the most important part of that job is aligning all the different players, all the groups that are in the organization, towards the business goal, but then also having the mandate and the power to enforce that alignment. And so you have to set it up correctly: it has to be set up by senior folks in the organization correctly, vested with the power, and there has to be governance around it. Without the right governance, without the right power, and without the right structure to align all the different groups, whatever AI initiative you're going to do is not going to be successful.

Elaine 12:27

Yeah. So exactly like you both described, right? I mean, Aaron probably can relate to this very well. We're a massive company, and our analytics function resides in every single discipline and business unit; they all have their own AI and data science team or whatever. So we just recently centralized the AI and data team, with the intent to build a structure to transform the business better. The way it's structured, from the technical side there is data engineering, data governance, and data strategy, and then the data science team, which is the modeling team per se, right. And then among the leadership there's talk about whether we would need to establish a product management function to help ensure the success. I guess the answer is yes; it's just how to define the roles and responsibilities, because everything is so new. So we're still trying to figure it out, and that's why I'm interested to know how everyone else here is doing when it comes to implementing and scaling your AI projects across large enterprises, right, and the complications that come with that. We also have the business counterparts at the different business organizations during the transformation activities, and those are the parties that will be sending the requirements, if we say that in a positive way; in a more traditional way, you could say they dictate the business requirements and demand the results. Right.

Vid Jain 14:12

They can be your friend or foe. Right.

Elaine 14:15

Exactly.

Vid Jain 14:15

Yeah, at Merrill Lynch we saw this all the time. And again, it depends on the organizational structure, and how senior management is setting this up and letting people know, you know, how they're going to be measured on delivery at the end of the year. So it's not easy. It's not easy.

Elaine 14:32

Yeah, yes.

Greg Irwin 14:33

Let's do this. Elaine, as I mentioned, let's use this for networking. And listen, you can ask us for whomever you'd like to connect with and we'll do our best to help you. Can you just give a little context? Simple question, might not be so easy to answer: how many data scientists do you have? How many models are you running? And, simply stated, what's the task? I asked the question earlier: what are the obstacles to AI success? I'm going to expand that to AI and ML success. So, Elaine, what's the size of the team? How many models are there? And what's the obstacle to success? Maybe it is this workflow that you're talking about enabling. Can you share a little context?

Elaine 15:31

Yeah, sure. Um, I really don't have specific numbers in mind, but I will say we have data scientists residing in both the business units and the technical team. Within the technical side, I would think we are roughly in the ballpark of around 20 to 30 currently, I mean the hands-on people who can develop the models, but the team is still expanding. How many models are there? A lot. I really don't know how to count them. We have different models; one business unit may have five or ten different models just for one thing, right. Whether that's a problem in itself, that's a different subject. But long story short, we have a lot. Okay. Does that answer your question?

Greg Irwin 16:21

Perfect. Thank you. And let's stir the pot here and bring the others in with some stories. Vinesh, I'd love to invite you to share a story with the group. Would you give just a quick intro, please?

Vinesh 16:39

Yeah, sure. Can you guys hear me okay.

Greg Irwin 16:41

Yeah.

Vinesh 16:43

Perfect. So I'm on the larger product management team at Qualcomm, and we are pretty much focused on AI and ML. I'm hoping you guys are familiar with Qualcomm; it's a large silicon company. And we look at approximately seven or eight different verticals, so what that really means is mobile, XR, cloud edge, automotive infotainment, and so on. The unique challenge that we have as a function of this is that we work with approximately 300 to 350 deep learning models every year. These deep learning models are widespread, focusing on vision, focusing on linguistics, focusing on communication, and so forth, across many modalities. I would say some of the biggest challenges that we come across: obviously, getting datasets to our data scientists, because most of these datasets are extremely secure and private in nature, and not shareable. So we have to have the infrastructure in place to be able to tap into these secure resources from a training perspective, and also have the infrastructure to make sure these datasets are labeled. We don't want to do manual labeling, it's quite intensive, so we have to go towards automatic annotation, cleaning up, filtering, and processing, really making sure we have the right setup to do the training in place. Then you really have to make sure that you have the necessary compute resources to do the training, because our data scientists do a ton of experiments, and the amount of time available is quite limited because they are working on 300 models and you cannot expect one team to hog the compute resources, right? So we have to be extremely, I guess, time-bound in how we construct the experiments when we go into advanced techniques like architecture search and all those kinds of things. Then once you're finished with development, next comes deployment, and the deployment phase is one of the most challenging elements, because we don't see a correlation between what we made in the development phase and what we do in the deployment phase. This, historically, has taken us approximately six months, only because of differences in software and differences in the customer hardware, and many of the challenges that come along with that. So over time we have tried to understand: can I minimize that deployment to a couple of weeks instead of months, and how do I do that? A long way to really accomplish it is by taking the hardware into the loop and really trying to understand, from an orchestration standpoint on the software side, what is missing, the cleanup and all the stuff that usually goes on when you deploy the last portions of it. I would say at least in a few segments, like the automotive market or the cloud infrastructure market, or networking or healthcare industries, they want constant monitoring; they really want to make sure: is it actually doing what it's supposed to do? And this has become a little bit of a challenge because this is more of a web-based interface; we have to collect the data from the field, and if we're trying to learn from mistakes, we get an element of, you know, active learning or a federated model of making active changes. So I would say at a high level we look at three different areas of challenges. One is specifically on the training side, the next one is specifically on the deployment side.
And the third portion of it is active learning, wherein the application learns from mistakes, or new data comes up which did not exist before, and we can continue to modify those results based on the new data. So that's how I would classify it, you know.

Greg Irwin 20:31

You've called out the entire workflow, and it feels almost insurmountable. Let me ask you this, and I'll ask everybody this in the chat: how would you score the actual success of your current AI and ML programs, from training to deployment to iteration, that actually drives business results? Not only is it driving business results, but is it appreciated? So score it on a one to ten. Ten is a home run, it's the best thing the technology teams have ever done. One is: it's a disaster, I don't know if I'm going to be working here in a month. I'm sure it's somewhere in between, I believe. But I'd like to see the scores of how people believe these programs are being received and yielding results. Vinesh, share your commentary for us. You laid out significant challenges, but I believe they wouldn't be there if you weren't already having some success. How would you score?

Vinesh 21:44

I would say on the training front I would give a decent five. We're not there yet, but we've made some significant progress from where we stood, you know, three years ago. From an inference standpoint, I would say maybe a seven or eight, primarily because a lot of focus and investment has gone in there, so I think we have made some pretty good changes. In terms of learning, I would say active retraining, probably a one or two; we're barely getting started. And one of the biggest challenges there is trying to identify what datasets are good. At what point do you declare victory? At what point do you really update those models? How do you update the models? Can I keep it on the device, or do I have to have device-to-device communication? And the orchestration of, you know, what is good for the user, what is good for the application, that kind of stuff, right? So we're really just getting started. That's how I would rate it across the three segments.

Aaron Friedman 22:46

Vinesh, just to form some nomenclature together, because some folks may not have it: by inferencing, you mean something running in production?

Vinesh 22:55

Running in production, correct, yes.

Aaron Friedman 22:58

The only reason I ask is because I get into this conversation and there are some different definitions out there in the market. One of the things I want to highlight, one of the things that we've been hearing a lot as well, is that everybody wants, like, anomaly detection. Like, how do I know that I have model rot? How do I know when I have zombie models? How do I know that there's data drift? Is that kind of what is driving that one to two on the learning, you know, the design, retrain, redeploy?

Vinesh 23:31

I mean, there are going to be a couple of areas why. One is we absolutely want to make sure the accuracy gets higher and higher; you want to keep those false predictions as minimal as possible. And the second thing is, in anomaly detection, what applies to you or what applies to me is going to be very different. So you want to make sure these models that go into production, especially for cases like anomaly detection, can be personalized to the user, as the definition of an anomaly differs. Then, I would say, the third portion: you know, you have different classes that always come up, right, and you want to make sure we continue to update the definition of classes in production. And that's one of the reasons why we do this model of continuous learning in production, so that the models don't become stale and they continue to be state of the art, optimized for the user.
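For reference, the drift and model-rot monitoring being discussed here can start as simply as comparing a feature's production distribution against its training baseline. The following is a minimal, generic sketch (not any particular vendor's implementation, with synthetic data standing in for real features):

```python
# Minimal data-drift check: compare a feature's production values against
# the training baseline with a two-sample Kolmogorov-Smirnov test.
# Synthetic data stands in for real features.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # baseline
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"Possible drift: KS={statistic:.3f}, p={p_value:.2e}")
else:
    print("No significant drift detected")
```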

Greg Irwin 24:25

Let's keep them coming; all the stories are fantastic. Vinesh, thank you very much. Folks, you can jump in at any point, raise your hand. A little bit of chaos is a good thing, just a little bit. Um, Eric, Eric Vogt, first of all, Eric, make sure I'm pronouncing your name properly. Would you do us a favor and give an intro?

Eric 24:49

Sure. My name is Eric Vogt, normally based in Boulder, Colorado. I think the field where I intersect this is in the data acquisition part of things. And I think a lot of the things that have been mentioned are right on. I mean, we're seeing a lot of that, like recognizing: how do you know what data to collect? How do you make sure that data applies, how much data to collect, and how much noise in the data that we're collecting will damage the model, you know, which noise do we need to worry about versus other bias? So we do all kinds of things, from speech recognition to image annotation; there are a variety of different functions that we're providing, mainly human labor, to try to fill that gap. But I'm mainly thinking about it in the case where data needs to be collected, or fabricated, or annotated, and then how do we make sure that the value we're creating meets the business needs that the models are looking for. And I'm seeing a lot of interesting phenomena; there are edge cases that get intense. You know, it's very interesting to explore if you're, for example, trying to transcribe audio to try to improve speech recognition: the way different people, the workers, are hearing the data and how they interpret it for the purposes of transcription can be anomalous, and there's a variety of problems that can happen, whether or not they're familiar with that accent or that particular type of data. Anyway, that's kind of where I'm approaching this problem. And I think another point that was made earlier, about the business case, understanding the purpose of what you're trying to achieve from an ROI perspective, is right on. I mean, the idea that you just kind of lob some data at a model and see what you get, generally I'm seeing that doesn't really come to fruition. But if there is a mission, if you're very clear about the business problem you're trying to solve, and if I'm hearing the folks that I'm working with articulate that business objective clearly, then generally speaking the data acquisition or the annotation, or whatever that process is, is much more likely to be successful. But it can often be very difficult to do that. Even trying to interpret natural language is a very complex phenomenon: how do you understand what the intent was, how do you interpret a request that is verbally received and turn it into something that can be acted upon in a consistent way, and how do you understand what the tolerances are? I mean, if you fail a certain amount, how much can you fail before it's no longer useful and it starts being a customer experience problem? That's where I'm coming from, but I'm still listening, so.

Greg Irwin 27:48

How much visibility do you require into model performance?

Eric 27:55

Well, the more the better, because if I can see the results, then I can see where we can tune our function within the system to optimize results. So the best approach is an iterative one, where there's a section: like, collect a little, see how that does. But it's one of these things where it costs a lot of money, and it's like, how do you figure out how much money you need to spend and whether or not you're spending good money or throwing good money after bad? But an iterative, tightly closed loop that is clear in its mission objectives, where, like, every 10% or something you're going to recheck and make sure that you're going in the right direction, those projects tend to go more smoothly.

Aaron Friedman 28:42

Well, quick question, Eric. So we work with another government contractor that is doing, like, foreign translation, and then detection and intent in translation. And a lot of what they're having to do is pull the data down, train, execute a model, take that data, execute a model again, so basically chain those models together. And one of the things that we've been working with them on, and something that's kind of near and dear to us, is model pipelining or model chaining. Is that something that you all are also doing, in order to try to drive some of that down, so that one big model for natural language processing can now execute faster, or more accurately?
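For reference, the model chaining Aaron describes is simply running models in sequence so each one consumes the previous one's output. A minimal sketch; the translation and intent models here are hypothetical stand-ins, not a real API:

```python
# Minimal model-chaining sketch: each step consumes the previous step's output.
# `translation_model` and `intent_model` are hypothetical stand-ins, not a real API.
from typing import Callable, List

Step = Callable[[str], str]

def make_pipeline(steps: List[Step]) -> Step:
    """Compose model steps into a single callable that runs them in sequence."""
    def run(payload: str) -> str:
        for step in steps:
            payload = step(payload)
        return payload
    return run

translation_model: Step = lambda text: f"<translated:{text}>"
intent_model: Step = lambda text: f"<intent-of:{text}>"

pipeline = make_pipeline([translation_model, intent_model])
print(pipeline("bonjour tout le monde"))
# <intent-of:<translated:bonjour tout le monde>>
```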

Eric 29:25

We would only be involved with that when the production teams, the product teams, are actually asking us to, so we wouldn't necessarily see it otherwise. But I know exactly what you're talking about. And if you're going to be in the business of hiring a bunch of translators to do that, you don't necessarily know how deviations or variations in that data collection, in this case translation or transcription, are going to lead to new models being made. So I think iterating the model again needs to be the approach, because if you're just looking at one big model and throwing more data at it, you can see the accuracy get to a peak, and then you can see it start dropping off. And it's like, why is it getting worse the more data you put into it? I mean, yeah.

Aaron Friedman 30:17

You hit on something that most people don't come across, which is: just because you have a model, once you actually start applying large datasets to it, that model can drop off very quickly. Yeah, and being able to iterate at speed, like, you know, train, retrain, deploy, train, retrain, deploy, just becomes a have-to.

Eric 30:37

And it's a hassle, which is why a lot of people just think they can train and ignore, and just hope that it meets the necessary requirements, but...

Greg Irwin 30:46

At a high level, and then I'm going to keep circulating here: how would you score the success of your ML program? Is it turning out good results? Is it still a work in process? Help us understand.

Eric 31:05

No, it's a really good question. And I would say it depends on the program, because we're working with a lot of different programs all across the board. Like I mentioned earlier, some of them are a great, great success, but again, that's because there's clarity about what success looks like, and they're meeting their business objectives within the boundaries that they created in the first place. So whether it's, for example, customer experience: you want to interpret text or verbal input for search queries to find help or find something, and there's some lemmatization, there's some trimming that happens to try to simplify and then focus the search to get better search results. That stuff, you know, if you know what your goal is, say 80% to 90% accuracy, and the feedback that you're getting from end users about whether or not they're getting what they're looking for, that's a tangible result that you can then get a tangible ROI from. But there are other cases in which it's incoherent, or just experimental stuff where you don't really know what you're getting, or you have pie-in-the-sky expectations for solving a particular business problem with, you know, 100% confidence, and it's generally a failure. So I've seen some colossal failures, and I've seen some pretty well managed deployments. In both cases, again, we're a partner, so we're seeing other people either swim or sink. And we can see the results of their products in the market sometimes, and we can see what people are saying about the product. So it's kind of cool to see a variety of different problems.

Vid Jain 32:50

Hey, Greg, can I just add on to what Eric was saying. I think he's saying a very important thing, which I don't think people that get into AI, or invest in AI, really think through: you have to have an experimental mindset. You have to go into it thinking you're running a lot of experiments; most of them are going to fail, some of them are going to be hugely successful. So you need the infrastructure to run experiments, to be able to learn from those experiments, to be able to measure those experiments. And I don't see this being done very well in most places. Unless you go in with that mindset, unless you build that infrastructure to basically, you know, run experiments, learn from experiments, see what's working, and iterate, you might be lucky and put something in production that somehow generates real business value, but generally speaking, most of the time you're not going to be successful.

Eric 33:48

I'll stop talking because I don't want to dominate the conversation, but I'll say one more thing. One of my thesis arguments in undergraduate psychology was about how to interpret humans' understanding of words. There's kind of a bell curve of how people interpret language: they might classify something in or out of a certain class in different ways, depending on how they understand the question. And I think CrowdFlower also discovered that quite a lot when they were doing data acquisition, trying to get crowd workers to do certain tasks, and realizing that the data can be wildly anomalous depending on tiny deviations in how the individual humans are interpreting what we're trying to achieve. And if you have bias in how you're collecting the data, or in how the humans are interpreting what they think the AI is intending to replicate, then the end results can be wildly off. So it's a very complex thing where you have to think about the model, the data you're training the model on, the use of the model, and then the interpretation of that model, the decoding of that output by the intended end users. All that stuff can break. And you can't just guarantee that great math will fix it; you have to think about the entire lifecycle of the thing.

Greg Irwin 35:11

Awesome. Eric, thank you very much, I really do appreciate it. Deepak, I saw your hand.

Deepak 35:20

Yeah. So I just wanted to add to what Eric and Vid said regarding the experimental mindset. I think that is very important, at least for the companies who are trying to implement this for the first time. Because this mindset may be there at the level of the product manager or the engineering team, but it might not be at the leadership level. So it is really essential to skill up even the leadership-level people to understand how these machine learning models work, and to be in the experimental mindset from the very beginning, with a minimal kind of success metric, that yes, at least this much we will achieve, but please be in the experimental mindset that the model will be improved over a period of time. It's not like the product is built and delivered at 98% or something like that. That mindset has to be there even at the leadership level.

Greg Irwin 36:17

Yeah, it's challenging early on. I mean, I'm thinking to myself, as a business owner-operator, I'm going to be investing in the whole ML workflow; to say we're doing this just as an experiment doesn't sound that good to me. I'd much rather have a real problem that we think we reasonably have a chance to improve. That's a difficult proposition.

Elaine 36:48

Yeah, I can't agree more with that point. If you tell your leaders you're only doing experiments, you probably wouldn't survive for long. I guess it has to be experimentation where the purpose is to show quick wins, step by step, right.

Deepak 37:05

Yeah, right, it has to be accompanied by some solution as well. So maybe what we can do when starting is a solution plus some experimental component that goes along with the actual solution: a normal software solution, or a normal product solution, plus an ML component to it, which will continue as an experimental product. Even if it does not succeed, that's fine, but at least it will be a good entry of ML into the organization. Right? For sure.

Vid Jain 37:39

Yeah. And don't forget, data science as a term has science in it. So anybody that's doing science, whether you're developing a vaccine or anything else, it's all experimentation, it's trial and error. You just have to go in with that mindset.

Aaron Friedman 37:58

That's the mindset we're talking about. The one thing I would say, just coming from a little bit of my consulting background, is: understand your culture. Understand the culture of the company, understand what drives that company, and understand that you're going to have to translate the benefit, based on that culture, of why experimentation is the way to go. And even in the chat someone raised a great question: is the definition of AI the same as it once was? I will tell you no, and it varies person by person, right? There are people who took me to task over "that's not machine learning, that's statistical analysis." So it just depends on what level of knowledge people have around the term, and then you have to apply that conversation, that understanding, and that mentality of how we're going to approach this to that organization. But if you run afoul of their culture, you're setting yourself up for failure.

Greg Irwin 38:56

Folks, I'm looking forward to bringing Nitin into the conversation, so Nitin, heads up. But before we do that, we've got 15 minutes left, and these calls can go one of two ways. We can kind of start multitasking and thinking about our next meeting and all the other things we have going on in our lives, or we can focus in: we now have a sense of who's on the call, we have a sense of the conversation, and we can take the opportunity to really drive value for yourself. So I encourage you to do the latter. Now, as you know and as you can see, I'm partnered with Wallaroo here. I really, truly consider Vid and Aaron experts in the ideas of ML ops. Obviously they're here to meet you, and if there's an opportunity for you or one of your team members to do some follow-up, wonderful. That's about as far as I'll go on my sales pitch, but please know that certainly there are opportunities for follow-up. Okay, enough of that. Let's go to Nitin. Great to speak with you; we chat on the phone quite a bit, but not so much over video, so it's nice to see you. Give an intro.

Nitin 40:10

Yeah, good afternoon, everyone, what a great conversation. So I head up the enterprise platform for data, AI, and robotics. In my role I'm responsible for running the data and AI platform, both on prem and on cloud. In terms of the challenges, what I see in the financial industry, and it's a very regulated industry, one of the challenges that we see when we're working with the data science communities is around merging the legacy platform with the cloud platform. As a bank and financial institution, we're not like a technology company, so the investment in technology is always a little bit limited. In the last two years, I think, because of the uncertainty around it, and the year before that was not so healthy for the financial sector, not a lot of investment has been made. So now we are dealing with a lot of legacy, and on the other hand we're also going onto modern platforms like cloud, and the integration of the two becomes quite complicated, on the platform side as well as the data side. And, you know, somebody just rightly mentioned that you need to have the right platform, a scalable platform, and that is not there. On top of it, the legacy environment brings complexity into it, and while we continue to make investments, they're not coming at the speed we need. It's always a challenge, and I don't know what the answer is there. So that's one challenge I see in my world today. The second problem we have, as I mentioned, is that we are a very regulated industry. Often the challenge is that these data scientists really want to work with the production data; they want to enter the production environment. And that becomes quite a bit of a struggle with our security team and our governance team, that they are not following any standard, any governance framework, and want to have access to the production data and apps, and not only actual production data, but at times to actually work in the production environment. So that boundary of, you know, following the traditional development lifecycle and security just falls apart. Those are the two struggles; of course there are other struggles still, but those are the two which I face in my world every day.

Greg Irwin 42:35

Alright, so what are you tackling? What's one of your top priorities over, not a matter of weeks, but over the next two years, 24 months? What's one big operation that you want to attack?

Nitin 42:52

Well, I think we want to go DevOps as much as possible, so automation and scalability are the key, and we do want to attack along those lines. And of course the second one is to see how strong a business case we can make for funding, to start actually moving off the legacy systems to the more modern platform. So those are the two priorities we're tackling.

Greg Irwin 43:13

Now, how does that help the AI? I mean, generally that probably helps every application, but how specifically does that help, you know, your data scientists and your AI?

Nitin 43:26

Absolutely. So by doing those two things: one is, you know, going more and more onto a modern platform like cloud, where we can provide a scalable platform, which is definitely the key, because the on-prem environments are constrained; we cannot scale on prem, and it is not a strategy as well. So we want to go on cloud as much as possible. It gives the flexibility of scalability in dynamic environments, and of course it gives the data scientists the modern platform tools which they are looking for. So that's one way it helps. Second is the DevOps, because I think without doing much automation we can't scale. So if we have to scale, we have to invest in automation and DevOps, and the more we invest there, it does help the AI team, because ultimately the path to production cannot be achieved without automating the whole machine learning and AI pipeline.

Greg Irwin 44:25

It's interesting. DevOps and automation are different in AI, and I'm learning. I mean, I know it's not just Terraform and Ansible, right? We're talking about iteration of models, managing models, visibility of models, permissioning data. That's why it's a whole different mess.

Nitin 44:45

Correct, absolutely, absolutely. So I'm tackling both, and I'm using the term loosely, but definitely we want to not only tackle the DevOps side, purely via, you know, CloudFormation and things like that, but we're also talking about automation when it comes to the AI as well, where we can automatically build, train, and deploy the models. So it's both, of course, and also integrating change management and everything from the process standpoint as well.

Vid Jain 45:19

Yes, there's another complexity I just want to add, especially in these regulated industries. We're working with a couple of banks, and they're having a real struggle around reproducibility, right? People think reproducibility means I have a model, I had some data, I can reproduce the results. It turns out that's actually not very useful, because you need to be able to reproduce, and essentially replay, the entire pipeline. The model was trained on a certain set of data, it went through feature extraction, then it went to model one, and then it went to model two, and then it generated some result; it did so at a certain time, and it did so in a certain environment. Being able to reproduce that entire pipeline at the moment when it occurred is actually pretty hard. It's a much more difficult problem than traditional software, right? So this is an area where there really aren't a lot of good solutions, and where a lot of thinking, fresh thinking, is needed.
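For reference, the kind of replayable record Vid describes captures, per inference, the input data, the feature-pipeline version, the model versions, the environment, and the timestamp. A minimal sketch with illustrative field names, not a specific product's schema:

```python
# Minimal audit record for pipeline reproducibility: log enough per inference
# to replay it later. Field names are illustrative, not a real product schema.
import json
import time
import uuid

def log_inference(input_record, output, model_versions, environment,
                  log_path="inference_audit.log"):
    entry = {
        "inference_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "input": input_record,                 # or a hash / pointer to raw data
        "feature_pipeline_version": environment.get("feature_pipeline_version"),
        "model_versions": model_versions,      # e.g. {"model_1": "v3", "model_2": "v7"}
        "environment": environment,            # library versions, hardware, config
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["inference_id"]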

Nitin 46:24

Absolutely. And just to add to that, I think trustworthy AI is another pain point, and we're definitely accountable for that as well. So a lot of discussions are happening around not only making sure that we're able to reproduce and satisfy some of the regulators, but also to basically make sure that for the models we're building, we can demonstrate that there are no biases inside them. So trustworthy AI is another domain where, you know, a lot of discussion comes up.

Greg Irwin 46:57

Nitin, thank you so much, really appreciate it. Of course, you've got a community here that I encourage you to leverage as you dig in on the journey. Let me go over to Skylar. Skylar, I've been checking out that backdrop for the better part of an hour. Give us a pretty quick intro.

Skylar 47:22

I'm Skylar Hawes. I work for a company called Smith and Nephew; we do medical device manufacturing, so highly regulated, and we have a global footprint. The company is based in the UK, but we have a large footprint within the orthopedic space here in the US. I work for the digital analytics team, so I specifically work with the supply chain business partner there. We're really new into this, and we're trying to get more predictive, trying to figure out how we leverage AI and ML. We're partnering with a few different companies to implement various different solutions to help us get there, and we're really early into one of those. We have just kind of an initial pilot that we've gone through and gotten out there, and we're starting to train it, and there's a lot of pushback with that in various different areas, between IT and the actual users. So it's been kind of a rocky road trying to manage that change management. We have the support from senior leadership; however, it's still hard going, right? So this conversation has been interesting.

Greg Irwin 48:55

Point us to where the stumbling block is, if you can describe it.

Skylar 49:01

Where's the what, sorry?

Greg Irwin 49:02

Where's the stumbling block? Where's the issue in the pilot that you're running?

Skylar 49:09

Um, so part of it is that we didn't get as much done as we wanted to in the timeframe that we had, and so we didn't get all the features into the original pilot that we thought we were going to get. The other piece is really driving that change management, right? We had information, the right people who were giving us process information: here's how we're doing things, here's the process. But then when it comes to the actual implementation, we're hearing different things, right? Well, my process is this, or my process is that. And so trying to get everybody to understand everybody's nuanced processes has been difficult.

Greg Irwin 49:59

Interesting. Alright, excellent. Skylar, do you have any specific questions for your peers?

Skylar 50:09

I mean, a lot of people have been talking about, you know, the different implementations, but what are some people doing around that change management piece, right? As we introduce new tools, a new way of doing things, as we implement the various different experiments and try to get people, you know, in line with that, how have other people done that? That's kind of my question.

Greg Irwin 50:38

Aaron, you want to take a shot?

Aaron Friedman 50:39

Sure, I was going to say: so yeah, the first portion of change management is the awareness aspect. And so the first thing we try to actually say is, like, hey, why are we doing this? Right. And to say it this way, there are some people that just do. Elaine, I'm going to pick on Verizon, forgive me. Verizon used to do this thing around change management that says that change energizes us. And if you actually look at the psychology of it, it does not; most people do not care for change: I did something this way, I'd like to continue doing it this way, I know how to do my job, right? And so by introducing change, you're asking people to relearn something they are already doing. So the first thing that you have to do is address that head-on: hey, this change is coming, and this is why, and kind of do some announcement. The second way that I've seen, from someone at a large pharmaceutical company trying to handle this, was to actually do a citizen data scientist initiative, to be able to bring people in and say, hey, this is what's really cool about data science, this is what will allow for some of the things that you'll be interested in. And honestly, the third one is you have to identify who's going to be disrupted by this, right? So we're working with a client right now, and again, we're not on the consulting side at all, I just happen to have some of that in my background, but that client brought in a chief data officer and said, you know what, we're going to take on dynamic pricing and real-time logistics. And then nobody talked about the fact that there's an entire team that does nothing but pricing. So now you have a chief data officer whose success is now tied to dynamic pricing, who does not own pricing for the company, right? And so there had to be this kind of partnership: here's how I'm going to empower your people, versus here's how I'm going to take things away from your people. And if you don't address those things... In fact, most large efforts like the one you're taking on literally have a dedicated change management person as part of the project. I mean, this is kind of outside of data science, but that's how that works, you know.

Skylar 52:56

Thank you.

Aaron Friedman 52:57

You're welcome. And I'm more than happy to connect offline, as I've done some pretty massive projects and I can walk you through more of that.

Greg Irwin 53:05

Vid, would you help us wrap up here? We've covered a lot. Maybe one or two key takeaways that you heard, and they may just repeat something somebody said, or just be an observation, but give us a quick wrap-up on our session before we all, you know, go back to our day jobs. Oh, sorry, that's for Vid.

Vid Jain 53:32

Okay, I wasn't sure who that was for. I think this has been great. I think, you know, this stuff is hard, because it's really not predictable. We're dealing with things where we don't understand how they work inside; we don't understand how the models work, and we don't understand how they work once you actually put them against production data, because they don't necessarily behave in any deterministic way. You have a variety of different moving pieces, not only inside the technology stack but inside the organization. So it's a very complex, dynamic system that you're building and maintaining. And so I think it's hard, and I think it's okay to say it's hard. The most important thing you can communicate to senior management is that this is going to take time; there are going to be a bunch of failures before there are, probably, successes. But you also need to be very clear-headed about what success looks like, and to empower the right people to do it. And if you do that, and you do it systematically, eventually you will get to a good spot. Right. And...

Greg Irwin 54:50

I mean, we're talking a lot about problems, but I think we should have a big, you know, starred billboard that says you can have amazing outcomes.

Vid Jain 55:02

Yeah, you just can't be looking for quick wins. If you're getting into data science to get a quick win, then you shouldn't be getting into it. And you know, I'm not a big fan of AutoML, and forgive me if anybody here, or any of our guests, gets anything out of it, but I think that's just stupid. Maybe there are people that actually get something out of it. I think this is complex stuff, all the way from, you know, domain-specific models, to how they work in an actual data environment, to the change management, through the governance. It's a complicated, complex thing. I think you should just admit that it's complicated, admit it's going to take some time, be very clear-headed about what you're trying to achieve, and then go for it. Because I think if you're not going to do it, or you're not going to do it the right way, then you're going to be left behind.

Greg Irwin 56:01

Alright, brilliant, folks. Big thanks to Vid and Aaron, and everybody, for taking some time out and joining in what I thought was a really interesting discussion. Please use the group: we'll send out an email that has everybody's names. You've probably seen our drafts, but we're not going to share emails. If you can't connect with people directly, come back to us here at BWG and we'll be happy to help with that. Thank you all, everybody. Have a great day.

