Jumpstart your SAP Cloud Analytics with Qlik and BigQuery
Dec 7, 2021 1:30 PM - 2:30 PM EST
How can you use data analytics software to improve and align your decision-making to scale your brand? In a data-driven culture, is it possible to implement methods that work without the confusing jargon and strategies of the past?
In a data-driven workforce, having accurate, real-time data can impact the everyday hustle of a business. What if you could forecast and access historical trends to reach new customers and target your audience more effectively to scale your brand? Unfortunately, many brands can become lost in the data integration process. So, how can you take action to improve your data integration and analytics?
In this virtual event, Greg Irwin sits down with Matthew Hayes, Vice President of SAP Business at Qlik, and Edy Sardilli, head of Global Strategic Business Development and Partnerships at Google, to explain how to turn data and analytics into better business outcomes. Together, they discuss some of the myths around integrating data, integrating an SAP data warehouse, and the importance of forecast accuracy.
Global Strategic Business Development & Partnerships at Google
Edy Sardilli is the head of Global Strategic Business Development and Partnerships at Google. He has over 12 years of experience delivering SAP analytics and results for early-stage startups and mid-sized and large organizations within commercial markets. He previously worked for SAP in the Cloud Platform and Data Management Group and in West Strategic Customers, and in Strategic Accounts for Pivotal Software Inc. and Hortonworks.
Co-Founder, Co-CEO at BWG Strategy LLC
BWG Strategy is a research platform that provides market intelligence through Event Services, Business Development initiatives, and Market Research services. BWG hosts over 1,800 interactive executive strategy sessions (conference calls and in-person forums) annually that allow senior industry professionals across all sectors to debate fundamental business topics with peers, build brand awareness, gather market intelligence, network with customers/suppliers/partners, and pursue business development opportunities.
VP of SAP Business at Qlik
Matthew Hayes is the Vice President of SAP Business at Qlik, a data integration and analytics company focused on SAP and on a mission for a data-literate world. He has over 20 years of experience in the SAP industry. Before joining Qlik, Matthew founded Hayes Technology Group, which was acquired by Attunity in 2013. Matthew also developed Gold Client Solutions, a vital data replication tool used daily across some of the leading global SAP installations in the industry.
Senior Digital Strategist at BWG Connect
BWG Connect provides executive strategy & networking sessions that help brands from any industry with their overall business planning and execution.
Senior Digital Strategist Tiffany Serbus-Gustaveson runs the group & connects with dozens of brand executives every week, always for free.
Greg Irwin 0:18
So I'll continue to reinforce throughout: thank you all for joining. My name is Greg Irwin, I'm one of the partners at BWG. These interactive forums, we've been running them for eight, nine years. The key model we follow is interactive: driving the engagement, driving the conversation, learning from each other's experiences. And that's going to be our goal again today. I'm very fortunate to be joined by Matt Hayes and Edy Sardilli; Matt's over at Qlik, Edy is over at Google. We're going to be talking about some of their solutions and the packages they've put together for making great use of the data that's often trapped within SAP ERP systems. We'll talk about some of those use cases, and we're going to go around our group here to talk about what experiences others have seen and are pursuing in terms of getting a full view of all the operational data across the organization. But I think it's time for us to introduce our co-hosts today. Matt, it's great to speak with you again. Do us a favor and please give a little intro on Qlik and your focus.
Matthew Hayes 1:44
Sure, thanks, Greg. And good morning, good afternoon, everyone. My name is Matt Hayes, I'm the Vice President of SAP Business at Qlik. At Qlik, we focus on analytics and data integration for enterprise businesses, and my focus at the company is to help our SAP customers with their data integration needs around SAP data. I also interface with our product management and R&D teams to make sure that what we're delivering to the SAP market is specifically tailored to the unique needs and requirements of our SAP customers.
Greg Irwin 2:15
And for people to understand the full stack of Qlik, I know you guys as, you know, QlikView, going back a long ways. What is Qlik today?
Matthew Hayes 2:24
So Qlik today is data integration and data analytics. The data integration part of the business came in through the acquisition of Attunity, which is where I came from. So there's a data integration focus for helping customers get the data where they need it, when they need it. And then there's the data analytics part of the business, which is more the Qlik Sense visualization, but it goes beyond that into the AI and ML capabilities, Insight Advisor; there's a ton of products that we've added to the analytics stack as well to help customers work with that data.
Greg Irwin 3:02
Awesome, man. Thank you. It's going to be fun today.
Matthew Hayes 3:06
Thanks. I'm in Chicago, just so everybody knows. I think we just cracked 20 degrees today, so I'm happy to be indoors.
Greg Irwin 3:13
Alright, let's get Edy into the mix. Edy, first time on one of our calls, so first off, great to meet you, and thanks so much for joining. Please give your intro to the group.
Edy Sardilli 3:26
Yeah, thanks for having me. And I guess I'm the frail one: it's about 65 degrees and I'm wearing a sweater. I'm in California, and I've been at Google Cloud now for about three years, coming up on three years. I lead a lot of the partnerships and go-to-market activity around our SAP solutions. We are part of the SAP solution engineering team here at Google Cloud, and part of our role is to build the SAP on Google Cloud business globally. I'm surrounded by some really experienced partner managers, solution architects, center-of-expertise people, as well as developers and solution management, to actually deliver a lot of these new solutions to market. Many of us come from over a decade of tenure at SAP or have been part of the SAP ecosystem for a very long time, so we've built a really strong engineering team and a lot of SAP knowledge here at Google Cloud. Our mission really is to accelerate every organization's ability to digitally transform and reimagine their business, and it's done through data-powered innovation. So yes, we focus on the best infrastructure and expertise around our technology, but most importantly, we're really focused on the industry solutions, the line-of-business solutions, that are helping our customers pivot and transform their business. It's really getting on that digital transformation journey. As I said, I'm based in California. I've spent a little over 12 years in and out of SAP, as well as some startups, and I do have a strong analytics background; as a matter of fact, I came over to SAP via the Business Objects acquisition. I did spend some time at Pivotal Software, where I learned a lot about cloud-native development, as well as Hortonworks, where I learned a lot about the data lake architecture with Hadoop. So I'm happy to be here, and thanks for having me. Our partnership with Qlik is of utmost importance, and I'm excited to talk about that today.
Greg Irwin 5:40
Well, I think that everyone here has an understanding of the role of SAP, the data that resides within that system, and its importance to the organization. It's interesting that Qlik and Google are putting together a partnership really focused on realizing the value from that. You know, I try to stay away from sales pitches, and I'm going to ask that we do that again today. But I think it's important to at least level-set on what the joint solution is. So please, either Matt or Edy, take a moment and just explain: what is the solution offer?
Edy Sardilli 6:19
Yeah, so Matt, I'll go first, and then I'm happy to hand it over to you. In our solution engineering organization, we focused the first couple of years on really building the SAP business here. If you think about it, that's modernizing SAP applications, so infrastructure modernization, and that's a lot about customers wanting to move their SAP workloads to Google Cloud, be it ECC or S/4HANA, and any ancillary systems that they run today. We also have a very powerful, very close relationship with SAP: there's an ISV alliance there, we're part of the RISE program, and SAP does lead with SAP on GCP in a lot of their pursuits. We also have a co-innovation partnership with them, where we're actually building integration between our technology and their applications and really improving how their cloud applications help customers. That being said, it was important to pivot and start to help our customers really adopt and accelerate their innovation on cloud. It's one thing to run your applications on cloud; it's another thing to start to take advantage of some of the underlying technology to really improve things, whether it's via AI or ML, or industry use cases that augment SAP. If you think about it, it's really a framework for innovation. And what we did is we set out on this journey to build this new solution called the Google Cloud Cortex Framework. At the end of the day, Cortex stands for extending the core. And we started with SAP because that is a major pillar, a very strategic pillar.
Greg Irwin 8:06
A quick reminder for everyone: this is an interactive session, so participate anytime. Just be careful of background noise. Perfect.
Edy Sardilli 8:22
Yeah, so as a strategic pillar: SAP is a strategic pillar for us. We have thousands of customers running on Google Cloud, but it's not just about those customers. It's about any SAP customer out there that wants to take advantage of some of the Google technology, innovations, and data intelligence that we have, and really drive some of those business scenarios that they're trying to accomplish. And so the Cortex Framework is not a product, right? The products are various technologies, like Qlik and BigQuery, for example, and Looker if you're looking at business intelligence, and potentially other data pipeline technologies for streaming, as an example. Those types of things come into the picture. But the framework is all about the building blocks to help customers accelerate their adoption, and it's the blueprints, the accelerator templates, to really put best practices in place. It helps them assemble these technologies in the form of industry data marts and line-of-business data marts, so that they can actually start to get new analytical insights, even apply machine learning to those insights, but also take advantage of this data to build new applications via an API framework, for example. So the framework is really about helping accelerate that innovation. When you think about an SAP customer that's never touched cloud before: between Google Cloud, SAP, and Qlik, we're able to accelerate that, and I'll touch a little bit on why Qlik is important. It really takes the complexity and the guesswork out of how to get the SAP tables and the SAP information, combined with the other information the customer needs, into a very easily consumable data model, for lack of a better term, so that you can actually build these new use cases. Now, we're starting with SAP, and what's amazing about the framework is we're going to expand this across other systems of record as well. If you think about a customer that's making a decision around a proper extract-load-transform strategy, Qlik can actually do this across a very complex, multiple-source landscape. It's not just SAP: customers run Oracle, they run Salesforce, they run Workday, and a variety of other systems. So the idea is that we're able to extend this capability far beyond SAP, but we're definitely starting with a very strategic pillar. I'll hand it over to Matt, because I wanted to make sure I explained what the framework is, but Matt has his own insights on how Qlik embeds into these capabilities.
Matthew Hayes 11:14
Thanks, Edy. Yeah, what we do at Qlik: our goal is really to be agnostic. Our goal is to help customers achieve their objectives based on the architecture that they've decided is best for them. If they're... sorry, I'm picking up some noise. Greg, I don't know if we have the ability to mute. There we go.
Greg Irwin 11:36
So, Krishna, sorry about that, I just dropped you on mute for a moment. You're all on mute; raise your hand whenever you want to chime in.
Matthew Hayes 11:46
No worries. So, you know, when SAP customers look at what they want to do for cloud analytics, you've got all the solutions within the SAP stack. If customers are all in on HANA, if they're looking at SAP's RISE program, or HANA Enterprise Cloud, or SAP's analytics platform, there's a lot where they can stay within the whole SAP solution stack. And those are customers that we don't hear much from. We tend to hear more from customers that have requirements beyond what SAP can do for them. They might run their business on SAP; their ERP and mission-critical business applications might run on SAP. But when it comes to the analytics journey and the cloud journey, we tend to talk to customers that are looking at those as separate projects. They're looking at it saying: okay, our goal in moving to the cloud is more than just SAP, and because of that, we've determined that Google is our best approach. That might include moving SAP workloads onto Google, but it might just be a broader journey. When it comes to analytics, a lot of times that's a different project too, but it falls into the scope of what products and services are available from SAP, what products and services are available from other providers, and, when it comes to Google, what technologies they want to take advantage of where they can leverage their SAP data. So we go to those customers and say: look, we understand that you're not going to be SAP-centric in your cloud or analytics journey, but you might not be final on it either; you might be pivoting. They might be looking at one data warehouse that runs on Google today, another data warehouse that runs on Google tomorrow, or looking at file-based storage. When it comes to our data integration solutions, our goal is to make it so that we can pivot with them. There are three things we can do on the data integration side. One is we can handle the SAP data with white gloves: we can handle the SAP data as well as SAP can, so we can be entrusted with that data. That's a big barrier to entry right there, making sure that customers are confident and comfortable that we should be touching their SAP data. The second piece is that we can land the data wherever they need it, wherever they want it, and if they change their mind, we can pivot with them; they're not going to be boxed in. If they want to load the data into, say, a database running on Google today but then switch to BigQuery tomorrow, we can do that. And the third is that we can help them prepare the data for analytics. This is a big piece of it, because just getting the data into the target is only half the battle. The other half is making sure that you can work with the data, and that you can model the data appropriately so that you can create those data marts. And again, having agnostic capabilities there means that if the customer changes their mind or their architecture, they know we can generate those data marts no matter what we're landing the data into.
All of that really ticks all the boxes on the technical side, so that when you go to consume the data, create the use cases, model the data, or visualize the data through an app, through a Qlik front-end application, we can get them to that point so that they can focus on just that.
Greg Irwin 15:16
Let me jump in and dig in with a couple of questions. I think I've got the basics, which is: you guys are creating an independent landing zone, where you can extract SAP data and combine it with any other datasets you need to run the analytics you want. There are some really good things in doing that. One, it's cloud, so you have ultimate scalability. Two, it's not SAP, so you're not bringing data into SAP to run your analytics, and therefore you don't have to deal with the cost, and perhaps the lock-in, of bringing third-party data into SAP. Hence, you're creating an independent landing zone. Am I okay so far?
Matthew Hayes 16:02
Yeah, that's correct. I mean, we can land data in SAP, you know, but the data integration solutions that SAP has for loading data into HANA are probably going to be more attractive than a third-party solution for that.
Greg Irwin 16:16
Sorry to cut you off. Here's my catch: I don't think anybody here has any free time during the day. I'm sure everyone has projects they can't get to, let alone free time to brainstorm new ones. So how much time does it take to put together even a basic structure: setting up your APIs, setting up your data structure, basic governance, standing it up, for not the most complicated but not the most simple SAP environment out there? In terms of time, and then cost, just to get started. Let's put some scope on what a project like this might look like.
Matthew Hayes 17:03
Well, one of our biggest success stories is a large customer that did this in a very enterprise fashion, and they were able to complete the project end to end in seven months. And this is one of our larger customers, so smaller customers, or customers that have fewer use cases, can probably accomplish this in a much quicker time.
Greg Irwin 17:26
How long to do a POC? To basically know that this thing is going to work, that you can show it to legal and compliance and data policy, and, once you have all your GRC items ticked off, that you know you have full mobility of your data, that you can drop it into the landing zone, bring it into your analytics, and start running some basic reports. How long for a POC?
Matthew Hayes 17:51
Well, our proof-of-value program with Google is focused around two weeks. A lot depends on how quickly you move, but we have these SAP accelerators for scenarios like order-to-cash or inventory management. If you want to tie into one of the accelerators we've already built, we can move the data and apply that order-to-cash scenario to generate the data marts; we even have front-end applications with Qlik Sense that plug right into that. So you could get to a pretty quick end-to-end display of KPIs around your SAP data.
Edy Sardilli 18:26
If you permit me, I'll add a couple of things. One of the big differentiators, if you think about what customers had to do to get this accomplished: they had to build an integration service, and most customers will typically go down a cloud-native path and try to use some kind of data pipeline technology to copy data over. That's not necessarily the right approach for enterprise applications, because SAP especially has a very complex table structure and does not translate to business terminology very easily. So you have to have knowledge of the content, and you also have to have knowledge of how you can connect to an SAP system; there's a reference architecture involved. What Qlik has done is they've created that connectivity, that integration service, and they've basically simplified it: they took all the complex coding that goes behind it and packaged it up into an accelerator. So now there's the data movement, but there's also another piece, and the reason why this used to take six to eight months for a regular customer and now takes weeks is that not only is the integration service packaged, the actual data content, the data marts, the translation, the data foundation inside BigQuery, is packaged as well. So you're actually looking at not having to build the translation into business semantics, or the data model, the data marts, and the dashboards on top. The dashboards and reports, all that content, get delivered in the Cortex Framework, for example. So what you have is an integration service, and that's why the proofs of value are two weeks; we can get data flowing in half a day, really. Where the weeks come into play is that we don't just want to copy data for the sake of copying data; that's not what we're here for. We're here to actually stand up a real use case around, say, supply chain data that's integrated with other consumer intelligence data and can actually drive a scenario for a customer. And I wanted to touch on one other point, because you talked about SAP and their reporting and data services solutions. SAP has Data Warehouse Cloud, they've got HANA scenarios, they've got HANA data content that they provide their customers, and customers are very good at just reporting off of SAP data using S/4 or BW or Data Warehouse Cloud. So we're not trying to replace general ledger, accounts payable, accounts receivable reporting at a base operational reporting level. The reason people want to come to BigQuery and use all the Qlik architecture to do this is to actually merge or marry that data with other consumer intelligence information that we have. People are doing supply chain-type scenarios with weather data, or taking real-time promotion data, ads information, anything they're doing from an ad impression perspective, any commerce data, and bringing it all together. That's how they get these new, predictive insights that they can then create some workflow around. And that's the difference: it's the content, the other information that comes together with the SAP information. And the fact that you can do this on a cloud-native data warehouse like BigQuery means you can do this at petabyte scale.
It's petabyte scale, it's serverless, you can apply machine learning on top of it, and you can create very predictive insights as to what to do next. I'm happy to talk about a number of solutions, but the idea is that when we get into some of the joint customers we have together, you start to see the total cost of ownership decrease significantly. These customers have put the framework in place, so simple, repeatable innovation happens quickly. If somebody wants to integrate new data into the data models they have, because they've now seen another influencing factor they need to take into consideration, they can do that very quickly. You look at companies like Gordon Food Service, Jaguar Land Rover, even Conrad Electronics: these are the types of things they're doing, and the ROI is significant, because the heavy lifting is just getting the framework in place, getting the integration service connected, getting everything done. Once that data is landed, doing iterations on top of it and integrating new information happens very quickly. So you get into this DataOps concept: just like you have DevOps with apps, you have the same thing around data now. And that's where the change happens; that's where the transformation starts to happen.
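To make the pattern Edy describes concrete, here is a minimal sketch of querying replicated SAP order data joined with a third-party weather feed in BigQuery, using the standard google-cloud-bigquery Python client. The project, dataset, table, and column names are illustrative assumptions, not the actual Cortex Framework schema.

```python
# Minimal sketch: joining replicated SAP order data with an external weather
# feed in BigQuery. All project/dataset/table/column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # hypothetical project

sql = """
SELECT
  o.plant_id,
  o.delivery_date,
  SUM(o.order_quantity) AS total_quantity,
  AVG(w.avg_temp_c)     AS avg_temp_c
FROM `my-analytics-project.sap_landing.sales_orders` AS o   -- replicated SAP data
JOIN `my-analytics-project.external.daily_weather`   AS w   -- third-party weather feed
  ON o.plant_region = w.region
 AND o.delivery_date = w.observation_date
GROUP BY o.plant_id, o.delivery_date
ORDER BY o.delivery_date
"""

for row in client.query(sql).result():
    print(row.plant_id, row.delivery_date, row.total_quantity, row.avg_temp_c)
```

The design point is that the heavy join and aggregation run inside BigQuery itself; the client only pulls back the finished result set, which is what makes the petabyte-scale, serverless claim practical.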
Greg Irwin 23:22
So let's do this. We have a chat window here, and it's incredibly powerful, because everybody can weigh in. So I'm going to ask everybody: grab your keyboard, and please, in the chat window, put down your number one challenge. That problem you're trying to solve, or an initiative related to data analytics; if it's related to SAP, that would be most relevant, but if your big problem is data pipelines outside of SAP, drop it in there. Everyone put in one initiative around data analytics, for you as a consultant or for your organization. Let's see what people are working on. And while we're doing that, Sathish, I'm going to ask you, if you have time, to turn your camera on. Nice to meet you. Would you do us a favor and maybe just share yours with us live? What's the one, kind of, 12-to-24-month project that you're thinking about or focused on?
Sathish 24:31
So for us, it's not on the SAP side of things. Just to introduce myself, I'm the director of data engineering at ... We are dealing with large life sciences problems in the area of drug discovery, so we use a lot of AI and ML, and also data analytics, to solve problems. The big problem that we have over the next two months is analyzing lots and lots of medical imaging data, genomics data, and also clinical data, clinical EMRs. These are multi-modal data that come with different challenges in terms of analysis. Our biggest challenge is trying to reduce the duplication of data. One of the things, as Edy and Matt were talking about, is that we move the data, the SAP data, into Google Cloud, but after that, again, we are moving the data somewhere else in order to do the analytics. That's something we want to reduce as much as we can, because it's a huge challenge we are facing: we have terabytes of data that we are moving from one location to another to do the analytics, and also cleaning it up, doing pre-processing in multiple steps. How can we avoid that, whether it's with Qlik or any other data pipelines that we can develop? So that is one key challenge for us: maintaining one good copy of data that we can utilize. But we're actually seeing the problem of many data copies across multiple databases.
Greg Irwin 26:06
Yes, yes. Can I ask you, what are some of the approaches you're using for your dedupe?
Sathish 26:13
For the dedupe, at this point we are not using much, mainly because of the types of data that we have. Especially for imaging data, we are dependent on external sources, and we do not have control over how all the external source data is produced. Even if the source is the same, for example the National Institute of Mental Health, they are dependent on academic institutions submitting the data, and each academic institution submits data in a different way, though there is a standard. So unfortunately, we are constrained by the kind of data that we receive, and that forces us to develop multiple pre-processing pipelines depending on the dataset. However, for the data that we produce in the lab, we have control; at that point we are actually enforcing it, we are doing processing at the edge before we load the data into the platform.
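One common first step on the duplication problem Sathish raises is detecting byte-identical copies before anything moves. Here is a minimal sketch in Python using content hashing; the directory layout and file extension are hypothetical.

```python
# Minimal sketch: flagging byte-identical copies of large files (e.g., imaging
# data) by content hash, so only one canonical copy needs to be retained.
import hashlib
from pathlib import Path
from collections import defaultdict

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so very large files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

by_hash: dict[str, list[Path]] = defaultdict(list)
for p in Path("/data/imaging").rglob("*.dcm"):  # hypothetical DICOM store
    by_hash[sha256_of(p)].append(p)

for h, paths in by_hash.items():
    if len(paths) > 1:
        print(f"duplicate content {h[:12]}: {[str(p) for p in paths]}")
```

This only catches exact copies; near-duplicates, such as the same scan re-exported with different metadata, need domain-specific comparison on top.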
Greg Irwin 27:11
So you have a policy issue as much as anything else; it's not just technical, exactly. Alright, let's keep going. I'm going to go around here and bring in some of the others in terms of the challenges, because Sathish, that's very helpful in terms of helping us focus the conversation. Dave, I'm going to ask you if you'd be so kind as to turn on your camera. And if you wouldn't mind, you shared with me directly: what are some of the issues you're working on there, Dave?
Dave 27:42
Yeah, so we're working on an S/4 implementation, but the way that we're doing it, only a portion of our business is going to go on that S/4 environment. So we've got one system that's a very old ECC system, and then a second ECC system, that are also going to be used by that business, and their setup is completely different. So it's a huge data conversion, an ongoing process. It's one thing when you are implementing a new ERP and you're converting your history; in our case, we actually need to convert our data loads into our data warehouse on a daily basis and convert them into the format that's going to be used by our S/4: converting every material number, every customer number, everything. So yeah, it's a pretty daunting task from a data conversion standpoint.
Greg Irwin 28:38
Why are you keeping multiple instances of SAP? If you're going through the conversion, why not go through the conversion into a single one?
Dave 28:48
We have some shared retail locations, shared warehouses, with some of our other brands, and I think that's the complication. I'm on the data warehouse side, so I don't deal with that a whole lot, but that's my understanding. And then, a couple of years ago, a decision was made that the Chinese portion of all of our brands is now running its own SAP system. Which is ironic, because they actually went on an older version than what we were already running, but I think it was more of a political play as far as who's in charge of that going forward. And they're going to remain there as all of our brands move to S/4 over the next probably five years; it seems like the Chinese business will continue on ECC, or they'll come up with some other solution as they see fit. They've kind of branched out, let's say, from our global discussions at this point. Of course, everybody still wants to see that data, but there's no effort now on their side at all to homogenize their data. They can create their own model however they see fit.
Greg Irwin 29:53
Dave, what's your data lake strategy for your data warehouse?
Dave 30:00
Um, we're currently, you know, heavy SAP BW, so we're actually implementing BW/4HANA alongside the S/4 implementation. But someone had mentioned data sitting outside of your SAP platforms; for that we do have some Azure. We've got a little bit of Snowflake, but I think we've kind of decided to move away from Snowflake. And I think they're looking at RISE now for the S/4 implementation as well. That's the last I heard.
Greg Irwin 30:33
Got it. And where are you using Azure today?
Dave 30:48
Yeah, I mean, right now it's very limited. Really, where they're using that is for the ecommerce traffic, ecommerce sales, things like that, that are not going through the SAP system. But that may change. The brand that we're putting on S/4, they're coming from a SQL background; they were using SQL warehouses in the past. So they may use this BW system as a pass-through and decide to do their reporting off an Azure platform; we haven't committed one way or the other. We have to get their data into the SAP BW system regardless, but they may pass some of it through and do more of the AI-type analytics using whatever they see fit, and I think it may be some Power BI. But we're trying to push, because we are pretty heavy on SAP; we're also bringing up some SAP Analytics Cloud stuff right now, which has been pretty interesting. We're looking at the planning piece as well. That seems pretty promising.
Greg Irwin 31:52
Cool. Dave, thanks so much for sharing it, really appreciate it. Let's get a couple of other stories before we wrap back here with Edy and Matt. Sanya, we're going to stay on, go back to the healthcare mode here. If you're able to, can you share a little bit of a story with us? What's going on at J&J?
Sanya 32:17
Well, currently there is a big project called Signify that's going on, to move onto SAP S/4HANA from the finance perspective. And in January I'll be starting an initiative for the ERP transformation, which will also go onto SAP S/4HANA as well. But right now we also have the spinoff of the consumer group, moving J&J into two different companies, so that may put pauses on some of the projects; we're just waiting.
Greg Irwin 32:51
Right. Tell us a little bit about data warehouse strategy, within SAP or outside of SAP.
Sanya 33:01
It's interesting. I'm not that close to exactly where everything is on the new Signify, and right now there are multiple different data warehouses, depending on which group, which segment, and what they're actually looking at. So as they look to go global, they may be changing the premise of what they're doing. I'm not that close to that project as of yet, so I don't know where their future forward path is.
Greg Irwin 33:31
That's fine, no worries. I do want one more from you. Sathish talked about the challenges of data duplication, and I imagine J&J, with a lot of the protections around your data, has similar issues. Can you just give us an idea of how you manage data duplication, and, you know, the data explosion of multiple copies across different databases?
Sanya 33:59
The problem is that we have, depending on where you want to go, very disparate data resources, and folks trying to tie out the data across them. So it's more a matter of finance not being able to tie out, because if they're only looking at a certain slice that is meant to be one version of one thing, it may not actually equal something that another group has. Then it's the tie-out: what does it actually mean, who's right, kind of fighting over which is right, and from an audit perspective, which is the right one? So it makes it complicated. And a lot of times it's timing; it's just a matter of one updating weekly, one updating hourly, or one being real time. It really depends on the purpose as well.
Greg Irwin 34:58
Excellent. Sanya, thanks so much, good to speak with you, appreciate it. We have a couple of good points. Now I'm going to ask everybody again: give us some more things to talk about. Put into the chat one thing you'd like to hear as we continue. But Matt, Edy, we've got a couple of good ones to talk about: data dedupe, and, you know, golden record, MDM. Master data is huge. So Edy, I'll start with you and come back to Matt. What have you seen some of your clients do in terms of trying to manage data duplication and master data records?
Edy Sardilli 35:41
Yeah, I mean, everything Sathish, Dave, and Sanya talked about are common problems we run into everywhere. This is very, very normal, especially when it comes to a multiple-ERP landscape. As a matter of fact, the biggest use cases we're seeing are around data cleansing and master data: using machine learning, or even Qlik technology, to dedupe records, and augmenting it with things like image recognition and image mastering, for example, or materials harmonization. There are a lot of ways you can do that. That's the beauty of the technology: you can apply out-of-the-box capability without having to really learn the technology from a code-level perspective. These are existing transforms you can apply that help with deduping and data quality in general. Now, one will argue that data quality or master data governance is an ongoing program for executive leadership; you mentioned the policies around this, right? You have to put some regulations and rules around how you keep records clean, and how you keep updating your central repository, regardless of where it is, on cloud or not, in a data warehouse. You have to build a data governance team, especially if you're getting a lot of messy data from various sources. That speaks a little bit to Sathish's point from a general executive sponsorship standpoint: you really have to put a team around how to manage and govern those policies, so that people can continue to improve the process by which you receive data. But applying technology to fix it, doing some batch cleansing and things like that, that's something we can solve; that's totally available to you. Dave touched a little bit on harmonizing material numbers and doing data conversions, and there are utilities that come out of the box for things like data conversion and harmonization: being able to match, merge, purge, doing fuzzy logic matches across multiple ERPs; the list goes on. We've been doing this with hundreds and hundreds of SAP customers; they all have this challenge. As a matter of fact, we talked about divestitures, mergers, and acquisitions. These are the types of situations, like J&J wanting to spin off into two companies, for example, where we have to carve off a piece of the ERP system because it belongs to the other company. There are a lot of data problems that come along with that, and data harmonization issues that need to be solved when doing it. Those are some of the things we deal with every day. Data coming from multiple ERPs is one of the biggest analytical use cases you'll find. I was just on the phone with a customer, actually here in California, that had eight ERPs, and their biggest struggle is that they actually had orders being processed from two different ERPs, because they had acquired a company. The problem with SAP is that it needs to own the business process from beginning to end, so it's going to own the order management process in one ERP, and it's going to own the order management process for fulfillment in the other ERP.
So what ends up happening is a logistics problem, because now you're placing orders to different parts of the company, but it could be the same part, or something that actually gets fulfilled to the same destination. And you don't want to send two trucks to the same destination, right? You want to be sustainable; you want to be able to harmonize this and intersect the orders prior to the actual order fulfillment taking place. The way you do that is through an analytical process and workflow, because, yes, the holy grail is to actually bring the two ERPs into one single instance so you don't have that problem, but that could be a $100 million project for a company that has multiple ERPs. So the low-hanging fruit is to solve it with a data fabric.
And we solve these issues all the time. Customers come to us in every line of business and say: I have this problem, I need to combine this information, I need to solve it, I need to build a new workflow. And what you do is you decouple the workflow from your system of record. Your back-office SAP, or Oracle, or other ERP system of record is going to continue to operate and do what it does best, but you build the new workflow in the cloud, using a very light application that talks to the same data, because Qlik can bring that data over in near real time. You're still going through the same process, and then you update the records in your ERP system to reflect the change. That whole workflow is totally possible. So I like the challenges; they're things we solve all the time, and I didn't hear anything that was completely out of the ordinary. And as you're going to S/4 with SAP RISE, as you're going to BW/4HANA, again, there are reference architectures for how you integrate all this stuff. That's the challenge: integration is the CIO's top agenda item, because they have so many applications, Salesforce, Workday, SAP, Oracle, Manhattan Associates if you're in manufacturing, and it just gets really complex. We're seeing the data transformation initiatives around these types of scenarios grow significantly, and it's been growing for decades; it's just getting even more important now, because you factor in all this other unstructured content and other signals that you want to marry in. So it's important to get your arms around this data governance and this process, because otherwise it'll spiral out of control and become an even more costly problem. So I thought I'd touch on that a little bit, and I'll hand it over to my colleague to speak on the experiences he's had as well.
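As a rough illustration of the fuzzy-logic matching Edy mentions, here is a minimal sketch using only the Python standard library. The records, the normalization rules, and the 0.85 threshold are invented for the example; production master-data tools add blocking, richer scoring, and human review on top of anything like this.

```python
# Minimal sketch: fuzzy-matching customer names across two ERP extracts.
# Records and threshold are invented examples, not a production algorithm.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Crude normalization: lowercase, strip punctuation and legal suffixes."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ")
    for suffix in (" inc", " llc", " gmbh", " co"):
        cleaned = cleaned.removesuffix(suffix)
    return " ".join(cleaned.split())

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

erp_a = ["Acme Manufacturing Inc.", "Globex GmbH"]
erp_b = ["ACME Manufacturing", "Globex Gmbh.", "Initech LLC"]

THRESHOLD = 0.85  # tuned per dataset in practice
for a in erp_a:
    for b in erp_b:
        score = similarity(a, b)
        if score >= THRESHOLD:
            print(f"probable match ({score:.2f}): {a!r} <-> {b!r}")
```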
Greg Irwin 41:44
Matt, maybe we talk in the context of a customer, one customer who's been able to solve some of the data quality and governance issues that have been brought up.
Matthew Hayes 41:57
Yeah, I mean, the thing that comes to my mind is that we make two core assumptions when we develop our software. One is that the SAP customer is going to want to combine the SAP data with non-SAP data. The second key assumption is that whoever's modeling the data, or consuming the data, might not even be an SAP user. If we stay true to those two, it forces us to always think in the context of normalizing the data: creating the tables and objects in the data lake or data warehouse with English text around columns and table names instead of German acronyms, and bringing over the textual aspects of the metadata. So for status values stored in the data, instead of the status being A, B, or C, we can actually correlate that to what those statuses really are. When it comes to that, almost all of our success stories revolve around leveraging the SAP data but putting it together with data from other sources. The one that comes to my mind is a forecast optimization project where the customer was looking to improve efficiencies in their supply chain, and they identified that the biggest gap in doing that was forecast accuracy. Forecast accuracy was critical in making sure that raw materials were in the right locations at the right plants to meet the manufacturing schedules. They didn't want inventory sitting on the shelf if they had a decrease in orders; they didn't want a surplus. But they also didn't want too little raw material to meet the manufacturing needs, which would cause them to buy that material on short notice at a higher price. There's obviously a lot of SAP data that goes into dialing in that forecast accuracy, but this customer also had warehouse management software that was non-SAP that had to participate in that use case as well. So for us, we look at it and say: okay, whoever's modeling this data, if this person doesn't know SAP at all, or doesn't understand the SAP application, how can we spoon-feed them the data in a way they can work with? They can look at the data, know what they're working with, simplify it, model it, and apply the metadata where it makes sense, so that they can display the data in a normalized manner. So when you're looking at a dashboard about plants and inventories, instead of seeing material numbers or product numbers as encoded values, and plant numbers as coded values, you actually see the name of the plant, the name of the product, the actual inventory. Those are things we can do to help customers work with that data more reliably.
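A toy version of the normalization Matt describes might look like the following. MATNR, WERKS, LABST, and MSTAE follow real SAP naming conventions (material number, plant, unrestricted stock, material status), but the rename map and the status decode table here are simplified illustrations, not Qlik's actual output.

```python
# Minimal sketch: replacing German SAP column acronyms with English names and
# decoding status codes into readable text. Values are invented examples.
import pandas as pd

raw = pd.DataFrame({
    "MATNR": ["000000000000001001", "000000000000001002"],
    "WERKS": ["1000", "2000"],
    "LABST": [150.0, 0.0],
    "MSTAE": ["A", "B"],  # cross-plant material status code
})

column_names = {
    "MATNR": "material_number",
    "WERKS": "plant",
    "LABST": "unrestricted_stock",
    "MSTAE": "material_status",
}

status_text = {"A": "Active", "B": "Blocked for procurement"}  # illustrative decode

normalized = raw.rename(columns=column_names)
normalized["material_status"] = normalized["material_status"].map(status_text)
# Strip SAP's leading zeros so material numbers read naturally on a dashboard.
normalized["material_number"] = normalized["material_number"].str.lstrip("0")

print(normalized)
```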
Greg Irwin 45:16
Excellent. Folks, look, we've got about 10 minutes, so let's finish strong. I'm going to go around the group, and I want to bring in more stories, with the idea that everyone here on the call is working on similar things. I want to encourage people to find a contact across this grid to add to your personal network; it doesn't need to be Google or Qlik or BWG, anyone's fair game. And my bet is somebody here has gone through a journey that would be instructive for somebody else on this session. So let's share a couple more stories. I'd like to invite Teresa Roby to share a comment. Teresa, let's first make sure you're on the line with us. Are you with us? Maybe not; maybe it's muted. Let's see. Maybe you stepped away. How about June? Oh, Teresa, are you with us? No, we don't hear you. How about June? June, are you online with us? Hello. Hey, thanks for joining.
June 46:33
Hi everyone, I'm June. I work in ecommerce, and we are on the GSA team. We're not using an SAP system, but we have a lot of data pipelines from third-party customers, and the quality is not ideal; sometimes different customers give us different identifiers for the same products. So for my team now, the big issue, just like everyone mentioned, is data quality, data assurance, and also data drifting; those things are a big challenge for us. We are already using dbt, and also an alerting system, to generate a lot of alerts for data quality issues. But still, for the data drifting issues, we want to find good automation to catch all of those, so that it can share those exception reports with users and we can spend less effort on all of the trouble tickets. So that is our challenge, and that's why I'm here: I want to hear from some other experts, what's your strategy to deal with those data drifting things? By data drifting, I mean: usually we get data on a weekly cadence, and if there is no seasonality or some other campaign going on this week, the RSP should not change much compared with last week or the year before. There will probably be some change, but compared with last week or the last 12 weeks, it really shouldn't be bigger than a 15% change. So I just want to hear, does anybody have experience they can share? What tools are you using, and how do you solve those kinds of challenges?
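June's rule is concrete enough to sketch directly: flag any weekly metric that moves more than 15% against the prior week or the trailing 12-week average. The series below is invented; in practice it would be pulled from the warehouse per product or segment.

```python
# Minimal sketch of a week-over-week drift check: alert when the latest value
# moves more than 15% versus last week or the trailing 12-week average.
weekly_price = [10.2, 10.4, 10.1, 10.3, 10.5, 10.2, 10.4,
                10.3, 10.1, 10.4, 10.2, 10.3, 12.5]  # last week drifts

THRESHOLD = 0.15

def pct_change(new: float, old: float) -> float:
    return abs(new - old) / old

latest, previous = weekly_price[-1], weekly_price[-2]
trailing_12 = weekly_price[-13:-1]
baseline = sum(trailing_12) / len(trailing_12)

alerts = []
if pct_change(latest, previous) > THRESHOLD:
    alerts.append(f"WoW change {pct_change(latest, previous):.0%} exceeds 15%")
if pct_change(latest, baseline) > THRESHOLD:
    alerts.append(f"change vs 12-week average {pct_change(latest, baseline):.0%} exceeds 15%")

for alert in alerts:
    print("DRIFT ALERT:", alert)
```

A real pipeline would also suppress alerts during known seasonality or campaign windows, exactly the exceptions June calls out.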
Greg Irwin 48:37
Let's do this. June, first, thank you for sharing with us. We've heard from Matt, we've heard from Edy; why don't we hear from somebody else, because I'm sure others here have put in some of these data quality programs. I'm going to ask Harish to perhaps share a comment, because I'm betting that you've done something, or at least tackled some of these issues. So: quick intro, and what have you put in place that you've seen work?
Harish 49:09
Sorry, I missed the question, but it looks like you want me to highlight how we handle data quality issues, right?
Greg Irwin 49:17
Yes, any data drift or data quality problems.
Harish 49:21
Yeah, totally. Just to draw a parallel to traditional ERP systems: what we solved in my previous engagement at Sunrun was Oracle ERP, which is exactly the same as SAP in terms of the financials, the ERP, GL, and so on and so forth. But we also had Salesforce, as well as HR from Workday, where some of the order processing would happen in Salesforce, and by the time it landed in the ERP environments it would be jumbled up, and data quality issues would arise because the transactions would run, from a financial standpoint, off the order. And a couple of things we did: in those days it was not the Cortex Framework, but by and large we used a fairly standard approach in terms of ingesting this data and staging it. When it comes to transformation rules, in terms of consolidating the transformation, we built standard approaches around the structure of the datasets, the way the data would come in and be transformed, close to what a dbt would otherwise do. That's essentially what we made standard, and by and large we set standards for the way the developers would go about it. Internally, we managed kind of a master data management, especially on the dimensions and things like that. Once we standardized that, and kept enriching those master data stages in isolation, then no matter who tapped into that master data, whether it's customers, products, vendors, and things of that nature, the entire company would only come across those schema sets and tap in. That was kind of a single source of truth from a dimension standpoint, and we could manage that. It certainly transformed over the course of time and became the single source of truth, but there was a lot of emphasis from the business standpoint to ensure the processes were in place. We can continue to keep fixing these datasets, but to a large extent we got sponsorship from the business to ensure the processes are in place, so that master data management from a technical standpoint became much easier, and so on and so forth. And some of the standardization helped the developers go and surgically fix any issues, because everybody was following the same standard: if somebody were to do a lookup for customers, they knew exactly where it was being transformed, where exactly the location is, where the final datasets are going to reside. Versus, if you give them the freedom, people tend to start bringing their own single source of truth sitting in multiple spots and things like that, and that's something we managed through governance and things of that nature. I hope that helps. In general, that's how we set the standards; it's also the developer community which you need to manage, to an extent, and good team members will totally contribute to that.
Greg Irwin 52:26
Harish, thank you very much. And I welcome anybody else: if you've had an initiative that you found improved operations, technically around data quality. I'll go one more before we wrap.
Harish 52:45
Yeah, if I may mention one point: the good part is, when you take a database at the scale of BigQuery and things of that nature, your entire development cycle also gets significantly reduced, right? You're not over-engineering how you stage the data, how you transform it. It's kind of a two-step process: you stage the data, do the majority of your transformation in the second step, and boom, you're there in the semantic and reporting layer. Whereas in the old school, you go and touch too many touch points, like star schemas and things of that nature. The new architecture gives you the ability to scale up and also simplify the way you approach the solution, and that brings a lot of control to the system, so you don't need to over-engineer these platforms.
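Here is a minimal sketch of the two-step pattern Harish outlines, using the google-cloud-bigquery client: load the raw file into a staging table as-is, then do one transformation pass into the reporting layer. Bucket, dataset, and column names are hypothetical.

```python
# Minimal sketch: stage raw data, then transform in a single SQL step.
# All bucket/dataset/table/column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # hypothetical

# Step 1: stage the raw file as-is (schema auto-detected for brevity).
load_job = client.load_table_from_uri(
    "gs://my-bucket/raw/orders_2021-12-07.csv",
    "my-analytics-project.staging.orders_raw",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",
    ),
)
load_job.result()  # wait for the load to finish

# Step 2: one transformation pass from staging into the reporting layer.
transform_sql = """
CREATE OR REPLACE TABLE `my-analytics-project.reporting.orders` AS
SELECT
  CAST(order_id AS INT64)    AS order_id,
  UPPER(TRIM(customer_code)) AS customer_code,
  DATE(order_date)           AS order_date,
  quantity * unit_price      AS order_value
FROM `my-analytics-project.staging.orders_raw`
WHERE order_id IS NOT NULL
"""
client.query(transform_sql).result()
```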
Greg Irwin 53:32
Excellent, thank you very much. Let's see if we can get one more involved: Hardik Kalapuya. Hardik, are you on the line with us? That sounds like a no, or maybe you've stepped away. Let's try... oh, Hardik, perhaps you are with us? Yes? What's one initiative you've got in terms of data movement, data quality, data blending, wrangling?
Hardik 54:12
So I currently work on Salesforce, on data and architecture management. We take care of data duplications by using some AppExchange tools within Salesforce, and, yeah, we often take care of rerunning all the quality checks and seeing what duplicate rows are coming out as a result, and we take care of that on demand; we take care of that manually. Beyond those, we haven't onboarded any other tools or AppExchange apps to analyze the data duplications. So yeah, everything we do is manual: every quarter we dedupe accounts, we run some cases, things like that, so we can see how agents are duplicating the records and whether we need more deduplication rules and things like that.
Greg Irwin 55:03
Yes, I mean, you're doing it the hard way, but at least you know exactly what's happening with it, right? Very good. Alright, let's wrap here with Matt and Edy. We'll send an email to everybody with the list of attendees; just come right back to Matt or any of the team here at BWG for intro requests, and we'll be happy to connect people across the grid. And of course, if anyone wants to learn more about Qlik or any of the packaged offers with Google BigQuery, this is about driving awareness, and that's a big part of what we're doing here. So let me go to Matt first, and then Edy, we'll wrap with you. Give us a closing comment for the group, as we've gone around and heard some stories, on what you've seen with clients.
Matthew Hayes 55:55
You know, the one thing that resonates with me, and Edy and I have talked a lot about this, is the end result. Everybody talks about the value that's in your data, and if you get caught up in the technical pieces of it, it can get really complex. But our strongest customers, the customers that really leverage the data, come to us with the use case. They come to us with the vision, and they're like: alright, we know what we're trying to get out of the data, and here's what we're trying to do. When you start with that vision, you start with that strong use case, and you know how it's going to impact the business, then you back into everything else, and the technology falls into place. Edy and I talk about that all the time. We're technologists; we love what we built, we love showing people what we've built. But it's only cool on the technical side if you're interested in the technical side. If you're interested in the result, and in what it's going to mean to the bottom line, then the technology just needs to support that vision. We talk about that a lot in the partnership with Google: our goal is that the technology has to support the vision for the customer. And sometimes customers need help with that vision. But like I said, when the customer knows what they want to do with the data, and they know what value they want to drive out of the data, that makes the technology piece a lot more achievable.
Greg Irwin 57:15
Perfect, thanks, Matt. Alright, Edy, closing comments here for our group?
Edy Sardilli 57:21
Yeah, thank you to everybody for making the time. I agree with everything Matt said, and I'm happy to connect with everyone, so if you want to use me as a sounding board for anything, I'm happy to help out in whatever way I can. I say this all the time: if I can walk into a customer and talk about, as Matt mentioned, demand forecasting and how to improve it, you never even have to mention the word Cortex or Qlik, right? That's the how. We've put together a partnership here to make sure we simplify this as much as possible and save customers money. It's the outcome that matters, because that's how you fund these projects: your line-of-business stakeholders saying, I've got a demand forecasting problem. I used to base it on historical trends; that's no longer viable, because I've got weather patterns that are disrupting things, I've got inventory that needs to be moved to certain locations where there's a raging pandemic, and I don't know what to do. There are a lot of things that come into play. And these trends and these other types of information, which Qlik can help integrate very easily, take your use cases and your business outcomes to the next level. That's the way we like to go to market: we like to help on those business outcomes, and we'll figure out the technical part; you have plenty of strong resources that will help you, as well as yourselves, to figure this out technically. So the technology really is in support of these outcomes. I think we have a bright future ahead. It's really exciting times, because we're starting to get out of the weeds and get more into the application levels, which is exciting. So thank you again.
Greg Irwin 59:05
Absolutely. You can see the maturity here; it's come so far. We're not just talking about databases; we're talking about datasets, data blending, the real stuff that makes a difference. Alright, everybody, let's wrap it up. Big thanks to everybody for joining, of course. We'll be happy to connect you here with the teams at Qlik and Google; again, build your networks, let's connect across this group and keep helping each other solve the problems in front of us. Thank you all, and everybody have a great day. Thanks, everyone. Bye bye.