
Shipping Enterprise AI at Scale (Stephen Gatchell)


Show notes

In this episode, I’m joined by Stephen Gatchell, Partner & Head of AI Strategy at Ortecha, to break down why most enterprise AI strategies fail—and what to do instead. We talk about the “150-slide deck” trap, why speed and measurable outcomes matter, and how to build an AI “factory” by executing focused use cases first.

🔗 Guest & Resources
Connect with Stephen Gatchell: https://www.linkedin.com/in/stephengatchell/

🔑 Keywords
AI Strategy, Data Management, Machine Learning, Business Intelligence, Consulting, AI Governance, Enterprise Solutions, Data Quality, AI Implementation, Ortecha

Full transcript

Welcome back to the podcast, guys. Today we are joined by Stephen, partner and head of AI strategy at Ortecha. Stephen, welcome to the podcast. >> Thanks for having me, Nick. Appreciate it. >> To start, could you share a bit about your background and what Ortecha does for enterprises? >> Sure. My background has been in the data space for quite a few years. I was a practitioner at large-scale companies like EMC, Dell, and Bose. After that, I went to the vendor side and worked at a company called BigID, which got me dealing with literally hundreds of customers on the technology side of the data and AI space. Then there was an opportunity to join Ortecha as a partner. Ortecha is a boutique consulting firm with headquarters in Canada, the US, and London. We cover everything from assessments of data and AI, through strategy and architecture, all the way through implementation of AI, because we actually acquired a technology company called Broadgate last year.

I think that gives us a little differentiation in the marketplace: being able to go from theory to actual execution. >> And as head of AI strategy, what are the most common ways that companies get AI strategy wrong, in your opinion? >> Yeah. Well, the first thing they do is they typically hire a consulting company. I know that sounds weird coming from a consultant, but they hire a consulting company to deliver a 150-slide deck on AI strategy without actually understanding, one, how to execute that strategy, and two, where the focus really is and the challenges they have with the actual use cases they need to deliver with measurable, valuable results. Right? Setting things like objectives and key results on use cases that are going to increase revenue, decrease cost, or decrease risk. Instead, they focus on the end-to-end strategy, which sometimes takes 12 to 18 months just to develop, and by then people have lost interest. So I think that's the number one challenge: don't focus on the end-to-end strategy.

Focus in on very specific use cases. Go execute a use case, and that will start building out your strategy and, as I call it, build a factory. >> So what you are saying is that they aren't focusing on the things that pull the lever the most. >> Look, in today's AI environment, if you don't deliver something within weeks, or sometimes even days, people lose interest. That's problem number one. Problem number two: when you talk about strategy, you talk about things like governance. And governance has failed over and over again across the decades, because people invest time and energy but then can't measure the value that governance delivered to them. And if you can't measure the value of something, guess what? When budget comes around the following year, or even during the year, and they have the choice of creating a new product or reinvesting in AI or data governance, they're going to choose creating a new product, because there's a direct line to generating revenue.

They don't measure valuable outcomes in the short term while building that foundation for the long run. >> I know that Ortecha talks about making data and AI work in practice. Where do you see the biggest bottleneck today? Is it data quality, maybe governance, or the operating model? >> It's really a combination of things, but I think the number one issue is that people have been talking about data management for decades, but they haven't actually executed good data management. Meaning: what data do you have? What data should be used? You point out quality: where is it coming from? How can we use it safely, given the regulatory landscape? Now you not only have data regulations, you also have AI regulations. So it's about understanding exactly how you can use data from different parts of the world. For instance, if you're an international company, or even in the United States right now, individual states have different policies.

So there's no way to really say, here is the set of data you should use for this use case, because we have a nice inventory, we have an owner for it, we have good data management from an end-to-end perspective. I think the problem is that we've been ignoring data management for decades, and now that we have AI, all of a sudden people are thinking, well, the data is in good shape, we can just use it for AI, or we're going to use AI to get our data into shape. It's ignoring data management for the past few decades, for sure. That's the biggest problem. >> What do you think separates Ortecha from other companies? >> Yeah, I think it's a couple of things. One is the partner model. We have nine partners across Ortecha, and each one of these partners, including myself, is a practitioner. We actually did this stuff, right?

So if you talk about a partner that is specific to architecture, that person actually led architecture for a very large international company for years. Or if you're talking about myself, I managed data science teams and data governance teams and built data strategies for large-scale companies. Whereas at other consulting companies, you may have less experienced people coming off the bench out of school trying to solve these very complex problems. I think that's one differentiation. The second differentiation is that with nine partners, you can take it, as I mentioned earlier, from an assessment of where you are on the maturity curve in the data and AI space, through strategy, through architecture, through actual execution of that architecture on a technology stack. We have somebody across that entire landscape, and that's very difficult to do, because data and AI are complex topics. The bigger companies have different divisions, different functions, different P&Ls. We're one team working together, and we'll pull in whoever we need in order to solve that client's problems. And again, since we have the end to end, we can pretty much talk about anything in the AI space. >> So you walk the talk.

>> We walk the talk. That's a good way to put it. Yes. >> I would like to talk about security, privacy, and governance. What are the top guardrails you put in place so teams can actually ship AI faster without increasing risk? >> Yeah. So you mentioned three different things, but they are all related, right? Security, privacy, and governance. If you have good foundational capabilities that start with discovery, and you can classify your data to understand what is sensitive and what is not, you can then understand the security aspect of it: what do I actually need to pay attention to and manage from a security perspective, in case of a breach for example, versus data that has been classified as public, where you just give people access to it and don't worry about protecting it. That's a major issue companies have today: they treat all their data the same.
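The discover-and-classify step Stephen describes can be sketched in a few lines. This is a minimal illustration, not how a real DSPM product works: production tools use far richer detection (ML models, context, checksum validation), and the patterns and labels below are invented for the example.

```python
import re

# Hypothetical classification rules: regex pattern -> sensitivity label.
# Real discovery tooling would use many more detectors and validation steps.
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "sensitive"),    # US-SSN-like number
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "sensitive"), # email address
]

def classify(text: str) -> str:
    """Label a record 'sensitive' if any detector matches, else 'public'."""
    for pattern, label in RULES:
        if pattern.search(text):
            return label
    return "public"

records = [
    "Quarterly revenue press release",
    "Contact jane.doe@example.com, SSN 123-45-6789",
]
print([classify(r) for r in records])  # ['public', 'sensitive']
```

The point of even a toy classifier like this is the one Stephen makes: once data carries a sensitivity label, security effort (and alerting) can be concentrated on the sensitive slice instead of being spread evenly across everything.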

And so, from a security perspective, you have way too many alerts, right? There's this DSPM concept, data security posture management, where you discover your data, you classify it, you identify the risk. But then you have to do something about that; you have to actually go solve the problem. And when you're talking about petabytes of data, you're talking about problems at extremely large scale, and people just get overwhelmed. They don't even know where to start. On the privacy side, if you're an international company, there are many, many different regulations you have to comply with. Break down all those different regulations, like GDPR and CCPA, and China and South America, everybody has them, and look at the number of business rules each one decomposes into. How do you translate a regulation into a business rule, make it machine readable, and actually apply those rules at scale in an automated data management function? People are not doing that.

They're taking policies, writing them down manually, and writing manual data quality rules. That does not fly anymore at the speed of AI innovation. You have to use AI to do data management for AI. You have to take your business rules, your policies, your regulations, your data quality rules, your classifications, automate all of that into machine-readable form, and then put a human in the loop to oversee the things that are critical in nature. >> I know you mentioned that Ortecha is practitioner-led. What do the first 30 to 60 days look like, going from strategy slides to something that actually ships? >> I think it starts before the 30 to 60 days. It starts with the proposal. It starts with the discovery of the customer's use case. Since we're not a technology company, we build frameworks, procedures, and so forth on how to use technology to execute things like automated data management and architecture. So I think it starts with the first call, the discovery call. What business problems is the customer trying to solve?
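To make "machine-readable rules with a human in the loop" concrete, here is a minimal sketch under stated assumptions: the rule IDs, field names, and thresholds are invented for illustration, and a real implementation would sit on top of a catalog or policy engine rather than plain dicts. The idea it demonstrates is the one in the conversation: rules as data that can be applied automatically at scale, with only critical violations routed to a human reviewer.

```python
# Hypothetical machine-readable rules derived from written policy.
# IDs, fields, and thresholds are illustrative only.
RULES = [
    {"id": "RET-730", "field": "age_days", "max": 730, "critical": True},
    {"id": "DQ-OWNER", "field": "owner", "required": True, "critical": False},
]

def evaluate(record: dict, rules: list) -> list:
    """Return the IDs of every rule this record violates."""
    violations = []
    for rule in rules:
        value = record.get(rule["field"])
        if rule.get("required") and not value:
            violations.append(rule["id"])
        elif "max" in rule and isinstance(value, (int, float)) and value > rule["max"]:
            violations.append(rule["id"])
    return violations

def triage(records: list, rules: list):
    """Apply rules at scale; escalate critical violations to a human queue."""
    critical_ids = {r["id"] for r in rules if r["critical"]}
    human_queue, auto_handled = [], []
    for rec in records:
        found = evaluate(rec, rules)
        if set(found) & critical_ids:
            human_queue.append((rec, found))   # human-in-the-loop review
        elif found:
            auto_handled.append((rec, found))  # safe to remediate automatically
    return human_queue, auto_handled

records = [
    {"age_days": 900, "owner": "alice"},  # past retention limit -> human review
    {"age_days": 10, "owner": ""},        # missing owner -> auto remediation
    {"age_days": 10, "owner": "bob"},     # clean
]
human_queue, auto_handled = triage(records, RULES)
print(len(human_queue), len(auto_handled))  # 1 1
```

The design choice worth noting is that adding a new regulation means appending a rule record, not writing new code, which is what makes this approach scale to the "hundreds of business rules per regulation" problem Stephen describes.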

Who are you talking to? Extremely important. If you're talking to a CISO versus a chief privacy officer versus a CDO, those could be three very different conversations, okay? And you have to speak their language. So we start with: who are we talking to? What's the persona? What type of problems are they trying to solve? You bring in the subject matter expert if you're not the one to solve that specific problem. And in the proposal you're very descriptive: here's the problem you talked about, here's how we think we can go about solving it, and here are the resources we'll bring. And, by the way, let's reuse some of the work you may have done in the past rather than starting from scratch again. So I think it starts with the proposal, then it goes to execution. Your first 30 to 60 days are spent confirming exactly what you talked about in the proposal, making sure nothing was lost in translation, and then you start executing and solving actual problems from the first week. That's the intention.

>> If there is an enterprise leader listening to this podcast right now who has strong ambition for AI but weak foundations, what three moves would you recommend they commit to in order to create something valuable? >> I'll mention this again: it's a very complex environment, right? We deal with leaders who are technology-first; they're very good on the technology side. We have other leaders, in the same exact title at another company, who are very strong on the business side. So I think the first piece of advice is: understand what your strengths and weaknesses are. If you're a very strong business-led executive, for example, that's awesome. Make sure you have a great partnership with your technology groups, so that they play to your strengths and you play to theirs. So step one is cross-functional collaboration among the senior leadership whose buy-in you need to be successful. I think the second thing is understanding exactly what adds the most value to the company.

A lot of times I see data strategies and execution led by technology teams, and I'm not blaming them for this, but I will say they may not align to corporate strategy. So for your data and AI strategy: how does it actually influence and enable your corporate strategy? Don't just build capabilities for the sake of capabilities. Have a direct line to the company's success and strategy. So when it comes to budget time, and somebody is saying we need money to build out this piece of the data and AI strategy, or we need money to go expand into another marketplace, how do you fight that battle and actually get budget for the data and AI strategy? If you have direct linkage to your corporate strategy, you win that, 100%. So that's the second thing: make sure there is direct linkage to the corporate strategy. The third piece, I would say, is don't boil the ocean, as they say. You need to understand that this is not a project. It's a long-term, forever program.

It's something that needs to be done on a consistent basis and evaluated consistently. So don't just spin up a small group for six or nine months and think you're going to have a data strategy, execute it, and then disband that team after nine months. Focus on small deliverables that deliver value in the short term. Build the factory, as I say. As you get more and more use cases, you'll get more and more buy-in from the different functional teams. You'll start growing, at the enterprise level, a centralized repository of knowledge, capabilities, and policies that can then be federated and executed in the businesses. I think that's the third critical piece. >> Amazing. I would like to wrap this up with your vision for Ortecha in the next year. What would that look like? >> It's a good question. I think, one, quite frankly, AI is here and everybody should be using it, right? So how do we use it effectively, but ensure that we have our expertise in the mix for our customers, so we can actually deliver what might otherwise be a six or 12 month project?

Perhaps we can deliver that in three to six months now, right? Add more value to our customers faster. That's number one: by using technology and being a consumer of it, but also applying our expertise and having that human on the loop, over the loop, and in the loop, right? I think the second piece, as we look across our nine partners, is to connect the dots so that we have a cohesive customer journey from assessment all the way through implementation and monitoring and everything in between. We have the services now; we have developed them; we're working with our customers in all different pieces of the journey. But I would love to see a cohesive end-to-end execution of that. And the last piece, I would say, is that we need to figure out how to get people up to speed much faster in the AI space. I see things like data literacy programs, and by the way, Ortecha has a data literacy and culture program. Companies have these nice 12 and 18 month journeys, and I'm like, well, in 12 or 18 months, by the time you teach people this stuff, it's going to be much different, right? It's going to be a different data literacy program or data culture program. So how do we help customers understand the speed of innovation and how fast they have to move, both with our help and, once we give them the tools, without it? >> Awesome.

Thanks again. I will add links in the description so listeners can check you out, and we'll see you in the next one. >> Well, I appreciate the time and the energy.