
AI Hype Or Reality? (Kim Crawford)


Show notes

In this conversation, Kim D. Crawford discusses the evolution of embedded computing and AI, emphasizing the importance of security, especially in medical devices. She highlights the challenges and opportunities in edge computing, the need for regulatory compliance, and the future directions for the industry. The discussion also touches on the investment landscape in AI and the necessity for businesses to justify their AI expenditures with tangible benefits.

🔗 Guest & Resources Connect with Kim D. Crawford: https://www.linkedin.com/in/kimdcrawford/

🔑 Keywords embedded computing, AI, edge computing, cybersecurity, medical devices, technology evolution, critical infrastructure, cloud applications, industry trends, business development

Full transcript

Welcome back to the podcast, guys. Today we are joined by Kim Crawford, VP of global sales and business development at Dedicated Computing. Kim, welcome to the podcast. >> Good afternoon. >> To start, could you share something about yourself? >> Yes, I'm a seasoned veteran of the embedded computing business. I've spent my whole career working with companies that make critical infrastructure in a variety of markets: healthcare, life sciences, training and simulation, telecommunications, and system control. And I've seen a lot of changes through that period in terms of what customers want, what they need, and how they use and deploy technologies. >> Could you briefly explain what you actually saw? What were the changes, if you look at how technology has developed and evolved from where we historically have been until today? >> So over the last 25-plus years there have been a number of interesting transformation points in how the technology has developed. Initially, technologies were developed and deployed based on the needs of the applications, whether you were using them for business purposes or as infrastructure to actually drive and control the underlying equipment.

So the requirements on both sides have changed. There's a lot of focus today on the enterprise side around the use of cloud for enterprise-type applications, and particularly a lot of talk and hype around the benefits of AI and what it brings into that environment. And when you look at the environments it can get deployed in, either the critical infrastructure, so the embedded environments, or the enterprise environments that run the business technologies, those two use cases are similar and different, and they've also evolved very significantly over the period of time that I'm talking about. >> You mentioned AI. Do you think it's all hype, or is AI here to stay and really doing a good job? >> AI has been around for a long time, so it is here to stay. The real question is whether there is hype around the applications, how it's getting used, and in what markets its use is justified. If you look at it from the context of the valuation of AI and the benefits it's bringing to companies, the industry, and governments, it hasn't reached critical mass to date, in the sense that the revenue actually generated from the use of AI is not keeping pace with the amount of money being spent on the deployment of critical infrastructure.

Now, if you look at that from the cloud or enterprise perspective, as an example, there is a lot of talk right now about $600 to $700 billion being invested in AI infrastructure, facilities, and deployment services in 2026. But the amount of revenue actually being generated by those investments is not keeping pace. So when I'm talking about the hype keeping up with the promise of AI, that's the gap I'm describing here: revenue generated from the benefits that AI can bring is not keeping pace with the amount of investment being made to create different businesses using it as a technology. >> So it's a bubble, from my understanding. >> You could consider it from that standpoint. It's actually a more controversial topic right now, because the funding models and who can actually deploy AI are governed, and the approaches to the use of AI are tiered based on what you want to use AI for and where you want to deploy it.

So if you look at a company like Microsoft, they're spending billions of dollars building out AI investments, investing in the development and productization of AI-enabled technologies. And they're probably the most advanced in terms of actually capitalizing on and realizing, from a revenue perspective, the benefits that can bring. On the investment side, they're still spending billions of dollars to put the capabilities in place to support that technology, and you're seeing similar levels of investment and return from other giants in the industry right now. One that gets talked about a lot is Nvidia, where a major portion of that $600 to $700 billion, whatever that number is on a global basis, gets directed toward them, because they're enabling a lot of the foundational technologies that are driving it. They're also making some very significant and bold investments of their own to underwrite the capital expenditures required to put this high-performance level of AI in place and support a lot of the training applications we're seeing on the enterprise side of the house.

Interesting. I would like to talk more about embedded computing. When you look at important devices that are powered at the edge, what do you think is changing the fastest? Is it the expectations from customers or the requirements of security? >> If you look at edge computing, there are a number of constraints around what technology can actually get deployed there and what's driving the utilization of that compute resource at the edge of the network. Security is an issue there, because the more points you connect to the network, the greater your attack surface becomes, and the greater the risks created for security, which drives a need for more cybersecurity technologies. So that's one issue that shows up. The second issue that shows up is connectivity. Enterprise hyperscale deployments require centralized processing of resources; edge computing gets done in a distributed manner.

So it's done in multiple places and is reliant, in many cases, on networks to transfer that information from where the data is coming from. That's a constraint related to security, but it's also another very real and critical one in its own right. And then the underlying concept of tiered compute resources becomes a third: what compute can I do where? What kind of AI applications or critical infrastructure applications can I run at the edge, based on what that specific class of compute is capable of processing, either from a data perspective or from an application and workload perspective? And, more importantly, when you look at critical infrastructure, what resources do those applications need in order to run? But they're all reliant on security. They're all reliant on connectivity, which is a network. And data and data sovereignty become policy issues and decisions that get layered on top of that.

So there are many constraints that are more unique, based on the class of computing and network access you get in environments that are not centrally deployed in the cloud at a hyperscale location. >> So security is a big one. >> Security is definitely a big one. >> You've spoken in the context of cybersecurity considerations for medical devices, including firmware and software updates. What do you think is the biggest practical shift that you are seeing? What do teams need to do differently now to stay compliant and safe? >> So one of the considerations you need to make when you look at edge devices in regulated environments like healthcare is the duration, the period of time this infrastructure, this equipment, is actually deployed for, and the hardening: basically the testing, the validation, and the regulatory approval to put those applications in place. The common trait all these critical applications share is that once the technology stacks have been integrated and are operational, it takes time and effort to revalidate and change them, along with the policy decisions around oversight, in healthcare in particular.

If you look at the North American market, you've got the FDA involved in device certifications. That's a process that involves testing standards, how the products are actually developed, what's required to develop the hardware, the software, and the applications, and what test evidence and data need to be put together to validate that the device meets its requirements. That data needs to be reviewed and approved, and only once it's approved can those applications be deployed. There's an upfront cost and an ongoing cost to actually sustaining the regulated environments and the state the underlying equipment and technology stack is in. So that becomes the common thread across all these edge architectures. And there are layers to it. One layer is just the interoperability and compliance piece, which ties to policy. Safety and security are certainly part of that.

In a lot of these applications, you think about robotics, you think about healthcare devices, or even transportation for that matter. Anything that involves potential risk to human life brings additional oversight and additional resistance to change. There are formal processes that influence how quickly AI can be applied to those applications, or whether it even makes sense to apply it to applications like that. In many cases it does, but you really have to think about what the benefits are and what the costs are. And even when you look at it from an enterprise hyperscale application, there are different capabilities and technologies that, from a sovereignty perspective, you can get access to. So bringing those into specific edge environments, bringing them into healthcare, brings along an additional set of constraints, and those constraints need to be managed. That makes what you can do with the underlying technology, both the infrastructure itself and how you evolve it, but also adding additional capability to it using AI, more complex.

Now, what are you most focused on this year? Will you expand partner channels or go deeper in some specific vertical, maybe medical or industrial? How will you evolve Dedicated Computing this year? >> Well, I'm going to move away from Dedicated Computing specifically, because this is more of an overall industry discussion. If you look at the embedded space, there's more use of AI for robotic and autonomous functions coming into play, and more applications of it in things like computer vision. These are classes of applications, as opposed to market-specific applications, that show up in multiple places in the market. So as a general rule, what companies in the embedded space are looking at doing is taking those early wins, those early proofs of concept, and scaling them in the marketplace. We're definitely seeing them in places like healthcare, and we're definitely seeing them in places like industrial. So it's scaling those out, going from one customer to many customers, regardless of how you do that, whether you scale your sales channels or identify where to go. And on the other side, within companies and customers where you have established an initial deployment, it's looking at how you expand from one to many applications within the environment that's being deployed.

That actually is where the smart money will go: scaling from one to many in the places where they've established an initial footprint. And there are a bunch of reasons for that, tied to what it costs to develop, deploy, and harden these regulated environments, which is what we just talked about. >> Perfect. Kim, thanks again for the chat. I will add links so people can check you out. And thank you guys for watching. >> Thanks a lot for the time and the opportunity.