On this episode of the IoT For All Podcast, Eric Conn, CEO and Co-Founder of Leverege, joins Ryan Chacon to discuss the state of AI and IoT. They talk about IoT companies that “do” AI and AI hype, the role of edge AI and edge computing, how IoT enables enterprise AI, AI and IoT privacy and security, the benefits of AI for IoT and user experience, and the future of AI and IoT.

About Eric Conn

Eric Conn is a serial entrepreneur, technology executive, and software engineer with a passion for building innovative software products for enterprises and consumers. He is currently CEO and Co-Founder of Leverege, an enterprise software company focused on increasing business intelligence through sensor-based decisioning with IoT and AI.

Interested in connecting with Eric? Reach out on LinkedIn!

About Leverege

Leverege is on a mission to enable and accelerate the digital transformation of all organizations, amplifying their human potential by providing software products and managed services that help them successfully develop and deploy IoT-enabled enterprise intelligence solutions. Industry leaders around the globe trust Leverege to launch innovative IoT solutions to optimize operations, automate processes, increase revenue, and delight customers.

Key Questions and Topics from this Episode:

(02:39) Introduction to Eric and Leverege

(03:27) The state of AI and IoT

(06:57) IoT companies that “do” AI and AI hype

(11:30) The role of edge AI and edge computing

(14:25) How does IoT enable enterprise AI?

(17:24) AI and IoT privacy and security

(19:54) Benefits of AI for IoT and user experience

(33:36) The future of AI and IoT

(39:02) Learn more and follow up


Transcript:

– [Ryan] Hello everyone and welcome to episode 300 of the IoT For All Podcast. I'm Ryan Chacon, and on today's episode, we're going to talk about everything going on in this space as it relates to AI and IoT, how they're coming together, and how enterprise IoT is a big enabler of enterprise AI. Fantastic conversation planned, and with me on this big episode is Eric Conn, the CEO and co-founder of Leverege.

They are a company that is focused on enabling and accelerating the digital transformation of all organizations by providing software products and managed services that help them successfully develop and deploy IoT-enabled enterprise intelligence solutions. Eric's obviously been on the show before, as many of you regular listeners know, so this will be a fantastic episode to cap off that big number 300.

If you’re watching this on YouTube, please like this video, subscribe to the channel if you have not done so, and hit that bell icon so you get the latest episodes as soon as they are out. If you are listening to this on a podcast directory, subscribe to our channel. Other than that, one big announcement before we get into the episode: we are actually in the process of launching a new podcast called AI For All, focused solely on AI topics, particularly enterprise AI topics.

We also have a newsletter coming out, and we’re in the process of building the web experience just like we have for IoT For All but focused on AI. There’ll still naturally be some crossover from IoT For All into the AI space. But we wanted to build another focused destination for those companies in enterprise AI to have a voice and have an opportunity to showcase their expertise and thought leadership to the world, to help the world better understand AI as we’re already doing with IoT.

So without further ado, please enjoy this episode of the IoT For All Podcast. 

Welcome Eric to the IoT For All Podcast. Thanks for being here this week.

– [Eric] Hey Ryan, how are you? I’m doing great.

– [Ryan] Yeah, it’s good to have you. This is kind of like our tradition when we hit a big milestone episode, episode a hundred, 200, now 300. Get you to come on and talk about things.

– [Eric] It’s unbelievable. Kudos to you for sitting through and hosting 300 episodes. That’s quite an achievement.

– [Ryan] It’s an interesting thing to think about, not just- it’s really cool to meet all the people and talk to different people all across the world and in the industry, but just thinking about looking back and being like, wow, you really sat through 300 conversations over the last number of years. Doesn’t feel like that, but it- I think that’s a good thing.

– [Eric] Yeah. Yeah, it adds up. It’s just, any long journey takes one step at a time. So you’ve been doing a lot of steps.

– [Ryan] Absolutely. So for today’s conversation, I want to have you do an intro here in a second for our audience who may not be familiar, but our conversation today is really gonna be focused on how things in IoT have evolved with AI coming into the fold. But before we do that, would you mind just giving a quick introduction about yourself and Leverege for the audience who may be new to this?

– [Eric] Sure. My name’s Eric Conn. I’m a co-founder and CEO of Leverege. Leverege is a software company and we focus on enabling enterprise intelligence through the use of IoT and AI. We’ve been doing AI for a while, but it seems like everybody in the world is now doing AI, and AI and IoT, as we’ll get into the discussion here shortly, are very intimately connected because it’s ultimately about data, and so we’ve been doing AI types of things for our customers for quite a while but definitely the interest has ramped up over the last year or so.

– [Ryan] So that’s a perfect segue into our conversation, and I think maybe for our audience’s benefit, if you could high level it for them as to what’s really happening in the IoT industry when we talk about AI being a part of IoT or IoT companies saying, Hey, I’m now doing AI. What does that really mean? What is happening now that maybe wasn’t happening before?

And where are we as an industry?

– [Eric] Yeah, I view the whole AI trend as having kind of two levels to it. So at some level, every company in the world is gonna be doing AI or has been using AI, they just didn’t know it, through tools that they might be using or other processes, because AI ultimately at its base is probability and statistics, it’s mathematics.

It’s at a huge scale with huge amounts of data, so it can provide more fine-grained predictions. So IoT companies, I think just like any company, could be using AI such as generative AI products like LLMs, ChatGPT, all these things to help with workflow, to build new dashboard interfaces, to make it more interactive as opposed to point and click.

So there’s that element, and that’s more on the user interaction level, making- increasing usability, things like that. So that would be using these generic tools that everyone is using but to actually improve the IoT experience for their end users. The more interesting way that AI is being applied to IoT is really around the data itself.

So IoT at its core is essentially using sensors to generate data that’s never been captured before. So unlike the ChatGPTs and a lot of the generative AI products that everyone is used to using now, which use data that’s basically been put onto the internet over the last 10 or 15 years and crunch it all down to predict the next letter, the next word, the next thing in a sequence, the next pixel in an image, with IoT, this data has never been mined before because sensors are now capturing things that humans hadn’t captured on a regular, consistent basis. So IoT and AI are extremely closely coupled together, and AI has a very big impact on IoT when you hit scale of data. So we used to use the term, five, seven years ago, big data.

So what AI is doing now is taking that big data that IoT could be collecting, and it’s organizing it and adding insights to it because what everyone learned during the big data hype was, okay, great, we have all this data, but I can’t make sense of it. I can’t find the needle in the haystack. I can’t do predictive things with it, so it’s great I can go back and look at it, but I can’t make decisions in real time with it. AI now applied to IoT at scale has enough data specific to a particular problem or something you’re trying to do, especially for an enterprise, that it actually can provide these predictive capabilities and help them make better business decisions for increased efficiency, customer experience, increased revenue opportunities, all those different things.

But IoT forms the data pipeline for AI, and you can take a lot of the same general-purpose AI algorithms that are used elsewhere and apply them specifically to IoT data.

– [Ryan] Fantastic. So I know with a lot of the companies out there, especially over the last, I don’t know, eight, 10 months or so with the explosion of ChatGPT, and just AI in general, you’re seeing a lot of companies adjust their- the way they describe themselves and their language and their marketing that they put out there to talk about being an AI company or doing AI now. How do- for our audience that’s listening to this, how should they be viewing that?

It seems like some companies really are adding new capabilities that are AI capabilities to bring their product to the next level, while others are just throwing it in there, so they’re not left out in that discussion, or they at least check the box to say, hey, I also do AI.

How should the audience be looking into that when they’re maybe working with a vendor or looking into different vendors and seeing this AI brought up all the time in an IoT company?

– [Eric] Yeah, there’s a lot going on there. Certainly a lot of companies that are trying to raise funding or trying to go public or anything want to latch themselves onto the AI hype. Now there is a lot of hype around AI. There’s a lot of misunderstanding of what’s possible practically with AI, but it is moving really, really quickly.

So things that a lot of people didn’t think could happen, including the people working on it, are starting to happen because they’re hitting scale that they couldn’t simulate before, and now all of a sudden the algorithm prediction capabilities are getting incredibly good. So for a customer looking at this, there’s another play. In addition to any company latching itself onto AI for marketing or financial reasons, you see it in every tool we use; internally, we use tons of tools, and they’re all announcing AI. Coda AI, Figma AI. Everyone has AI, right? In that case, they’re just playing up to the marketing. It’s probably just a feature that they’ve added, but they can say that it’s AI because ultimately they’re doing some sort of predictive thing, or things that consumers are used to calling AI. But there are also a lot of things that have been done, not forever, but for many, many years, that now all of a sudden are getting tagged as AI more actively from a marketing standpoint. For instance, I just read, and our audience may have heard, that Paul McCartney was able to extract John Lennon’s voice using AI to essentially create a lost track, right?

Before, they didn’t have it isolated, and they used AI to isolate just his voice so they could then record all the instrumentation underneath it new, but it would still be the original Lennon voice. There’s a product, and I actually, as a musician, I’ve used it, called LALAL, L A L A L dot AI, that has been doing this for several years, and it’s essentially just frequency analysis, where they can take the full audio spectrum and isolate certain types of sounds in certain frequencies, such as voices or guitars or drums.

And you can now separate that out from a fully mixed album. So that is- that’s like an application of AI, but it’s really frequency analysis and there’s probably algorithms similar to AI built into that tool, but it’s not like they just invented this. This has been going on for a while. And so everything’s an evolution of what’s been done.

The AI piece is getting layered on and it’s really confusing the market too, because if everyone does AI, then how does a buyer know what matters, right? If everyone’s doing AI, so how do I know? So I think for a buyer that really is looking at sensor enabled decisioning, they have to look at what is the AI actually being used for in that product or service?

And is it gonna help me do a better job with this new sensor data, as opposed to just adding a new term to the marketing for the same exact product they’ve always been selling? So it is difficult for customers, and this happens in every bubble. Anything with a .com in the early 2000s was worth a billion dollars.

And then people realized, okay, maybe not. So as we get through the hype cycle, customers will start to get smarter about asking what specifically you are doing with AI and what value is created.

– [Ryan] Yeah. And speaking of a point you made there about layering AI capabilities onto things that are already being done, we’ve seen that a lot with edge computing and what’s happening in IoT when it comes to moving a lot of the processing power to the edge for a variety of different reasons: the ability to analyze the data faster, make decisions faster, and cut down on the computing power and cost of potentially having to send things to the cloud, and so forth.

But now you’re starting to hear edge AI come into the conversation, so what have you seen with edge AI, and how is that really different from edge computing? Or is it the same thing, just a new name because AI has become so popular?

– [Eric] Yeah. Edge computing is not new. We go through this cycle in the technology field where first everything gets centralized in a cloud, then it gets pushed to the edge, and then it goes back. So ultimately the decision of where you place compute power and decisioning depends on the bandwidth between the remote device and the local devices, and how much it costs to move that data.

Does it make sense to do it? So it’s a financial and architectural decision as to whether edge, period, makes sense. So edge AI, which we actually do, is very valuable, especially in things like computer vision or types of use cases where you need real time decisioning locally. You don’t want the latency delay of sending a signal up to the cloud, having it do something, and sending something back; that whole path can be interrupted. If you do everything locally, you’re much more bulletproof from a decisioning standpoint and much quicker. So for computer vision, edge AI from our standpoint is a really, really good solution. What you do is build your models on the data, either real data or simulated data: collect it, analyze it, fine tune your models, and then push those models in a more compact form down to the edge. As the sensors, predominantly vision cameras, are feeding data in, those models work locally on that data, make decisions, and just send events back, as opposed to streaming the entire 4K feed from 25 cameras, which would require an incredible amount of bandwidth to even get that data out of a store or out of a manufacturing plant. That’s more of a security use case, and even security and surveillance use cases store most of the video locally and only send things of interest to the cloud so that everyone else can see it. So edge AI is real, edge computing is real, these are real things, but they really come down to technical decisions of where you place the processing power. For Leverege, we are doing a lot of edge AI now, especially on computer vision, because it makes a lot of sense to do it that way.
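
To make that flow concrete, here is a minimal sketch of the event-only pattern described above: a compact model (stubbed out) runs against frames on the edge device, and only small JSON events, never the raw video, get sent upward. The model loader, event names, and publish function are hypothetical placeholders for illustration, not Leverege's actual implementation.

```python
import json
import time
import numpy as np

EVENT_CLASSES = {"vehicle_entered", "vehicle_exited"}  # hypothetical events of interest


def load_compact_model(path):
    """Placeholder for loading a compact model trained in the cloud and pushed to the edge."""
    def predict(frame):
        # Stand-in inference: a real deployment would run a vision model here.
        return [{"label": "vehicle_entered", "confidence": 0.91}] if frame.mean() > 127 else []
    return predict


def publish_event(event):
    """Placeholder for a small MQTT/HTTPS upload; only tiny JSON events ever leave the site."""
    print("EVENT ->", json.dumps(event))


def run_edge_loop(predict, frames):
    for frame in frames:
        for det in predict(frame):
            if det["label"] in EVENT_CLASSES and det["confidence"] > 0.8:
                publish_event({"ts": time.time(), **det})  # the raw frame is never uploaded


if __name__ == "__main__":
    # Stand-in for a camera feed: random frames instead of real video.
    frames = (np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8) for _ in range(10))
    run_edge_loop(load_compact_model("model.tflite"), frames)
```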

– [Ryan] I wanna tie back to something you mentioned earlier about, generally speaking, how AI and IoT are now working together. If we look at this on the enterprise side of AI and this new industry that’s really starting to take off now, can you just touch a little bit further on why IoT is so important to the value and growth of enterprise AI in general? We talked about obviously the ability to have access to new data, but aside from that, what are you really seeing as the importance that IoT is having on the enterprise AI space?

– [Eric] So there’s a couple things. We haven’t talked about it, and I’m sure we’ll get into sort of privacy and security and things like that. As most of the audience knows, if you’re using the free version of a lot of these AI tools, you’re essentially training it on whatever data you put in, and it gets reused for the next person that asks similar questions.

The cool thing about enterprise AI, and we’re a big partner of Google Cloud, for instance, is you can segment that data so that no one else can have access to it. So if you’re an enterprise, and you are collecting data through IoT or other means, and you’re taking all that data, analyzing it, and adding AI algorithms on top of it to make decisions, you can be assured that your data is not getting leaked out, because that data ultimately is a very critical business asset that you spent a lot of money on, and it’s very unique to your business and can give signals to competitors or others on how you do things, right? So that sort of privacy is really important when it comes to IoT data and enterprise AI.

So what gets me excited about enterprise AI is that I think IoT is a primary data feed that enables enterprise AI. For really specific, business-specific types of decisioning, you need automated types of data, large datasets that are unique to that business. You can intermix them with the general LLMs and other generative AI things to help interpret, but you want to keep them segmented.

You don’t want to directly mix the databases. So in our architecture, and this is what Google Cloud and I think a lot of the major cloud hyperscalers are doing, there’s an LLM model, kind of like ChatGPT, and it’s completely separate from any data that may come in from an enterprise.

And in that case, you can still run your AI on the enterprise data, but then use the LLM to help, when it makes sense, to create a better decision, or to take the decision that might be made and express it a different way to a human so that they can make a better decision quicker.
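
As a rough illustration of that separation, here is a small sketch with hypothetical data and a stubbed LLM call: the business decision is computed entirely against the enterprise's own readings, and only the derived result, never the raw records, is handed to a general-purpose model to phrase for a human.

```python
from statistics import mean

# Enterprise-side IoT readings stay in the enterprise's own store (here, just a list).
readings = [
    {"device": "cooler-7", "temp_c": 4.1},
    {"device": "cooler-7", "temp_c": 9.8},
    {"device": "cooler-7", "temp_c": 10.2},
]


def decide(readings, limit_c=8.0):
    """Business-specific decisioning runs on private data only."""
    avg = mean(r["temp_c"] for r in readings)
    return {"device": readings[0]["device"], "avg_temp_c": round(avg, 1), "over_limit": avg > limit_c}


def phrase_with_llm(decision):
    """Placeholder for a call to a general-purpose LLM that only ever sees the derived decision."""
    return (f"Heads up: {decision['device']} is averaging {decision['avg_temp_c']} C, "
            f"{'above' if decision['over_limit'] else 'within'} its limit.")


print(phrase_with_llm(decide(readings)))
```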

– [Ryan] Gotcha. Gotcha. So let’s go ahead and dive into that point you made regarding privacy and security. As we know, there are implications on both of those fronts when it comes to bringing AI and IoT together and individually. But how can IoT companies out there address those concerns when they’re thinking about bringing AI tools or different AI capabilities into their offerings and their companies in general?

– [Eric] Yeah. So you know, IoT and security have been a big topic for years, right? Anytime you take a device, no matter what type of software is running on it, and you put it out somewhere where people can physically access it, there’s no security around the device itself. So if it’s a tracker that’s put in a truck or an automobile, the driver can just take it and do something to it, right?

So they can plug things in, they can take it apart. And so there’s the physical aspect of IoT that hasn’t changed. Now on that device, there’s software, there’s firmware, there’s a bunch of things depending on the type of device, and there may be actually some AI types of algorithms running on the device locally or at the next level in a multi-level architecture at a gateway or an edge computer or something.

So just like any software, you need to protect that from any type of intrusion. So you have to set up your firewalls and your DMZs and whitelist IP addresses, and all of the things that we’ve been doing from a cybersecurity standpoint still completely apply. IoT layers in the physical aspect too, because when everything’s in a cloud, it’s in a cage, it’s locked up, and the physical security, who can even get access to the front panels of the computers that are running, is very, very tightly controlled, so it’s very physically secure in any type of cloud environment. IoT is less physically secure on the device side because that’s out in the wild. So layering AI on top of it, the thing we just talked about, basically data leakage, taking proprietary data that any customer or enterprise might have and getting that out into the wild so that everyone can then see it, that’s a real problem. So you have to make sure that that doesn’t occur from an information security standpoint. But the physical security stuff, the cybersecurity stuff, that hasn’t changed; it still needs to be in place to make sure the solution overall is as secure as possible.

– [Ryan] Good things to keep in mind. And I know on the Leverege side of things, with what you all offer to the market with your IoT stack, if other companies out there are thinking about bringing AI into their technology and their stack, how do you think AI will be able to, if at all, either augment or automate parts of an IoT stack?

Do you see that as capabilities that are available today, or potentially in the future as this continues to evolve? What are your thoughts on that?

– [Eric] Yeah, it’s pretty exciting. There are a lot of ways that AI and IoT can mesh together to create higher value, to move faster, to provide better decisions to end customers. As I mentioned earlier in the interview, one of the things could just be how you present the data itself to any end user.

In most circumstances, the average user in an enterprise is used to having some sort of dashboard. It’s a visual type of thing with graphs and charts and numbers, but they have to use their brain, and whatever training they have in the context of that data, to interpret them and understand what to do. They can set up alerts, and we have all of that built into our stack.

You can set up business-specific rules and alerts and set up thresholds and all that stuff. That’s all great, and you still need that. Those are almost becoming table stakes. But how you present that to an end user is really important, because you now have to train them on how to use the dashboard, right?

These things can get very complex. There are a lot of buttons to learn, a lot of different things to do to filter and search. If you layer an LLM on the front of that, now you can have a conversation with your user instead of having to train them: oh, click on this button, then type in this, and whatever.

They could actually have an LLM interface like ChatGPT where they could say, hey, have any devices in the last 24 hours exceeded 20 degrees Celsius for more than three minutes at night, right? So they could actually type it in as text, and the software, which is what we’re working on, will interpret all those different pieces using an LLM, break them into the various actionable things, and call APIs inside our stack to take the data they would’ve had to point and click around to find and coalesce it all together into one simple answer. So it’s like a choose your own journey from a user standpoint, where you’re putting it in their context instead of forcing them to learn a tool and the way we organize the data; you let them organize it in the way they want to ask the question. That, I think, is a really, really cool way to interact, not just for IoT, but for any complex dashboarding system or UI where you can minimize training by using LLMs. And then on the data itself, having it be set it and forget it, right?
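
One hypothetical sketch of how that prompt-to-API flow could be wired up: the LLM's only job is to turn the question into a structured query (stubbed out here), and an existing platform API, also a stand-in, does the actual retrieval. None of the names below refer to Leverege's real APIs.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class DeviceQuery:
    metric: str
    threshold: float
    min_duration_s: int
    since: datetime


def parse_question_with_llm(question: str) -> DeviceQuery:
    """Stub: a real implementation would ask an LLM to emit this structure as constrained JSON."""
    return DeviceQuery(metric="temperature_c", threshold=20.0,
                       min_duration_s=180, since=datetime.now() - timedelta(hours=24))


def query_devices(q: DeviceQuery):
    """Hypothetical stand-in for the platform API the structured query gets routed to."""
    # In a real stack this would filter time-series data; here we return a canned answer.
    return [{"device": "sensor-12", "exceeded_at": q.since + timedelta(hours=3)}]


question = "Have any devices exceeded 20 degrees Celsius for more than three minutes in the last 24 hours?"
print(query_devices(parse_question_with_llm(question)))
```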

So maybe you have an expert user that is trained on the entire solution and how it works. They set a whole bunch of different rules based on roles that need certain types of information, and no one ever has to log into anything. It just feeds them the data when they need it and only when they need it because everyone’s busy, everyone doesn’t want to spend time looking for stuff. They want to be able to set rules and say, when this happens, let me know about it.

– [Ryan] Do you think, with that point you made about the LLMs and being able to create that experience on the user interface side for the end user, this is going to help companies make adoption of these solutions easier for end users, as well as improve usability so that more people can use them more quickly and more efficiently within an organization?

Because to your point, it might limit the amount of training that needs to go in, or even change the way a solution is built for an intended end user. You may be able to make it a bit more open to more people or more types of roles within an organization, because these capabilities seem to enable more human-like interaction and let different people within the organization use the solution without having to tailor everything to one specific end user.

– [Eric] Yeah, I think it can greatly accelerate the proliferation of data within an organization, because you can now make it available in whatever way they want to try to get at it, however they want to ask questions. As opposed to having user manuals and trainings and other things where you’ve just updated your interfaces and everything has moved around, and it’s probably better, it’s more user-friendly, but now you have to train a whole new paradigm to existing users who know the path through the software to get to the answers they want. So if you take away that barrier, I think it frees the data to more people with less friction.

So it accelerates and enhances the value of whatever data and decisioning you are surfacing through IoT and AI, and makes it more accessible to more people more quickly. And that seems to be the way the world is going. The other thing that’s interesting is that there are general-purpose LLMs, like we’ve been talking about, that basically understand language, English, any language at this point, right? It’s natural language processing, and it responds in that way. But then there’s always colloquial language associated with the business. There are acronyms, there are things that are not in the general vocabulary of most people. If you’re in the tire industry or the manufacturing industry or a specific business, you have playbooks, terms you use, and if you have enough of that data, you can train a very business-specific LLM, in addition to the generic English LLM, to interpret things that you normally wouldn’t find in social media, things people wouldn’t be talking about in this certain way, certain expressions.

So it would seem very unusual for an LLM to just know how to do that, because this dataset is very segmented and specific, so there’s almost this layering of LLMs to be able to do it. And I see that also being important for IoT: not only are you making the data more accessible and actionable more quickly, but you’re bringing the data to their context, in their language, instead of a translation.

And people always have a problem. There’s always a learning and friction with I want this, but it’s presented this way. How do I make the two work? You would rather just say this is the way I think, and this is the way I want my data back at me. This is the question I ask. And so the prompt type of interface I think has a lot of promise.

You’re still gonna have to refine it, but internally we feel, and we’re working on it, that we should look at ways we can exploit what we’ve already built but now put a prompt interface on top, maybe even as an option at first, and then test it and refine it to see how our end users use it.

And then you still need to layer on user roles and permissions and all that; that all still has to be there. So all of that is already built, but now you just have a slightly different presentation layer.

– [Ryan] Yeah, I’m interested to see how it plays out. Anytime you’re working on increasing the accessibility of a tool or solution and making it easier for the end user to understand and adopt, it’s probably going to be a win for everyone involved. It’s not just the end users and the experience they have; there’s gonna be less resistance by them to adopt something new into their workflow if it’s easier for them to interact with and use without tons and tons of training.

And then also, if that occurs, it likely allows the true value of the solution, which the management team probably brought in and spent money and resources on, to be realized more quickly and probably at a higher level, because the users are having a better experience with it. It’s easier for them to understand and adopt into their workflow, and hopefully everyone’s happy and you’re able to get that ROI back more quickly to justify the spend and the time that you put into it.

– [Eric] Yeah, it’s similar. For those that have been experimenting with ChatGPT over the last year, the underlying models keep getting better, they get more data, more tokens, everything just keeps getting better. But the interface is fundamentally the same. It’s a chat interface, right?

So by decoupling the presentation layer, which for a lot of generative AI is more interactive and chat-based, you could completely change out the bottom of it, how it actually works, but keep the same exact interface. So when an end user is trained on how to do prompts and how to chat, you can change everything underneath and just make it better, but it’s the same exact experience. It’s not like you’re moving things around on a dashboard and now they have to rediscover where things are placed, or we changed this feature, it now works this way, and you have to get trained on it again.

You see a lot of backlash, and this has been going on for as long as the tech industry has been around, especially the software industry. The classic example is Microsoft products, right? Everyone in the world learned how to use Word and PowerPoint and Excel, grew up on them in the 90s.

And then Windows started changing a lot of things because they did all these studies and they said, hey, if we move things around, this is gonna make it easier, it’s less clicks, whatever. People hated it or didn’t use it because they were like, I’m used to the way I already do it. I don’t need a new way to do it.

I’m fine with the way I am. So there’s always a resistance once someone has learned how something works to now change, even if it’s better. It’s just a change. It’s a friction. You don’t want to have to now relearn something. So similarly, I think AI will take away that kind of friction because you can have- you can bring the data to the people instead of training the people on how they can interpret the data.

If you remove that curtain that sort of exists now, the value creation and distribution is accelerated tremendously, to anyone. Anyone that speaks the language and understands the business can start asking questions, as opposed to, oh yeah, where’s the manual on how to use this? And especially the higher you get up in an organization, executives are very busy, they have lots of things that they’re doing every day, and so they don’t have time to learn all of this, so they rely on other people to learn it.

Now they can interact with it in the way that they want to, as opposed to the way the developer of the product instituted it and thought was the best way.

– [Ryan] Super exciting to think about. Just the way you explained it, you don’t have to be technical to understand the value of what this can potentially do for solution adoption and for end users’ experiences with IoT solutions, because I know from discussions that I’ve had on the podcast before, a lot of what causes these pilots to potentially not get through to scale is the friction caused by bringing in something that requires the end user to really change the way they do things and how they work with their day-to-day tasks. And there’s some level of resistance; even though their management is saying you need to use this, they still just don’t find the value or want to take the time, because they’ve been doing their thing their way for so long and they’re happy with that.

So the more you can lower that, the better I think everyone’s gonna be.

– [Eric] Yeah. And that’s just a human tension that we all have. There’s a general resistance at some level, or at least that’s why we call it a learning curve, right? So anytime you have to do something else, there’s more thought process and memorization. You have to use your brain a lot more to do anything new.

And at some point that becomes taxing, especially when you’re really busy and you don’t have enough time; in the world we move in, every second matters. You just don’t want to devote a half an hour to learning how the new release organized things. It’s like, I have a half an hour, I have meetings in that half hour.

I have to write software, I have to get a build out, whatever. Everything’s accelerated now, so you want to meet the user where they are as opposed to having them come figure out how you decided that it should work. Even if you’ve done all the great user experience testing and everything, it still doesn’t mean that particular user was part of that group.

So they’re still gonna have to learn it cold. And maybe they only use it twice. And so why would they want to put in all the effort to learn how to use something when they’re only gonna use it two times? They would rather just be able to do it the way they’re used to interacting with the world based on their job description.

– [Ryan] Totally agree. So we’ve talked about a lot of very exciting things. Using LLMs for the end user’s benefit and being able to create this conversational chat experience is super fascinating to think about. As we wrap up here, I wanted to ask you, where do you really see AIoT, that marriage of AI and IoT, going in the future?

I know we’ve talked a lot about it, things now and that we’re excited about happening, but just anything to kind of round the conversation out that you’re really excited about, that you think is going to be possible, may not be possible today, but people should be on the lookout for and really start paying attention to.

– [Eric] Yeah, probably the area I’m most excited about, and it’s a very large niche of IoT and AI, is computer vision. The reason we’re super excited about that, and really doubling down on any use cases you can solve with computer vision, is that if you just look at humans, the highest bandwidth sensor we have is our eyes.

We can interpret so much data through our eyes. We are visual creatures, right? So a camera serves as your eyes. You can have them moving, you can have them fixed, they can zoom in and out. They can do all the things a human eye can. Now you layer on AI to interpret what it’s seeing, which our brain is already doing.

We can interpret which things are in front of others, what color things are; these are all things we just naturally do without even consciously thinking about it. These models that we and others are building can take the data you train them on: hey, when someone walks like this or walks through here, time how long that path took, look at how their body may be moving to indicate that they’re doing some sort of activity, and create an event based on that.

Or, I saw a car enter a garage versus exit it. Get me the timestamps automatically every time a car or a lift or a patient is moved into a room. Anything you could see as a human if you were standing there, you can now do with computer vision, as long as you put the cameras in the right spot to be able to see it.

So we’re super excited about using cameras as sensors for dozens of use cases. We already have quite a few customers that we’re working with using computer vision as a sensor, because, at Leverege, we’ve always been sensor and connectivity agnostic. So we don’t make sensors, we don’t sell connectivity.

We make the software that integrates it all together into a solution. Whether it’s a GPS sensor or a Bluetooth tag or whatever, we work with those things. But cameras are really interesting, and you couple that with edge AI, which we talked about earlier, where you can build the models, test them, and train them in the cloud, deploy them down to the edge, have those models detect all these different types of events very, very consistently, and then just send those events up to the cloud, which minimizes the bandwidth. Those events are really important to people and businesses, and you can make events out of pretty much anything if you collect enough data.

There’s always a training period where you install the sensors, in this case cameras, you look at what’s going on, you get thousands of samples of data, and then you do some sort of human feedback loop where you’re tagging and labeling the things you know are there, and there are lots of tools that’ll help you do that in an even more automated fashion. Back in the old days, you actually had to go image by image and say yes, no, and describe it with metadata. Now there are tools that automate that process, so you can do much larger bulk imports and speed up your model development. Fundamentally, we’re very, very excited about computer vision with IoT. Sensors are cheaper, and you can do Power over Ethernet, so you don’t have to worry about batteries, which are always a problem with IoT.
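
A small sketch of the kind of semi-automated labeling loop described above, with made-up confidence numbers and a stubbed pre-labeling model: the model proposes labels in bulk, and only low-confidence frames get queued for a human to confirm, instead of reviewing every image by hand.

```python
# Semi-automated labeling sketch: auto-accept confident predictions,
# queue uncertain ones for human review. All names and values are illustrative.

def prelabel(frame_id):
    """Stub for a model that proposes a label and a confidence score."""
    proposals = {
        "frame_001": ("vehicle", 0.97),
        "frame_002": ("vehicle", 0.55),
        "frame_003": ("person", 0.91),
        "frame_004": ("empty", 0.40),
    }
    return proposals[frame_id]


accepted, review_queue = {}, []
for frame_id in ["frame_001", "frame_002", "frame_003", "frame_004"]:
    label, confidence = prelabel(frame_id)
    if confidence >= 0.9:
        accepted[frame_id] = label          # goes straight into the training set
    else:
        review_queue.append((frame_id, label, confidence))  # human confirms or corrects

print("auto-labeled:", accepted)
print("needs human review:", review_queue)
```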

So that’s a really big area. And then the other area is just things that we’ve been doing all along on any type of data. When you get enough data at scale, you can start doing predictive types of things with it. That would be something like predictive maintenance. If you get enough data on how a machine operates, how it oscillates, vibrates, whatever it might be, or its temperature, you can set up characteristic profiles: if it starts doing this, the most likely case is it’s gonna break down. So let’s detect that early, before it breaks down, so we can fix it and not have it break down. This is really important in manufacturing and mining and other areas where, if a machine goes down, it can cost millions of dollars a day to get back up and running.

So for any type of critical machinery, predictive maintenance is really, really important, and you can do that with IoT data.
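
As a toy illustration of that characteristic-profile idea, the sketch below learns a baseline from synthetic healthy vibration readings and flags readings that drift several standard deviations away, well before an outright failure. Real deployments use richer features and models; the data, names, and thresholds here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline vibration amplitude while the machine is healthy (synthetic data).
healthy = rng.normal(loc=1.0, scale=0.05, size=500)
baseline_mean, baseline_std = healthy.mean(), healthy.std()


def maintenance_alert(reading, k=4.0):
    """Flag a reading whose z-score against the healthy profile exceeds k."""
    z = abs(reading - baseline_mean) / baseline_std
    return z > k


# A slowly worsening bearing: amplitude creeps upward over time.
for hour, reading in enumerate(np.linspace(1.0, 1.6, 12)):
    if maintenance_alert(reading):
        print(f"hour {hour}: amplitude {reading:.2f} looks abnormal, schedule maintenance")
```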

– [Ryan] Lots to be excited about and keep our eye on and really pay attention to over the coming months to years even. But Eric, as always, it is a true pleasure to have you come on to a special episode like this and talk about what’s really going on in the industry. AI and IoT is super exciting.

Lots going on that we are excited to really keep our finger on the pulse of, and having experts like you come on and share what you’re seeing happen, as you’re on the front lines every day doing this, building these solutions and technologies, is for the benefit of our listeners and the companies out there that are adopting IoT solutions. For our audience who wants to learn more about Leverege, potentially follow up on this discussion, or reach out in any way, what’s the best way they can do that?

– [Eric] Well, they can always send me a personal email; we’re not too big for that. It’s just Eric with a C at Leverege, all Es, l e v e r e g e dot com. Our website also has all the contact information you would need. But yeah, if there are any customers or listeners that want to continue this discussion, find out what we’re doing, or just get our thoughts on where the industry is moving, we’re happy to engage with them. We’re super excited about it, not only from a business standpoint, but for what it can do to amplify human potential, which is our mission at Leverege.

It’s been our mission for many years now: how can we automate things, take some of the drudgery out of what humans do day to day using technology, and elevate their jobs to something more meaningful, something they’d be more passionate about, and safer, right? If you can start doing things without putting humans in unsafe conditions, that’s always a good thing, that’s a plus too.

– [Ryan] Absolutely. Couldn’t agree more. But yeah, thank you so much again, Eric, for taking the time. Excited to get this episode 300 out to our listeners, and hopefully we’ll have you back on between episodes 300 and 400, assuming we make it there, to keep updating our audience on what’s happening in this space, because obviously IoT’s been exciting for many years, but with AI really coming in and playing such a big role now, it’s getting even more exciting to pay attention to.

– [Eric] Yep. Yeah, everything is coalescing together. 5G, AI, VR, AR, like all the acronyms are just continuing, the pyramid keeps getting built bigger and bigger, and there’s bigger impacts going faster every day. It’s an exciting time to be in tech, and it can be a scary time as well.

So being responsible with automation, with AI, this is really, really important. And we’re certainly keeping an eye on that as we do these things and trying to make humans lives better and not worse.

– [Ryan] Absolutely. Couldn’t agree more. Thanks again, Eric. I really appreciate it. And yeah, we’ll talk again soon.

– [Eric] All right, thanks Ryan.

Hosted By
IoT For All
IoT For All is creating resources to enable companies of all sizes to leverage IoT. From technical deep-dives, to IoT ecosystem overviews, to evergreen resources, IoT For All is the best place to keep up with what's going on in IoT.