Adam Scraba, Director of Product Marketing at NVIDIA, joins Ryan Chacon on the IoT For All Podcast to discuss AI in IoT, computer vision, and simulation. They talk about the growth of IoT, vision AI and digital twins, how AI and IoT are creating value, the challenges of IoT adoption, the importance of domain knowledge for success, and cameras as IoT sensors.
Episode 311’s Sponsor: KORE Wireless
Twilio Super SIM is now KORE Super SIM! On June 1st, KORE Wireless acquired Twilio IoT, and with it the simplest solution for connecting your hardware reliably around the world.
Super SIM is a single SIM that gives access to over 400 networks, including the top 3 tier one carriers in the US. Automatic network failover means maximum uptime for your devices.
And getting started is easy – just visit korewireless.com/supersim-iotforall
About Adam Scraba
Adam Scraba is Director of Product Marketing at NVIDIA, where he drives worldwide evangelism and marketing for the company's accelerated computing platform, applying artificial intelligence and deep learning to video analysis to solve critical problems across a range of industries.
Prior to this, he was responsible for leading NVIDIA’s business development and strategic alliances applying artificial intelligence and deep learning to video analysis for smart city initiatives worldwide. Throughout his career, he has worked with Fortune 500 companies, startups, and governments.
Interested in connecting with Adam? Reach out on LinkedIn!
NVIDIA is the pioneer of GPU-accelerated computing. The company’s invention of the GPU in 1999 redefined computer graphics and gaming, ignited the era of modern AI, and is fueling the creation of the industrial Metaverse – with the GPU acting as the brains of robots, autonomous machines, and self-driving vehicles that can perceive and understand the world around them.
Key Questions and Topics from this Episode:
(08:02) Challenges of IoT adoption
(12:54) Digital twins and simulation
(17:00) Cameras as IoT sensors
(20:12) Learn more and follow up
– [Ryan] Welcome Adam to the IoT For All Podcast. Thanks for being here this week.
– [Adam] Thanks for having me.
– [Ryan] Before we get into it, I’d love it if you could just give a quick introduction about yourself and the company to our audience.
– [Adam] I’m Adam Scraba. I lead marketing for an applied AI effort within NVIDIA that focuses on applying AI to infrastructure automation.
We leverage IoT heavily. We work on things like smart retail, smart hospitals, manufacturing, smart spaces like airports and reducing traffic congestion in our city streets, all using sensors and IoT. And so I’ve been with the company for quite a while and involved in this effort from the beginning.
So it’s been pretty exciting. I do a lot of evangelism, and I work with a really large and growing and quickly evolving ecosystem of partners.
– [Ryan] We have obviously seen IoT grow a ton over the last number of years across different industries. The cost of adopting and deploying solutions is coming down.
Solutions are being proven out and scaling better than they ever have before. So with all that growth, with all these sensors being deployed, what do you see happening now, or what do you see happening next, I should say? What are the main things we should be paying attention to with that growth?
– [Adam] It’s so interesting. In our space, one of the biggest sensors or IoT devices that we engage with is the camera, the network camera. There are estimates, and I believe them strongly, that there are probably about two billion cameras deployed worldwide.
That arguably makes the camera one of the most important and most valuable IoT devices we have. There are so many questions you can answer with cameras. First off, like you said, the costs are coming down in a big way, and making sense of all that video represents a really important AI application area for us.
As I mentioned in the intro, we focus a lot on really important problems, and with the widespread nature of these sensors, for the first time we can really tackle them. As an example, traffic fatalities are among the leading causes of death in the US, and because of this data, for the first time we can actually approach them like a disease as opposed to an inevitability. That's really important, and it's just one example. There's a really interesting effort around bringing those fatalities to zero, and for the first time we can, thanks to IoT.
– [Ryan] So let me ask you about video. It's definitely a popular area now, that next level of sensing through cameras, and there are a lot of terms out there: vision AI, computer vision, automated optical inspection. Can you define at a high level what people should be thinking about when they hear those terms?
– [Adam] Yeah, I think the easiest way to think about a lot of this stuff is a very simple analogy, and hopefully it will make sense: think of it as an automation effort. What I mean by that is, think about a robot. Not necessarily a robot from Star Wars that moves around making beeping sounds, but anything with some level of autonomy. You can also think about an autonomous vehicle. Both are robots.
A robot really does three things. It perceives the world around it. It does some reasoning, like, I'm about to run into a wall, or there's a car in front of me and I need to apply the brakes. And then there's action, some physical action: braking, movement, whatever that might be.
Perception, reasoning, and action. What we're doing in a lot of different industries, and what our team focuses on, is turning infrastructure into a robot. That vision AI, that perception, perceiving the world around you using cameras, is where the effort has gone since deep learning and AI really exploded, say a decade ago. We've spent the last number of years perfecting the idea of giving machines superhuman vision through perception. So that's probably the easiest way to think about it: turning infrastructure into a robot, whether it's an airport, a hospital room, an intersection on a city street, or frictionless shopping. Our retail stores are increasingly going to be, effectively, robots that just don't move.
That's really what we're doing. So I'd say the best way to think about all these sensors and that AI is that they're just the perception level. That's an important part, one third of it, but the really interesting stuff is when you can say not just what's happening now, but what's about to happen next, and how you can improve upon it. How can I save a life? How can I give a shopper a better, truly delightful experience as they go and buy their groceries? That's what I think we're really trying to get to.
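Adam's perceive-reason-act framing can be sketched as a minimal control loop. Everything below is a hypothetical illustration, not NVIDIA code: the Detection type, the 5-meter braking threshold, and the stubbed perception stage are all invented for the sketch.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical detection type: what the perception stage emits.
@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "vehicle"
    distance_m: float  # estimated distance from the sensor

def perceive(frame) -> List[Detection]:
    """Perception: interpret one camera frame.
    Stubbed here; in practice this would be a trained vision model."""
    return frame  # assume the frame is pre-labeled for this sketch

def reason(detections: List[Detection]) -> str:
    """Reasoning: turn perceptions into a decision."""
    if any(d.label == "pedestrian" and d.distance_m < 5.0 for d in detections):
        return "brake"
    return "continue"

def act(decision: str) -> str:
    """Action: command an actuator (here, just report the command)."""
    return f"actuator: {decision}"

# One tick of the loop: a pedestrian 3 m ahead should trigger braking.
frame = [Detection("pedestrian", 3.0), Detection("vehicle", 40.0)]
print(act(reason(perceive(frame))))  # actuator: brake
```

The same three-stage shape applies whether the "robot" is a vehicle or, as Adam puts it, an airport that doesn't move; only the sensors, rules, and actuators change.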
– [Ryan] So how are these technologies helping get to that point, right? Like how is deploying sensors, putting these cameras and these solutions, these AI tools, IoT tools in retail, in cities, how are these things actually creating value?
– [Adam] There's so much inefficiency. The lens that I see the world through is very much these physical processes, and we could just go one by one. If you think about manufacturing, there is a significant amount of manual labor, or rather processes, that are very inefficient.
There's inspection that is very rudimentary. Gillette razor blades coming off the line, or PepsiCo products, could be inspected for defects much further upstream in the process to save a significant amount of money, all through vision AI. Retailers have an incredible amount of waste; it's a staggering amount, trillions of dollars wasted in retail. Agriculture: we can produce food better, and for the first time robotic pollination is starting to become a thing to make food production more efficient.
But what's really interesting is that there's an efficiency component and there's also a safety component, and those two often go hand in hand, because these are all physical processes. Workplace safety is a big one. As you increase automation in production facilities, you now have machines and humans coexisting.
That's an area we can make a lot safer simply by giving our infrastructure more sense, more perception, and more ability to improve its processes.
– [Ryan] When it comes to adoption, whether a company adopts it to give its customers a better experience or to use internally within the organization, there are always challenges in deploying and adopting IoT solutions, right?
It's often new. It has to integrate with potentially legacy systems. It might create new business challenges for organizations. When you think about companies adopting IoT, whether for themselves, for their end customer, or as something they will sell to a customer, where do you see the biggest challenges lie outside of the technical piece?
Because we've talked about the technical piece a lot before, and it can depend on the environment and the current infrastructure already in place within an organization. But if you take that out, what do you see as the biggest challenges in bringing IoT into a business, yours or your customers'?
– [Adam] There's one interesting trend that I think hits upon what you're saying, and it's interesting because it does slightly overlap with the technical side, but hopefully I can explain. In the nine years we've been at this, we saw in the early days, as you said, that all of this technology was very new. What you had was technology people, in our case a lot of computer vision people, dictating or creating solutions that they thought were appropriate for a particular vertical, whether it was retail, manufacturing, or smart cities. Over those nine years, the maturity of these tools and of AI has increased so much, and the accessibility of creating these tools has had a really interesting effect: today it's no longer grizzled 30-year veterans of computer vision trying to solve a retail or smart city traffic problem. We now have tools such that industry experts, people within the retailer or the manufacturer who really understand their vertical, can leverage IoT and AI for the first time, because the abstraction of these tools lets people access the magic of AI without needing to be an AI person.
They don't need to be a data scientist. They don't really need to know much at all; the tools are great. That explosion of tool maturity has had a profound effect on the value of applications. It's no longer a solution chasing a problem.
We're now able to find a burning problem and solve it much more easily. For example, even this year we've seen cities, for the first time, creating their own AI solutions for solving traffic problems. Raleigh, North Carolina is one really great example that we've worked with for a while.
We used to work with them from the point of view of, here's an ecosystem of app partners that can help you. They're now building their own solutions using AI. For the first time, we have cities doing this. If you had told me that even six or seven months ago, I would probably have laughed at you, but that's the kind of thing we're seeing, and I think it's going to change everything in a lot of these industries.
– [Ryan] One of the things I've seen really contribute to deployments being successful is having a very clear understanding of the domain knowledge and expertise for where a solution will be deployed: understanding the end customer, the environment, the business, et cetera.
And yes, a company that builds these solutions can learn that. But the closer you can bring the people who are doing this day to day into the process, the higher the chance of building something that's going to be successful.
I've noticed a lot of companies focusing on more vertical-specific tools and applications, while also making it possible for those in the industry to use the tools themselves rather than always having to work with another company to develop, which can lead to things being lost in translation when building exactly what the end user needs.
So I've seen that play a big role in the growth and success of a lot of different deployments.
– [Adam] Yeah, 100%. And I think that's what's so interesting about being in a business like this and watching it all happen. As we sometimes say, this is not a little bit cheaper or a little bit better.
This is brand new stuff, and it takes a very different kind of genetic makeup, an experience and openness to go and try things. And so the early adopters are doing magical work with us.
– [Ryan] I had a guest on a little while ago, and we were talking about simulation in IoT.
And when I first joined the IoT space about seven years ago, simulation was a big topic. It was the ability to deploy without deploying and without the initial investment, without the hardware, without all the technical pieces, to figure out and showcase ROI prior to that investment being needed.
And then digital twins became more popular; that became a big thing. And now I'm starting to see the combination of digital twins and simulation, physical twins in a sense too. So there's a big relationship between success and the ability to utilize simulation and digital twins to build the best fit possible.
How are you seeing the growth of those areas contribute to wider adoption and success in IoT, even now bringing AI tools into that process as well?
– [Adam] Yeah, it's pretty incredible, and I think it does speak a little bit to the accessibility of some of these tools. Simulation and digital twins, like you mentioned, have been talked about for so long, but we're really seeing an increase now. What's also interesting is that NVIDIA has this very enviable and delightful position of having been in the world of simulation from the very beginning. A lot of people think, well, NVIDIA, you started with gaming, and gaming really is a simulation of a 3D world. We simulate physics, we simulate lighting, we simulate all these things. So we've always had one foot in the simulation world. Now we can take a lot of the technology that was built for gaming, rendering, and physics simulation and apply it to simulating, of course, autonomous vehicles. How are you possibly going to drive millions of miles in a vehicle without ever building the vehicle and adding AI to it? You do it through simulation, and we're seeing that across everything, particularly now with IoT, where we can simulate environments. We're simulating 5G:
where do the 5G towers need to be in a city? We simulate all of that in digital twins and then roll it out. In our space, we simulate cameras: where should cameras be placed on city streets to capture the interaction of traffic and cyclists and increase safety?
A lot of the work we do now also bridges the digital twin to the physical operations. You design in the simulation space, you design to operate, and when you operate, a lot of the AI we do, the perception with sensors and cameras, lets us map the scenario you designed to what's actually happening in the real world. The other really cool thing we're seeing is that simulation is not just letting us build a digital twin of a city street or a manufacturing facility before it's built to see what it will look like; simulation is now becoming a very important part of AI itself. For the first time, we can use simulation to help develop really complex AI solutions. Take a matrix of sensors in an environment: we can simulate what's happening, generate artificial ground truth, simulate what every sensor is seeing, and use all that information to train our neural networks to do something like tracking boxes in a supply chain across thousands of square feet and hundreds of sensors.
We can do that only in the digital twin space. Some of the really complex and amazing solutions we're rolling out now had their beginnings in digital twins. That's the only way you can do some of this stuff. So it's very exciting.
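The synthetic ground truth idea Adam describes, simulate a scene and get perfectly labeled sensor data for free, can be shown in miniature. This is a toy sketch, not NVIDIA's simulation stack: the one-dimensional corridor, the camera coverage spans, and the noise level are all invented assumptions.

```python
import random

random.seed(0)  # deterministic noise for a reproducible sketch

# Toy "digital twin": a box moving along a 1-D corridor covered by
# three cameras, each seeing a different (overlapping) span of it.
CAMERAS = {"cam0": (0, 40), "cam1": (30, 70), "cam2": (60, 100)}

def simulate_track(steps=20, speed=5.0):
    """Ground truth comes free from the simulator: we *chose* the
    box's true position, so every observation is perfectly labeled."""
    samples = []
    pos = 0.0
    for t in range(steps):
        for cam, (lo, hi) in CAMERAS.items():
            if lo <= pos <= hi:
                # Observation = truth + simulated sensor noise.
                obs = pos + random.gauss(0, 1.0)
                samples.append({"t": t, "cam": cam, "obs": obs, "truth": pos})
        pos += speed
    return samples

data = simulate_track()
# Every sample carries its label; no human annotation was needed.
print(f"{len(data)} labeled observations from {len(CAMERAS)} cameras")
# 26 labeled observations from 3 cameras
```

Scaled up from this toy to rendered 3D scenes, the same pattern yields the artificial ground truth Adam describes for training multi-camera tracking networks.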
– [Ryan] Before we wrap up, I wanted to go back and ask you about just how far we've come when it comes to cameras and their ability to provide value. Some people I've spoken to have been hesitant to adopt cameras because they're still trying to understand how reliable they are, and how reliable the software behind them is, for things like computer vision and vision AI solutions.
If I'm listening to this and trying to understand what cameras can do, the role they can really play, and where we are in general with these types of solutions, what would you say to somebody who's still on the fence?
– [Adam] We've come a really long way, and I'll give you some examples. By the way, even though we've come a long way, we're nowhere near where we will be in the future. All of what we're doing is still in the very early innings of where this is going to go. But it happened pretty quickly with CNNs; ImageNet was not that long ago.
It's maybe four or five years since we achieved superhuman vision with just basic CNNs. Right now we're in an era of transformers, vision transformers, which use the same transformer architecture that underpins the large language models you see in things like ChatGPT.
So we're now seeing the ability to ask incredibly complex questions of imagery and video, with state-of-the-art accuracy that keeps going up when we inquire what is happening in a video. And it's robust to the things people worry about when they ask, does it work? We're now building models that are robust to noise and to occlusion: something goes behind a tree, or behind a box in a factory, and the models can keep tracking it with incredible accuracy. We're also seeing not just what is in a single frame of video, but what's happening over time. Did someone trip and fall, or is it really bad dancing, or is it violence? These questions sound silly, but they're really important things we can now decipher and understand with much better clarity. And then there's the concept of multiple sensors in a matrix, being able to zoom out over a whole factory floor. That's really powerful, and it gets us beyond the myopic view of looking at only 10 by 10 square feet of space.
Now I'm looking at thousands of square feet. And the cost of cameras has come down to where they're not quite free, but they're very low cost, and the world is leveraging them in a really exciting way.
And again, it's efficiency; efficiency and public safety are where we're seeing the big value in this.
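The occlusion robustness Adam mentions, keeping a track alive while an object passes behind a tree or a box, is commonly handled by letting a tracker coast on its motion model when detections drop out. Here is a toy one-dimensional sketch of that idea, not any production tracker:

```python
def track(detections, coast_limit=3):
    """Track one object's position through gaps (None = occluded frame).
    While occluded, predict by coasting on the last observed velocity;
    give up after coast_limit consecutive misses."""
    est, vel, missed = None, 0.0, 0
    trajectory = []
    for z in detections:
        if z is not None:
            vel = 0.0 if est is None else z - est
            est, missed = z, 0
        elif est is not None and missed < coast_limit:
            est += vel          # coast: constant-velocity prediction
            missed += 1
        else:
            est = None          # track lost after too many misses
        trajectory.append(est)
    return trajectory

# Object moves +2 per frame, disappears behind an obstacle for 2 frames.
print(track([10, 12, None, None, 18]))  # [10, 12, 14, 16, 18]
```

Real trackers replace this constant-velocity guess with Kalman filters or learned motion models, but the principle of predicting through the gap is the same.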
– [Ryan] Fantastic. Adam, thank you so much for taking the time. For our audience who wants to learn more about what you all have going on around these topics, follow up potentially with questions, all that kind of good stuff, what’s the best way they can do that?
– [Adam] Check out the work that we’ve done at nvidia.com/metropolis. The Metropolis effort is bringing all of our vision AI solutions and our ecosystem and celebrating the work that’s being done. People can join the effort, join the movement, learn about what we’ve done and ask questions through that. It’s probably the best way to do it.
– [Ryan] Well, Adam, thank you again so much. Excited to get this out to our audience.
– [Adam] Excellent. Thanks so much.