In this episode of the IoT For All Podcast, ZenduIT Founder and CTO Vishal Singh joins us to talk about the role of artificial intelligence and computer vision in IoT. Vishal shares some of the applications that computer vision enables and how his team at ZenduIT has implemented computer vision in high-impact, real-life applications, like people tracking in airports, for example. He also shares some of his insight on what exactly goes into building highly accurate, reliable artificial intelligence (AI) models and how companies can determine what decisions should be made on the edge. To wrap up the episode, Vishal shares his thoughts on the future of AI and how we can expect the world to be shaped by these technologies in the near and distant future.

Vishal Singh is the CTO of ZenduIT, an industry leader in building scalable mobile and web solutions that can be integrated across various platforms and industries. With over 10 years of experience, he has successfully established a team focused on developing intelligent fleet and field service solutions to bridge the gap between fleet management and technology, increasing safety, profitability, and productivity in various fleet-dominated industries.

Interested in connecting with Vishal? Reach out to him on Linkedin!

About ZenduIT: ZenduIT is an industry leader in software solutions. Specifically, ZenduIT focuses on building scalable mobile and web solutions that can be integrated across various platforms and industries. By focusing on developing intelligent fleet and field service solutions to increase profitability and productivity in various fleet-dominated industries, ZenduIT is at the forefront of developing and deploying IoT solutions.

Key Questions and Topics from this Episode:

(01:09) Intro to Vishal

(02:43) Intro to ZenduIT

(05:49) What is Computer Vision?

(07:35) How is your team using computer vision?

(10:41) How does AI work together with computer vision?

(14:13) What goes into building accurate and reliable AI models?

(17:00) How do you determine if AI at the edge is right for your application?

(25:34) What have you seen as far as trends in the computer vision space? Where do you think it’s going?

(28:30) Will AI take over the world?


– [Announcer] You are listening to the IoT For All Media Network.

– [Ryan] Hello everyone and welcome to another episode of the IoT For All Podcast on the IoT For All Network. I’m your host, Ryan Chacon, one of the co-creators of IoT For All. Now, before we jump into this episode, please don’t forget to subscribe on your favorite podcast platform or join our newsletter at to catch all the newest episodes as soon as they come out. Before we get started, if any of you out there are looking to enter the fast-growing and profitable IoT market, but don’t know where to start, check out our sponsor Leverege’s IoT solutions development platform, which provides everything you need to create turnkey IoT products that you can white label and resell under your own brand. To learn more, go to That’s So without further ado, please enjoy this episode of the IoT For All Podcast. Welcome Vishal to the IoT For All show. How are things going on your end?

– [Vishal] Yeah, they’re doing great, thanks, Ryan. How about you?

– [Ryan] Pretty good. Not too bad. We’re getting warm weather down here in the DC area. So it’s very encouraging to be able to get outside after work and do some activities out there, as opposed to being locked in here and it getting dark at 4:30.

– [Vishal] Yes.

– [Ryan] Why don’t you start off by just giving a quick introduction to our audience? Talk about your background, a little bit about your experience, what brought you to being one of the founders of ZenduIT, and we’ll get into that a little bit more afterwards.

– [Vishal] Okay, great. Yeah well, I kinda got started in this IoT space at ZenduIT. It was about 10 years ago. A little bit prior to that, I was working with Siemens Medical on medical devices, and that was my first exposure to IoT and camera AI. We were trying to improve the way they read medical specimens. I was always a process guy. I was looking for ways people could work more efficiently, how technology and people intersected in that, and sort of got involved. I was pretty entrepreneurial. So, after formulating a few ideas, I started in the telematics space when it was just basically GPS tracking, you know– where we were talking.

– Sure

– [Vishal] About vehicles and people wanted to know where things were but– now as things.

– Sure

– [Vishal] Sort of evolved, you know, started this business, ZenduIT, which is more on the IoT solutions and– integration space.

– Sure

– [Vishal] I’ve been doing this for about 10 years and always innovating ’cause we always were finding new technology out there and–

– [Ryan] Right.

– [Vishal] And looking for ways in which we can improve the way people leverage IoT and technology. And specifically on camera–

– [Ryan] For sure, right.

– [Vishal] Yeah.

– [Ryan] I’d love it if you could expand on the founding of ZenduIT, and just a little bit more about the backstory there. It’s always interesting for me and our audience to just get that inside look into the opportunity you saw in the market, what you saw as the potential fit in the solution that you guys could provide and how ZenduIT came to be.

– [Vishal] Yeah now, that’s a great question. I mean and this kind of relates to the evolution and maturity of our industry too, right? So when we started about 10 years ago, even before starting, I was quite interested in the fleet market, like vehicles– and information.

– Yeah, yeah.

– [Vishal] The emerging sort of tech was on GPS tracking. And that’s where we started. Like we looked at vehicles and dots on a map, and hey, generally the question was where are my guys? How long are they there? And sort of the software part of it interested me, like how we got this hardware on these vehicles, how it communicated over the cellular networks, and how people got the result on a map, right? It was very cool to me. But through the last 10 years as technologies evolved, there was a growing need for customers to integrate this tracking data into other software systems. Like, we need an application so that anytime a driver arrives at a certain location, I want to send a notification to my customers, or if they’re delayed, I want to get some sort of update in my ERP system, or whatever that might be.

– [Ryan] Right.

– [Vishal] That’s where ZenduIT started, to be the software solutions company. So I felt like there was a bit of a gap between sort of those companies providing these little GPS trackers out of the box and what customers were looking for in terms of having some sort of automation on top of that. And that’s where I looked to evolve, you know, build some software applications for our customers. And we looked at how every industry sort of has different sorts of applications and started evolving on that tech. And it became asset trackers. It became cameras, and computer vision is where we put a lot of focus and attention today.

– [Ryan] Okay.

– [Vishal] And that’s where we’re sort of innovating because there’s just so much you can do– with artificial intelligence.

– Sure.

– [Vishal] And so, but yeah, that’s how the business has evolved. We are in Mississauga, Toronto, but we’ve also opened up an office in Dubai, working with the airport over there.

– [Ryan] Okay.

– [Vishal] And again, same sort of deal looking at how IoT could better improve their business operations at the airport. And so it’s been exciting to see that evolution go from–

– Right.

– [Vishal] Basic dots on a map to, you know how do we improve aircraft turnaround in terms of timing and speed, right? So it’s where we evolved and hope to continue to evolve along that.

– [Ryan] So, I have two quick questions, then another follow up to that, one on the computer vision side, can you explain to our audience what computer vision is– and then I’ll follow up.

– Sure

– With a question.

– Yeah

– [Vishal] Yeah, it’s interesting in the sense that, when I first got into the space, it was sort of that manual review of video. If you think about it in terms of a human, like a security person, they’re watching video and they’re sort of looking for events they can see on a video from a security perspective. And then they’re tracking and trying to action out certain events. Like if somebody’s committing some theft, you know, they’re monitoring it– and they do it.

– Right.

– [Vishal] But with computer vision and AI, it’s just about automating the detection of different types of events that are of interest.

– [Ryan] Okay.

– [Vishal] When I got into the space working with Siemens Medical, it was around medical specimens where we had a camera pointed at test tubes that were going through an assembly sort of solution. And it was supposed to scan the barcode but also the size of the test tube. You know there’s–

– Right.

– [Vishal] There’s hundreds of different sizes of test tubes, and there would be different systems based on their sizing and other little attributes of these test tubes. And so–

– Sure.

– [Vishal] That was my first exposure onto computer vision. And at the time it was just as simple as sort of detecting the size and reading the barcode and–

– Yeah.

– [Vishal] Sort of on the computer side, making a decision about what to do with that. Right.

– Right.

– [Vishal] So that’s kind of, I guess, a layman’s term– of computer vision.

– Yeah, that’s great!

– [Ryan] That’s great, so how does this play into what you’re doing over there, I guess in Dubai on the airport side? I saw there’s a little bit of information around counting passengers, detecting faces, and security, that kind of thing. How is computer vision playing a role in the applications you all are launching?

– [Vishal] Yeah, so we’re doing a lot of research in terms of how we can use computer vision and AI– to help.

– Okay.

– [Vishal] In these business operations. So right now we’re running some research activities. On the airport passenger side, passengers get transported from their gates, often by a bus, to their respective aircraft. And here’s where, especially looking at these premium airlines, first class passengers, there’s obviously a big concern with making sure that passenger comfort is at the highest level throughout the process. And with cameras with computer vision, detecting, you know, the number of people on a bus to make sure that it’s not over capacity– by the computer.

– Sure

– [Vishal] The camera could see how many people are there. And also even looking at things like comfort level. Are they standing? Are they sitting? So just looking and collecting that type of data.

– [Ryan] Okay.

– [Vishal] And looking at that over the course of a day or months, you can see where there might be over-capacity. And then you start to make a decision of, hey, do we need to have more buses– or do we do certain things.

– Sure, sure.

– [Vishal] Around passenger comfort? So those are the things that we’re doing some research activities there at the airport.

– [Ryan] Okay.

– [Vishal] There’s obviously a lot of different applications we can get into on that back camera side, but more traditionally, from the software that we offer, we’re focusing a lot more on specific driver-facing detection of fatigue–

– Okay, right.

– [Vishal] And detection of if they’re on their phone. Are they navigating the road in a proper way, not drifting into other lanes? So the things that I’m sure we’re all pretty familiar with with these– cameras on vehicles today.

– Right.

– [Vishal] But slightly different interests and applications when we’re talking about business owners managing their drivers on the road. Of course there’s a concern around safety and liability and risk, especially when you’re dealing with high-risk transport of goods, like oil and hazardous materials. So at the airports, it’s kind of that comfort level for passengers, but more traditionally, most of our business and sales is on the driver transport market, ’cause it’s just a huge market– for improving them.

– Makes sense.

– [Vishal] Then again, it’s also for driver comfort as well. Fatigue and, so all of this is information we’re collecting, and we’re looking at better ways to take this data and these insights and give more solutions to customers in how they can– improve their business.

– Right.

– [Ryan] So how does it work when it comes to integrating in the AI component? We know AI is something that a lot of people have different opinions on: how effective it can be, how reliable it is. Is it something that works a hundred percent of the time? Is it a hundred percent reliable? All those kinds of questions. So how do you all approach this type of conversation with customers, and how do you inform them of the value that AI plays in the solutions that you guys build?

– [Vishal] Yeah now that’s a great question. I mean, for us right now, it’s important to set proper expectations with customers on what AI is. Some people think of AI as, you know, the black box that can really solve everything, but really we’re still at a level of maturing what AI is. And then, like you said, it’s very difficult to achieve a hundred percent accuracy on any AI model you develop. Let me explain with an example, right? So we, for example, are continually building our AI for something as simple as, hey, detecting whether a driver’s on their phone, to improve safety on vehicles and make sure that drivers are attentive to the road. But in building that AI, drivers could have a mic, a CB radio in their hand, or they might have their phone in their hand. So it’s about improving that model of detection. The work that we do to improve that, and this is about building our AI model, is about taking each event and sort of tagging it. Is this actually a driver on the phone? Or is it not a driver on the phone? And as you take these events, it’s sort of machine learning. You’re saying, yes, he is on the phone. No, he’s not on the phone. You’re sort of improving the accuracy of your AI model. So now, in the ideal scenario, just going back to the security example, you don’t want anyone continuously monitoring some video or events and seeing a lot of false positives. And for us, it’s just about that, improving the accuracy of AI. So when we talk to our customers about AI, we talk to them about where we’re at with our ability to detect certain events. And I think–

– Sure.

– [Vishal] If we don’t have that conversation, customers can get disappointed in these types of scenarios. There’s AI–

– Of course.

– [Vishal] For facial detection, right?

– [Ryan] Sure.

– [Vishal] There’s AI for detecting objects– and based on this.

– Right.

– [Vishal] You can do many different things. Like facial detection, maybe you put it into a database and you say, hey, this person is John Smith, or is Ryan, or is Vishal. But now when you’re introducing all these variables, like face masks and all these other things, it becomes a lot more difficult to detect. And now you have to introduce new variables in your modeling of AI– of how to better.

– Right, right.

– [Vishal] Detect certain events. So, it’s super difficult. With customers, when we talk about AI, it’s about setting proper expectations on what the AI model can achieve. But obviously the goal and what we’re trying to set the expectation with the customer is that this is where we’re trying to be and go. So hopefully that gives an idea of what AI is and where we’re going–

– [Ryan] Yeah, yeah, of course.

– [Vishal] But also the challenge of trying to improve your AI model.
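[Editor’s note] The human-in-the-loop labeling workflow Vishal describes, where reviewers tag each flagged event as a true or false positive so the model’s accuracy can be measured and improved, can be sketched roughly as below. The event names and structures here are hypothetical illustrations, not ZenduIT’s actual system:

```python
# Hypothetical sketch of the review-and-tag feedback loop: each event the
# model flags (e.g. "driver on phone") gets a human label, and running
# precision shows how many detections the reviewers actually confirmed
# before the corrected labels go back into retraining.
from dataclasses import dataclass

@dataclass
class Event:
    event_id: int
    predicted: str    # what the model thought it saw, e.g. "phone_use"
    human_label: str  # what the human reviewer says it actually was

def precision(events, label="phone_use"):
    """Fraction of model detections of `label` that reviewers confirmed."""
    detections = [e for e in events if e.predicted == label]
    if not detections:
        return 0.0
    confirmed = sum(1 for e in detections if e.human_label == label)
    return confirmed / len(detections)

# A reviewer tags four flagged events; one was actually a CB radio,
# exactly the kind of false positive Vishal mentions.
reviewed = [
    Event(1, "phone_use", "phone_use"),
    Event(2, "phone_use", "cb_radio"),   # false positive
    Event(3, "phone_use", "phone_use"),
    Event(4, "phone_use", "phone_use"),
]
print(precision(reviewed))  # 0.75
```

Tracking a metric like this over each labeling round is one simple way to see whether the model is actually getting better at separating phones from look-alikes.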

– [Ryan] Yeah, so that’s a good point to expand on here. When it comes to building reliable AI models, can you talk, just at a high level, about the process that goes into that, the time it usually takes, why it matters, and how that results in getting good quality data on the AI side? And maybe that’s more data going in initially so that the AI can deal with it and make adjustments, but just answering that in the simplest terms possible.

– [Vishal] Sure. The biggest thing that’s of importance is the quantity of data for the machine learning. The more information–

– Right.

– [Vishal] You give to an AI model, the better it can perform. So, in the simplest form, to achieve a good AI model we need, well on the camera side, we need good resolution on the images. So that means placing the camera and positioning it in such a way that we’re getting the features of what we’re trying to detect in a clear way. So, for example, we were working on a project to automatically determine when the driver gets in the vehicle. One of the traditional forms of driver identification is them scanning in on the vehicle with their little key fob.

– [Ryan] Right.

– [Vishal] But that’s mental energy to scan your key fob on something or log into something to say, hey, I’m on in here.

– [Ryan] Right.

– [Vishal] It’s much better to just see who’s there. And what we’re trying to work on is saying, hey, let’s look at who’s in the vehicle by their face, and let’s match them up, like how your Nest cam matches a face to somebody. So we know it’s Charles Smith. Or if it’s a student, or your kids are going on a bus, knowing which kids are coming on the bus so you can have attendance, or when they’re leaving, or how many kids are there.

– [Ryan] Right.

– [Vishal] But the big challenge is positioning where the camera is and having enough data to accurately determine that this is John, whether it’s from the side of the face or the front of the face. This is the challenge, the different things we should consider when we’re– building a strong AI model.

– Right.

– [Vishal] Is defining.

– Okay.

– [Vishal] what we’re looking for and defining a good test strategy of building the AI model.

– Right.

– [Vishal] Hey, if we’re looking.

– [Vishal] For this space, we wanna position it in front of the driver.

– [Ryan] Right.

– [Vishal] But it’s definitely a challenge depending on what you’re trying to do. And this level of sophistication we’re trying to get with these cameras.

– [Ryan] So where does the Edge component play into this? ’Cause obviously, you know, another thing out there is people talk about Edge AI, so AI more at the Edge. Can you talk about how you’d make a decision between doing AI at the Edge versus not at the Edge, and what that looks like for the layman out there who’s trying to understand where AI comes in, how it can be applied, and the benefits of doing it at different points along the solution process?

– [Vishal] Sure, sure, yeah. So camera AI at the Edge versus in the cloud. Basically, when we talk about this, what we’re talking about is when I record a video or an event, say that image, let’s suppose it’s something like I’m driving through a stop sign.

– [Ryan] Okay.

– [Vishal] I can’t really do that in the cloud, unless I’m sending all that video to the cloud, right? And that costs a lot of money because it’s cellular data, and there’ll be like gigs of data for sending video to the cloud. So what I don’t wanna do is send all that video to the cloud. For one, it’ll also take a lot of processing power to handle all that data in the cloud. And it’s very costly on the server side– if I’m sending.

– Right.

– [Vishal] All that video of stopping or driving through a stop sign.

– [Ryan] Okay.

– [Vishal] Just to detect that one event. That’s why you need the camera to do it on the Edge, because the camera on the Edge is gonna trigger that recording or event only when it detects those objects, right on the camera itself. So it’s like programming the hardware in such a way that I’m looking for this specific type of event, and it might include more than just the camera. It’s about driving at a certain speed. I’m not stopped, right, I’m driving over a certain speed and I can see that there’s a stop sign on the camera. Sort of detecting that, I know that right there on the Edge. ’Cause you’re not.

– Yeah, right.

– [Vishal] Sending all that video to the cloud. I’m detecting it there. And that’s where the camera’s making a decision of, let me now send an alert, or let me let the driver know that they’ve made a mistake, which hopefully they’ll coach and improve on. But that’s the high-level differentiator between Edge and cloud. And the thing about the cloud is you can get data going to the cloud to then build an algorithm to then program the camera to detect on the Edge. So there’s the ability to take all that data in the cloud and then program things on the Edge. But the Edge is what allows us to get more real-time analysis of data.
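[Editor’s note] The Edge-side filtering Vishal describes, running a cheap detector on the device and only sending the cloud the events that matter, instead of streaming gigabytes of raw video, can be sketched like this. The frame fields and thresholds are hypothetical illustrations, not a real camera API:

```python
# Hypothetical sketch of on-Edge event filtering: each frame is checked
# locally against a rule that combines the camera's detection (a stop
# sign in view) with other vehicle data (speed), and only the indices
# of matching frames would be uploaded, not the whole video stream.

def edge_filter(frames, speeds_kmh, min_speed=10):
    """Return indices of frames where the vehicle rolls past a stop sign."""
    events = []
    for i, frame in enumerate(frames):
        # stand-in for the on-device object detector's output
        sign_visible = frame.get("stop_sign", False)
        if sign_visible and speeds_kmh[i] > min_speed:
            events.append(i)
    return events

frames = [
    {"stop_sign": False},  # no sign in view
    {"stop_sign": True},   # sign visible, vehicle stopped -> no event
    {"stop_sign": True},   # sign visible, vehicle moving  -> event
    {"stop_sign": False},
]
speeds = [40, 0, 35, 40]
print(edge_filter(frames, speeds))  # [2]
```

Only frame 2 triggers an event, so only a short clip around it would ever use cellular data, which is the cost and bandwidth argument for doing the detection on the Edge.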

– [Ryan] Okay.

– [Vishal] And we see that every day– driving newer vehicles.

– Sure

– [Vishal] You’ll see Edge processing of cameras– on lane detection.

– Yeah.

– [Vishal] For collision warning, and that stuff is really important. It’s not perfected. I mean, I drive a Tesla and there’s been certain instances where I’m on autopilot and I’m like, okay, is the car gonna stop now? Okay, it’s getting a little concerning here or there. I’m not trying to badmouth Tesla in any way. I mean, it’s a great vehicle. I love it. And nobody wants to talk about those little things, right, that happen–

– Sure.

– [Vishal] But there’s still little nuances between, you know, I’m up in Canada, there’s snow– there’s this and that.

– Yeah, of course.

– [Vishal] There’s still a lot to do with the Edge side of things as well to make it very reliable in my opinion. So when people start talking about, you know that we’re gonna have all the autonomous vehicles driving and–

– Right.

– [Vishal] Tomorrow, I sort of hesitate, being in the industry I’m in and driving sort of best-in-class autonomous type vehicles.

– [Ryan] Sure, sure.

– [Vishal] You think about that. So I think there’s the camera side. But I think there’s also, if we’re talking about autonomous vehicles, there’s also things that need to be done on the infrastructure itself– to support each other.

– [Ryan] Okay.

– [Vishal] And that’s, so in this industry it’s sort of camera AI, but also looking at things to support the cameras as well. Like blink markers.

– Sure.

– [Vishal] Or other types of IoT, like a badge or Bluetooth, to detect what’s going on and have the camera be better at doing some AI– processing.

– Yeah, I think.

– [Ryan] An interesting thing about autonomous vehicles, especially now, is that not all vehicles are autonomous. I drive a Jeep Cherokee, which has a lot of interesting technology in it, like lane departure warnings, you know, where the cameras are looking at the road, telling me if I’m in the lane or not in the lane, moving me back in, that kind of stuff. Not to the extent of what a Tesla does, by any means, but it’s come a long way. But yeah, when it rains or when it snows, it can’t see the lane, so it turns that feature off. But I’m curious as to how the experience will be as more cars can communicate with one another and the cars can be talking back and forth, so you let them kind of handle where they are in relation to each other and in relation to the road and things like that. And I mean, in theory, if all cars were able to communicate with each other and be autonomous, maybe that’s where we get to, to make it successful. But I think we’re still a little ways from getting there, especially when you have human error anytime anybody drives a car at the moment.

– [Vishal] Yeah, now that’s a very interesting topic. I mean, that’s when we really start talking about neural networks.

– I think that’s the topic.

– Right.

– [Vishal] Of how everything communicates with everything.

– [Ryan] Exactly.

– [Vishal] And it’s so interesting, ’cause now you’re not just using your own little camera’s data to improve the way things work. It’s checking–

– Sure.

– [Vishal] Everybody’s data to improve the way things work. And I think it’s really what will be needed to really make this thing more robust. I mean, I don’t know, by itself, if I’m just collecting vehicle data on my one sort of vehicle– that’s not enough.

– Sure.

– [Vishal] To support the growth and improvement of–

– [Ryan] No.

– [Vishal] Let’s say, if all our vehicles have data we’re collecting and we know that there’s a pothole or some issue in a particular area, that can tell me and tell other vehicles about that same type of issue. Now we have a very powerful neural network– and it could be that.

– Sure.

– [Vishal] It could be then also infrastructure that the city is also now considering, ’cause they have visions of lowering the number of accidents on the road. You’ll see that concept being promoted by a lot of cities, Vision Zero– zero accidents or deaths.

– Right, right, right.

– [Vishal] So there’s now a request of auto manufacturers to also look at how to make the cities more intelligent as well– but all these things.

– Yeah.

– [Vishal] Are sort of gonna be playing their part. But I think, part of this kind of evolving really requires some regulatory changes as well.

– [Ryan] One hundred percent.

– [Vishal] I think there’s always two steps forward, and then one step back, ’cause yesterday there was that data breach. I don’t know if you saw that. There was 15,000 cameras, I think hacked, including Tesla–

– Oh wow!

– [Vishal] The factory and hospitals, prisons. Basically, you know these hackers hacked a camera manufacturer through some– super admin login.

– Okay.

– [Vishal] And they were able to get into everyone’s cameras in all these facilities, detect faces and match those faces against who those people are. So basically it’s a huge data breach, and huge companies like Tesla and Cloudflare, big news. And it’s like, you know, we’re trying to innovate, progress, move forward, me being in the tech space and the camera space, but then we get data breaches on the security side of things. And that’s where I think the regulation, we’re trying to move forward regulation that makes things more open and shares knowledge, but then you get these sorts of data breaches and it’s sorta like you get taken two steps back. Where now it’s like, okay, the government’s not as happy to have those open data sharing type agreements– or sorts of things.

– Right.

– [Vishal] So it’s a tricky situation. And I think security definitely needs to be considered, but it’s something that’s always gonna, well for good reasons, kind of slow a few of these things down, and whatnot.

– That’s right.

– [Ryan] Yeah, for sure. As we wrap up here, I want to ask you one question just from a high level viewpoint, as you guys are very actively involved in the AI side on computer vision in IoT and so forth. What other trends that we haven’t talked about today have you been seeing on the computer vision side of things and where do you see the future of it going over the next 12 to 18 months or so?

– [Vishal] On the trend side? There’s so many interesting areas to look at on the trend side. I think the neural network is certainly an area where I see a lot of opportunity. What we’re talking about is cameras talking to each other and collecting data. I’m looking at it right now from a customer point of view. Right now we have these cameras, and I think generally people are still looking at it from a security camera point of view, where they get these events, they look at these videos and events. But where I see the trend going on video is this managed service. So this service of actually a third party managing all these video events, managing it at a third-party level.

– [Ryan] Okay.

– [Vishal] So that’s where I see some of that, more of a service layer on top of this, on top of the video. Because, for one, people might not have the skill or knowledge, or even the time, a lot of it’s just time, to manage all of these events and what to do after the event. So that’s where I see a lot of third-party services being developed, especially because, you know, it’s evolving the security and monitoring industry a little bit, having third-party services to manage or monitor these events on behalf of companies.

– [Ryan] Right.

– [Vishal] And so I think I see a lot of that service being developed.

– [Ryan] Okay.

– [Vishal] And also improving the way we organize this information into neural networks and to–

– [Ryan] Agreed.

– [Vishal] Into machine learning to improve the AI model. So there’s companies that are more focused now on developing those services and helping improve. And another area where I see a lot of development, trend-wise, is just managing regulations, from a government side, around drone cameras, you know, autonomous vehicle security. So I see a lot more trends coming in that space too, which I think is going to change the way– companies like us.

– Right.

– [Vishal] Innovate and develop those hardware solutions. Chinese companies, maybe there’ll be some restrictions against them, who knows, on the camera side of things. So I think there’s a lot of those types of services and regulations being developed, which I think are–

– Sure

– [Vishal] Where the focus is being driven right now.

– [Ryan] Yeah, so basically the last question I have before we finish up here is kind of a funny question, because I feel like it’s asked and joked about, but not a lot of real context is ever given when this is asked. But the question is, is AI going to take over the world? You know, people get scared of it, they worry about it. But is that really something that they should be concerned about or are we really not at that stage? Do we really not see it going there? And kinda what are your opinions there?

– [Vishal] Yeah, it’s a funny question. We watch way too many movies, so it’s always a fun question to ask. I love it, but no, I don’t think it’s going to take over the world. At the end of the day, you know, at least not in our lifetimes, and I don’t think anytime soon for sure. I mean, with AI, it’s important everyone understands, artificial intelligence, truly independent artificial intelligence, is very, very hard. I mean, even where we are today, I feel we’re still at the starting stages of AI. So I don’t think that’s happening anytime soon. And so, no. I gotta say it’s a big, hard no for me.

– [Ryan] Okay.

– [Vishal] Ah so, yeah.

– [Ryan] Fair enough, awesome! Well Vishal, this has been a fantastic conversation. I appreciate you taking the time to come on here. I would love it if you could quickly tell our audience where they can find out more information about ZenduIT, how they can reach out if they have any questions, all that kind of good stuff.

– [Vishal] Yeah, oh that’s great! Yeah, our website’s ZenduIT, just Google us. But yeah, we’re an IoT solutions company. We get involved with a lot of different enterprises about different problems they’re having, talk a little bit about how IoT cameras can help solve those problems, and build a solution for their business applications.

– [Ryan] Awesome. I appreciate the time. Again, this has been a fantastic conversation. Thanks for taking the time to be on here, and would love to have you back at some point in the future!

– [Vishal] Yeah, awesome, thanks Ryan!

– [Ryan] Thank you. All right everyone, thanks again for joining us this week on the IoT For All Podcast. I hope you enjoyed this episode and if you did, please leave us a rating or review and be sure to subscribe to our podcast on whichever platform you’re listening to us on. Also, if you have a guest you’d like to see on the show, please drop us a note at, and we’ll do everything we can to get them as a featured guest. Other than that, thanks again for listening. And we’ll see you next time!

Hosted By
IoT For All
IoT For All
IoT For All is creating resources to enable companies of all sizes to leverage IoT. From technical deep-dives, to IoT ecosystem overviews, to evergreen resources, IoT For All is the best place to keep up with what's going on in IoT.