On this episode of the IoT For All Podcast, Frederic Werner and Neil Sahota from AI for Good join Ryan Chacon to discuss the global impact of AI. They talk about the AI hype cycle, the state of AI, defining good AI use cases, balancing different perspectives on AI, scaling AI for global impact, current AI trends and use cases, AI and IoT, challenges in AI, AI developments and the future of AI, fear of AI, and the 2023 AI for Good Global Summit.
Interested in AI? We’re launching AI For All!
About Frederic Werner
Frederic Werner is a seasoned Association Management professional with a passion for telecommunications specializing in strategic communications, community building, and international relations. He is the Head of Strategic Engagement for ITU’s standardization bureau and was instrumental in the creation of the landmark AI for Good Global Summit. Frederic is deeply involved with innovation, digital transformation, financial inclusion, 5G, and AI via numerous ICT industry projects and events he has developed.
Interested in connecting with Frederic? Reach out on LinkedIn!
About Neil Sahota
Neil Sahota is an IBM Master Inventor, United Nations Artificial Intelligence Advisor, author of the best-seller “Own the AI Revolution” and sought-after speaker. With 20+ years of business experience, Neil works to inspire clients and business partners to foster innovation and develop next generation products/solutions powered by AI.
Interested in connecting with Neil? Reach out on LinkedIn!
About AI for Good
AI for Good is an organization that identifies practical applications of AI to advance the United Nations’ Sustainable Development Goals (SDGs) and scale AI solutions for global impact. It’s the leading action-oriented, global, and inclusive United Nations platform on AI. AI for Good is organized by ITU in partnership with 40 UN Sister Agencies and co-convened with Switzerland.
Key Questions and Topics from this Episode:
(04:47) AI hype cycle and state of AI
(07:46) What are good AI use cases?
(12:25) Scaling AI for global impact
(16:02) Current AI trends and use cases
(17:36) AI and IoT
(19:39) Challenges in AI
(24:58) AI developments and future of AI
(28:45) Fear of AI
(32:29) AI for Good Global Summit 2023
– [Ryan] Hello everyone, and welcome to another episode of the IoT For All Podcast. I’m Ryan Chacon, and on today’s episode we have Frederic Werner, the Head of Strategic Engagement at the United Nations, and also the Executive Producer at AI for Good. AI for Good is an organization that is focused on identifying practical applications of AI to advance United Nations sustainable development goals and scale those solutions
for global impact. We also have one of the founders of AI for Good, Neil Sahota, on the show with me as well. We're gonna talk about what AI for Good is, the use cases they're seeing lead the way in the AI space that are having a global impact, and how we know what is good, especially when it comes to AI. Lots of really interesting topics related to that. So if you're interested in learning more about AI and understanding what's going on in the space at a larger, more global level, this will be a great podcast to listen to.
If you’re listening to this on a podcast directory, we’d truly appreciate it if you would subscribe. And if you’re listening to this or watching this on YouTube, give this video a thumbs up, subscribe, as well as hit that bell icon so you get the latest episodes as soon as they are out. Other than that, onto the episode.
Welcome Fred and Neil to the IoT For All Podcast. Thank you both for being here this week.
– [Frederic] Yeah. Thank you for having us.
– [Ryan] Absolutely. Excited for this conversation. Neil, I know you’ve been here before, so let me go ahead and pass it over to Fred to give a quick introduction about himself and the company he’s with, and then we’ll have you reintroduce yourself to those who may not be as familiar.
– [Frederic] Yeah, basically I work for the ITU. For those of you who don't know what the ITU is, it's the United Nations' specialized agency for information and communication technologies. We're also the organizers of AI for Good and the AI for Good Global Summit, which was launched in 2017 in partnership with 40 UN Agencies and co-convened with Switzerland.
So in a way, you could say I wear two hats. One would be my standards-making hat in the ICT industry through ITU, but also the AI for Good hat, which ITU is organizing.
– [Neil] Just real quick, I'm one of the, I guess, instigators of this AI wave that we're in. I was part of the original IBM Watson team back in the day, but I do a lot of work with the UN, and I'm actually one of the people who, in a conversation at a reception after one of the UN events, came up with the idea to do the AI for Good initiative with Fred and everybody else.
So really excited they gave me the opportunity to be part of that.
– [Ryan] And I guess that's a perfect segue into a question I think would be great to kick this off with: what is AI for Good, and what is that initiative all about? You mentioned where it came from, the conversation, but give our audience some background to understand what exactly AI for Good is all about.
– [Frederic] Sure. AI For Good was basically created on the premise that we now have less than 10 years to achieve the sustainable development goals, and AI holds great promise to advance many of those goals and targets. If you’re looking at anything from climate change to healthcare to education for all, you have gender equity issues or looking at more high tech solutions like autonomous driving or in the context of smart cities.
And of course, partnerships. Virtually all SDGs and targets could be positively impacted by AI. Of course, having said that, we have to be vigilant that many of those targets can also be negatively impacted by AI. Of course, top of mind is job loss through automation, which when we kicked off the summit in 2017, seemed like a far away thing that might happen.
But I think now that AI has been mainstreamed, if you will, I think that is top of mind for everyone when you use or see what ChatGPT can do. Generative AI tools and how I think almost everyone could imagine how that might impact their daily jobs. But going beyond that, also issues of bias and datasets, ethical issues when it comes to machines making decisions or autonomous systems, safety, security, privacy.
And then also, the digital divide. AI has great potential to help developing countries, but if we're not careful, it could also make the digital divide even bigger. So those are the topics that are on the table. I would still say the good news is that more positive use cases come across our desk daily than negative use cases.
And there's been some mapping done on this, so the positive does outweigh the negative. But it's really interesting to see how we've gone from the narrative of the first summit, which was very much around the hype of AI, to it maturing a bit, to what is it good for and what is it not good for, and then we had COVID, and here we are now where it's literally a part of everyone's daily lives.
– [Ryan] It's interesting to follow the hype cycle with AI, and maybe Neil, you can weigh in on this on your end: where are we when you compare hype to reality with a lot of the AI stuff that's out there in the world right now? Everybody has been focused on and talking about AI, at least the general public has, since ChatGPT came out and really caught attention. But there are a lot of things, Fred, that you just brought up that are very important for us to be thinking about and understanding as AI continues to progress. What should people out there really be thinking about and focusing on that's realistic now, versus things that are maybe a little overhyped and not really relevant right this second?
– [Neil] Ryan, you’re alluding to what I call the Huckleberry Finn problem. If you’ve ever read the book, Huckleberry Finn does these ups and downs. He learns a bit, retracts a bit. It’s the same thing with AI. I remember back in the Watson days, the Jeopardy challenge, it was like, whoa, this is amazing.
And talking about going into healthcare, I was advocating, let's do some patient intake, let's help do some admin stuff for doctors and nurses, let's start off small. And I remember IBM marketing going, that's not sexy enough, let's go cure cancer. There are a lot of challenges with that, a lot of different types of cancer, and people are going to think that AI's gonna go out and cure cancer in three months. That's just not realistic, right? And I think we went through this major hype cycle, not just IBM, but a lot of other organizations, came crashing down, everyone struggling with what to do, what actually works, and kinda went to the trough of sorrow in the hype cycle. And I think we keep doing that every time we see new technology.
Same thing with ChatGPT. It went from 10,000 users to a hundred million users in the span of a month. But then you had people coming out like, oh, that’s great, it’s helping me write resumes and cover letters and helping me do some research, but it can’t tell me what stocks to pick. And it’s like, no one taught ChatGPT stock investing.
AI can only do what we teach it, and I think that's the real thing until people actually realize it. And that alludes to what Fred was talking about: AI is, like all technology, a tool. It's about how we use it. We can use it to create, or we can use it to destroy. We're trying to do a lot of good things with AI for Good, but we also focus on
AI for bad, because we have to think about the potential misuses of the technology, and I think that's something we tend to overlook. As technologists, engineers, we're told to create outcome X, right? We're gonna build a tool to do X, and so we're focused on X, not thinking about how it can also be used to do Y and Z.
And that’s the shortcoming that we have right now, and that’s one of the things we’re trying to focus on with AI for Good is let’s think about the other uses and misuses that could happen.
– [Ryan] Fred, let me ask you, tacking onto what Neil was just saying, when it comes to that question of good: what is good? How do you all think about that? When it comes to different AI tools, or just general things that AI is touching, how do we know what the good applications are versus the bad applications, or the good things that are going to come out of this versus the bad things, like the job loss you mentioned and so forth?
How do you know what is good as technology is moving so quickly?
– [Frederic] That’s a great question, man. One of the things we try to do at AI for Good is bring as many different voices to the table as possible. So whether it’s industry, academia, member states, NGOs, artists, the creatives, and there’s no shortage of, let’s say, positive type AI applications, but how do we know they work equally well on men and women?
How do we know they work equally well on children and the elderly, or persons with different skin colors, or persons with disabilities? Or would it even work in a country with a lower resource setting, where basic things like electricity and water supply are actually real problems?
And these are not things that occur naturally to the fast moving tech industry and to startups. It’s more build it and we’ll fix it after. But these are things that we think about deeply at AI for Good because they really need to be addressed and solved if you’re gonna scale AI for good globally.
And I often say in meetings, when I'm asked the question, how do we know what's good? I think you, Neil, and I could spend all day trying to debate and argue what is good. What's good for me might not be good for you. Different countries, different communities have different priorities.
But the good thing is we have the United Nations Sustainable Development Goals, which is basically a framework for at least deciding what is good, agreed upon by all member states, and organized around 17 goals and supporting targets. And that acts as a kind of lighthouse. So at least whenever there’s a project or a meeting or something needs to be decided, we’re not always going back to the drawing board and thinking, gosh, what is good?
So I like to think of the SDGs as what is good and as long as you use that as a framework for everything, you know you’re gonna do, whether it’s investing or decision making or research, that’s a pretty good starting point. It’s not gonna solve everything, but it does get everyone on the same page and does save a lot of time with all these discussions.
– [Ryan] Yeah, I think it’s interesting, talking about how you’re bringing lots of different minds to the table to discuss and understand different perspectives, to start to build a criteria to what determines something to be good. And I think that’s kind of something that I imagine would evolve over time as society changes, and people change, and the technology evolves.
It seems to be a pretty- that seems to be a pretty big undertaking in doing so. But definitely an important one because we had people on the show before talk about biases in AI and how AI can be used for negative things. And then you mentioned job loss, which to some people, the businesses potentially who might be saving money, becoming more efficient, might not think about it in terms of job loss.
More thinking about it as the organization getting more efficient. But then the other perspective is the person potentially losing their job, which is the negative side, right? Neil, how do you think about that same dilemma, when something could be good or bad for different people or different stakeholders within the same situation?
– [Neil] I mean, it's a juggling act. That's the honest truth of all this. Nothing's ever gonna be perfect. I think that's one of the big expectation issues we have with the technology. We expect AI to be perfect. We expect it to help everybody. Some of these things are unfortunately not possible, but what we have seen is there's enough common good that it can shift the needle. One of the things we've learned with the innovation factory at AI for Good is that local problems have global solutions. So you have some of these social impact entrepreneurs, in Malawi or South Korea or wherever, trying to help people that are audibly impaired, they're deaf, or trying to help people upskill and get better jobs.
And it’s like, these are actually issues that exist everywhere. And so if they can solve for the local community, that’s something that could potentially be extendable to every community. It’s not gonna help everybody, but it can help a lot of people.
– [Ryan] For sure, for sure. And Fred how do you all approach having that global impact? How is one organization able to scale globally to impact different areas that Neil’s mentioning right here. He’s talking about different areas of the world have different needs, have different things going on.
What do you all do, or I guess what are some of the things you're working to overcome, in order to have more of a global impact with any kind of good initiative that's connected to all this?
– [Frederic] Yeah, so one of our catchphrases is that we're action oriented, and we're more than just a talk shop. I think a lot of people might think of AI for Good as an event, as a summit, but it's actually an all year, always online platform. We do something in the order of 150 online events per year in addition to a physical summit. But more importantly, supporting that, we have a number of concrete activities, which I believe can either help with the building blocks of scaling AI for Good or get rid of the bottlenecks that are preventing AI for Good from scaling globally. So for example, we have what we call focus groups. These are basically pre-standardization efforts, quite often in partnership with other UN agencies. So for example, AI for Health with WHO, AI for Natural Disaster Management with WMO and UNEP, AI and Digital Agriculture with FAO.
We do work on autonomous driving, 5G networks, environmental efficiency. And even though these are different topics, what they're working on is quite similar. They'll be looking at what the standardization landscape looks like, what the gaps are, what kind of frameworks are needed, what kind of best practices; we need benchmarking to be able to compare apples with apples. And they're trying to solve things like data sharing, for example: how can you share data at scale in a way that respects privacy but is still useful?
And all these things need to be solved before you have that scale we're talking about. And more importantly, within those discussions is what I explained before, things that don't naturally occur to the tech industry. Is a solution gonna work in 54 African countries, with all the political, social, economic, environmental challenges that poses? Does it work across the world in different languages? What about persons with disabilities? So these are all things that are being worked on in these initiatives. So that's more of a standards angle, but also, something Neil's involved with is our innovation factory. That's a yearlong AI startup pitching competition.
So basically any startup that has AI that can advance the SDGs is eligible to compete. And these are actual solutions, right? They're products, things that exist here today, not in five years, that you can use. So teasing out and identifying those solutions is very important as well in terms of moving the needle.
And last but not least, we run machine learning challenges. That's basically trying to crowdsource solutions that don't yet exist, right, from the crowd. Basically trying to solve machine learning puzzles, whether it's in 5G networks or analyzing satellite imagery or TinyML.
So all these things combined, standards, real solutions that exist from startups, but also sourcing from the crowd, that’s what I call the action arm of AI for Good, if you will.
– [Ryan] Neil, with the pitch competitions and different initiatives that you see coming across your plate, are there any trends or things you're noticing? What are the main applications that are here today, when it comes to AI, that companies are really excited about, as opposed to the stuff that's three to five years down the road that maybe gets a lot of attention in the media but isn't really here now?
– [Neil] Well, they're not focused on what I call the sexy story; they're actually focused on solving immediate pain points. So making sure they have access to clean drinking water, the ability to improve crop yields. I know it sounds like basic stuff, but these are things that change the way of life for these communities with a direct impact.
Let’s be honest, again, like I said, local problems have global solutions. Who wouldn’t wanna be able to improve crop yields? We know we have the ability to grow enough food for everybody. We just haven’t gotten there. And Fred alluded to a lot of things about different communities that get impacted positively, negatively, and the important diversity of thought and perspective.
We also have to understand the infrastructure challenges. We can't just build all these great super tools that require 5G and supercomputers to use. We have to think about what's gonna be good for the general population. What could farmers in Bangladesh use with what they have access to?
– [Ryan] Absolutely. Yeah, it’s something we talk about with kind of the power of IoT and AI coming together. IoT is able to collect data for different things like improving crop yield and solving more problems that are around the globe in different environments as IoT continues to progress forward. And then that data now can be put into AI models and AI tools to become more useful and to provide better results. So the marrying of those two technologies is something we talk a lot about on here, and it’s something that I think a lot of people are really understanding how IoT and AI play together, and they don’t have to be always thought of as independent kind of areas of technology or solutions.
– [Frederic] Yeah Ryan, you make a good point there. Most of the high potential solutions I’ve seen are rarely one thing. It’s usually AI in combination with IoT, maybe in combination with satellite imagery, in combination with big data, if you will. And also TinyML, so tiny microprocessors, which can pick up sound and heat and all types of things.
And we've had some interesting challenges where, for example, there are more weather stations in Germany than in all of Africa. So how can you tell the weather there? Instead of building giant weather stations all across Africa, you can use these TinyML devices that can analyze the sound of rainfall and collect that over different communities and areas.
And then put that in the cloud, if you will, for analysis using AI. And that's a perfect example of IoT combined with AI and big data and cloud, analyzing sound and temperature and different things. We have a TinyML challenge which will be featured at the summit as well.
And some really interesting use cases on how you can use these tiny devices for really impactful things.
– [Ryan] Yeah, absolutely. It’s very exciting to kinda see what people are doing with all these technologies, and how they’re applying them to like the stuff you’re mentioning Neil, more these local problems that people around the world are having. What are some of the challenges you all are seeing?
We’ve talked a lot about kind of different solutions that are out there and different kind of focuses and talking about the good side, but what about challenges that you feel like the industry right now is facing that are important for people that are listening to this to understand? Like this is, that will influence the adoption of these technologies that will influence the progress of these technologies?
What are some of those that stick out to either of you?
– [Frederic] Yeah, I’d say for me, very easy answer. It’s data. Either lack of data or if you have data, you’re not able to share it in a meaningful way. And I see it time and time again. We’re in meetings and someone will ask a question, who here has data? There might be a city or a hospital or a company.
All the hands go up. And then, who's willing to share data? Everyone looks at their shoes, and the room goes quiet. So finding ways to share data in a way that's useful and meaningful, but that also respects privacy, is a huge challenge. And there are techniques for that, for example homomorphic encryption. Let's say there's some app, or Google, that can tell how long I'm gonna live based on my bio data.
I could send complete nonsense data, like I'm 10 feet tall, two years old, I have purple eyes, and weigh 500 kilos. They get those numbers, which mean nothing to them, but they can still manipulate the numbers to give the correct answer back to me, which gives a meaningful result. So there are ways to exchange data which don't reveal the actual data, so you can respect privacy, but you can still run calculations on it.
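[Editor's note: the idea Frederic describes, running a calculation on data the server can never read, can be illustrated with a toy example. The sketch below is a minimal textbook Paillier cryptosystem in plain Python, one of the additively homomorphic schemes in this family. It is an illustration only, not Frederic's specific technique: the primes are deliberately tiny for readability, whereas real deployments use keys of 2048 bits or more.]

```python
import random
from math import gcd

# Toy Paillier cryptosystem: additively homomorphic encryption.
# WARNING: demo-sized primes for illustration only -- never use in practice.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                                       # standard simple generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)     # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    """Encrypt m under the public key (n, g) with fresh randomness r."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt c with the private key (lam, mu)."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic property: multiplying ciphertexts adds the hidden plaintexts,
# so a server can compute on the data without ever decrypting it.
c1, c2 = encrypt(20), encrypt(22)
print(decrypt((c1 * c2) % n2))  # 42
```

The server only ever sees `c1` and `c2`, which look like random numbers, yet the product of the two ciphertexts decrypts to the sum of the plaintexts, which is the kind of "compute on data you cannot read" property Frederic is pointing at.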
But that's just one of many privacy preserving techniques. Some of them are quite well known, but they need to achieve scale, and they need to really facilitate that kind of data sharing. And there's a trust issue behind that too. And of course the other issue is lack of data.
So if you're in developing countries: in order to have data, you need digitization. For digitization, you need connectivity. And that's what Neil was saying, going really back to the basics. If you don't have that basic infrastructure, there's no data to play with to begin with.
– [Neil] A hundred percent, data and infrastructure, the two biggest challenges. But thanks to the UN and its global connectivity initiatives, we're starting to solve some of those issues. We're also seeing two other issues arise that are intertwined. One, ironically, is people trying to figure out, what should I be doing?
A lot of organizations turn to their technologists, and they're smart people, but they often don't understand the domains well enough to understand the pain points and where these capabilities could be applied. So the most successful AI solutions I've seen didn't start with a smart technologist,
it actually started with a doctor or a lawyer or a marketer. And then the second piece that’s kinda intertwined with that is we’re really starting to experience what we call the interoperability issue. So when it comes to AI, there’s the interpretability issue and the interoperability issue.
Interpretability issue is the AI’s coming up with some recommendations or generating something, and the business people don’t quite understand where that came from. The technologists can at least trace that through. The interoperability issue is where the technologists don’t understand how the AI arrived at that conclusion at all.
And part of the reason for that is you have a lot of technologists working on things they don't fully understand anymore. They may not understand their domain. And as a result, we don't understand how the neural networks are actually getting wired.
– [Ryan] I think with anything new, or with new growth like we're seeing in AI, these challenges are gonna rise to the top pretty fast. And we're realizing that not just in the AI space but also in the IoT space: as people deploy new solutions in different environments, the evolution of the technology is causing potential challenges.
The networks are not present. The ability, the infrastructure is not there. Things are not interoperable like you’re talking about as well. So it’s interesting to dive into that side. Like we could spend tons of time talking about how great a lot of these technologies are and what they’re doing and how exciting this is.
But if we don't focus on the challenges, then those projections and that excitement we have may be tougher to realize, because we're not spending the time to solve these challenges, which I know you all are very much focused on spotlighting, bringing to the surface, and finding ways to get solutions built for, in order to help these good initiatives move forward.
– [Neil] Fred’s gonna laugh at me for saying this. Everyone hopes this stuff just works like magic, and it’s all perfect, we all know that never happens.
– [Ryan] I think if you look back in history, I don't know if we've ever had a time when new things come out that we all get excited about and they just work. There are always challenges, but having people dedicated to coming together, sharing their ideas, and working on fixing these challenges is the way we progress forward.
– [Neil] A hundred percent. I think that’s one of the great things about the whole AI for Good initiative is that we’ve really built this kind of ecosystem, this community of people that want to get together and move the needle on the SDGs.
– [Ryan] So, Fred, let me ask you from your perspective, with all the conversations that I know you have and the people you meet, what are you most excited about or what do you think some of the big things we should be on the lookout for from just the AI world over the coming months beyond the summit obviously and just throughout the rest of the year, are there things that you’re keeping your eye on?
– [Frederic] We have this summit coming up in just five weeks time, and even though we’re five years into AI for Good, in a way it is the first summit because if you go back to 2017, the things we’re discussing there were preparing for the future. It was this, what’s hype, what’s not hype?
What's the fear? What's the promise? What's AI good for, what's it not good for? How can we shape a responsible narrative and help move things along? And I'm glad we did that, because I don't think anyone would've imagined being where we are today, it's happened so quickly. I think people imagined it was sometime in the future, but there's just been this acceleration. Even in the AI for Good team, it's our job to stay up to speed on things.
But you could go at a pace of maybe one year at a time, and then it got a little faster, every six months, and now it's literally week by week. If you miss a week of developments, you're already, I don't wanna say outdated, but that's how fast things are moving. So what's gonna be interesting at the upcoming summit is that, on the one hand, there's more potential than ever for AI for Good.
You have companies like DeepMind coming, with breakthroughs on protein folding, which could help with drug discovery for very tough problems like Parkinson's or Alzheimer's, all types of diseases we just haven't made a lot of progress on, or even energy, when it comes to, for example, stabilizing fusion.
So these are things that could really affect the future of mankind in a significant way. And then of course, you have what's top of mind for everyone, which is generative AI, how fast it's moving, what kind of guardrails we need. Maybe to use an analogy, go back to the dot com boom, right?
And before the dot com boom, if you would've had five years of really productive discussions, maybe when the dot com boom happened, the internet would've been designed in a more mindful way, right? In terms of privacy, security, maybe even the business models, or things like online bullying. And that leads into the advent of social media.
Things took off, for better or for worse. Here we are. But I'm sure if we could have turned back the clock, we would've been asking some pretty difficult questions. And I'm just glad that we have spent about five years asking those difficult questions. Not that they're gonna solve everything, but now that we've actually reached this moment in time, there's a whole community of people, tens of thousands of people, thinking about ethics, privacy, safety, bias in datasets, how we manage all of this, governance frameworks. And I think this summit in July is really gonna be critical, because you're gonna have both sides of the coin. How do you imagine the future for AI? What kind of guardrails are needed? And at the same time, I don't want people to forget the good part, because if we're solving things like protein folding and making all these amazing scientific discoveries, I feel people have lost sight of that temporarily. So hopefully we come out of it with some kind of balance moving forward. But yeah, I think July is really gonna be critical.
– [Ryan] What do you all say to people who are trying to follow along with the developments in the AI space but have real concerns, hesitations, or are maybe even scared of how fast things are moving? How is that addressed in conversations you've been a part of, or how should people be thinking about that? Because obviously there are good things about moving fast, but there are obviously hesitations, concerns, and negative things about moving fast at times. So how is that thought about?
– [Neil] Ryan, it’s an interesting question, right? Because the pace of change has just gotten faster decade over decade, and it’s not just AI. And we’re just reaching a point now where the impact from these changes can be huge. And I think what we’ve learned, and what we talk a lot about in AI for Good, is that you can’t hit the pause button.
You can’t hit the stop button. You’d have to get every country, every company, every individual to agree to that, and it’s not realistic. It’s gotta be a mindset. Every time there’s a change, it’s not just the technology and some of these tools that come out that change; we have to adapt the way we learn, our processes, these types of things to take advantage of these opportunities and try to minimize the negative impacts.
It’s just that the pace of change is so fast that the mindset shift has to be just as fast. I hate to say it this way, but historically, as human beings, we’ve been very reactive. Something happens, we have time to try and figure it out, take corrective action, prevent it from happening again. That doesn’t work anymore.
We’ve reached this inflection point, and we’ve been talking about this as part of this AI for Good community: proactive thinking, the anticipation of the different scenarios, the different uses and misuses, has become critical. When you talk about AI ethics, you can’t do that without having this component to it.
That’s just a major cultural shift. I can’t remember if it was Peter Drucker or somebody else who said that culture always eats strategy, right? And you see a lot of people focus on having the perfect strategy around AI. That’s not gonna get you there. You’ve got to start developing the culture, developing the mindset with people.
So not just ethical use, but the ability to be proactive thinkers about what could happen. And until we make that shift, we’re gonna have these struggles.
– [Frederic] Yeah, I think, like Neil says, behind every technology, every development you could call it, right, there’s always opportunity and challenge, but what really needs to be solved is the people issue, right? The culture. And I like to think of it as, if AI is forcing us to think about what it means to be human more and more, that’s probably not a bad thing.
And you see that over and over again, where even if you’re trying AI yourself, or let’s say you’re developing an AI product or a solution, you’re faced with all these questions along the way, which are basically, oh, what would I do as a human now? And you have to have the answer to move forward on that.
And not everyone has the same answer. But it does force you to almost look in a mirror and reflect on what it means to be human. So the exercise in itself might have value. Maybe we haven’t been reflecting on that as deeply, and now we’re forced to, and that’s really interesting, because you see people thinking deeply about what it means to be human because they’re faced with these technological puzzles, presented by AI, that need to be solved.
– [Ryan] Last thing I wanna ask you before I let you go here is, we talked about the Summit a bunch in and out of some of these questions and topics we’ve been discussing, but just to round things out, what can people expect from this upcoming summit? How can they be involved, how can they follow along, what’s the best way to do that? And just give us some things to look out for.
– [Frederic] I think what they can expect is really both sides of the coin, right? You have what’s really the hot topic right now, generative AI, and how to manage that in the future. And we have some of the leading minds on that coming. For example, Professor Stuart Russell, Yuval Harari, Ray Kurzweil. You have ethicists, philosophers, really people who are in the nitty gritty of all those discussions. And then on the flip side of that, you have all these amazing solutions that are gonna be presented by DeepMind and AWS and Microsoft and startups. And those are really the AI for Good positive use cases. Something that people might not expect that’s happening at the Summit is the robots.
So during COVID, we launched a Robotics for Good program, and we were quite amazed by the uptake and interest in robots. Robots that can positively impact the SDGs. So robots for disaster management, for agriculture, for healthcare, for disabilities, for companionship. No shortage of use cases.
We have about 55 robots coming to the Summit, about nine humanoid robots. Just to give you context, even the biggest robotics conferences in the world might have one or two humanoid robots. We have nine of them. We’re gonna have the world’s first humanoid robot press conference. So don’t ask me how that’s gonna go, that’s an experiment.
But I think the Summit has always been there to demonstrate the potential. And if it fails, it fails; if it’s great, it’s great. But it sparks discussion and debate around why it was good or bad. And of course there’ll be a strong focus on artists as well. So in 2019, we brought along some amazing artists who use AI to push the limits of their performance and creativity.
And of course now, with generative AI, the debate around that is more relevant than ever. So of course there are issues with intellectual property and who owns art created by AI, but I also think having artists who are actually creating amazing art using those tools, and seeing them do it, can really help move that narrative and discussion along as well.
So the event is free. If you’re in Geneva, you can attend. Just sign up. Come for free. If you can’t come, you can follow online. So go to aiforgood.itu.int, and we’re just really hoping the online audience will really skyrocket this time because there’s no limit to participation.
– [Ryan] Very exciting event. I know we’re helping push it out to our audience. We think it’s a great event. We’re excited to see what comes from it and continue to find ways for us to work together just to promote the cause and what you all are working on.
Really appreciate you taking the time. Neil, you as well. Thanks for jumping on. I know you’re on the other side of the world right now, but appreciate you spending the time with us and allowing us to learn more about AI for Good and all the initiatives going on there.
Very excited to get this out to our audience and thank you again both for your time.
– [Frederic] Yeah. Thanks so much, Ryan. Really appreciate the opportunity. And Neil looking forward to meeting you in a couple weeks.
– [Neil] I am, and for all you people out there, we’re trying to get Fred, a very accomplished drummer, to do a set as part of the AI and art cultural exhibition. So needle him on social media.
– [Frederic] Yeah, that’s not gonna be streamed, so you have to come in person if you wanna see that and bring earplugs.
– [Ryan] I think we got enough phones probably in the vicinity. Maybe we’ll be able to get some footage. But yeah, thank you, thank you both again.
– [Frederic] Thank you. Bye-bye.