Can AI have emotional intelligence? Martin Cordsmeier and Max Weidemann of millionways join Ryan Chacon on the IoT For All Podcast to discuss generative AI and emotionally intelligent AI. They cover generative AI versus scientific AI, AI ethics, slowing down AI development, AI for good, AI bias, and how AI can actually help people.

About Martin

Martin Cordsmeier is the co-founder and CEO of millionways, which began as a non-profit organization. As millionways grew, it pivoted to for-profit, and Martin moved to the US with his co-founder. Martin published a book about his belief that mental health starts with understanding yourself and your unconscious traits and patterns.

Interested in connecting with Martin? Reach out on LinkedIn!

About Max

Max Weidemann is the co-founder and CTO of millionways. After graduating from Goethe University in Frankfurt, Germany with a Master’s degree in Mathematics, he became a self-taught full-stack developer primarily interested in Applied Machine Learning and AI. After co-founding the start-up millionways with Martin in Germany, he moved to the US to further develop, scale, and market their technology.

Interested in connecting with Max? Reach out on LinkedIn!

About millionways

millionways brings emotional intelligence to AI. millionways’ technology can be used to give any platform the ability to learn its users’ unique personality traits, hopes, fears, and ambitions and ultimately to create a deeply personalized user experience, with outputs that are explainable and more nuanced and empathetic than those of the typical AI black box.

millionways’ emotionally intelligent AI provides next-generation personality insights and has its roots in science and psychology. It encompasses more than five million proprietary data samples and 1,000+ hours of training from a team of twenty-five psychologists.

Based on PSI Theory, digitized for the first time by the millionways team, millionways’ AI replicates real human empathy rather than mimicking the empathy now found on some AI platforms. millionways’ emotional AI is continuously learning and evolving through the people who use it.

Key Questions and Topics from this Episode:

(00:50) Introduction to Martin, Max, and millionways

(03:29) Generative AI versus scientific AI

(05:50) Examples of scientific AI

(07:11) AI ethics

(09:33) Challenges of AI ethics

(10:55) How can ethical challenges in AI be solved?

(12:57) Should AI development slow down?

(16:24) What is AI for good?

(17:57) AI bias

(19:58) How can AI actually help people?

(22:29) Learn more and follow up


Transcript:

– [Ryan] Hello everyone and welcome to another episode of the IoT For All Podcast. I’m Ryan Chacon, and on today’s episode we have Martin, the CEO, and Max, the CTO of millionways. They are a company focused on bringing emotional intelligence to AI. Fascinating conversation here that I think you’ll get a lot of value out of. We’re going to talk about generative AI versus scientific AI, AI ethics, biggest issues and challenges, and what can be done when it comes to ethics in AI, AI for good and understanding yourself as a person, and how AI can actually help people. Give this video a thumbs up, hit that bell icon, and subscribe to the channel if you have not done so already.

If you are listening to this on a podcast directory, please subscribe, so you get the latest episodes as soon as they are out. Other than that, let’s get on to the episode.

Welcome Martin and Max to the IoT For All Podcast. Thanks for being here this week.

– [Max] Thanks for having us.

– [Martin] Thanks for having us. Yeah, great to be here.

– [Ryan] Yeah, excited for this conversation. Let’s kick this off by having you give a quick introduction about yourself and the company or your company and kind of what you do there. Maybe Martin, we’ll start with you.

– [Martin] I’m the co-founder of millionways. We are an AI company based in New York City. We just moved here about one year ago. And yeah, I’m extremely excited because of the general AI hype right now, which, to be honest, we didn’t expect last year. So it’s a very exciting time. It’s also awesome to be here.

Yeah, this is me. I basically did that all my life. I’m very focused on that.

– [Ryan] Sounds good, Max.

– [Max] Uh, yeah, thanks for inviting us. My name’s Max. I’m the co-founder, CTO, and also Martin’s half-brother. And I joined his idea after he finished research on trying to understand how people work, how people behave. We had very interesting data at our fingertips, and we transformed this into AI.

I’ve been in the space for three years now. I’m also a full stack developer with a mathematical background. So we complement each other very well, and that’s also what we want to bring to the world with AI.

– [Ryan] Nice. And tell me a little bit about what millionways does, like what you all do, what the focus is, what kind of role you play in the space.

– [Martin] My high-level answer is always: helping understand people. This is actually what we want to do, because, fascinatingly, there is still no scalable way to understand people. There really isn’t. And we have that. And this is what’s very exciting for me. So I was working on understanding people all my life actually, this is what I really want to do.

Yeah, you can give the more technical answer.

– [Max] Yeah. Technically, what we ended up with after this journey, not really aiming for this product, it naturally came to us, is an AI that is now capable of understanding the personality traits and what really motivates and drives people and their behavior, just from listening to their words.

So we can create a pretty accurate personality profile and also psychological backgrounds based on user-generated text. And we also have a whole theory behind it. We are very scientifically driven, unlike many of the others out there. And this also leads to matchmaking.

So we also have capabilities of finding good matches between mindsets and how to connect.
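To picture the matchmaking piece Max mentions, here is a minimal sketch, assuming each user’s text has already been reduced to a numeric vector of trait scores. The trait names, the numbers, and the cosine-similarity ranking below are illustrative assumptions, not millionways’ actual model or API:

```python
import numpy as np

# Hypothetical trait vectors, e.g. (openness, caution, drive, sociability).
# Names and numbers are invented for illustration only.
profiles = {
    "alice": np.array([0.8, 0.2, 0.6, 0.4]),
    "bob":   np.array([0.7, 0.3, 0.5, 0.5]),
    "carol": np.array([0.1, 0.9, 0.2, 0.8]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two trait vectors (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_matches(user: str) -> list[tuple[str, float]]:
    """Rank all other users by how closely their traits align with `user`."""
    me = profiles[user]
    scores = [(other, cosine_similarity(me, vec))
              for other, vec in profiles.items() if other != user]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

print(best_matches("alice"))  # [('bob', 0.98...), ('carol', 0.52...)]
```

A real system could just as easily rank by complementarity, for example distance on selected traits, instead of similarity; the point is that once personality is expressed as structured scores, matching between mindsets becomes a straightforward computation.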

– [Ryan] Fantastic. Awesome. It sounds like some very exciting stuff going on over there. So to kick off this conversation, what I wanted to do is talk high level about a few areas that we haven’t had a chance to cover but are super popular right now. And I think there are a lot of people looking for answers to some of these questions.

And the first thing I wanted to ask is just, when we’re thinking about a lot of the terms used to talk about AI right now, we hear about generative AI, we hear about scientific AI, we hear about lots of things like that. But when we’re talking about those two, first off, how would you describe each of them and then how would you compare them for people to be thinking about them independently of each other?

– [Max] Yeah. Great question. Generative AI has been seeing some hype recently, with ChatGPT, of course, and also Google’s Bard coming out. And basically NLP, Natural Language Processing, has been around for some time now. It’s been used to translate text. It’s been used to put it into perspective.

But now there’s this generative piece, which lets users experience for the first time how AI can take a prompt and generate something new from it. There are lots of applications, of course. We can do image generation, text generation, even video generation. So all of these things are very new now, and the most important piece about it is that it’s data driven and based on probabilistic models.

And the AI kind of chooses the best fitting completion, the best fitting video based on your prompt, the best fitting image, or the best fitting text. On the other hand, scientific AI is more focused on matching what the AI does with some scientific approach or theory. This gives the AI more explainability, because if it’s only data driven and probabilistic, you get to this black box character pretty fast.

The outcomes are not really explainable anymore. And with scientific AI, you get this important piece that regulates AI also in this sense.
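A toy illustration of the “best fitting completion” behavior Max describes: a language model assigns probabilities to candidate next tokens and either takes the single most likely one or samples from the distribution. The hand-written probabilities below are made up for illustration; real models compute them over enormous vocabularies:

```python
import random

# Made-up next-token distribution for one prompt (illustrative only).
next_token_probs = {
    "the cat sat on the": {"mat": 0.55, "sofa": 0.25, "roof": 0.15, "piano": 0.05},
}

def complete(prompt: str, greedy: bool = True) -> str:
    """Pick the next token: the most likely one (greedy) or a weighted draw."""
    dist = next_token_probs[prompt]
    if greedy:
        return max(dist, key=dist.get)
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

print(complete("the cat sat on the"))                # always 'mat'
print(complete("the cat sat on the", greedy=False))  # usually 'mat', sometimes 'sofa', ...
```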

– [Ryan] So ChatGPT is a good example of the generative AI stuff. If someone’s trying to look for a real world example of scientific AI, is there something you can point to, or maybe talk to us about a real world use case on the scientific AI side of things?

– [Max] Absolutely. We have been science-based from day one. There are also other AIs that try to incorporate science behind image processing, or reading facial expressions and emotions from text or speech. The applications are mainly in areas where you have to have this explainability, where it really is crucial for people to understand what the AI is actually doing and to have a theory behind it that backs the results up.

This could be, for example, in mental health. We are working in mental health. We have to have this validity and this scientific proof that what the AI does or says is actually accurate and in compliance with some science that backs it up. And also in the HR space, it could be interesting for evaluating profiles.

It’s very crucial to have this scientific background to always be able to explain the outcomes.
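One way to picture the explainability Max is pointing at: a theory-driven scorer can state its features and weights up front, so every output comes with a trace of the evidence behind it, whereas a black box just emits a number. The features and weights below are invented for illustration, not an actual psychological instrument or millionways’ method:

```python
# Interpretable linear scorer: features and weights are declared openly.
# (Feature names and weights are hypothetical, for illustration only.)
FEATURES = {
    "first_person_pronouns": 0.4,
    "future_tense_verbs":    0.7,
    "negations":            -0.5,
}

def score_with_explanation(counts: dict[str, int]) -> tuple[float, list[str]]:
    """Return a score plus a human-readable trace of each contribution."""
    total, trace = 0.0, []
    for feature, weight in FEATURES.items():
        n = counts.get(feature, 0)
        contribution = weight * n
        total += contribution
        trace.append(f"{feature}: {n} x {weight:+.1f} = {contribution:+.1f}")
    return total, trace

score, why = score_with_explanation(
    {"first_person_pronouns": 3, "future_tense_verbs": 2, "negations": 1})
print(score)         # 2.1 (modulo float rounding)
for line in why:     # each line shows exactly where the score came from
    print(line)
```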

– [Ryan] So I wanted to ask then, when people are talking especially about generative AI, there’s a lot of talk about the ethical side of it and what’s going to happen. There have been stories about trying to slow down the progression of a lot of this AI work that’s being done.

When you think of AI ethics, how do you think about it, I guess is the first question? And what are the most important areas that people should be considering when they’re learning about the ethical side of AI across the board?

– [Martin] Yeah, this is a great question. It’s also maybe interesting that we are from Europe, we moved from Germany to the US, and there is generally much more skepticism toward new technology in Europe. Which is interesting. We moved here because we have a new technology and we didn’t like that. On the other hand, these ethical things are important to discuss. We should never forget that these generative AIs only create something.

So right now it’s text, and it looks impressive, but it’s just text. It’s nothing else. I guess many people who don’t really understand what ChatGPT does have a wrong impression. It’s not really thinking, it’s just creating the best next word. That’s all it does. And this is not really dangerous.

But if we combine this ability with other abilities from other AIs, for example, empathy or planning capabilities or whatever it is, then it’s getting dangerous, and then it comes to ethics. In our case, for example, we already talked about this a lot in our research, because it always starts with biases in your dataset. We had a very diverse dataset, like very diverse people, but you still have a certain selection, for example, only people who are willing to talk to an AI.

Others are obviously not in our database. So there is always some kind of bias, and this already starts to be an ethical issue. I guess the problem is known. We all know what the issue is, and researching how to solve it is actually an old science.

It’s not a short answer. This is very hard actually, and we have to do it.

– [Ryan] Absolutely. It’s such an interesting conversation when you get into the ethical debate about any of these technologies. And you could take this any which way, argue it from any angle. What do you see as the biggest challenges the industry faces on the ethical side of AI?

Especially from a public perception standpoint or from an adoption standpoint, for companies looking to bring these AI technologies in and having concerns about the ethical pieces of it. What do you see as the biggest challenges facing the space on the ethics side?

– [Max] Yeah, the problems have been around for a while. There was this Microsoft chatbot, Tay, a couple of years ago, which became racist pretty quickly, purely based on the data. And I think it’s one of these situations where humans are not catching up fast enough with the ethics side of things, or also the legal implications.

Technology is booming right now, it’s exploding, and people have to be more aware of this, and we have to become experts in that field, especially in AI ethics. It’s still not addressed widely. So the challenge I see for most of these companies is- there was recently an article about somebody committing suicide after chatting with a ChatGPT-driven chatbot.

And these things really raise awareness of this issue. This is the most difficult challenge for these generative AIs.

– [Ryan] What do you think can be done about these challenges? Because it seems like the rate at which AI technology is evolving, becoming more popular, getting more mainstream is very fast. And playing catch up on the legal side, on the regulation side, is probably quite difficult.

So given how long that process usually takes, what do you feel, if anything, can really be done to start combating these challenges that come up on the ethical side of AI?

– [Martin] I think it’s similar to other societal problems. When you talk about the current structures that we have in our society, the economic system, the universities, whatever, you always have some kind of guidelines, but who sets these guidelines, and who says what’s right or wrong? It’s the same discussion as in politics. So it’s very complex, and I think this is what Elon and the other people behind that scientist letter also meant. We have to define that first, together somehow. So it has to be a mutual agreement. But on the other hand, how should you do that? Is it the CEOs of the big companies? This is very hard.

So if you ask how we could solve that, it is a multidisciplinary problem. We don’t have the answer, obviously; it would be very arrogant to claim we did. Others also don’t have it. This is why the signers said, let’s wait for a moment, but on the other hand, they will not wait.

It’s obviously not possible.

– [Ryan] Yeah, the way somebody framed it to me was, once you take the genie out of the bottle, it’s hard to put it back in. And how are you going to incentivize these companies, private companies trying to build value for their owners and employees, and tell them that the thing they’re building needs to slow down or stop?

And I’m curious- we actually put a poll out around when the letter came out, just to see what our audience thought, and it was very interesting because it was pretty split. I think it skewed a little bit more towards they don’t think any slowdown should occur, or they don’t think it’s realistic to expect a slowdown to occur.

How do you all feel about it? I’m not saying you have to state a full opinion, but do you think it’s a plausible ask to slow this down, and if so, should that be done?

– [Martin] Yeah, I actually read a book by a German author about the meta-intelligence that could emerge if some AIs combined their capabilities and then lift off, I think this is actually the word in the science, they lift off and suddenly become more intelligent than us.

This could actually happen very suddenly once they connect to each other. But not now. It’s not something that will happen tomorrow, but it could happen. And so it actually makes sense not just to think about that but to actually come up with a real solution. But the solution is not more regulation; in the European Union, we always have regulations for everything.

This is not the solution because there’s always a way around these regulations. This is not how it works.

– [Max] Yeah. And also, it is important to focus more on having backup and validation, especially in science. Because right now, the thing is, all of these models out there are performing amazingly. It even looks like magic to most people right now, especially ChatGPT. But all of those things still lack the validation piece.

So it’s not really validated. It’s based on a huge dataset, basically the whole internet. It’s based on what humans have written, but ChatGPT still cannot understand the underlying emotional patterns and psychological traits of humans and personalities. And it is not a personality. We have to be aware of that.

Even if you give the chatbot a face, even if you give it a nice name, it is still not a person. ChatGPT will also not start thinking like a person out of nowhere. And this is the piece that we still have to teach AI: how to take a more scientific approach and not rely on all this black boxing.

– [Martin] Just one last thing, I don’t want to take too long, but I’ll use our theory as an example because I know how our AI works; other AIs also have something like that. The theory behind it doesn’t judge people, for example.

So the fundamental training of our AI is only descriptive. We describe how a personality works, but we don’t say if it’s good or bad. I think this is a good direction for other AIs, because if you don’t judge, it’s also hard to discriminate. It’s a hard way to get there, but these theories have existed in science for decades, sometimes even centuries. And we can use them. We shouldn’t rely only on technology, I guess. I think this is the most important thing. Many of these tech companies think that technology alone solves everything, and I don’t think that’s true. You should combine it with science.

– [Ryan] No, super interesting points. I appreciate you guys really diving in there. It’s a conversation we’ve been having internally about a lot of this stuff, and I’m glad to get some expert opinions from people who are working on this every day. I wanted to transition now that we’ve talked about the ethical side of it. I know a lot of what you’re focused on is the people side; AI for good is a topic we wanted to talk about, and understanding yourself using AI. When you tell people AI for good, what does that mean exactly, to somebody who maybe can’t connect the dots as well?

– [Martin] Yeah, I guess I can get back to what I said at the beginning. Even 20 years ago, when I was like 19, my goal was to create something scalable that understands people, because I think one of the fundamental issues in today’s society is that people don’t understand their true selves.

And social media really doesn’t help with that. It’s exactly the opposite. So how can you understand your true emotions, your true goals, your true patterns, especially patterns that you are not aware of, and then the patterns of the people around you? How? It’s impossible. If you have a very good psychologist or a coach or a therapist, maybe, but it’s very unlikely that solves the whole thing. So AI for good could actually help with that. It is a scalable way to understand people, if it’s based on science and not only big data, because otherwise we get to the bias problem again. So this is my answer. I guess AI can actually help us understand ourselves better, and if that happens, people change their lives, and if millions of people change their lives, then we change the world in a good way.

– [Ryan] It’s super interesting when you talk about the biases. There’s a lot of stuff out there now about people using ChatGPT, using Snapchat’s new AI functionality, and different kinds of tools, really trying to show the bias that some of these tools may or may not have. And removing that seems quite a challenge. When I think about the biases, it feels like the same experience you get when you digest news from different sources: there are definitely biases in play here and there. Weeding that out, so that a tool that is going to be interacted with by millions and millions of people is, we hope, used for good, without bias coming into play, so it’s more factual, seems like a very big task. And even when you bring that down to the scale of how companies or individuals are using other tools, smaller things they’re building, removing that bias seems, from what you’re saying, quite a challenge, but a critical one that needs to be overcome.

– [Max] And we also have to remind ourselves that at the end of the day, technology, also modern technology like AI, is basically just another tool in our toolkit, and it should not replace anything. It should not replace a therapist. It should not replace certain professions. That’s not where we should go.

We should view it as an add-on and use it for good.

– [Martin] Not even replace current social media, because Instagram may be fun. It’s all good. But we only have these superficial things, and it’s not judgmental, it just is superficial, it’s photos. And dating is also very superficial. We can’t change everything immediately, obviously, because we work like that, but we can add a deeper emotional layer to some of these platforms.

And that may help people to understand themselves better, yeah. This is AI for good.

– [Ryan] Yeah. So let me ask you, this is the last question I wanted to ask before I let you go: how can AI actually help people? You’ve talked about it at a high level, but if you were to dive a little deeper and explain to somebody, look, I know there are a lot of thoughts about AI, a lot of differing opinions, a lot of things that, yes, people are concerned about, but there’s also a good side to all this, and AI can be used to help people.

If you were asked that question, what would your initial answer be?

– [Max] Yeah, I can start from the technical point of view. We’ve seen ChatGPT helping humans with various things, but the way I see it, we should see it as a tool that helps you bring out what you had in you all along. It should be like a confirmation anchor. It should not do anything for you and replace who you are.

It should just be used in the most efficient way that you can. For example, I also use it on a daily basis with GitHub Copilot, but I’m still the one, I’m still the architect of the software. It’s not the AI that is doing my job. It’s just a helpful tool that helps me speed things up.

That’s how I see it, and I think that’s how most people should see it.

– [Martin] Yeah, and then we have concrete use cases like student mental health, for example. Very close to my heart. This is what I really want to do, because we all know that in these years our whole life is basically shaped. And if you have good experiences, you maybe have a better life than if you hadn’t had these good experiences.

So one example is we are working with one university, right now it’s one, it will be more, as a pilot to prove how this works. Understanding, again, your real goals, what your real skill set and talents are, and then meeting the right complementary people on the same campus, people you wouldn’t meet otherwise because you don’t know that they are complementary. And then you can build a team, and then you can start a company. This is a very concrete example. Or understanding the mental health of veterans and bringing them back to the job. That’s also a concrete project that we are working on: bringing veterans back to the job.

Very sensitive topic. It’s all about mental health. So it can help in so many use cases, very practically.

– [Ryan] Well, guys, this has been a wonderful conversation. I truly appreciate your time. We’re getting into AI content pretty heavily now and this is one of the first conversations that I’ve had with founders of an AI company doing some great things. So I truly appreciate your time. For our audience out there who wants to learn more about what you’re doing or follow up on this conversation in any capacity, what’s the best way they can do that?

– [Max] Just go to our website, millionways.me, we also have a chatbot live on millionways.ai. So these are our two domains that you can try our software from and-

– [Martin] And LinkedIn of course.

– [Max] And LinkedIn, yeah.

– [Ryan] Perfect, perfect. Well, thank you both so much for your time. Really appreciate it.

Hosted By
IoT For All

IoT For All is creating resources to enable companies of all sizes to leverage IoT. From technical deep-dives, to IoT ecosystem overviews, to evergreen resources, IoT For All is the best place to keep up with what's going on in IoT.