Artificial Intelligence and Machine Learning: The Brain of a Smart City

We must carefully consider the ways in which data can personalize smart city experiences, and the bias and privacy concerns of leveraging AI and ML in a smart city context.

Susan Morrow
Illustration: a cartoon of a brain with a city in the background. © IoT For All

This is another post in my series on smart cities and privacy, continuing from the previous post, Smart City Data: A Convoluted Web.

The data collected by sensors and IoT devices in a smart city have to be put to use somehow. Information provides the insight needed to spot patterns and trends, and if you have enough of it (aka big data) you can build a pretty accurate picture of whatever it is you’re exploring. For example, a smart grid with enough information at hand can use data to determine peaks and troughs in electricity demand and then adjust output. This optimizes energy use, helping the drive toward sustainability.
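
As a rough illustration of the kind of optimization a smart grid might perform, here is a minimal sketch in Python that flags peaks and troughs in demand readings so output could, in principle, be adjusted. The readings, window and threshold are invented for illustration; this is not how any particular grid operator actually does it.

```python
# Minimal sketch: flag peaks and troughs in smart-meter demand readings.
# The readings (MW) and the z-score threshold are illustrative assumptions.

from statistics import mean, stdev

# Hypothetical half-hourly demand readings aggregated from smart meters
demand = [310, 305, 298, 290, 350, 420, 480, 460, 400, 360, 330, 300]

def classify_demand(readings, z=1.0):
    """Label each reading as 'peak', 'trough' or 'normal' relative to the mean."""
    mu, sigma = mean(readings), stdev(readings)
    labels = []
    for r in readings:
        if r > mu + z * sigma:
            labels.append("peak")      # candidate for increasing output
        elif r < mu - z * sigma:
            labels.append("trough")    # candidate for reducing output
        else:
            labels.append("normal")
    return labels

for reading, label in zip(demand, classify_demand(demand)):
    print(f"{reading} MW -> {label}")
```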

Decision Making with AI & ML

Optimization decisions can be enhanced using technology such as Machine Learning (ML), a subset of Artificial Intelligence (AI). ML takes the data generated by health apps, smart meters, internet-enabled cars and so on, and uses these data to spot patterns and learn how to optimize the given service. For example, NVIDIA has developed smart video technology that handles big data analytics and applies machine learning to video streams. The company has partnered with 50 AI city partners to use the technology to improve areas such as smart transport, and there are expected to be 1 billion of these intelligent cameras by 2020. That’s an awful lot of data generated, analyzed and acted upon. Such a system replaces human interpretation with machine learning algorithms, with an expected improvement in accuracy and speed. This city brain will process a lot of our personal data, including visual data about our movements.
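
To make the idea of machines interpreting camera feeds concrete, here is a heavily simplified, hypothetical sketch of the loop such a system might run. The frame source, the detector stub and the congestion threshold are all placeholders of my own, not NVIDIA’s actual pipeline; a real deployment would apply a trained object-detection model to each decoded frame.

```python
# Hypothetical sketch of a video-analytics loop for a smart-city camera.
# Everything here is a stand-in: real code would decode actual frames and
# run a trained ML detector instead of the stub below.

import random

def read_frame(camera_id):
    """Placeholder for grabbing one frame from a camera stream."""
    return {"camera": camera_id, "pixels": None}  # real code would return image data

def detect_vehicles(frame):
    """Stub detector: a real system would apply an ML model to the frame."""
    return random.randint(0, 40)  # pretend count of vehicles detected

CONGESTION_THRESHOLD = 30  # assumed tuning parameter

for tick in range(5):
    frame = read_frame(camera_id="junction-12")
    count = detect_vehicles(frame)
    if count > CONGESTION_THRESHOLD:
        print(f"tick {tick}: {count} vehicles -> flag congestion, adjust signal timing")
    else:
        print(f"tick {tick}: {count} vehicles -> normal flow")
```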

As mentioned above, Machine Learning requires data to spot patterns and trends. The analysis of big data gives city services the information they need to be highly responsive to the needs of citizens. These data also feed back into the services themselves, building more optimized responses to how services are used, which helps to enhance the experience and improve sustainability. One area being explored as a good fit for AI and Machine Learning is the personalization of services. This requires that personal data be collected and aggregated before being used as a profiling tool.

How AI and ML Can Personalize Smart City Services

ML tools that personalize experiences are already in use in marketing, for example, where they tailor online sites, displaying products that users are expected to like based on their predicted profile. In a smart city, the same type of algorithm can be put to other uses. For example, a study by three UK universities looked at applying various ML algorithms to cycling and weather data as a means of creating personalized services within a smart city. This was based on the collection, aggregation and analysis of big data. The study concluded:

“[a] combination of ML, IoT and Big Data, offers great potential to developers of smart city technologies and services.”

Importantly, this study was done without the need for data that could directly identify an individual. That isn’t to say that, with effort, correlated data, perhaps GPS traces from mobile devices, couldn’t be used to re-identify individuals. Nor is it too big a leap to imagine that even more tailored personalization, or more accurate results, could be obtained by using directly identifiable information.
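
As a toy illustration of this kind of personalization (not the UK study’s actual method), the sketch below trains a simple scikit-learn classifier on made-up weather and trip features to predict whether a rider is likely to cycle on a given day, the sort of signal a city service could use to tailor route or bike-share suggestions.

```python
# Toy sketch of weather-based personalization, loosely inspired by the idea
# of combining cycling and weather data. All data below are invented and the
# model choice (logistic regression) is an assumption, not the cited study's.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per day: [temperature_C, rainfall_mm, wind_kph]
X = np.array([
    [18, 0.0,  8],
    [21, 0.2, 10],
    [ 9, 5.0, 25],
    [12, 8.0, 30],
    [16, 1.0, 12],
    [ 7, 6.5, 28],
])
# Label: 1 = this (anonymous) rider cycled that day, 0 = did not
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)

tomorrow = np.array([[15, 0.5, 14]])  # hypothetical forecast
prob = model.predict_proba(tomorrow)[0, 1]
print(f"Estimated probability of cycling tomorrow: {prob:.2f}")
```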

One of the other concerns about machine learning and AI is the possibility of default bias built into the very algorithms that are supposed to improve accuracy. If the training set itself is skewed towards a particular expected outcome, then the result will be skewed too; in fact, the resulting bias may well be amplified. There have been several studies in this area, including “Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints”, which looked at how gender bias in training sets becomes amplified when those sets are used to train AI systems.
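
A tiny, contrived example of how a skewed training set turns into an amplified bias (not the paper’s actual corpus-level method): if 70 percent of the “shopping” examples in training data feature women, a naive model that simply predicts the majority gender for that activity will say “woman” 100 percent of the time, pushing the measured bias from 0.7 to 1.0.

```python
# Contrived illustration of bias amplification: a skewed training set leads a
# naive majority-vote model to an even more skewed set of predictions.
# The 70/30 split and the majority-label "model" are assumptions for
# illustration; the cited paper works with real image corpora.

from collections import Counter

# Training examples: (activity, gender) with a 70/30 skew for "shopping"
train = [("shopping", "woman")] * 70 + [("shopping", "man")] * 30

def train_majority_model(examples):
    """Learn the majority gender per activity - a deliberately naive model."""
    counts = Counter(examples)
    return {"shopping": counts[("shopping", "woman")] >= counts[("shopping", "man")]}

model = train_majority_model(train)

# At test time the model predicts the majority gender for every shopping image
predictions = ["woman" if model["shopping"] else "man" for _ in range(100)]

train_bias = sum(1 for _, g in train if g == "woman") / len(train)
pred_bias = predictions.count("woman") / len(predictions)
print(f"bias in training data: {train_bias:.2f}")   # 0.70
print(f"bias in predictions:   {pred_bias:.2f}")    # 1.00 - amplified
```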

Bias and Privacy Concerns Around AI and ML

Bias in AI might also amplify privacy concerns. An example of where this kind of bias crept in was Microsoft’s ‘Tay’ chatbot, which was trained on real-world tweets. The problem arose when people started tweeting racist and misogynistic comments at Tay, which then played those sentiments back. Privacy issues could arise from biased training sets in much the same way. Privacy is about more than the exposure of personal data; it is about the exposure of our very being: our beliefs, our views, our political leanings and so on.

Privacy in the smart city is about so much more than revealing your name…

In my next post on smart city privacy, I’ll look at the limits of data and privacy in the city and begin the journey to getting smart without giving up too much privacy.

Author
Susan Morrow
Having worked in cybersecurity, identity, and data privacy for around 25 years, Susan has seen technology come and go; but one thing is constant - human behaviour. She works to bring technology and humans together.