There’s one thing holding the Internet of Things (IoT) back from its full awesomeness, and that thing is security concerns.
We’ve all heard blood-curdling stories about virtual assistants turned rogue and calling fake support numbers. Who hasn’t lain awake at night worrying about some hoodie-wearing mysterious youth hacking into their smart coffee machine? Those fears are not unfounded, because making IoT appliances safe isn’t as easy as securing a PC.
Enter our good friends, the robots. Well, not literally “enter.” They don’t look like generic anthropomorphic robots from sci-fi. A shame, if you ask me.
AI and machine learning are often hailed as saviors of IoT security. Would you just look at the amount of data they can analyze in no time? Amazing, right?
It surely is amazing, until it’s used against you.
Let’s take a closer look at how AI can be used both for IoT security and against it, and then decide how friendly it’s going to be to us in the long run, on a scale from Wall-E to Skynet.
AI That Protects
The most common use of AI in IoT cybersecurity is data analysis. Industrious robots toil all day searching for anomalies in the network. However, this approach produces quite a lot of false positives: whatever the AI doesn’t recognize as normal, it assumes to be a breach attempt, and most of the time it isn’t one.
So, to help our artificial helpers help us (that’s called a symbiosis), we also train them to recognize known attack patterns. This way, they’re able to sift out the irregularities that are less likely to be hacking attempts.
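The two-stage approach described above can be sketched in a few lines. This is a toy illustration, not a real intrusion detection system: the baseline numbers, the 3-sigma threshold, and the attack signatures are all made up for the example.

```python
# Toy sketch of the two-stage approach: flag traffic that deviates from a
# learned baseline (anomaly detection), and check events against known attack
# patterns (signatures) to sort real threats from mere irregularities.
# All thresholds and signatures below are hypothetical.

from statistics import mean, stdev

# Baseline: requests per minute observed from a device during normal operation.
baseline = [42, 40, 45, 41, 43, 44, 39, 42]
mu, sigma = mean(baseline), stdev(baseline)

# Hypothetical signatures of known attack patterns.
KNOWN_ATTACK_SIGNATURES = {
    "port_scan": lambda ev: ev["distinct_ports"] > 100,
    "flood": lambda ev: ev["requests_per_min"] > 10 * mu,
}

def classify(event):
    """Return 'known_attack:<name>', 'anomaly', or 'normal' for an event."""
    # Stage 2: does the event match a known attack pattern?
    for name, matches in KNOWN_ATTACK_SIGNATURES.items():
        if matches(event):
            return f"known_attack:{name}"
    # Stage 1: is the event statistically anomalous (> 3 sigma from baseline)?
    if abs(event["requests_per_min"] - mu) > 3 * sigma:
        return "anomaly"  # unusual, but not a recognized attack
    return "normal"

print(classify({"requests_per_min": 43, "distinct_ports": 3}))   # normal
print(classify({"requests_per_min": 500, "distinct_ports": 4}))  # known_attack:flood
print(classify({"requests_per_min": 55, "distinct_ports": 2}))   # anomaly
```

Note the order: the signature check runs first, so a recognized attack is reported by name, while everything else merely anomalous gets a softer label and, presumably, a lower-priority alert.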
However, there’s a bit of a problem with this. To teach our machines what a legitimate data breach looks like, we need to show them one. By “one,” I mean rather more than one, because AI needs to be able to notice similarities and patterns.
The thing is, companies that have been breached are not terribly likely to share the exact details of how that happened, for various reasons. First, if they do, there’s a possibility the data may fall into the wrong hands and be used to find yet another vulnerability in their defenses.
The second reason is that sharing the exact details of a data breach almost inevitably exposes some of the personal data involved in it. Thus arises a conflict between big data and privacy.
While this limits the quality of the analysis AI can perform, doing without that analysis in IoT security would be quite impossible.
AI That Attacks
Not all AI is as benevolent as the one described above. Sometimes, it can be used with the opposite purpose in mind: to facilitate breaches and not to prevent them.
Just as machine learning can analyze strange occurrences in a network, it can equally intelligently recognize the defensive measures employed in an IoT environment, because, after all, those are patterns, too.
A defense that can be detected is much easier to bypass, and that is exactly what enemy AI can be trained to do.
Another thing comes to mind. Remember the big data and privacy conflict that keeps protective AI in the dark? Well, hackers don’t have this problem. They can feed their pet robot everything they get, be it a successful or failed attack, as they obviously don’t have any reservations about violating other people’s privacy. Every bit of delicious data makes their AI smarter.
Which One Is Going to Win?
It’s a trick question, really, because there won’t be any complete victory for either side. Today, the good AI may win. If it does, there are no guarantees it will win again tomorrow.
Is this reason to not use AI in IoT security at all? Heck no! It’s not like hackers are going to be courteous and stop using it as well.
AI has great potential for cybersecurity, but we shouldn’t expect miracles from it. Doing so would put us in a dangerous mindset of relying too much on AI. We must remember that although AI is undeniably useful, it is just a tool, and an imperfect one at that.
Written by Dean Chester