IoT and privacy have been described as incompatible—ideas crafted from different times with different goals. Connection vs. separation. Agility vs. protection.
As I mentioned in the first post in this series on IoT and human rights, there are plenty of reasons to believe that emerging technologies (IoT, AI, and automation more broadly) conflict with larger policy goals. There are inevitable risks in their adoption and unknowable challenges that come with these cutting-edge innovations. They will disrupt and expose. Things will break. Jobs will be eliminated. It’s not always clear how tech like IoT can be made to serve some “common good” in its immediate implementation, even though it’s designed to increase efficiency and broaden technological horizons.
I wrote before that human rights provide a helpful framework for measuring the predicted effects of IoT adoption. This is because human rights present a generalizable global consensus about the conditions for human flourishing—goals that guide legal systems, government policies, NGO work, etc. Rights endowed to people by virtue of their humanity take on utmost importance; their violation is a denial of opportunity for core human wellbeing.
Privacy is one such right. Article 12 of the 1948 Universal Declaration of Human Rights states the following:
“No one shall be subjected to arbitrary interference with his [or her] privacy, family, home or correspondence, nor to attacks upon his [or her] honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”
Since this document was written more than seventy years ago, technology has skyrocketed—literally. While Article 12 (above) clearly aims to protect against invasions of privacy like breaking into someone’s home and stealing their mail, we need to evolve the idea of a right to privacy in this digital age. The storing and sharing of personal information, for example, is a process worthy of serious consideration. It underpins other foundational rights such as freedom of speech and freedom of association.
Let’s explore how IoT interacts with this seemingly aspirational “right to privacy.”
The first laws governing the use of personal information came out of Europe in the 1970s when it became clear that new forms of communication needed new forms of protection. Since then, the UN has stated, “the rights held by people offline must also be protected online.” In 2013, the UN adopted a resolution to reaffirm and outline the right to privacy in a digital age, which calls upon governments to be transparent and proactive in how they handle two key privacy vulnerabilities: surveillance and misuse of personal data.
The UN has stated that surveillance “inhibits the free functioning of a vibrant civil society”—or rather, that the vulnerability of digital communications to surveillance does so. If IoT systems are vulnerable to (or subject to) surveillance, there are a host of human rights violations that can occur.
On one hand, surveillance can enable corrupt regimes (or corporations) to exercise power more efficiently and more effectively. The dangers of government surveillance are widely discussed. The dangers of non-government surveillance are just as alarming. Surveillance can empower individuals or governments to monitor and disarm political dissent. It can concentrate knowledge in the hands of the powerful few and allow commercial entities to influence consumer behavior. It can rub up against the limits of police power, particularly in urban areas. It can allow for restrictions on freedom of movement within state borders, a right guaranteed in Article 13 of the UDHR. Spyware, as in extreme cases like the Syrian Civil War, can be a very powerful weapon on any front.
You could write a book about all the ways IoT (and other information and communication technologies) can be abused by political and commercial actors. What’s more important is how it will most likely be adopted and used by these entities. Citizen surveillance and security monitoring are already in place in many areas of our world. IoT will only make these capabilities more advanced and more ubiquitous.
Exposing personal data can be catastrophic. However, even if it isn’t catastrophic, it’s still a violation of rights. Personal data can be gathered digitally, biometrically, genetically, and via video and other media. The kind of data gathered, and what is done with it, affects more than the right to be protected against “arbitrary interference with … [one’s] correspondence.” It bleeds into other foundational rights protected by the UDHR, such as…
As many of us are well aware, “personalized” technologies can also have subversive effects. The Partnership on AI writes the following: “While technologies that personalize information and that assist people with recommendations can provide people with valuable assistance, they [can] also inadvertently or deliberately manipulate people and influence opinions.”
Influencing consumer behavior is not a rights violation per se, but it can erode the systems that keep other rights protections in place. Even though human rights are mostly individual rights, they require collective systems (like privacy and data use standards) to be upheld in order to secure protections for all.
Encryption and other security measures found within IoT systems can help promote and protect the right to privacy in many cases. Encryption reorganizes information into an unreadable format, accessible only by an encryption key—a process explained in more detail here. Robust encryption can ensure that certain messages and thus certain personal data are safe from prying eyes. In many cases, this can help support the right to freedom of expression, association, and assembly—especially among vulnerable groups. Oppositional political groups, and those fleeing persecution or abuse, for example, can make critical use of these avenues for privacy and safety.
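To make the idea concrete, here is a minimal Python sketch of what encryption does at its core: the same data becomes unreadable gibberish without the key, and recoverable with it. This is a toy one-time pad, not one of the ciphers actually used in IoT deployments (those rely on standards like AES and TLS); the names and scheme here are purely illustrative.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the key. XOR is its own inverse,
    so the same function both encrypts and decrypts."""
    assert len(key) == len(data), "one-time pad key must match message length"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the usual place"
key = secrets.token_bytes(len(message))  # random key, used once, kept secret

ciphertext = xor_cipher(message, key)   # unreadable without the key
recovered = xor_cipher(ciphertext, key) # applying the key again restores it

print(recovered == message)  # True
```

Without the key, the ciphertext reveals nothing about the message; whoever controls the key controls access to the data—which is why key management, not just the cipher, determines whether privacy is actually protected.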
However, encryption can also conceal the planning of riots and other threats to public safety. Protecting people’s privacy doesn’t always mean protecting the security of all people. Crime is harder to fight when information is not as accessible to the police. It’s often hard to figure out what kind of personal data is covered within a “right to privacy,” and what data is in the public domain. However, as I hope I’ve demonstrated, it’s worth making that distinction.
Without a sphere of privacy, all information—however intrusive or misleading—is fair game for public and corporate analysis.
Public safety and national security are the counterweights to the myriad concerns about privacy infringements and IoT. With more data being collected and transformed within IoT systems, safety and security can (in theory) be monitored more effectively by those who hold the data. Privacy can then be protected by the processes used to secure data. For example, companies like Protenus use artificial intelligence to help health systems remain HIPAA compliant. The same IoT tools that can be used to surveil citizens (and thus violate their privacy) can be used to protect the privacy of a home or workplace. IoT can allow almost anything to communicate, a capability that can protect privacy rather than violate it—it all hinges on data ownership and verifiable standards of use.
IoT could elevate new voices, expose wrongdoings, and make our world safer and more transparent. At the same time, communication between all things doesn’t serve all people.
Privacy and security are often seen in opposition to one another, but here it is clear that with an increase in IoT deployments, both must be prioritized.
System privacy and security often relate directly to the privacy and security of people and/or assets. Data access and data control remain the key drivers of human rights protections and violations in this arena. Vulnerabilities within an IoT system can directly lead to insecurity and lack of privacy because new hands can access and/or control data.
Privacy may be protected for some groups and not others, which allows for further human rights violations to occur. Thus, privacy standards must be introduced to ensure that the right to privacy is protected equally across the board. Otherwise, it will be manipulated or unfairly traded off against other important claims like the right to life, freedom from torture, and the right to social protection for children.
The prioritization of rights is a topic for an entirely different post, but it’s clear that protecting the right to privacy requires stringent protocol transparency, system security, government cooperation, and—perhaps ironically—regulation of data protection. As the world becomes increasingly connected, the right to privacy must be continuously asserted and thoughtfully protected.
The next post in the series Human Rights and IoT will focus on IoT’s role in aggregating information on human rights abuses and responding to disasters proactively.