Edge devices are the most common IoT devices deployed each year, yet each one adds a potential access point for a hacker. These edge devices are typically sensors that collect data, such as temperature or location, and connect to local networks to send that data to a cloud service or internal server. Given the sheer number of edge devices and the network access they provide, the importance of their security cannot be overstated.
IoT device security begins with the microcontroller. Several safeguards must be integrated into the microcontroller, such as secure immutable boot and tamper resistance. To communicate securely with a connected device, another key requirement is device authentication, i.e., proof of identity. This identity must be unique. Once identity has been established and proven, a secure communications link can be constructed. The link is encrypted using cryptographic keys. Both the device’s identity and its cryptographic keys are derived from random numbers that we call ‘seeds’. Together, a secure microcontroller, a unique identity, and cryptographic keys form a Root-of-Trust (RoT). The RoT is the foundation of security within an IoT network.
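To make the seed-to-key step concrete, here is a minimal Python sketch of deriving a device identity and a link-encryption key from one random seed using HKDF (RFC 5869) built from the standard library. The function name `hkdf_sha256` and the label strings are illustrative choices, not part of any product described in this article; a real microcontroller would do this in a hardware key generation accelerator.

```python
import hashlib
import hmac
import os

def hkdf_sha256(seed: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) over SHA-256: extract, then expand."""
    # Extract: concentrate the seed's entropy into a pseudorandom key.
    prk = hmac.new(b"\x00" * 32, seed, hashlib.sha256).digest()
    # Expand: stretch the PRK into output keyed by the context label.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# A random seed standing in for the PUF output on a real chip.
seed = os.urandom(32)

# Distinct context labels yield distinct outputs for distinct purposes.
device_id = hkdf_sha256(seed, b"device-identity")
link_key = hkdf_sha256(seed, b"link-encryption-key")
```

The same seed deterministically reproduces the same identity and key, which is why the seed itself, rather than the derived keys, is the secret that must be protected.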
How the RoT is Embedded in a Semiconductor Chip
There are two fundamental methodologies for creating the RoT within a microcontroller. The most common uses an external computer called a Hardware Security Module (HSM). This is a computer dedicated to generating random numbers and cryptographic keys, and to the management of those keys. The keys are generated externally to the microcontroller and programmed into it – a process known as ‘key injection’ – using a programming interface. Key injection typically introduces security issues because the programming interface often cannot be encrypted.
The other methodology is for the chip itself to generate unique values and convert these into cryptographic keys. Typically, the microcontroller uses random physical variations that arise during the manufacturing process to generate random seeds. Circuits that exploit these process variations are called Physical Unclonable Functions, or PUFs. PUFs generate random seeds, which can then be converted into identities and cryptographic keys by a key generation accelerator, a peripheral circuit function that’s already integrated into the microcontroller.
More About Key Injection
Key injection can be relatively expensive due to the need for specialist programming equipment. Usually, that’s provided by a specialist programming house that uses programmers closely linked with HSMs. Involving a third party introduces a security risk, and the methodology goes against the latest recommendation of security experts: take a zero-trust approach to security by avoiding third-party involvement.
Injected keys need to be stored in memory inside the device. They’re generally stored in non-volatile memory and protected by hardware security technology inside the microcontroller. For example, Arm’s TrustZone technology partitions the execution environment into secure and non-secure memory, peripherals, and functions. Even with such measures in place, keys remain vulnerable to extraction by malicious actors because they’re sitting in standard flash memory on the device. A further vulnerability is that keys are often transferred to devices over an unencrypted link, which can expose them to attack.
How PUFs Eliminate Some of the Security Risks
Let’s look at PUFs in more detail. The SRAM PUF is a good example of first-generation PUF technology. SRAM is embedded in most microcontrollers and microprocessors. When you power up these chips, the SRAM cells each take on a zero or one state. Which state they settle at depends on tiny physical variations of the silicon wafer. The variations are random and are used to create the seeds that can be used to generate cryptographic keys. The SRAM in a device becomes the fingerprint of that microcontroller and provides it with a unique identity. Because SRAM PUFs use memory technology that is already in a microcontroller, you just need some software to drive the PUF.
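The power-up behaviour described above can be illustrated with a short simulation. This is a sketch only: the `preferred` list stands in for each cell’s manufacturing-determined start-up state, and the function and variable names are invented for illustration, not taken from any real SRAM PUF product.

```python
import hashlib
import random

def power_up_sram(preferred, flip_prob=0.0, rng=None):
    """Simulate SRAM power-up: each cell usually settles into its
    manufacturing-determined preferred state; flip_prob models noise."""
    rng = rng or random.Random()
    return [b ^ (rng.random() < flip_prob) for b in preferred]

# Each chip's preferred states are fixed at manufacture but random
# per chip; a fixed RNG seed here just makes the example repeatable.
chip = random.Random(42)
preferred = [chip.random() < 0.5 for _ in range(4096)]

# An ideal, noise-free read-out hashed into a compact fingerprint.
bits = power_up_sram(preferred)
fingerprint = hashlib.sha256(bytes(bits)).hexdigest()
```

Because the preferred states are fixed by silicon variation, repeated noise-free read-outs reproduce the same fingerprint, giving the chip a stable identity without any stored secret.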
Flash memory can form another type of PUF, and again it’s already available in most microcontrollers. The flash cells are ‘programmed’ by over-stressing them to the point of partial breakdown in the silicon oxide insulation layer of the transistor gates within each memory cell. Because of the mismatch between the two transistors that comprise each flash cell, you get either a one or a zero. Inducing the gate oxide rupture requires a high voltage, so there is an initial programming phase, but once programming is complete you can read out random data from each flash cell. This does need some preparation, but again takes advantage of technology that is already in the microcontroller or microprocessor.
The Pros and Cons of These First-Generation PUF Technologies
Let’s first look at the SRAM variant. A major benefit is that you don’t have to inject keys into the microcontroller. The seeds to generate keys are created by the SRAM itself, which is already in the chip. The keys are not stored in memory but in the physical makeup of the SRAM cells. This makes the chip difficult to hack.
SRAM PUF technology is used by several semiconductor manufacturers, including Intel, Microsemi, NXP, and Xilinx. However, the technology has limitations, one of which is that typically only one seed is generated. If you want multiple cryptographic keys, you must generate them all from this common seed, which means they are mathematically correlated and inherently less secure than if that relationship did not exist.
A further limitation is that cells don’t always start up in the same preferred state. This means you need error correction to ensure that the seed created from the cells is stable and repeatable. The degree of repeatability depends on the specific memory manufacturer, and the entropy, or degree of randomness, of an SRAM-based PUF can sometimes be poor. Resistance to attack is also questionable: because the identity is held in SRAM cells, currents flow when it is read, making the cells susceptible to side-channel attack, where measuring current flow, or some other electrical phenomenon, may reveal the state of each cell. SRAM PUFs also suffer from a relatively long setup time during power-up, leaving the microcontroller, and the IoT device that it drives, susceptible to attack during this period.
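The simplest form of the error correction mentioned above is a repetition code with majority voting, sketched below in Python. This is an assumed, minimal scheme for illustration (real PUF fuzzy extractors use stronger codes and helper data); the names `enroll` and `reconstruct` and the parameter `REPEAT` are hypothetical.

```python
import random

REPEAT = 5  # each seed bit is backed by 5 physical cells (repetition code)

def enroll(seed_bits):
    """Enrollment: encode each seed bit into REPEAT redundant cells."""
    return [b for b in seed_bits for _ in range(REPEAT)]

def reconstruct(noisy_cells):
    """Reconstruction: majority vote over each group of REPEAT cells."""
    seed = []
    for i in range(0, len(noisy_cells), REPEAT):
        group = noisy_cells[i:i + REPEAT]
        seed.append(sum(group) >= (REPEAT + 1) // 2)
    return seed

rng = random.Random(7)
seed_bits = [rng.random() < 0.5 for _ in range(256)]
cells = enroll(seed_bits)

# Flip a few cells to model power-up noise (~5% bit error rate),
# then recover the seed by majority vote.
noisy = [c ^ (rng.random() < 0.05) for c in cells]
recovered = reconstruct(noisy)
```

A seed bit is lost only when a majority of its redundant cells flip at once, so modest redundancy pushes the failure rate far below the raw cell error rate.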
Let’s now consider flash memory PUFs. Once again, you don’t have to inject keys into the device, and seeds are created in the flash memory that’s already in the microcontroller. Once the seeds have been programmed into the memory, you can extract them with low latency using a simple read. The process doesn’t need error correction because after the flash cell has been programmed its state does not vary.
Of course, storing seeds in memory could leave them susceptible to attack. A further disadvantage of flash memory PUFs is the additional silicon area required for a charge pump, which generates the high voltages needed to rupture the oxide layer. Like SRAM PUFs, flash memory PUFs are not based on a self-contained, dedicated security circuit block, and because the flash memory is being repurposed, it’s not available for other functions. There is also a relatively long setup time, because you must program the memory to cause the oxide layer to rupture.
How Second-Generation PUFs Improve IoT Security
Second-generation PUFs are silicon IP blocks specifically designed for security and optimized for standard CMOS processes. They typically comprise a 64 by 64 cell array that’s baked into the CMOS process of each device, from which a fingerprint is extracted. The random nature of that fingerprint is based on the atomic positions and the imperfections of the nanostructures in the silicon oxide layer of the CMOS transistors in the array. Circuits within the IP measure the quantum tunneling current across the oxide layer of each transistor pair. The probabilistic, random nature of the tunneling current generates the ones and zeros of the fingerprint. The measured currents are of the order of femtoamps. Variations in these currents generate the random numbers used to produce unique, immutable, and unclonable chip identities, from which cryptographic keys can also be created on demand.
These second-generation PUFs provide the highest possible security. First, they exhibit high entropy, or randomness, which is guaranteed because you’re measuring a probabilistic quantum effect. They produce multiple uncorrelated keys within the 64 by 64 cell array, and keys are generated on demand, so they don’t need to be stored in memory, where they would be susceptible to leakage.
Independent testing of the technology shows that it can be secure against all known attack methods and can support EAL 4 security certification. One version is also PSA Certified as PSA Level 2 Ready. Of course, like their SRAM and flash memory counterparts, second-generation PUFs eliminate the need for key injection and the security risks involved in that process. The IP has a small silicon footprint to keep costs to a minimum, and its low error rates are easily compensated for by a small error correction algorithm that consumes minimal processor overhead. Proven test chips are available today, and major semiconductor companies will be announcing their adoption of this second-generation PUF as their Root-of-Trust technology in the coming months.
The author has created a video explaining more about the Root-of-Trust, which can be viewed here: