When it Comes to Edge, Don’t Overlook the Little Things


Software continues to eat the world, with the value of software-defined products and services growing over their entire lifespan. It’s a business imperative that organizations move from waterfall software development to an agile model built on continuous integration and delivery. The power and elasticity of the cloud have been a massive accelerator for innovation over the past ten years. However, the sheer volume of devices and data – an estimated 1 trillion connected devices by 2030 – is driving a necessary shift toward a distributed computing model.

Edge as a Multi-Cloud Enabler

Commonly cited reasons for the rise of edge computing include reduced latency and bandwidth consumption, increased autonomy, and improved security and privacy. Less talked about is what I believe is an equally important driver: end users investing in multi-cloud strategies to regain control of their data after over-indexing their investments toward the public cloud. After all, the cloud scalers’ model makes it easy and inexpensive to get your data in, but very expensive to keep it there and get it out. In short, their easy button sounds great until you get the bill.


Taking control of your data at the edge while continuing to increase your agility and ability to innovate with software requires extending the same cloud-native principles we’ve perfected in the cloud back out into the field. This means leveraging loosely coupled, microservice-based architectures, platform independence, and continuous integration and delivery in as many edge deployments as possible.

Nail, Meet Hammer

From the perspective of a technology provider, if the edge is a nail, whatever I currently sell is the hammer. Infrastructure OEMs view the edge as racks of servers in a regional or on-prem data center, perhaps down to a smaller PC-like “gateway” device in the field. Telcos see the edge as their central offices and the bases of cell towers. Industrial solutions providers envision the edge as environments such as the manufacturing floors, oil rigs, and supply chains they have served for many years.  

In reality, the edge is a continuum spanning highly-constrained devices in the field to regional data centers, comprising inherently different hammers and lots of nails. It’s about the convergence of IT chops with the unique constraints of the physical, highly distributed world that’s traditionally served by Operational Technology (OT). 

Organizations should think holistically when developing their edge strategy – viewing the edge as a collection of computing paradigms, prioritizing open technologies (including leveraging open source), and building for flexibility.

In the process, it’s critical to spend less time reinventing plumbing and more time creating meaningful differentiation. Ultimately, it’s about dynamically deploying applications anywhere along the edge-to-cloud continuum based on a balance of performance and cost, along with consideration for security and privacy needs. 

From Monolithic to Modular

In the early days, server applications were monolithic and difficult to develop and update. Over the years, we’ve seen technologies like virtual machines, Docker, and most recently Kubernetes make data center software architectures more modular, composable, and dynamic.

We’ve seen significant progress from the IT players over the past several years as they extend these tools and cloud-native development principles to more of the edge continuum, including increasingly smaller compute footprints. However, there’s a practical limit to how far into the edge these technologies can reach, because they require a minimum amount of available system memory and the ability to run Linux.

The Biggest, Smallest Edge

Meanwhile, the largest part of the total edge footprint by volume is small, resource-constrained devices powered by microcontrollers (MCUs), such as IoT sensors, cameras, controllers, and connected products. In fact, Arm reported in 2020 that its partners had sold 160 billion chips to date, with two-thirds of 2019 shipments being MCUs. Devices that leverage these chips represent what I call the “biggest, smallest edge.”

Despite this, the MCU world is still predominantly characterized by monolithic, embedded software builds that are difficult and time-consuming to develop, brittle to update, and resistant to innovation. Applications have historically been hard-coded to serve specific functions, typically relying on upstream resources such as the cloud for heavier processing.

But these devices are getting increasingly powerful, enabling them to do more local processing and to evolve in functionality over time. An example technology driver is TinyML, which enables more sophisticated on-device analytics for data filtering, object detection, voice recognition, and the like.
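To make this concrete, here’s a minimal sketch in plain C of the kind of on-device filtering such a device might run. The function names, smoothing factor, and threshold are all illustrative choices of mine, and a real TinyML deployment would run a trained model in place of the simple threshold test – but the resource profile is the point: a few bytes of state, no heap, no OS.

```c
#include <stdio.h>
#include <stdbool.h>

#define EMA_ALPHA 0.1f   /* smoothing factor, tuned per sensor */
#define THRESHOLD 5.0f   /* deviation that counts as an event  */

static float ema    = 0.0f;
static bool  primed = false;

/* Smooth readings with an exponential moving average and flag samples
 * that deviate enough from the trend to be worth reporting. */
bool filter_sample(float raw)
{
    if (!primed) {           /* seed the average with the first sample */
        ema    = raw;
        primed = true;
        return false;
    }
    float deviation = raw - ema;
    ema += EMA_ALPHA * deviation;
    return deviation > THRESHOLD || deviation < -THRESHOLD;
}

int main(void)
{
    /* Simulated readings: steady temperature, then a spike. */
    float samples[] = { 21.0f, 21.2f, 20.9f, 21.1f, 29.5f, 21.0f };
    for (int i = 0; i < 6; i++) {
        if (filter_sample(samples[i]))
            printf("sample %d (%.1f) flagged for upstream processing\n",
                   i, samples[i]);
    }
    return 0;
}
```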

ABI Research projects that TinyML device shipments will grow to 2.5 billion in 2030, up from 15 million in 2020.  The ML models on these devices will need to be continuously updated as they evolve, as will code serving other functions such as connectivity, security, and privacy. 
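As a rough sketch of what continuous model updates could look like on a constrained device – assuming a hypothetical double-slot memory layout and a stubbed integrity check, not any particular vendor’s update mechanism:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical over-the-air model update: the device keeps two model
 * slots and switches the active pointer only after a new blob passes
 * an integrity check. Slot sizes and names are illustrative. */

#define MODEL_SLOT_SIZE (32 * 1024)

static uint8_t model_slots[2][MODEL_SLOT_SIZE];
static const uint8_t *active_model = model_slots[0];
static int standby_slot = 1;

/* Stubbed integrity check; a real device would verify a signature or
 * CRC over the blob before trusting it. */
static int model_blob_verify(const uint8_t *blob, uint32_t len)
{
    (void)blob;
    return (len > 0) ? 0 : -1;
}

/* Returns 0 on success; on any failure the old model keeps running. */
int model_update(const uint8_t *blob, uint32_t len)
{
    if (len > MODEL_SLOT_SIZE || model_blob_verify(blob, len) != 0)
        return -1;
    memcpy(model_slots[standby_slot], blob, len);
    active_model = model_slots[standby_slot];  /* switch to new model */
    standby_slot ^= 1;
    return 0;
}

/* Inference code reads the current model through this accessor. */
const uint8_t *model_active(void)
{
    return active_model;
}
```

The double-slot approach means a failed or corrupt download never disturbs the running model, which matters on devices that may sit in the field for years.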

As devices get smarter, the dynamics of when and where processing occurs across the edge continuum will change. A likely trend is that the middle “IoT gateway” layer will increasingly be skipped, with smart devices acting locally and pre-filtering data for further processing upstream in locations such as on-prem data centers, 5G MECs, and the cloud. All said, it’s critical not to overlook the massive MCU-based footprint in the physical world as an increasingly important contributor to edge computing solutions.
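The gateway-skipping pattern might look something like the sketch below, which reuses the filter_sample routine from earlier; uplink_send is a hypothetical stand-in for whatever transport the device uses to reach its upstream endpoint directly.

```c
#include <stdio.h>
#include <stdbool.h>

extern bool filter_sample(float raw);   /* from the earlier sketch */

/* Hypothetical uplink: stands in for MQTT-over-cellular, LoRaWAN, or
 * whatever transport reaches an on-prem data center, 5G MEC, or cloud
 * endpoint directly -- no gateway tier in between. */
static void uplink_send(float reading)
{
    printf("uplink: %.1f\n", reading);  /* stubbed for the sketch */
}

/* Device loop: act locally, forward only what matters. */
void device_loop(const float *readings, int n)
{
    for (int i = 0; i < n; i++) {
        if (filter_sample(readings[i]))
            uplink_send(readings[i]);
        /* otherwise the raw sample never leaves the device */
    }
}
```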

Taking Kilobytes Out

The trick is that the developers who understand modern application development, cloud-native principles, AI, IoT platforms, and so forth generally don’t have the embedded development skills required to program highly constrained devices.

I chuckle when people from different computing paradigms use ambiguous words like “tiny.” While a data center expert may consider software that fits in a 128-gigabyte memory footprint “tiny,” an embedded developer will likely view having 128 kilobytes to work with as a luxury.

The embedded world is ripe for a new development paradigm that balances the benefits of cloud-native principles with the inherent constraints of MCUs. What if we could separate the complexity of embedded firmware from the applications above it?

What if this enabled cloud, AI, and IoT application developers to create containerized, interchangeable functions on top of this abstraction as they do in the data center today? This would greatly accelerate time to market and increase the available developer pool to take advantage of the massive device edge footprint. Containerization would also increase security through the separation of concerns, and enable an ecosystem of applications from different developers running on a common, tiny infrastructure.    
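Purely as a hypothetical illustration – not any existing product’s interface – such an abstraction might boil down to a tiny application descriptor that the firmware scheduler can run and swap without reflashing the whole image:

```c
#include <stdio.h>

/* Hypothetical "tiny container" interface: each application ships as a
 * self-describing set of entry points, decoupled from the firmware
 * beneath it. The names are illustrative, not any vendor's actual API. */
typedef struct {
    const char *name;
    int  (*init)(void);   /* set up app state               */
    void (*run)(void);    /* one scheduler tick of work     */
    void (*stop)(void);   /* release resources before swap  */
} tiny_app_t;

/* Two interchangeable apps built against the same interface. */
static int  filter_init(void) { puts("filter: init"); return 0; }
static void filter_run(void)  { /* e.g., run an EMA filter  */ }
static void filter_stop(void) { puts("filter: stop"); }
static const tiny_app_t filter_app = { "filter", filter_init, filter_run, filter_stop };

static int  ml_init(void) { puts("ml: init"); return 0; }
static void ml_run(void)  { /* e.g., run a TinyML model    */ }
static void ml_stop(void) { puts("ml: stop"); }
static const tiny_app_t ml_app = { "ml", ml_init, ml_run, ml_stop };

/* The firmware layer only knows the interface, so deploying a new
 * function never touches the firmware image -- the separation of
 * concerns argued for above. */
static const tiny_app_t *active = &filter_app;

static void swap_app(const tiny_app_t *next)
{
    if (next->init() != 0)
        return;            /* keep the current app on failure */
    active->stop();
    active = next;
    printf("now running %s\n", active->name);
}

int main(void)
{
    if (active->init() != 0)
        return 1;
    active->run();
    swap_app(&ml_app);     /* deploy a different function, no reflash */
    active->run();
    return 0;
}
```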

1990 called and wants its embedded development tools back. The time is right for us to take a fresh approach to developing MCU-based solutions so we can tap into the biggest, smallest edge footprint out there and further accelerate software-defined innovation. 

Author
Jason Shepherd - CEO, Nubix

Contributors
Nubix
Nubix delivers an industry-first container orchestration solution for microcontroller-powered devices, simplifying development, deployment, security, and management at scale.