Unlocking the Full Potential of Edge-AI with a Groundbreaking New Technology Approach
Tim Vehling
Running AI models at the edge rather than in the cloud offers significant advantages for IoT applications. Edge-AI designs can be simpler, data processing can be more secure, and the overall user experience is superior. Edge-AI processing is especially advantageous for applications such as video analytics that require low-latency processing of vast amounts of data in real time.
The challenge for edge-AI original equipment manufacturers (OEMs), whether they are designing consumer electronics, enterprise equipment, or industrial applications, is to develop low-cost, small form-factor devices that combine low latency, high performance, and low power. The inherent limitations of digital technology, namely bandwidth-constrained memory and high clock speeds, have forced OEMs into tradeoffs that limit the potential of AI even today. A new approach, analog compute combined with flash memory, promises to make it affordable for companies to deploy powerful edge-AI applications widely.
Flash memory technology has driven the electronics industry forward thanks to its incredible density, its tiny size compared to hard-disk drives, and its ability to retain information with no power applied. We can all thank flash memory for allowing us to save photos, download apps, and do so much more on the smartphones, laptops, and other devices we use every day. However, flash memory also has drawbacks: its slow speed and high power draw compared to other memory technologies have limited its use to long-term storage.
Analog compute is a technology that has shown significant promise but has historically faced several implementation challenges. Researchers have studied it for decades, trying to harness analog's fast computation and power efficiency, which far exceeds that of digital systems, for today's computing requirements. One of the biggest impediments has been size: analog chips have traditionally been far too big and costly, not to mention very difficult to develop.
By combining flash memory and analog compute, you get a whole that is far greater than the sum of its parts. The combination enables incredible density, driving cost down by 20X and enabling compact, single-chip processor designs, along with ultra-low power consumption that is 10X more efficient than digital and performance that rivals $700 GPU systems. With AI processors built on analog compute-in-memory, companies can easily and cost-effectively deploy AI across a wide range of IoT applications.
Of course, analog compute needs to live in a digital world. The sensors and processors that AI systems connect to are digital, which means an analog compute processor can require tens of thousands of analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). To fit onto a single chip, these converters must be incredibly small and designed for power efficiency.
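To make the interface concrete, here is a minimal Python sketch of the general idea, not any specific product's architecture: a matrix of weights is treated as if it were stored in a flash array, inputs pass through modeled DACs, the multiply-accumulate happens in the analog domain (idealized here as a matrix-vector product), and every output column needs its own ADC. The bit widths, array dimensions, and function names are hypothetical, chosen only to illustrate why so many converters are needed and why they must be tiny and efficient.

```python
import numpy as np

# Illustrative model of one analog compute-in-memory tile (hypothetical parameters).
# Weights are "stored" as analog conductances in a flash array; inputs are driven
# through DACs, the multiply-accumulate happens in the analog domain (idealized
# here as a matrix-vector product), and each output column needs its own ADC.

DAC_BITS = 8   # assumed input resolution
ADC_BITS = 8   # assumed output resolution

def quantize(x, bits, full_scale):
    """Quantize a signal to a given bit depth over [-full_scale, +full_scale]."""
    levels = 2 ** bits
    step = 2 * full_scale / (levels - 1)
    return np.round(np.clip(x, -full_scale, full_scale) / step) * step

def cim_matvec(weights, activations):
    """Model one analog matrix-vector multiply: DAC -> analog MAC -> ADC."""
    # DACs convert digital activations to analog drive levels (one per input row).
    x_analog = quantize(activations, DAC_BITS, full_scale=1.0)
    # The flash array sums column currents; shown here as an ideal analog MAC.
    y_analog = weights @ x_analog
    # ADCs digitize each column output (one per output), so a large array
    # needs many small, power-efficient converters.
    out_scale = np.abs(weights).sum(axis=1).max()
    return quantize(y_analog, ADC_BITS, full_scale=out_scale)

rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(64, 128))   # 64 outputs -> 64 ADCs for this tile
x = rng.uniform(-1, 1, size=128)         # 128 inputs -> 128 DACs for this tile
print(cim_matvec(W, x)[:4])
```

Even this toy model shows the scaling pressure: a single 64 x 128 tile already needs 128 DACs and 64 ADCs, and a full processor stitches together many such tiles, which is why converter size and power dominate the design.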
By leveraging analog compute combined with flash memory, OEMs can rethink what is possible with AI. Just imagine the innovations we will see once the existing limits on edge-AI power, cost, and performance are removed. From farm to factory, from data centers to transportation and beyond, the possibilities for edge AI powered by analog compute-in-memory technology are endless.