Artificial Intelligence (AI) and Machine Learning (ML) are in the headlines more than ever as viable solutions in the world of data analytics and intelligent prediction. However, training neural network ML models, creating AI pipelines, and building the edge framework that actually classifies high-definition data streams is still a monumental task requiring teams of scientific and engineering experts. As a member of the Intel® Partner Alliance, Pratexo now brings companies significantly closer to deriving valuable intelligence from their dark data by integrating the Intel® OpenVINO™ inferencing toolkit into our catalog of ready-to-deploy software features in the Pratexo Design Studio.
What are Pratexo and OpenVINO?
Pratexo is the edge solution acceleration platform that brings the power of cloud computing to the far edge, simplifying and accelerating the ability to deploy and manage valuable solutions to critical operational issues. The Pratexo Design Studio allows solutions architects (SAs) to automatically generate code and physically deploy ‘micro’ clouds to the edge, enabling data collection and analytical processing workflows through a drag-and-drop ‘whiteboard’ interface. With the click of a button, any number of edge computing nodes in multi-tier architectures are dynamically built, installed, and clustered using open-source and/or third-party software components.
OpenVINO is a free-to-use AI inferencing toolkit designed to run deep learning workloads, optimized specifically for Intel CPUs, GPUs, VPUs, and FPGAs. Use cases include computer vision, speech recognition, natural language understanding, audio analytics, and recommendation engines. The OpenVINO ecosystem includes the Open Model Zoo, a repository of over 200 pretrained, ready-to-use AI models.
How Pratexo Integrated OpenVINO
Pratexo started by building a redeployable OpenVINO Docker container image and incorporating it into a MicroK8s (lightweight Kubernetes) Helm chart. This OpenVINO software feature is then made available in the Pratexo Design Studio as an AI inferencing component for larger edge computing architectures. With the push of a button, deployment of an architecture that includes OpenVINO is triggered through Ansible playbooks, which dynamically pass execution values for AI inferencing on specific Intel chips and install other dependencies. Additional Ansible playbooks covering the full Pratexo edge architecture are generated, and the whole deployment is then wrapped in and launched from shell scripts.
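Pratexo’s generated playbooks are not shown here, but a deployment step of this general shape can be sketched using Ansible’s standard Helm module. All names, paths, and values below are illustrative assumptions for the sketch, not Pratexo’s actual configuration:

```yaml
# Illustrative sketch only -- release name, chart path, and values are assumed.
- name: Deploy the OpenVINO model server feature to the edge cluster
  hosts: edge_nodes
  tasks:
    - name: Install the OpenVINO Helm chart into MicroK8s
      kubernetes.core.helm:
        name: openvino-model-server
        chart_ref: ./charts/ovms          # assumed local chart path
        release_namespace: inference
        values:
          target_device: CPU              # execution value passed per node, e.g. CPU or GNA
          model_path: /models/resnet50    # assumed model location on the node
```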
To test the implementation, we deployed OpenVINO on each edge node within a MicroK8s cluster, enabling inferencing at the far edge, close to the source of images, videos, and audio streams. On startup, OpenVINO detected the Intel CPU and the Intel Gaussian & Neural Accelerator (GNA) available for inferencing on our edge nodes. For testing purposes, we used the ResNet50 (TensorFlow) model from Intel’s Open Model Zoo to classify images sent to the OpenVINO model server via its API. OpenVINO correctly identified every image in our test set, and the classification results were loaded into a locally clustered Cassandra database for further analytics and optimization by either a rules engine or a custom application at the edge before a light event stream was sent up to a central cloud.
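The OpenVINO model server exposes a TensorFlow-Serving-style REST API (`POST /v1/models/<name>:predict`), so a client request of the kind used in this test can be sketched roughly as follows. The host, port, model name, and the simple [0, 1] scaling are illustrative assumptions, not the exact preprocessing our test pipeline used:

```python
import json

import numpy as np

# Assumed endpoint: an OpenVINO model server on an edge node, serving a model
# registered under the name "resnet".
OVMS_URL = "http://edge-node:9000/v1/models/resnet:predict"

def preprocess(image: np.ndarray) -> np.ndarray:
    """Scale an HxWx3 uint8 image to float32 in [0, 1] and add a batch axis."""
    return (image.astype(np.float32) / 255.0)[np.newaxis, ...]

def build_predict_request(image: np.ndarray) -> str:
    """Serialize one image into a TFS-style 'instances' JSON request body."""
    batch = preprocess(image)
    return json.dumps({"instances": batch.tolist()})

# Usage (requires a running model server):
#   import requests
#   resp = requests.post(OVMS_URL, data=build_predict_request(img))
#   probs = resp.json()["predictions"][0]
```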
The Pratexo platform brings OpenVINO into the extensive library of custom and open source software components available in the Pratexo Design Studio. Adding the OpenVINO toolkit to any edge architecture with Intel chipsets will automatically combine the dynamically generated Ansible playbooks with other playbooks for a push-button deployment of an entire distributed edge computing solution.
Step 1: Design of a sample edge computing and micro cloud architecture
As a sample project, we rapidly architected a complex distributed edge computing solution in the Pratexo Design Studio for detecting and analyzing video, image, and audio streams. Device types, data sources, software components, and a micro cloud at the edge were brought together by dragging and dropping elements onto the canvas (above); the Studio then automatically builds installation scripts, resolves software dependencies, and deploys the result either to a simulation environment or to physical hardware at the edge.
Step 2: Adding OpenVINO to the architecture
The OpenVINO toolkit feature that Pratexo integrated into the Design Studio was then dragged onto the canvas as the starting point for data collection and AI detection. The path to the machine learning model was defined as one of OpenVINO’s parameters; the toolkit then performs detection on data streaming from devices in real time and/or on files from a data repository. Results of the inference operation are then published to a message bus so the micro cloud can consolidate them and perform deeper analytics at the far edge.
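As a rough illustration of what such a message-bus payload might look like, the sketch below packages a single classification result as a JSON event. The field names and topic are our own assumptions for illustration, not Pratexo’s actual schema:

```python
import json
import time

def classification_event(node_id: str, label: str, confidence: float) -> str:
    """Package one classification result as a JSON event for an edge message bus."""
    return json.dumps({
        "node": node_id,                  # which edge node produced the result
        "label": label,                   # e.g. a ResNet50 class name
        "confidence": round(confidence, 4),
        "ts": time.time(),                # epoch seconds at the edge node
    })

# A publisher (e.g. an MQTT or Kafka client) would then send this string to a
# topic such as "edge/inference/results" for the micro cloud to consume.
```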
Benefits of OpenVINO and Pratexo
By incorporating OpenVINO into the Pratexo ecosystem of software components, architects and systems integrators can radically accelerate the design and reuse of full-stack, distributed edge infrastructures optimized for AI/ML. Combined with Pratexo’s partnership with Intel, deploying Intel-based architectures becomes a seamless operation thanks to Pratexo’s ability to deploy both remotely and locally onto compute nodes placed at the far edge.
As with cloud computing, edge computing and micro clouds at the edge are rapidly expanding business opportunities and the ecosystem of partners and solutions. In this new computing paradigm, speed-to-value and agility are measured in days (rather than months) and require moderate-to-minimal expertise and resources. Pratexo takes the ready AI framework of OpenVINO and packages it into a deployable feature that can be installed in any edge computing architecture… as easily as installing apps on a network of smartphones.
As a Member of the Intel® Partner Alliance, Pratexo Strongly Supports Intel-Based Solutions.
By Stuart Cowen, Pratexo Solutions Architect