Artificial intelligence (AI) processing today is mostly done in cloud-based data centres. The bulk of that processing is the training of deep learning models, which demands heavy compute capacity at high cost. This has forced a rethink of where AI inferencing should run: deploying it locally reduces costs, keeps sensitive data from being sent to the cloud for processing, and speeds up decision making.
 
AI has now become the key driver for the adoption of edge computing to deliver these benefits. Compared to the enterprise data centre and public cloud infrastructure, edge computing has limited resources and computing power. Deep learning models deployed at the edge do not get the same horsepower as in the public cloud, which can slow down inferencing, the process in which trained models are used for classification and prediction.
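To make the term concrete, inferencing is just a forward pass of an already-trained model over new data. A minimal sketch, using a hypothetical single-layer classifier with hard-coded weights (a real edge deployment would load exported model weights instead):

```python
import numpy as np

# Hypothetical weights of an already-trained classifier:
# 3 output classes, 2 input features.
W = np.array([[0.9, -0.4],
              [-0.7, 0.8],
              [0.1, 0.3]])
b = np.array([0.0, 0.1, -0.2])

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def predict(x):
    """Inference: return (class index, confidence) for one feature vector."""
    probs = softmax(W @ x + b)
    return int(probs.argmax()), float(probs.max())

cls, conf = predict(np.array([1.0, 0.5]))
```

Training tunes `W` and `b` in the cloud; inferencing at the edge is only this cheap forward pass, which is why it is a candidate for local execution.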
 
To bridge the gap between the data centre and the edge, chip manufacturers are building niche, purpose-built accelerators that significantly speed up model inferencing. These processors assist the CPU of an edge device by taking over the complex mathematical calculations needed to run deep learning models. While these chips cannot match the GPUs running in the cloud, they do accelerate the inferencing process, resulting in faster prediction, detection and classification of the data ingested at the edge layer.
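The offload pattern described above can be sketched in a few lines: the host CPU hands heavy math (here, a matrix multiply) to an accelerator backend when one is present, and falls back to plain CPU code otherwise. The registry and backend names below are hypothetical, for illustration only; real stacks expose this through vendor runtimes such as OpenVINO or TensorRT.

```python
import numpy as np

ACCELERATORS = {}  # hypothetical registry: backend name -> matmul implementation

def register(name, fn):
    """Make an accelerator backend available for dispatch."""
    ACCELERATORS[name] = fn

def matmul(a, b, prefer=("vpu", "gpu")):
    """Dispatch to the first available accelerator, else run on the CPU."""
    for name in prefer:
        if name in ACCELERATORS:
            return ACCELERATORS[name](a, b)
    return a @ b  # CPU fallback path

# With no accelerator registered, the CPU fallback runs:
out = matmul(np.eye(2), np.array([[1.0, 2.0], [3.0, 4.0]]))
```

The design point is that application code calls one `matmul` and stays unchanged whether or not an accelerator chip is fitted, which is how the same software can target both plain and accelerator-equipped edge boxes.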
 
Interworld offers a range of BOXER industrial embedded computers using Intel® Movidius™ Myriad™ and NVIDIA Jetson processors to bring intelligent AI solutions to the edge. With a wide range of features and profiles, the BOXER edge computing systems are built to fit the needs of applications from robotics and drones to intelligent security systems and unmanned stores.