Sustainable AI accelerator - For the edge intelligence of tomorrow

Smartphones, robots, drones and industrial cameras - in the future, all of these devices will have some form of artificial intelligence (AI) integrated into them. But how can such power-hungry technology be used efficiently and sustainably outside of large data centres, in small, resource-optimised embedded devices?

The term "edge intelligence" describes a class of devices that can solve inference tasks at the edge of networks ("on-the-edge") with the help of artificial neural networks (CNN) and machine learning algorithms. There are already working approaches and solutions to efficiently accelerate CNNs on edge devices. But only a few are flexible enough to keep pace with the rapidly advancing AI development.

Performing inference tasks on edge devices is not straightforward, as neural networks are, strictly speaking, ill-suited for embedded use. So what are the challenges to using them effectively on the edge? Edge computing in general is all about efficiency. Edge devices usually have only limited computing, memory and energy resources available. Computations therefore have to be carried out with high efficiency, yet at the same time they should deliver high throughput at low latency - requirements that at first glance seem incompatible.

Executing Convolutional Neural Networks (CNNs) is the most demanding case of all. CNNs are known to be extremely computationally intensive and require billions of arithmetic operations to process a single input. To execute AI on the edge efficiently, a specially tailored computing system with two fundamental properties must be used. In addition to the efficiency already mentioned, the system should be flexible enough to support new developments in CNN architectures. This is important because, especially in the field of AI, new architectures and new layer types emerge from research and development every month. What is current and new today may already be obsolete tomorrow.
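To give a sense of that scale, the following back-of-the-envelope sketch estimates the multiply-accumulate (MAC) operations of a single convolutional layer; the layer shapes are illustrative examples, not taken from any particular network or from the IDS accelerator:

```python
def conv_macs(out_h, out_w, out_ch, k_h, k_w, in_ch):
    """Multiply-accumulate operations for one standard convolutional layer."""
    return out_h * out_w * out_ch * (k_h * k_w * in_ch)

# Illustrative layer: 112x112 output, 64 filters, 3x3 kernel, 64 input channels
macs = conv_macs(112, 112, 64, 3, 3, 64)
print(f"{macs:,} MACs for this single layer")  # ~462 million MACs

# A full CNN stacks dozens of such layers, so one processed image
# quickly adds up to billions of operations.
```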

The various acceleration approaches - CPU, GPU, FPGA or a customised ASIC - all have their advantages and disadvantages. An FPGA-based design offers the best combination of flexibility, performance, energy efficiency and sustainability, and at the current stage of AI development it is very well suited for implementing a CNN accelerator on edge devices. Because it can be reconfigured by update at any point in the device's lifetime for special applications or new CNNs, it is a solution that works in the long term and is therefore suitable for industry. The biggest challenge in using FPGA technology is that programming it is very complex and can only be done by specialists.

To keep the handling of the FPGA as simple as possible in later use, however, the "deep ocean core" developed by IDS uses a single, universally applicable architecture that already supports all important CNN layer types. The accelerator can thus run essentially any CNN. This eliminates the problem of difficult FPGA programming entirely, because users do not have to create FPGA configurations themselves. In addition, the FPGA core supports seamless switching between networks on the fly and without delay, allowing complex analyses to be split into several small, simple and resource-efficient CNNs within an application (see the sketch below). Updates to the IDS NXT camera firmware also keep the deep ocean core up to date so that it supports new developments in the CNN field.
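The following Python sketch illustrates this splitting idea only in the abstract: a small detection network localises regions of interest, then the accelerator switches to a small classification network for each region. All names, including the run_cnn() stub, are hypothetical placeholders and do not represent the IDS NXT API.

```python
# Hypothetical sketch: one complex analysis split into two small CNNs
# that the accelerator switches between per frame.

def run_cnn(model, data):
    """Stub standing in for a single inference call on the accelerator."""
    if model == "detector":
        # Pretend the detector found two regions of interest (x, y, w, h).
        return [(10, 20, 64, 64), (120, 40, 64, 64)]
    if model == "classifier":
        # Pretend the classifier labels a cropped region.
        return "defect" if data[0] < 100 else "ok"
    raise ValueError(f"unknown model: {model}")

def analyse_frame(frame):
    # Step 1: a small detection CNN localises candidate regions.
    regions = run_cnn("detector", frame)
    # Step 2: switch to a small classification CNN and evaluate each region.
    return [(region, run_cnn("classifier", region)) for region in regions]

print(analyse_frame(frame=None))
# [((10, 20, 64, 64), 'defect'), ((120, 40, 64, 64), 'ok')]
```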

With the IDS NXT ocean all-in-one inference camera solution, users do not need expertise in deep learning, image processing or application programming to train and run a CNN. They can start AI-based image processing right away. Easy-to-use tools lower the entry barrier, so that inference tasks can be created in minutes and run immediately on a camera. All components are developed directly by IDS and are designed to work seamlessly together. This simplifies workflows and makes the overall system very powerful. The easy-to-use IDS NXT ocean inference camera system combined with an FPGA-based CNN accelerator thus already represents a sustainable, complete edge intelligence solution with which end users no longer have to worry about individual components and AI updates.

Just get started with the IDS NXT ocean Creative Kit

Anyone who wants to test the industrial-grade embedded vision platform IDS NXT ocean and evaluate its potential for their own applications should take a look at the IDS NXT ocean Creative Kit. It provides customers with all the components they need to create, train and run a neural network. In addition to an IDS NXT industrial camera with a 1.6 MP Sony sensor, lens, cable and tripod adapter, the package includes six months' access to the AI training software IDS NXT lighthouse. IDS Imaging Development Systems is currently offering the kit in a special promotion at particularly favourable conditions.
