CNNs on FPGAs are ideally suited to embedded vision applications: compact yet high-performance networks can process images transmitted directly from the camera to the FPGA, without a detour through host memory. They run on frame grabber FPGAs as well as on VisualApplets-compatible cameras and vision sensors. Small image processing units and intelligent cameras already perform demanding tasks in the decentralized computing approach of Industry 4.0, where the demand for embedded vision with deep learning is high.
Since most embedded devices are equipped with an FPGA, they offer sufficient performance even for more complex neural networks. Compared with GPUs, FPGAs are particularly energy-efficient and well suited to embedded and industrial applications with hard real-time requirements, such as inline inspection, robotics and pick & place, cognitive systems, and human-machine interaction (HMI).
Further applications that demand high accuracy are found in quality assurance, medical technology, drones, and automotive (autonomous driving).
For embedded vision applications, VisualApplets supports frame grabbers as well as third-party FPGA devices such as cameras and vision sensors. VisualApplets Embedder creates a compatibility layer between the hardware and the VisualApplets programming kernel. This reserved part of the FPGA can then be reprogrammed with VisualApplets as often as desired.