Our “CNN ready” frame grabber microEnable 5 deepVCL was awarded the Gold-level “Vision Systems Design Innovators Award” in the category “frame grabbers and boards”. The award ceremony took place at the Automate trade show in Chicago. The board is designed for hardware-accelerated CNN applications and processes even the most complex deep-learning applications in real time at very high data throughput, including those that demand both high resolution and high frame rates. With conventional GPUs, such vision tasks could only be realized with compromises in speed, latency, detection rate, energy efficiency, or overall system cost.
To take deep learning applications from development to production, we rely on powerful frame grabbers with FPGA processors for inference, which deliver high bandwidth, robustness, and guaranteed short latencies. In combination with our graphical development environment VisualApplets, the programmable deepVCL board enables inference of pre-trained deep neural networks with greatly reduced application development times. These applications can also be ported to other FPGA hardware platforms such as cameras or vision sensors.
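As a rough illustration of what deploying a pre-trained network to FPGA hardware typically involves (this is a generic sketch, not the VisualApplets API or the board's actual tool flow), floating-point weights are usually converted to fixed-point integers, since FPGA logic favors integer arithmetic:

```python
# Generic, hypothetical sketch of symmetric linear weight quantization,
# a common step when preparing a pre-trained CNN for FPGA inference.

def quantize_weights(weights, bits=8):
    """Map float weights to signed integers with a shared scale factor."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax     # largest weight -> qmax
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values, e.g. to check accuracy loss."""
    return [q * scale for q in quantized]

# Illustrative weights from one hypothetical layer:
weights = [0.52, -1.27, 0.003, 0.98]
q, s = quantize_weights(weights)
approx = dequantize(q, s)
```

The scale factor is stored alongside the integer weights so results can be interpreted correctly downstream; real tool flows additionally quantize activations and calibrate scales per layer.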
Award for Our Hardware and Software Solutions for the Third Time in a Row
The FPGA processes image data – from acquisition to output – directly on a frame grabber or embedded vision device without loading the CPU, which makes it particularly well suited to compute-intensive applications such as CNNs. This allows smaller PCs without GPUs to be used, reducing overall system costs. The FPGA's high data bandwidth allows an entire image, plus additional image pre- and post-processing, to be handled on the board – high enough, for example, to analyze the entire data output of a GigE Vision camera using deep learning.
Award-winning “CNN ready” frame grabber microEnable 5 deepVCL