December 12, 2019

ICP Australia introduces iEi’s Mustang-F100 PCIe FPGA High-Performance Accelerator Card, featuring the Intel® Arria® 10 GX 1150 FPGA, 8 GB of DDR4-2400 memory, and a PCIe Gen3 x8 interface.

The Mustang-F100-A10 is a deep learning convolutional neural network acceleration card for speeding up AI inference in a flexible and scalable way. Equipped with an Intel® Arria® 10 FPGA and 8 GB of on-board DDR4 RAM, the Mustang-F100-A10 PCIe card can be added to an existing system, enabling high-performance computing without additional integration work or expense.

FPGAs are reprogrammable, which allows developers to implement algorithms tailored to different applications and achieve optimal solutions. Algorithms implemented in an FPGA also provide deterministic timing, enabling low-latency, real-time computation.


Furthermore, compared to a CPU or GPU, an FPGA is extremely power-efficient. These features make the Mustang-F100-A10 well suited for edge computing.

The Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit is built around convolutional neural networks (CNNs) and extends workloads across Intel® hardware to maximize performance. It can optimize pre-trained deep learning models from frameworks such as Caffe, MXNet, and TensorFlow into an intermediate representation (IR), then execute them with the inference engine heterogeneously across Intel® hardware such as CPUs, GPUs, the Intel® Movidius™ Neural Compute Stick, and FPGAs.
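
As an illustration of this workflow, the sketch below uses OpenVINO's classic Inference Engine Python API (IECore) to load a model that has already been converted to IR with the Model Optimizer and to run inference on the Mustang-F100-A10 through the HETERO plugin, with the CPU as a fallback device. This is a minimal sketch, not iEi's reference code: the model files (face-detection.xml/.bin) and the input image are placeholders, and exact API details may vary between OpenVINO releases.

```python
# Minimal sketch: run an IR model on the FPGA card via OpenVINO's Inference Engine.
# The IR (face-detection.xml / .bin) is assumed to have been produced beforehand
# with the Model Optimizer from a Caffe, MXNet, or TensorFlow model.
import cv2
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="face-detection.xml", weights="face-detection.bin")

input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))
n, c, h, w = net.input_info[input_blob].input_data.shape

# Target the FPGA first; layers the FPGA bitstream cannot run fall back to the CPU.
exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")

# Prepare one image in the NCHW layout the network expects ("input.jpg" is a placeholder).
image = cv2.imread("input.jpg")
image = cv2.resize(image, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))

result = exec_net.infer(inputs={input_blob: image})
print(result[output_blob].shape)
```

Passing the "HETERO:FPGA,CPU" device string lets the inference engine schedule any layers the FPGA does not support onto the CPU, so a single IR file can run across the heterogeneous Intel® hardware described above.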

KEY FEATURES:

  • Half-Height, Half-Length, Double Slot.
  • Power-Efficient, Low-Latency Operation.
  • Supports OpenVINO™ Toolkit, AI Edge Computing Ready Device.
  • FPGAs can be Optimized for Different Deep Learning Tasks.
  • Intel® FPGAs Support Multiple Floating-Point Precisions and Inference Workloads.