September 8, 2021

ICP Australia introduces iEi’s Mustang-V100-MX4 Computing Accelerator Card with four Intel® Movidius™ Myriad™ X MA2485 VPUs, a PCIe Gen 2 x 2 interface, and RoHS compliance.

The Mustang-V100-MX4 is a deep learning convolutional neural network acceleration card for speeding up AI inference in a flexible and scalable way. Equipped with four Intel® Movidius™ Myriad™ X Vision Processing Units (VPUs), the Mustang-V100-MX4 PCIe card can be added to an existing system to deliver high-performance computing without costing a fortune.

VPUs run AI inference quickly at low power, making them well suited to applications such as surveillance, retail and transportation. Combining power efficiency with high performance on dedicated DNN topologies, the card is ideal for AI edge computing devices, reducing total power usage and extending the duty time of rechargeable edge equipment.

Lastly, the Mustang-V100-MX4 supports the Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit. Built around convolutional neural networks (CNNs), the toolkit extends workloads across Intel® hardware and maximizes performance. It optimizes pre-trained deep learning models from frameworks such as Caffe, MXNet and TensorFlow into IR binary files, then executes the inference engine heterogeneously across Intel® hardware such as CPUs, GPUs, Intel® Movidius™ Myriad™ X VPUs and FPGAs.
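As a rough illustration of that workflow, the minimal sketch below uses the OpenVINO™ 2021-era Inference Engine Python API to load an IR model and run inference on an Intel® hardware target. The model file names, input data and the chosen device name are assumptions, not part of the product documentation; the device string for a multi-VPU accelerator card is typically "HDDL", while a single Myriad X device uses "MYRIAD", and "CPU" or "GPU" work the same way on other Intel® hardware.

```python
# Hypothetical sketch: run an OpenVINO IR model on an Intel VPU target.
# File names ("model.xml"/"model.bin") and the random input are placeholders.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Load an IR model previously produced by the Model Optimizer
# (e.g. converted from a Caffe, MXNet or TensorFlow model).
net = ie.read_network(model="model.xml", weights="model.bin")
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Target the VPU accelerator; assumption: the HDDL plugin is installed.
exec_net = ie.load_network(network=net, device_name="HDDL")

# Dummy input matching the network's expected NCHW shape.
n, c, h, w = net.input_info[input_name].input_data.shape
frame = np.random.rand(n, c, h, w).astype(np.float32)

# Run inference and print the output tensor shape.
result = exec_net.infer(inputs={input_name: frame})
print(result[output_name].shape)
```

Because the inference engine abstracts the device behind a plugin name, the same script can be pointed at a CPU, GPU or FPGA simply by changing `device_name`, which is the heterogeneous execution the toolkit is designed for.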

KEY FEATURES:

  • PCIe Gen 2 x 2 Interface
  • 4 x Intel® Movidius™ Myriad™ X VPU MA2485
  • Power Efficient: Approximately 15 W
  • Operating Temperature: -20°C ~ 60°C
  • Powered by Intel’s OpenVINO™ Toolkit
  • Multiple Cards Supported

See More: Mustang-V100-MX4
