Neural Network Processor IP Series for AI Vision and AI Voice

VeriSilicon’s Vivante VIP9000 processor family offers programmable, scalable, and extendable solutions for markets that demand real-time, low-power AI devices. The VIP9000 Series’ patented Neural Network engine and Tensor Processing Fabric deliver superb neural network inference performance with industry-leading power efficiency (TOPS/W) and area efficiency (TOPS/mm²). The VIP9000’s scalable architecture, ranging from 0.5 TOPS to 20 TOPS, enables AI capability for a wide range of applications, from wearables, IoT devices, IP cameras, surveillance cameras, smart home appliances, mobile phones, and laptops to automotive (ADAS, autonomous driving) and edge servers. In addition to neural network acceleration, the VIP9000 Series is equipped with Parallel Processing Units (PPUs), which provide full programmability along with conformance to OpenCL 3.0 and OpenVX 1.2.
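The efficiency metrics above are simple ratios of throughput to power and to silicon area. A minimal sketch of how they are computed, using hypothetical numbers (VeriSilicon does not publish specific figures on this page):

```python
# Hypothetical illustration of the TOPS/W and TOPS/mm^2 metrics named above.
# The numeric values are made up for the example, not VIP9000 specifications.

def power_efficiency(tops: float, watts: float) -> float:
    """Inference throughput per watt (TOPS/W)."""
    return tops / watts

def area_efficiency(tops: float, area_mm2: float) -> float:
    """Inference throughput per unit of silicon area (TOPS/mm^2)."""
    return tops / area_mm2

# e.g. a hypothetical 4 TOPS configuration drawing 2 W on 2 mm^2 of silicon
assert power_efficiency(4.0, 2.0) == 2.0   # 2 TOPS/W
assert area_efficiency(4.0, 2.0) == 2.0    # 2 TOPS/mm^2
```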

VIP9000 Series IP supports all popular deep learning frameworks (TensorFlow, TensorFlow Lite, PyTorch, Caffe, DarkNet, ONNX, Keras, etc.) and natively accelerates neural network models through optimization techniques such as quantization, pruning, and model compression. AI applications can be easily ported to VIP9000 platforms through offline conversion with Vivante’s ACUITY™ Tools SDK or through run-time interpretation with Android NN, the NNAPI Delegate, Arm NN, or ONNX Runtime.
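Of the optimizations mentioned above, quantization is the most common: floating-point weights are mapped to low-precision integers (the INT8 path listed in the architecture below). A minimal sketch of standard affine INT8 quantization in plain Python — this illustrates the general technique, not the specific scheme used by the ACUITY tools:

```python
# Illustrative affine INT8 quantization: q = clamp(round(x / scale) + zero_point).
# The scale and zero_point values here are arbitrary example parameters.

def quantize_int8(x: float, scale: float, zero_point: int) -> int:
    """Map a float to a signed 8-bit integer."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))          # clamp to the INT8 range

def dequantize_int8(q: int, scale: float, zero_point: int) -> float:
    """Approximate recovery of the original float."""
    return (q - zero_point) * scale

scale, zp = 0.05, 0
weights = [0.1, -0.25, 1.0]
q = [quantize_int8(w, scale, zp) for w in weights]        # [2, -5, 20]
restored = [dequantize_int8(v, scale, zp) for v in q]     # close to the originals
```

Each restored value differs from the original by at most half a quantization step (scale / 2), which is why well-chosen scales preserve model accuracy.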

VIP9000 Architecture

Programmable Engines (PPU)
128-bit vector processing unit (shader + ext)
OpenCL 3.0 shader instruction set
Enhanced vision instruction set (EVIS)
INT 8/16/32b, Float 16/32b
Tensor Processing Fabric
Non-convolution layers
Multi-lane processing for data shuffling, normalization, pooling/unpooling, LUT, etc.
Network pruning support, zero skipping, compression
On-chip SRAM for DDR BW saving
Accepts INT 8/16b and Float16 (Float16 internal)
Unified Programming Model
OpenCL, OpenVX, OpenVX-NN Extensions
Parallel processing between PPU and NN HW accelerators with priority configuration
Supports popular vision and deep learning frameworks: OpenCV, Caffe, TensorFlow, TensorFlow Lite, ONNX, PyTorch, Darknet, Keras
SW & Tools
ACUITY Tools: End-to-end Neural Network development tools
Eclipse-based IDE for coding, debugging, and profiling
NNRT: Runtime framework supporting Android NN, the NNAPI Delegate, ONNX Runtime, and Arm NN
Scalability
Number of PPU and NN cores can be configured independently
Same OpenVX/OpenCL code runs on all processor variants; scalable performance
Extendibility
VIP-Connect™: HW and SW interface protocols to plug in customer HW accelerators and expose functionality via OpenCL/OpenVX custom kernels
Reconfigurable EVIS allows users to define their own instructions
Easy integration with other VSI IPs
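The pruning and zero-skipping support listed under the Tensor Processing Fabric can be illustrated with a small software analogue (plain Python, not the actual hardware mechanism): a dot product that issues no multiply-accumulate for pruned (zero) weights, so compute scales with the number of non-zero weights rather than the tensor size.

```python
# Software sketch of zero-skipping: pruned weights contribute nothing, so the
# multiply-accumulate for them is simply not performed. This models the idea,
# not the VIP9000 hardware implementation.

def sparse_dot(weights, activations):
    """Dot product that skips zero weights.

    Returns (result, macs_performed) so the saved work is visible.
    """
    acc = 0.0
    macs = 0
    for w, a in zip(weights, activations):
        if w == 0:          # pruned weight: skip the multiply-accumulate
            continue
        acc += w * a
        macs += 1
    return acc, macs

w = [0.5, 0.0, 0.0, -1.0]   # 50% of the weights pruned to zero
a = [2.0, 3.0, 4.0, 1.0]
result, macs = sparse_dot(w, a)   # only 2 of 4 MACs are executed
```

The same accumulation result is produced as a dense dot product, but with half the multiply-accumulates for this 50%-sparse weight vector.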
