ARM SoCs Take Soft Roads to Neural Nets
SAN JOSE, Calif. — NXP is supporting inference jobs such as image recognition in software on its i.MX8 processor. It aims to extend that approach to natural-language processing later this year, arguing that dedicated hardware is not required in resource-constrained systems.
The chip vendor is following in the footsteps of its merger partner, Qualcomm. However, the mobile giant expects to eventually augment its code with dedicated hardware. Their shared IP partner, ARM, is developing neural-network libraries for its cores, although it declined an interview for this article.
NXP’s i.MX8 packs two GPU cores from Vivante, now part of VeriSilicon. They implement about 20 opcodes supporting multiply-accumulates and bit extraction and replacement, originally geared toward computer-vision workloads.
“Adding more and more hardware is not the way forward on the power budget of a 5-W SoC,” said Geoff Lees, NXP’s executive vice president for i.MX. “I would like to double the FLOPS, but we got the image-processing acceleration we wanted for facial and gesture recognition and better voice accuracy.”