Pixelated Neural Networks

Computer vision provides a very dense source of information about the world, so it should come as no surprise that the technology is being used in a wide range of applications, from surveillance to wildlife monitoring to autonomous driving, to name a few. But this wealth of data is a double-edged sword: while it enables the development of many fantastic new technologies, it also requires a great deal of computing power to make sense of it all. And that often means high costs, poor energy efficiency, and limited portability. To improve this state of affairs and bring computer vision to more applications, various efforts have been made in recent years to move processing closer to the image sensor, where it can operate more efficiently.

These efforts have generally fallen into one of three broad categories: near-sensor processing, on-sensor processing, or in-pixel processing. In the first case, a specialized processing chip is located on the same circuit board as the image sensor, saving a round trip to the cloud, but still introducing a bottleneck in transferring data between the image sensor and the processor. On-sensor processing brings computation a step closer by placing it within the image sensor itself, but it does not completely eliminate the data transfer bottleneck seen with near-sensor processing. As a better way forward, in-pixel processing techniques have been developed that move computation directly to each individual pixel on the image sensor, eliminating data transfer delays.

While this method offers a lot of promise, current implementations tend to rely on emerging technologies that are not yet production-ready, or do not support the types of operations a real-world machine learning model requires, such as multi-bit, multi-channel convolutions, batch normalization, and rectified linear units (ReLUs). These solutions look impressive on paper, but where the rubber meets the road, they are of little use for anything beyond toy problems.
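
To make that bar concrete, here is a minimal NumPy sketch of the front-end operations such a model needs. The shapes, weights, and helper names are purely illustrative assumptions, not anything from the implementations discussed:

```python
import numpy as np

# Illustrative shapes only: a tiny 3-channel "image" and a 4-filter conv layer.
rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, size=(3, 8, 8))   # C_in x H x W
weights = rng.normal(size=(4, 3, 3, 3))         # C_out x C_in x kH x kW
bias = np.zeros(4)

def conv2d(x, w, b):
    """Multi-channel 2D convolution (valid padding, stride 1)."""
    c_out, c_in, kh, kw = w.shape
    _, h, w_in = x.shape
    out = np.zeros((c_out, h - kh + 1, w_in - kw + 1))
    for o in range(c_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(x[:, i:i+kh, j:j+kw] * w[o]) + b[o]
    return out

def batch_norm(x, gamma, beta, eps=1e-5):
    """Per-channel normalization (inference-style, stats taken from x itself)."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def relu(x):
    return np.maximum(x, 0.0)

out = relu(batch_norm(conv2d(image, weights, bias),
                      gamma=np.ones((4, 1, 1)), beta=np.zeros((4, 1, 1))))
print(out.shape)  # (4, 6, 6)
```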

In-pixel processing suitable for real-world applications appears to be a few steps closer to reality as a result of recent work by a team at the University of Southern California in Los Angeles. Dubbed in-memory pixel processing, their method embeds network weights and activations at the individual pixel level to enable highly parallelized computation within the image sensor, including the convolution operations that many neural networks depend on. In fact, sensors implementing these techniques can perform all of the operations needed to process the first few layers of a modern deep neural network. No toy MNIST digit classification problems here, folks.
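
As a rough behavioral picture of the idea (an assumption-laden sketch, not the team's actual circuit), imagine each pixel scaling its own photodiode reading by a locally stored weight, with the weighted outputs of each receptive field summed in parallel so no raw image ever leaves the array:

```python
import numpy as np

# Behavioral sketch only: one multiply-accumulate "at" each pixel.
rng = np.random.default_rng(1)
light = rng.uniform(0.0, 1.0, size=(8, 8))   # incident light per pixel

# One 3x3 filter's weights, replicated under every (stride-3) receptive field.
kernel = rng.normal(size=(3, 3))

# Gather non-overlapping 3x3 receptive fields, then weight and sum them.
patches = np.lib.stride_tricks.sliding_window_view(light, (3, 3))[::3, ::3]
in_pixel_products = patches * kernel            # computed at each pixel
readout = in_pixel_products.sum(axis=(-2, -1))  # parallel summation per field
print(readout.shape)  # (2, 2): one pre-activation per receptive field
```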

The researchers tested their approach by building a MobileNetV2 model, trained on a visual wake words dataset, using their methods. Data transfer delays were found to be reduced by a factor of 21 compared to standard on-sensor and near-sensor processing implementations. That efficiency also showed up in the power budget, with the energy-delay product reduced by a factor of 11. Importantly, these gains were achieved without any substantial reduction in model accuracy.
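
For a feel of the baseline network, here is a hedged sketch of how one might set up MobileNetV2 for a visual wake words task (binary person/no-person classification) with off-the-shelf torchvision. This is a plain-vanilla stand-in, not the researchers' training code, and dataset loading is omitted:

```python
import torch
from torchvision import models

# Stock MobileNetV2 adapted for two classes; requires torchvision >= 0.13
# for the weights enum. Nothing here reflects the paper's modifications.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = torch.nn.Linear(model.last_channel, 2)  # person / no person

dummy = torch.randn(1, 3, 224, 224)  # one RGB frame at MobileNetV2's input size
logits = model(dummy)
print(logits.shape)  # torch.Size([1, 2])
```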

Since the first few layers of the model are computed within the pixels themselves, only a small amount of compressed data needs to be sent to a processor outside the sensor. Not only does this eliminate data transfer bottlenecks, but it also means inexpensive microcontrollers can be paired with these image sensors, allowing advanced vision algorithms to run on ever smaller platforms without sacrificing quality. Be sure to keep an eye on this work in the future to see what changes it may bring to tinyML applications.
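
The bandwidth argument is easy to illustrate. The sketch below splits a stock MobileNetV2 at an arbitrary, assumed boundary (the paper's actual partition may differ) and counts how many values would cross from the "sensor" to the off-sensor processor compared to shipping the raw frame:

```python
import torch
from torchvision import models

# The split point below is an illustrative assumption, not the paper's design.
full = models.mobilenet_v2().features
front_end = full[:5].eval()  # pretend these layers run inside the pixel array
frame = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    activations = front_end(frame)

raw = frame.numel()         # values in the raw frame: 150,528
sent = activations.numel()  # values crossing the sensor boundary: 25,088
print(activations.shape, f"{raw / sent:.1f}x fewer values off-sensor")
```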
