Labforge releases new 20.5T ops/s AI machine vision camera


Labforge has developed a smart camera called Bottlenose that delivers 20.5 trillion operations per second of processing power and provides on-board AI, depth, feature points and matching, and a powerful ISP. The camera targets robotics and automation, and is built around a Toshiba Visconti-5 processor. Current models are available in both stereo and monocular versions with Sony IMX577 image sensors; future models will offer a range of resolutions and shutter options.
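
As a rough illustration of what "feature points and matching" means here (keypoint detection plus descriptor matching across a stereo pair, a task Bottlenose performs on-camera), the sketch below does the equivalent host-side with OpenCV. The ORB detector and the image file names are placeholders of my own, not anything from the Labforge SDK.

# Hypothetical illustration only: Bottlenose computes keypoints and matches
# on-camera; this shows the equivalent host-side operation with OpenCV on a
# generic stereo pair (file names are placeholders).
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# ORB keypoints and binary descriptors
orb = cv2.ORB_create(nfeatures=1000)
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

# Brute-force Hamming matcher with cross-check, standing in for the
# camera's on-board matching block
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance:.0f}")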


Low Power Edge-AI Vision Sensor


Here is another interesting paper from the upcoming tinyML conference. It is titled "P2M: A Processing-in-Pixel-in-Memory Paradigm for Resource-Constrained TinyML Applications" and is the work of a team from the University of Southern California.

The demand to process vast amounts of data generated from state-of-the-art high resolution cameras has motivated novel energy-efficient on-device AI solutions. Visual data in such cameras are usually captured in the form of analog voltages by a sensor pixel array, and then converted to the digital domain for subsequent AI processing using analog-to-digital converters (ADC). Recent research has tried to take advantage of massively parallel low-power analog/digital computing in the form of near- and in-sensor processing, in which the AI computation is performed partly in the periphery of the pixel array and partly in a separate on-board CPU/accelerator. Unfortunately, high-resolution input images still need to be streamed between the camera and the AI processing unit, frame by frame, causing energy, bandwidth, and security bottlenecks. To mitigate this problem, we propose a novel Processing-in-Pixel-in-memory (P2M) paradigm, that customizes the pixel array by adding support for analog multi-channel, multi-bit convolution and ReLU (Rectified Linear Units). Our solution includes a holistic algorithm-circuit co-design approach and the resulting P2M paradigm can be used as a drop-in replacement for embedding memory-intensive first few layers of convolutional neural network (CNN) models within foundry-manufacturable CMOS image sensor platforms. Our experimental results indicate that P2M reduces data transfer bandwidth from sensors and analog to digital conversions by ~21x, and the energy-delay product (EDP) incurred in processing a MobileNetV2 model on a TinyML use case for visual wake words dataset (VWW) by up to ~11x compared to standard near-processing or in-sensor implementations, without any significant drop in test accuracy.
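
To make the partitioning concrete, here is a small, purely illustrative PyTorch/torchvision sketch that splits a stock MobileNetV2 at its stem, treating the first convolution plus ReLU as the part P2M would compute inside the pixel array and everything after it as the off-sensor processor. It only shows the layer split; the analog circuits, quantization, and algorithm-circuit co-design described in the paper are not modeled.

# Conceptual sketch only (not the authors' P2M circuits): emulate "in-pixel"
# computation by splitting MobileNetV2 so the memory-intensive first layers
# run where the sensor would, and only their activations leave the sensor.
import torch
import torchvision

model = torchvision.models.mobilenet_v2()

# features[0] is the stem conv + BN + ReLU6; P2M maps this kind of
# multi-channel convolution + ReLU into the pixel array itself.
in_pixel_layers = model.features[:1]          # stands in for analog compute
off_sensor_layers = torch.nn.Sequential(
    model.features[1:],                       # remaining CNN backbone
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    model.classifier,
)

frame = torch.randn(1, 3, 224, 224)           # placeholder sensor frame
activations = in_pixel_layers(frame)          # what would be streamed out
print("streamed tensor:", tuple(activations.shape))
logits = off_sensor_layers(activations)
print("logits:", tuple(logits.shape))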

arXiv preprint: https://arxiv.org/pdf/2203.04737.pdf

tinyML conference information: https://www.tinyml.org/event/summit-2022/


Ultra-Low Power Camera for Intrusion Monitoring


An interesting paper titled "Millimeter-Scale Ultra-Low-Power Imaging System for Intelligent Edge Monitoring" will be presented at the upcoming tinyML Research Symposium. The symposium is co-located with the tinyML Summit 2022, held March 28-30 in Burlingame, CA (near SFO).

Millimeter-scale embedded sensing systems have unique advantages over larger devices as they are able to capture, analyze, store, and transmit data at the source while being unobtrusive and covert. However, area-constrained systems pose several challenges, including a tight energy budget and peak power, limited data storage, costly wireless communication, and physical integration at a miniature scale. This paper proposes a novel 6.7×7×5mm imaging system with deep-learning and image processing capabilities for intelligent edge applications, and is demonstrated in a home-surveillance scenario. The system is implemented by vertically stacking custom ultra-low-power (ULP) ICs and uses techniques such as dynamic behavior-specific power management, hierarchical event detection, and a combination of data compression methods. It demonstrates a new image-correcting neural network that compensates for non-idealities caused by a mm-scale lens and ULP front-end. The system can store 74 frames or offload data wirelessly, consuming 49.6μW on average for an expected battery lifetime of 7 days.
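
The "image-correcting neural network" in the abstract learns to undo the degradation introduced by the mm-scale lens and ultra-low-power readout. The snippet below is a hypothetical stand-in (a tiny residual CNN in PyTorch) meant only to show the general shape of such a corrector, not the authors' architecture or weights.

# Illustrative sketch only: a tiny residual CNN of the kind that could map
# raw frames from a mm-scale lens / ULP front end to corrected images.
# Architecture, width, and frame size are assumptions, not the paper's.
import torch
import torch.nn as nn

class TinyImageCorrector(nn.Module):
    def __init__(self, channels=1, width=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        # Predict a correction residual and add it back to the raw frame
        return x + self.body(x)

raw = torch.rand(1, 1, 160, 120)   # placeholder low-resolution sensor frame
corrected = TinyImageCorrector()(raw)
print(corrected.shape)             # same size as the input frame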

The preprint is available on arXiv: https://arxiv.org/abs/2203.04496

Personally, I find such work quite fascinating. With recent advances in learning-based approaches to computer vision, we're seeing a "race to the top": larger neural networks, humongous datasets, and ever-beefier GPUs drawing hundreds of watts of power. On the other hand, there's also a "race to the bottom," driven by edge computing/IoT applications that are extremely resource-constrained: microwatts of power, low image resolutions, and splitting hairs over every bit and byte of data transferred.

