Videos du jour: TinyML, Hamamatsu, ADI

Image Sensors World        Go to the original article...

tinyML Asia 2022
In-memory computing and Dynamic Vision Sensors: Recipes for tinyML in Internet of Video Things
Arindam BASU, Professor, Department of Electrical Engineering, City University of Hong Kong

Vision sensors are unique among IoT devices in that they provide rich information but also require excessive bandwidth and energy, which limits the scalability of this architecture. In this talk, we will describe our recent work in using event-driven dynamic vision sensors for IoVT applications such as unattended ground sensors and intelligent transportation systems. To further reduce the energy of the sensor node, we utilize in-memory computing (IMC): the SRAM used to store the video frames is also used to perform basic image processing operations and to trigger the downstream deep neural networks. Lastly, we introduce a new concept of hybrid IMC that combines multiple types of memory.
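
As a rough illustration of the frame-gating idea described in the abstract (all thresholds and names here are hypothetical, not from the talk): a cheap change-detection measure, of the kind an in-memory computing macro could evaluate over frames held in SRAM, decides whether to wake the heavier downstream network.

```python
import numpy as np

def activity_trigger(prev_frame, frame, pixel_thresh=15, count_thresh=500):
    """Cheap change detection: count pixels whose absolute difference
    exceeds a threshold, and only signal the downstream deep network
    when enough pixels changed."""
    changed = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > pixel_thresh
    return changed.sum() > count_thresh

# A static scene keeps the network asleep; a large change wakes it.
rng = np.random.default_rng(0)
static = rng.integers(0, 256, (240, 320), dtype=np.uint8)
moving = static.copy()
moving[50:150, 50:150] = 255  # simulated object entering the scene
print(activity_trigger(static, static), activity_trigger(static, moving))
```

The point of the sketch is the asymmetry of cost: the gate is a single pass of subtractions and comparisons, while the network it protects is orders of magnitude more expensive.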

Photon counting imaging using Hamamatsu's scientific imaging cameras - TechBites Series

With our new photon number resolving mode, the ORCA-Quest enables photon counting resolution across a full 9.4-megapixel image. See the camera in action and learn how photon number imaging pushes quantitative imaging to a new frontier.

Accurate, Mobile Object Dimensioning using Time of Flight Technology

ADI's high-resolution 3D depth sensing technology, coupled with advanced image stitching algorithms, enables the dimensioning of non-conveyable large objects for logistics applications. Rather than moving the object to a fixed dimensioning gantry, ADI's 3D technology enables operators to take the camera to the object to perform the dimensioning function. With the same level of accuracy as fixed dimensioners, the mobile system reduces the time and cost of measurement while enhancing energy efficiency.


Lecture by Dr. Tobi Delbruck on the history of silicon retina and event cameras


Silicon Retina: History, Live Demo, and Whiteboard Pixel Design


Rockwood Memorial Lecture 2023: Tobi Delbruck, Institute of Neuroinformatics, UZH-ETH Zürich

Event Camera Silicon Retina: History, Live Demo, and Whiteboard Circuit Design
Rockwood Memorial Lecture 2023 (11/20/23)
Hosted by: Terry Sejnowski, Ph.D. and Gert Cauwenberghs, Ph.D.
Organized by: Institute for Neural Computation

Abstract: Event cameras electronically model the spike-based sparse output of biological eyes to reduce latency, increase dynamic range, and sparsify activity in comparison to conventional imagers. Driven by the need for more efficient battery-powered, always-on machine vision in future wearables, event cameras have emerged as a next step in the continued evolution of electronic vision. This lecture will have three parts: (1) a brief history of silicon retina development, starting from Fukushima’s Neocognitron and Mahowald and Mead’s earliest spatial retinas; (2) a live demo of a contemporary frame-event DAVIS camera that includes an inertial measurement unit (IMU) vestibular system; and (3) (targeted at neuromorphic analog circuit design students in the BENG 216 class) a whiteboard discussion of event camera pixel design at the transistor level, highlighting the design aspects that endow event camera pixels with fast response even under low lighting, precise threshold matching even under large transistor mismatch, and a temperature-independent event threshold.
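
The pixel behavior discussed in the lecture follows the standard first-order event camera model: a pixel emits an event whenever its log intensity moves more than a contrast threshold away from the level memorized at the last event. A minimal sketch of that idealized model (not the transistor-level design from the whiteboard session):

```python
import math

def event_pixel(intensities, C=0.2):
    """Idealized DVS pixel: emit +1/-1 polarity events when log-intensity
    deviates from the memorized reference level by more than threshold C,
    updating the reference by C per event."""
    events = []
    ref = math.log(intensities[0])
    for t, I in enumerate(intensities[1:], start=1):
        d = math.log(I) - ref
        while abs(d) >= C:
            pol = 1 if d > 0 else -1
            events.append((t, pol))
            ref += pol * C
            d = math.log(I) - ref
    return events

print(event_pixel([1.0, 1.5, 2.25]))  # a brightening ramp produces ON events
print(event_pixel([1.0, 1.0, 1.0]))  # a constant signal produces no events
```

Because the threshold acts on log intensity, the pixel responds to contrast rather than absolute brightness, which is what gives event cameras their wide dynamic range.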


Videos du jour — Sony, onsemi, realme/Samsung [June 16, 2023]


Stacked CMOS Image Sensor Technology with 2-Layer Transistor Pixel | Sony Official

Sony Semiconductor Solutions Corporation (“SSS”) has succeeded in developing the world’s first* stacked CMOS image sensor technology with 2-Layer Transistor Pixel.
This new technology will prevent underexposure and overexposure in settings with a combination of bright and dim illumination (e.g., backlit settings) and enable high-quality, low-noise images even in low-light (e.g., indoor, nighttime) settings.
LYTIA image sensors are designed to enable smartphone users to express and share their emotions more freely and to bring a creative experience far beyond your imagination. SSS continues to create a future where everyone can enjoy a life full of creativity with LYTIA.
*: As of announcement on December 16, 2021.

New onsemi Hyperlux Image Sensor Family Leads the Way in Next-Generation ADAS to Make Cars Safer
onsemi's new Hyperlux™ image sensors are steering the future of autonomous driving!
Armed with 150 dB ultra-high dynamic range to capture high-quality images in the most extreme lighting conditions, our Hyperlux™ sensors use up to 30% less power with a footprint that's up to 28% smaller than competing devices.

When realme 11 Pro+ gets booted with ISOCELL HP3 Super Zoom, a 200MP Image Sensor | realme
The ISOCELL HP3 SuperZoom, a 200MP image sensor in the realme 11 Pro+, is combined with realme’s advanced camera technology. What will you capture with this innovation?


Videos du jour — onsemi, CEA-Leti, Teledyne e2v [June 7, 2023]



Overcoming Challenging Lighting Conditions with eHDR: onsemi’s AR0822 is an innovative image sensor that produces high-quality 4K video at 60 frames-per-second.

Discover the wafer-to-wafer process: CEA-Leti's expertise in hybrid bonding, covering the different stages of the wafer-to-wafer process in the CEA-Leti clean room, starting with chemical mechanical planarization (CMP), through wafer-to-wafer bonding, alignment measurement, characterization of bonding quality, and grinding, to results analysis.


Webinar - Pulsed Time-of-Flight: a complex technology for a simpler and more versatile system: Hosted by Vision Systems Design and presented by Yoann Lochardet, 3D Marketing Manager at Teledyne e2v, in June 2022, this webinar discusses how, at first glance, pulsed Time-of-Flight (ToF) can appear to be a very complex technology that is difficult to understand and use. That is true in the sense that the technology is state-of-the-art and requires the latest technical advancements. However, it is also very flexible, with features and capabilities that reduce the complexity of the whole system, making it simpler and more versatile.


Videos of the day [TinyML and WACV]


Event-based sensing and computing for efficient edge artificial intelligence and TinyML applications
Federico CORRADI, Senior Neuromorphic Researcher, IMEC

The advent of neuro-inspired computing represents a paradigm shift for edge Artificial Intelligence (AI) and TinyML applications. Neurocomputing principles enable the development of neuromorphic systems with strict energy and cost reduction constraints for signal processing applications at the edge. In these applications, the system needs to accurately respond to the data sensed in real-time, with low power, directly in the physical world, and without resorting to cloud-based computing resources.
In this talk, I will introduce key concepts underpinning our research: on-demand computing, sparsity, time-series processing, event-based sensory fusion, and learning. I will then showcase some examples of a new sensing and computing hardware generation that employs these neuro-inspired fundamental principles for achieving efficient and accurate TinyML applications. Specifically, I will present novel computer architectures and event-based sensing systems that employ spiking neural networks with specialized analog and digital circuits. These systems use an entirely different model of computation than our standard computers. Instead of relying upon software stored in memory and fast central processing units, they exploit real-time physical interactions among neurons and synapses and communicate using binary pulses (i.e., spikes). Furthermore, unlike software models, our specialized hardware circuits consume low power and naturally perform on-demand computing only when input stimuli are present. These advancements offer a route toward TinyML systems composed of neuromorphic computing devices for real-world applications.
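
As a toy illustration of the spike-based, on-demand computation described above (a textbook leaky integrate-and-fire model, not IMEC's hardware): the neuron only produces activity when driven by input spikes, and stays silent, consuming nothing in an event-driven implementation, when the input is silent.

```python
def lif_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Minimal leaky integrate-and-fire neuron: the membrane potential
    integrates weighted input spikes, leaks each step, and emits an
    output spike (then resets) when it crosses threshold."""
    v, out = 0.0, []
    for s in input_spikes:
        v = v * leak + weight * s
        if v >= threshold:
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

print(lif_neuron([1, 1, 0, 0, 1, 1, 1]))  # spikes only after enough input
print(lif_neuron([0] * 7))                # silent input -> silent output
```

The second call is the "on-demand" point: with no input stimuli there are no state changes at all, unlike a clocked network that computes every frame regardless of content.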

Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help Through Multi-Task Learning

Authors: Abdullah Abuolaim (York University)*; Mahmoud Afifi (Apple); Michael S Brown (York University) 
Many camera sensors use a dual-pixel (DP) design that operates as a rudimentary light field providing two sub-aperture views of a scene in a single capture. The DP sensor was developed to improve how cameras perform autofocus. Since the DP sensor's introduction, researchers have found additional uses for the DP data, such as depth estimation, reflection removal, and defocus deblurring. We are interested in the latter task of defocus deblurring. In particular, we propose a single-image deblurring network that incorporates the two sub-aperture views into a multi-task framework. Specifically, we show that jointly learning to predict the two DP views from a single blurry input image improves the network's ability to learn to deblur the image. Our experiments show this multi-task strategy achieves +1 dB PSNR improvement over state-of-the-art defocus deblurring methods. In addition, our multi-task framework allows accurate DP-view synthesis (e.g., ~39 dB PSNR) from the single input image. These high-quality DP views can be used for other DP-based applications, such as reflection removal. As part of this effort, we have captured a new dataset of 7,059 high-quality images to support our training for the DP-view synthesis task.
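
The dB figures quoted in the abstract are peak signal-to-noise ratio (PSNR) values. A minimal sketch of how that metric is computed (the test images here are synthetic, purely for illustration):

```python
import math
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and an
    estimate: 10*log10(peak^2 / mean squared error)."""
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

# A less-noisy reconstruction scores a higher PSNR.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
noisy = np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)
less_noisy = np.clip(img + rng.normal(0, 0.025, img.shape), 0, 1)
print(psnr(img, noisy), psnr(img, less_noisy))
```

On this scale a "+1 dB" improvement means roughly a 20% reduction in mean squared error, which is why it is considered a meaningful gain over the state of the art.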


Videos of the day [AMS-OSRAM, ESPROS, Sony]


The new Mira global shutter image sensor from ams OSRAM advances 2D and 3D sensing with high quantum efficiency at visible and NIR wavelengths. The Mira sensors come in a chip-scale package with an optimized footprint and an industry-leading size-to-resolution ratio, enabled by state-of-the-art stacked back-side illumination technology that shrinks the package footprint and gives greater design flexibility to manufacturers of smart glasses and other space-constrained products. The Mira image sensors are very small, offer superior image quality in low-light conditions, and, with their many on-chip operations, open up many new possibilities for developers.


ESPROS Time-of-Flight products were developed for outdoor use and handle background light very well. These outdoor scenes were taken with our TOFcam-660, in which an epc660 is installed; it has a resolution of 320x240 pixels and can easily be used for outdoor applications with a lot of ambient light, even in direct sunlight of 100 klux. Thanks to the good resolution, the HDR mode with different integration times, and the aforementioned outdoor performance, a variety of applications that require a clean distance image (depth map) can be developed.



[Video] Smartphones vs. cameras over time


This video asks whether smartphones have replaced cameras.



Videos du jour – June 24, 2022


CASS Talks 2022 - Jose Lipovetzky, CNEA, Argentina - April 8, 2022. Viewing ionizing radiation with CMOS image sensors.

Distributed On-Sensor Compute System for AR/VR Devices: A Semi-Analytical Simulation Framework for Power Estimation (Jorge GOMEZ, Research Scientist, Reality Labs, Meta)

tinyML Applications and Systems Session: Millimeter-Scale Ultra-Low-Power Imaging System for Intelligent Edge Monitoring (Andrea BEJARANO-CARBO, PhD Student, University of Michigan, Ann Arbor, MI)

This video briefly introduces the Global Shutter product line of PixArt. It provides insights into the key competitiveness of PixArt's Global Shutter products by comparing their ultra-low-power consumption rates and advanced built-ins with other similar products in the market.


Recent Image Sensor Videos


Sony presents "Advantages of Large Format Global Shutter and Rolling Shutter Image Sensor"

onsemi presents their CMOS image sensor layer structure consisting of a microlens array, color filter array, photodiode, pixel transistors, bond layer and ASIC:

Newsight presents its enhanced time-of-flight technology for depth sensing:

And finally a cute cat video to wrap it up: Samsung's new 200 megapixel ISOCELL image sensor promotional video:


Smartphone imaging trends webinar and whitepaper


From Counterpoint Research:

Over the last few years, steady upgrades in CMOS image sensor (CIS) technology combined with the evolution of chipsets – and the improvements in AI they enable – are bringing step-change improvements to smartphone camera performance.

Counterpoint Research would like to invite you to join our latest webinar, "Smartphone Imaging Trends: New Directions Capturing Magic Moments", which will be attended by key executives from HONOR, Qualcomm, and DXOMARK, as well as renowned professional photographer and director Eugenio Recuenco.

The webinar is a complement to an upcoming Counterpoint whitepaper (also to be released on June 8), which will cover smartphone imaging trends, OEM strategy comparisons, and the key components of a great camera, and show how technology is helping to unlock creative expression.

The accompanying whitepaper can be obtained here:

The camera has always been a major component of the smartphone and a key selling point among consumers. In the past, smartphone cameras lagged far behind even the most basic DSLRs as form factor and size constraints impacted picture and video quality. But technology has now advanced to the point where today’s top flagship devices are capable of delivering DSLR-like performance.

The rise of AI algorithms, advancements in multi-frame/multi-lens computational photography, more powerful processors, the addition of dedicated image signal and neural processing units and, of course, the compounding of R&D experience has resulted in today’s smartphone cameras rivalling dedicated imaging devices.

In fact, the smartphone’s comparatively compact form factor is an advantage, as clicking pictures and recording videos are becoming integrated into our daily lives through the growth of social media. The role of the camera has shifted to become a life tool, as end-users migrate from being simply consumers of content to creators.

This new direction that imaging has taken warrants further advancements in smartphone cameras, as we lean on technology to make the experience easier while allowing all of us to be more creative.

Table of Contents:

Smartphone Imaging Trends
Megapixels: More is not necessarily better
Multi-camera modules: Covering all scenarios
Image processing: Pushing the laws of physics
OEM Imaging Comparisons
As hardware slows, innovation grows
Where the magic happens
New magic, new directions
Measuring Quality
Components of an exceptional smartphone camera
Image processing innovation
Capturing Magic Moments
Powering art through technology


Prof. Eric Fossum’s interview at LDV vision summit 2018


Eric Fossum & Evan Nisselson Discussing The Evolution, Present & Future of Image Sensors

Eric Fossum is the inventor of the CMOS image sensor “camera-on-a-chip” used in billions of cameras, from smartphones to web cameras to pill cameras and many other applications. He is a solid-state image sensor device physicist and engineer, and his career has included academic and government research, and entrepreneurial leadership. He is currently a Professor with the Thayer School of Engineering at Dartmouth in Hanover, New Hampshire where he teaches, performs research on the Quanta Image Sensor (QIS), and directs the School’s Ph.D. Innovation Program. Eric and Evan discussed the evolution of image sensors, challenges and future opportunities.



More about LDV vision summit 2022:

Organized by LDV Capital


[An earlier version of this post incorrectly mentioned this interview is from the 2022 summit. This was in fact from 2018. --AI]


"Photon counting cameras for quantum imaging applications" Prof. Edoardo Charbon


"Photon counting cameras for quantum imaging applications" 

Prof. Edoardo Charbon, Full Professor, Advanced Quantum Architecture Lab


Photon counting has entered the realm of image sensing with the creation of deep-submicron CMOS SPAD technology. The format of SPAD image sensors has expanded from 8×4 pixels in 2004 to the recent megapixel camera in 2019, and the applications have literally exploded in the last few years, with the introduction of proximity sensing and portable telemeters. SPAD image sensors are today in almost every smartphone and will soon be in every car. The introduction of Quanta Burst Photography has created a great opportunity for photon counting cameras, which are ideally suited to it given their digital nature and speed; it is, however, computationally intensive. A solution to this problem is the use of 3D stacking, introduced for SPADs in 2015, where large silicon real estate is now available to host deep-learning processors and neural networks directly on chip, thus enabling complex processing in situ and reducing the overall power consumption. But the real opportunity is the emergence of a variety of quantum imaging modalities, including ghost imaging, quantum plenoptic vision, and quantum LiDAR, to name a few. The talk will conclude with a technical and economic perspective on SPAD imagers and the vision for massively parallel solid-state photon counting in scientific and consumer applications.
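
As a rough illustration of the photon counting statistics behind quanta burst photography (an idealized model, not Prof. Charbon's pipeline): a 1-bit SPAD exposure records only whether at least one photon arrived, so under Poisson arrivals the underlying flux can be recovered from the detection rate across many exposures.

```python
import numpy as np

def estimate_flux(binary_frames):
    """Given a stack of 1-bit SPAD frames (1 = at least one photon
    detected during the exposure), estimate the mean photons/frame per
    pixel assuming Poisson arrivals:
        P(detect) = 1 - exp(-lam)  =>  lam = -ln(1 - p)."""
    p = np.clip(np.mean(binary_frames, axis=0), 0, 1 - 1e-9)
    return -np.log(1 - p)

# Simulate 20,000 one-bit exposures of a pixel receiving 0.5 photons/frame
# on average; the estimator recovers the flux despite the 1-bit readout.
rng = np.random.default_rng(2)
true_lam = 0.5
frames = (rng.poisson(true_lam, size=(20000, 1)) > 0).astype(float)
est = estimate_flux(frames)[0]
print(est)
```

The inversion step is the "computationally intensive" part alluded to above once it is done jointly over many pixels, frames, and motion estimates, which is what motivates on-chip processing via 3D stacking.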


Videos du jour – CICC, PhotonicsNXT and EPIC


IEEE CICC 2022 best paper candidates present their work

Solid-State dToF LiDAR System Using an Eight-Channel Addressable, 20W/Ch Transmitter, and a 128x128 SPAD Receiver with SNR-Based Pixel Binning and Resolution Upscaling
Shenglong Zhuo, Lei Zhao, Tao Xia, Lei Wang, Shi Shi, Yifan Wu, Chang Liu, et al.
Fudan University, PhotonIC Technologies, Southern Univ. of S&T

A 93.7%-Efficiency 5-Ratio Switched-Photovoltaic DC-DC Converter
Sandeep Reddy Kukunuru, Yashar Naeimi, Loai Salem
University of California, Santa Barbara

A 23-37GHz Autonomous Two-Dimensional MIMO Receiver Array with Rapid Full-FoV Spatial Filtering for Unknown Interference Suppression
Boce Lin, Tzu-Yuan Huang, Amr Ahmed, Min-Yu Huang, Hua Wang
Georgia Institute of Technology

PhotonicsNXT Fall Summit keynote discusses automotive lidar

This keynote session by Pierrick Boulay of Yole Développement at the PhotonicsNXT Fall Summit, held on October 28, 2021, provides an overview of the lidar ecosystem and shows how lidar is being used within the auto industry for ranging and imaging.

EPIC Online Technology Meeting on Single Photon Sources and Detectors

The power hidden in one single photon is unprecedented. But we need to find ways to harness that power. This meeting will discuss cutting-edge technologies paving the way for versatile and efficient pure single-photon sources and detection schemes with low dark count rates, high saturation levels, and high detection efficiencies. This meeting will gather the key players in the photonic industry pushing the development of these technologies towards commercializing products that harness the intrinsic properties of photons.
