A DIY copper oxide camera sensor

Image Sensors World        Go to the original article...

Can we make photosensitive pixels from copper oxide? YouTuber "Breaking Taps" answers:




One man’s (event camera) noise is another man’s signal


In a preprint titled "Noise2Image: Noise-Enabled Static Scene Recovery for Event Cameras," Cao et al. propose a method that uses the inherent pixel noise present in event camera sensors to recover scene intensity maps.

Abstract:

Event cameras capture changes of intensity over time as a stream of ‘events’ and generally cannot measure intensity itself; hence, they are only used for imaging dynamic scenes. However, fluctuations
due to random photon arrival inevitably trigger noise events, even for static scenes. While previous efforts have been focused on filtering out these undesirable noise events to improve signal quality, we find that,
in the photon-noise regime, these noise events are correlated with the static scene intensity. We analyze the noise event generation and model its relationship to illuminance. Based on this understanding, we propose a method, called Noise2Image, to leverage the illuminance-dependent noise characteristics to recover the static parts of a scene, which are otherwise invisible to event cameras. We experimentally collect a dataset of noise events on static scenes to train and validate Noise2Image. Our results show that Noise2Image can robustly recover intensity images solely from noise events, providing a novel approach for capturing static scenes in event cameras, without additional hardware.

Link: https://arxiv.org/abs/2404.01298
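The core idea lends itself to a compact illustration: if a pixel's noise-event rate grows with the light falling on it, then counting noise events per pixel over a static interval and mapping the rates through a calibration curve yields a rough intensity map. The sketch below is not the authors' method (which learns the mapping from data); the function name and the linear stand-in calibration are invented for illustration.

```python
import numpy as np

def intensity_from_noise_events(events, shape, window_s, calib):
    """Rough static-scene intensity map from noise events (illustrative).

    events: iterable of (x, y, t, polarity) tuples recorded while the
            scene is static, so the events are (ideally) photon-noise driven.
    calib:  callable mapping per-pixel noise-event rate (Hz) to relative
            illuminance; a learned or measured curve in practice.
    """
    counts = np.zeros(shape, dtype=np.float64)
    for x, y, _t, _polarity in events:
        counts[y, x] += 1                 # accumulate events per pixel
    rates = counts / window_s             # noise-event rate (Hz)
    return calib(rates)                   # illuminance estimate per pixel

# Toy usage: a linear calibration stands in for the learned mapping.
events = [(0, 0, 0.10, 1), (0, 0, 0.52, -1), (1, 1, 0.31, 1)]
img = intensity_from_noise_events(events, (2, 2), window_s=1.0,
                                  calib=lambda r: r)
```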


Photonic-electronic integrated circuit-based coherent LiDAR engine


Lukashchuk et al. recently published a paper titled "Photonic-electronic integrated circuit-based coherent LiDAR engine" in the journal Nature Communications.

Open access link: https://www.nature.com/articles/s41467-024-47478-z

Abstract: Chip-scale integration is a key enabler for the deployment of photonic technologies. Coherent laser ranging, or FMCW LiDAR, is a perception technology that benefits from instantaneous velocity and distance detection, eye-safe operation, long range, and immunity to interference. However, wafer-scale integration of these systems has been challenged by stringent requirements on laser coherence, frequency agility, and the necessity for optical amplifiers. Here, we demonstrate a photonic-electronic LiDAR source composed of a micro-electronic-based high-voltage arbitrary waveform generator, a hybrid photonic circuit-based tunable Vernier laser with piezoelectric actuators, and an erbium-doped waveguide amplifier. Importantly, all systems are realized in a wafer-scale manufacturing-compatible process comprising III-V semiconductors, silicon nitride photonic integrated circuits, and 130-nm SiGe bipolar complementary metal-oxide-semiconductor (CMOS) technology. We conducted ranging experiments at a 10-meter distance with a precision level of 10 cm and a 50 kHz acquisition rate. The laser source is turnkey and linearization-free, and it can be seamlessly integrated with existing focal plane and optical phased array LiDAR approaches.


a Schematics of photonic-electronic LiDAR structure comprising a hybrid integrated laser source, charge-pump based HV-AWG ASIC, photonic integrated erbium-doped waveguide amplifier. b Coherent ranging principle. c Packaged laser source. RSOA is edge coupled to Si3N4 Vernier filter configuration waveguide, whereas the output is glued to the fiber port. PZT and microheater actuators are wirebonded as well as butterfly package thermistor. d Zoom-in view of (c) highlighting a microring with actuators. e Micrograph of the HV-AWG ASIC chip fabricated in a 130 nm SiGe BiCMOS technology. The total size of the chip is 1.17 × 1.07 mm². f The Erbium-doped waveguide is optically excited by a 1480 nm pump showing green luminescence due to the transition from a higher lying energy level to the ground state.

a Schematics of the integrated circuit consisting of a 4-stage voltage-controlled differential ring oscillator which drives charge pump stages to generate high-voltage arbitrary waveforms. b Principles of waveform generation demonstrated by the output response to the applied control signals in the time domain. Inset shows the change in oscillation frequency in response to a frequency control input, from 88 MHz to 208 MHz, which modifies the output waveform. c Measured arbitrary waveforms generated by the ASIC with different shapes, amplitudes, periods and offset values. d Generation of the linearized sawtooth electrical waveform used in LiDAR measurements. Digital and analog control signals are modulated in the time domain to fine-tune the output. 

a Electrical waveform generated by the ASIC. Blue circles highlight the segment of ~ 16 μs used for ranging and linearity analysis. The red curve is a linear fit to the given segment. b Time-frequency map of the laser chirp obtained via heterodyne detection with auxiliary laser. RBW is set to 10 MHz. c Optical spectrum of Vernier laser output featuring 50 dB side mode suppression ratio. d Optical spectrum after EDWA with >20 mW optical power. e Instantaneous frequency of the optical chirp obtained via delayed homodyne measurement (inset: experimental setup). The red dashed line corresponds to the linear fit. The excursion of the chirp equates to 1.78 GHz over a 16 μs period. f Nonlinearity of the laser chirp inferred from (e). RMSE nonlinearity equates to 0.057% with the major chirp deviation from the linear fit lying in the window ± 2 MHz. g The frequency beatnote in the delayed homodyne measurement corresponds to the reference MZI delay ~10 m. The 90% fraction of the beatnote signal is taken for the Fourier transformation. h LiDAR resolution inferred from the FWHM of the MZI beatnotes over >20,000 realizations. The most probable resolution value is 11.5 cm, while the native resolution is 9.3 cm corresponding to 1.61 GHz (90% of 1.78 GHz).

a Schematics of the experimental setup for ranging experiments. The amplified laser chirp scans the target scene via a set of galvo mirrors. A digital sampling oscilloscope (DSO) records the balanced detected beating of the reflected and reference optical signals. CIRC - circulator, COL - collimator, BPD - balanced photodetector. b Point cloud consisting of ~ 104 pixels featuring the doughnut on a cone and C, S letters as a target 10 m away from the collimator. c The Fourier transform over one period, highlighting collimator, circulator and target reflection beatnotes. Blackman-Harris window function was applied to the time trace prior to the Fourier transformation. d Detection histogram of (b). e Single point imaging depth histogram indicating 1.5 cm precision of the LiDAR source.
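The resolution figures quoted in the captions above follow from the standard FMCW relations (range resolution ΔR = c/2B; beat frequency for a target at distance d is f_b = 2dB/(cT)); a quick numerical check using only values quoted in the paper:

```python
# FMCW ranging sanity check: range resolution dR = c / (2B);
# beat frequency for a target at distance d: f_b = 2 * d * B / (c * T).
c = 299_792_458.0      # speed of light, m/s
B = 1.61e9             # usable chirp excursion, Hz (90% of the 1.78 GHz chirp)
T = 16e-6              # chirp period, s

dR = c / (2 * B)
print(f"native resolution: {dR * 100:.1f} cm")   # ~9.3 cm, as quoted

d = 10.0               # reference MZI delay, ~10 m
f_beat = 2 * d * B / (c * T)
print(f"beat frequency: {f_beat / 1e6:.2f} MHz")
```

The 9.3 cm native resolution quoted for the MZI beatnote analysis drops directly out of ΔR = c/2B with the 90% chirp fraction.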

SI Sensors introduces custom CIS design services


Custom CMOS image sensor design on a budget
 
Specialised Imaging Ltd reports on the recent market launch of SI Sensors (Cambridge, UK) - a new division of the company focused on the development of advanced CMOS image sensors.
 
Drawing upon a team of specialists with a broad range of experience in image sensor design – SI Sensors is creating custom image sensor designs with cutting edge performance. In particular, the company’s in-house experts have specialist knowledge of visible and non-visible imaging technologies, optimised light detection and charge transfer, radiation-hard sensor design, and creating CCD-in-CMOS pixels to enable novel imaging techniques such as ultra-fast burst mode imaging.
 
Philip Brown, General Manager of SI Sensors said, “In addition to developing new sensors for Specialised Imaging’s next generation of ultra-fast imaging cameras utilising the latest foundry technologies, we are developing solutions for other customers with unique image sensor design requirements including for space and defence applications”.
 
He added “SI Sensors team also use their skills and experience to develop bespoke image sensor packages that accommodate custom electrical, mechanical, and thermal interface requirements. Our aim is always to achieve the best balance between image sensor performance and cost (optimised value) for customers. To ensure performance and consistent quality and reliability we perform detailed electro-optical testing from characterisation through to mass production testing adhering to industry standards such as EMVA 1288”.
 
For further information on custom CMOS image sensor design and production please visit www.si-sensors.com or contact SI Sensors on +44-1442-827728 or info@si-sensors.com.
 
Specialised Imaging Ltd is a dynamic company focused on niche imaging markets and applications, with particular emphasis on high-speed image capture and analysis. Drawing upon over 20 years’ experience, Specialised Imaging Ltd is today a market leader in the design and manufacture of ultra-fast framing cameras and ultra-high-speed video cameras.


NASA develops a 36 pixel sensor


From PetaPixel: https://petapixel.com/2024/04/30/nasa-develops-tiny-yet-mighty-36-pixel-sensor/

NASA Develops Tiny Yet Mighty 36-Pixel Sensor



While NASA’s James Webb Space Telescope is helping astronomers craft 122-megapixel photos 1.5 million kilometers from Earth, the agency’s newest camera performs groundbreaking space science with just 36 pixels. Yes, 36 pixels, not 36 megapixels.

The X-ray Imaging and Spectroscopy Mission (XRISM), pronounced “crism,” is a collaboration between NASA and the Japan Aerospace Exploration Agency (JAXA). The mission’s satellite launched into orbit last September and has been scouring the cosmos for answers to some of science’s most complex questions ever since. The mission’s imaging instrument, Resolve, has a 36-pixel image sensor.

This six-by-six pixel array measures 0.2 inches (five millimeters) per side, which is not so different from the image sensor in the Apple iPhone 15 and 15 Plus. The main camera in those smartphones is eight by six millimeters, albeit with 48 megapixels. That’s 48,000,000 pixels, just a handful more than 36.

How about a full-frame camera, like the Sony a7R V, the go-to high-resolution mirrorless camera? That camera has over 60 megapixels and captures images that are 9,504 by 6,336 pixels. The image sensor has a total of 60,217,344 pixels, 1,672,704 times the number of pixels in XRISM’s Resolve imager.

At this point, it is reasonable to wonder, “What could scientists possibly see with just 36 pixels?” As it turns out, quite a lot.

Resolve detects “soft” X-rays, which are about 5,000 times more energetic than visible light wavelengths. It examines the Universe’s hottest regions, largest structures, and most massive cosmic objects, like supermassive black holes. While it may not have many pixels, its pixels are extraordinary and can produce a rich spectrum of visual data from 400 to 12,000 electron volts.

“Resolve is more than a camera. Its detector takes the temperature of each X-ray that strikes it,” explains Brian Williams, NASA’s XRISM project scientist at Goddard. “We call Resolve a microcalorimeter spectrometer because each of its 36 pixels is measuring tiny amounts of heat delivered by each incoming X-ray, allowing us to see the chemical fingerprints of elements making up the sources in unprecedented detail.”

Put another way, each of the sensor’s 36 pixels can independently and accurately measure changes in temperature of specific wavelengths of light. The sensor measures how the temperature of each pixel changes based on the X-ray it absorbs, allowing it to measure the energy of a single particle of electromagnetic radiation.
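The readout principle is simply E = C·ΔT: the absorbed X-ray's energy equals the pixel's heat capacity times its temperature rise, which is also why the detector must sit near absolute zero, where heat capacity is tiny and millikelvin-scale rises become measurable. A toy sketch with an assumed, illustrative heat capacity (not Resolve's actual value):

```python
# Microcalorimeter readout sketch: photon energy E = C * dT.
# C below is an assumed illustrative heat capacity, not Resolve's real value.
J_PER_EV = 1.602176634e-19

def photon_energy_ev(delta_t_k, heat_capacity_j_per_k):
    """Photon energy (eV) inferred from a pixel's temperature rise (K)."""
    return delta_t_k * heat_capacity_j_per_k / J_PER_EV

C = 1e-12                              # assumed heat capacity, J/K
dT = 6000.0 * J_PER_EV / C             # rise from a 6 keV X-ray: ~1 mK
print(photon_energy_ev(dT, C))         # recovers 6000.0 eV
```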

There is a lot of information in this data, and scientists can learn an incredible amount about very distant objects using these X-rays.

Resolve can detect particular wavelengths of light so precisely that it can detect the motions of individual elements within a target, “effectively providing a 3D view.” The camera can detect the flow of gas within distant galaxy clusters and track how different elements behave within the debris of supernova explosions.

The 36-pixel image sensor must be extremely cold during scientific operations to pull off this incredible feat.

Videographers may attach a fan to their mirrorless camera to keep it cool during high-resolution video recording. However, for an instrument like Resolve, a fan just won’t cut it.
Using a six-stage cooling system, the sensor is chilled to -459.58 degrees Fahrenheit (-273.1 degrees Celsius), which is just 0.09 degrees Fahrenheit (0.05 degrees Celsius) above absolute zero. By the way, the average temperature of the Universe itself is about -454.8 degrees Fahrenheit (-270.4 degrees Celsius).

While a 36-pixel camera helping scientists learn new things about the cosmos may sound unbelievable, “It’s actually true,” says Richard Kelley, the U.S. principal investigator for XRISM at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.

“The Resolve instrument gives us a deeper look at the makeup and motion of X-ray-emitting objects using technology invented and refined at Goddard over the past several decades,” Kelley continues.

XRISM and Resolve offer the most detailed and precise X-ray spectrum data in the history of astrophysics. With just three dozen pixels, they are charting a new course of human understanding through the cosmos (and putting an end to the megapixel race).


Talk on Digital Camera Myths and Misunderstandings – Part II


In a follow-up to the talk previously shared on this blog, here's "Digital Camera Myths, Misstatements and Misunderstandings Part II," a presentation by Wayne Prentice to the Rochester, NY chapter of IS&T (Society for Imaging Science and Technology) on 17 April 2024.



00:00 - Introduction
5:51 - Revisiting ISO sensitivity
9:12 - ISO = 10/Ha - really independent of camera and illuminant?
13:49 - "It's official: ISO 51,200 is the new 6400". Really?
22:44 - RCCB (Red, clear, clear Blue) sensors yield better SNR. Really?
25:35 - Depth of field: should you always use a longer focal length?
28:18 - sRGB, gamma, CRT display, and Human Vision
31:00 - Questions


NIT announces new full HD SWIR sensor – NSC2101


New High-Resolution, SWIR Sensor with High Performance

NIT (New Imaging Technologies) introduces its latest innovation in SWIR imaging technology: a high-resolution Short-Wave Infrared (SWIR) InGaAs sensor designed for the most demanding challenges in the field.

Overview
The new SWIR sensor – NSC2101 boasts remarkable features, including a high-performance InGaAs sensor with an 8µm pixel pitch, delivering an impressive 2MPIX resolution at 1920x1080px. Its ultra-low noise of only 25e- ensures exceptional image clarity, even in challenging environments. Additionally, with a dynamic range of 64dB, the sensor captures a wide spectrum of light intensities with precision and accuracy.

•    0.9µm to 1.7µm spectrum
•    2MPix – 1920x1080px @8µm pixel pitch
•    25e- readout noise
•    64dB dynamic range
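Taken together, the quoted noise and dynamic range imply the sensor's full-well capacity, since dynamic range in dB is 20·log10(full well / read noise). A quick back-calculation from the datasheet numbers:

```python
# Dynamic range (dB) = 20 * log10(full_well / read_noise), so the quoted
# figures imply the full-well capacity:
read_noise_e = 25.0
dr_db = 64.0
full_well_e = read_noise_e * 10 ** (dr_db / 20)
print(round(full_well_e))   # ~39,600 e- implied full well
```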
This cutting-edge sensor is designed and manufactured by NIT in France and promises unparalleled performance and reliability. Leveraging advanced technology and expertise, NIT has crafted a sensor that meets the rigorous standards of ISR applications, offering crucial insights and intelligence in various scenarios.

Image examples


Applications
The applications of this SWIR sensor are vast and diverse, catering to the needs of defense, security, and surveillance industries. The sensor’s capabilities are indispensable for enhancing situational awareness and decision-making, from monitoring border security to providing critical intelligence in tactical operations.

Extension
Moreover, NIT’s commitment to innovation extends beyond the sensor itself. The camera version integrating the NSC2101 sensor will be released this summer.


Foveon sensor development "still in design stage"


https://www.dpreview.com/interviews/6004010220/sigma-full-frame-foveon

Full-frame Foveon sensor "still at design stage" says Sigma CEO, "but I'm still passionate"

"Unfortunately, we have not made any significant progress since last year," says Sigma owner and CEO Kazuto Yamaki, when asked about the planned full-frame Foveon camera. But he still believes in the project and discussed what such a camera could still offer.

"We made a prototype sensor but found some design errors," he says: "It worked but there are some issues, so we re-wrote the schematics and submitted them to the manufacturer and are waiting for the next generation of prototypes." This isn't quite a return to 'square one,' but it means there's still a long road ahead.

"We are still in the design phase for the image sensor," he acknowledges: "When it comes to the sensor, the manufacturing process is very important: we need to develop a new manufacturing process for the new sensor. But as far as that’s concerned, we’re still doing the research. So it may require additional time to complete the development of the new sensor."

The Foveon design, which Sigma now owns, collects charge at three different depths in the silicon of each pixel, with longer wavelengths of light able to penetrate further into the chip. This means full-color data can be derived at each pixel location rather than having to reconstruct the color information based on neighboring pixels, as happens with conventional 'Bayer' sensors. Yamaki says the company's thinking about the benefits of Foveon have changed.

"When we launched the SD9 and SD10 cameras featuring the first-generation Foveon sensor, we believed the biggest advantage was its resolution, because you can capture contrast data at every location. Thus we believed resolution was the key." he says: "Today there are so many very high pixel-count image sensors: 60MP so, resolution-wise there’s not so much difference."

But, despite the advances made elsewhere, Yamaki says there's still a benefit to the Foveon design "I’ve used a lot of Foveon sensor cameras, I’ve taken a bunch of pictures, and when I look back at those pictures, I find a noticeable difference," he says. And, he says, this appeal may stem from what might otherwise be seen as a disadvantage of the design.

"It could be color because the Foveon sensor has lots of cross-talk between R, B and G," he suggests: "In contrast, Bayer sensors only capture R, B and G, so if you look at the spectral response a Bayer sensor has a very sharp response for each color, but when it comes to Foveon there’s lots of crosstalk and we amplify the images. There’s lots of cross-talk, meaning there’s lots of gradation between the colors R, B and G. When combined with very high resolution and lots of gradation in color, it creates a remarkably realistic, special look of quality that is challenging to describe."

The complexity of separating the color information that the sensor has captured is part of what makes noise such a challenge for the Foveon design, and this is likely to limit the market, Yamaki concedes:
"We are trying to make our cameras with the Foveon X3 sensor more user-friendly, but still, compared to the Bayer sensor cameras, it won’t be easy to use. We’re trying to improve the performance, but low-light performance can’t be as good as Bayer sensor. We will do our best to make a more easy-to-use camera, but still, a camera with Foveon sensor technology may not be the camera for everybody."

But this doesn't dissuade him. "Even if we successfully develop a new X3 sensor, we may not be able to sell tons of cameras. But I believe it will still mean a lot," he says: "despite significant technology advancements there hasn't been much progress in image quality in recent years. There’s a lot of progress in terms of burst rate or video functionality, but when you talk just about image quality, about resolution, tonality or dynamic range, there hasn’t been so much progress."

"If we release the Foveon X3 sensor today and people see the quality, it means a lot for the industry, that’s the reason I’m still passionate about the project."


Nexchip mass produces 55nm and 90nm BSI CIS


Google translation of a news article:

Jinghe Integration puts 50-megapixel image sensor into mass production, plans to double its CIS production capacity within the year

According to Jinghe Integration (688249) news, following the mass production of its 90nm CIS and 55nm stacked CIS, the company has added a new CIS product: its 55nm single-chip, 50-megapixel back-illuminated image sensor (BSI) has recently entered mass production, empowering a range of smartphone application scenarios and marking a leap from mid- and low-end to mid-to-high-end applications. Jinghe Integration plans to double its CIS production capacity this year, with CIS shipments increasing significantly to become its second-largest product line after display driver chips.

Nexchip's website shows the following technologies.

 https://www.nexchip.com.cn/en-us/Service/Roadmap



Blackmagic releases new 12K and 17K cine cameras


From: https://www.newsshooter.com/2024/04/12/blackmagic-design-ursa-cine-ursa-cine-17k/

Blackmagic Design URSA Cine & URSA Cine 17K


Airy3D – Teledyne e2v collaboration

Image Sensors World        Go to the original article...

Link: https://www.airy3d.com/airy3d-e2v-collaboration/

Teledyne e2v and Airy3D collaboration delivers more affordable 3D vision solutions 

Grenoble, FRANCE, April 23, 2024 —Teledyne e2v, a Teledyne Technologies [NYSE: TDY] company and global innovator of imaging solutions, is pleased to announce a new technology and design collaboration with Airy3D (Montreal, Canada), a leading 3D vision solution provider. The first result of this partnership is the co-engineering of the recently announced Topaz5D™, a low-cost, low power, passive, 2 megapixel global shutter sensor which produces 2D images and 3D depth maps.

Arnaud Foucher, Business Team Director at Teledyne e2v, said, “We’re very excited to have collaborated with Airy3D on the development of Topaz5D, our latest unique CMOS sensor. The need to deploy alternative 3D vision solutions in different industries is crucial. Teledyne e2v’s image sensor design capability coupled with Airy3D’s proven 3D technology has allowed us to develop more 3D vision products for several market segments with a reduced cost of ownership.”

Chris Barrett, CEO of Airy3D, commented, “Airy3D uniquely combines our patented optically Transmissive Diffraction Mask (TDM) design and deep software processing know-how, enabling our partners to add value to their products. Teledyne e2v’s image sensor design, production and supply chain expertise are paramount in introducing these novel 3D solutions to the market and this initiative is a key milestone for us.”

A Topaz5D Evaluation Kit and monochrome and color sensor samples are available now for evaluations and design. Please contact Teledyne e2v for more information.


Lecture on Noise in Event Cameras and "SciDVS" camera




Talk title: "Noise Limits of Event Cameras," presented at the Cambridge Huawei Frontiers in Image Sensing 2024 event

Speaker: Prof. Tobi Delbruck

Abstract: "Cameras that mimic biological eyes have a 50 year history and the DVS event camera pixel is now nearly 20 years old. Event camera academic and industrial development is active, but it is only in the last few years that we understand more about the ultimate limits on their noise performance. This talk will be about those limits: What are the smallest changes that we can detect at a particular light intensity and particular speed? What are the main sources of noise in event cameras and what are the limits on these? I will discuss these results in the context of our PhD student Rui Graca’s work on SciDVS, a large-pixel DVS that targets scientific applications such as neural imaging and space domain awareness."


A review of event cameras for automotive applications


Event Cameras in Automotive Sensing: A Review
Shariff et al.
IEEE Access

DOI: https://doi.org/10.1109/ACCESS.2024.3386032

Abstract:
Event cameras (EC) represent a paradigm shift and are emerging as valuable tools in the automotive industry, particularly for in-cabin and out-of-cabin monitoring. These cameras capture pixel intensity changes as "events" with ultra-low latency, making them suitable for real-time applications. In the context of in-cabin monitoring, EC offer solutions for driver and passenger tracking, enhancing safety and comfort. For out-of-cabin monitoring, they excel in tracking objects and detecting potential hazards on the road. This article explores the applications, benefits, and challenges of event cameras in these two critical domains within the automotive industry. This review also highlights relevant datasets and methodologies, enabling researchers to make informed decisions tailored to their specific vehicular technology and place their work in the broader context of EC sensing. Through an exploration of the hardware, the complexities of data processing, and customized algorithms for both in-cabin and out-of-cabin surveillance, this paper outlines a framework encompassing methodologies, tools, and datasets critical for the implementation of event camera sensing in automotive systems.


TriEye and Vertilas 1.3μm VCSEL-Driven SWIR Sensing Solutions


TriEye and Vertilas Partner to Demonstrate 1.3μm VCSEL-Driven SWIR Sensing Solutions

TEL AVIV, Israel, April 16, 2024/ – TriEye, pioneer of the world's first cost-effective mass-market Short-Wave Infrared (SWIR) sensing technology, and Vertilas GmbH, a leader in InP VCSEL products, announced today the joint demonstration of a 1.3μm VCSEL-powered SWIR sensing system.

TriEye and Vertilas announce their collaboration in advanced imaging technology. This partnership has led to the development of a technology demonstrator that integrates TriEye's state-of-the-art Short-Wave Infrared (SWIR) Raven image sensor with Vertilas’ innovative Indium Phosphide (InP) Vertical-Cavity Surface-Emitting Laser (VCSEL) technology. Adopting high-volume, scalable manufacturing strategies, these technologies provide cost-effective solutions for both consumer and industrial markets.

The system highlights the capabilities of TriEye's CMOS-based SWIR sensor, noted for its high sensitivity and 1.3MP resolution. Designed to enhance imaging in various industries, including automotive, consumer, biometrics, and mobile robots, this solution represents a significant step forward in sensing technology. Alongside, Vertilas introduces its InP SWIR VCSEL technology that provides high output power with high power efficiency. This new VCSEL technology is a complementary innovation that enhances the SWIR camera's functionality. Deploying 1.3μm VCSEL arrays enables greatly improved eye safety and signal quality while minimizing sunlight distortion. Vertilas InP VCSEL array technology also offers wavelengths at 1.55μm up to 2μm. This new technology is expected to broaden the scope of applications in imaging and illumination across multiple industries.

"Vertilas is thrilled to expand our efforts with TriEye in this groundbreaking initiative. Our InP VCSEL technology, combined with TriEye's exceptional SWIR sensor, marks a significant advancement in the realm of imaging and illumination solutions”, said Christian Neumeyr, CEO at Vertilas. “This collaboration is more than just a technological achievement; it represents our shared vision of innovating for a better, more efficient future in both consumer and industrial applications."

"At TriEye, our commitment has always been to bring revolutionary SWIR technology to the forefront of the market. The integration of our SWIR sensor with Vertilas InP VCSEL technology in this collaborative venture is a testament to this mission”, said Avi Bakal, CEO of TriEye. “We are proud to unveil a solution that not only enhances imaging capabilities across various industries but also does so in a cost-effective and scalable manner, making advanced sensing technology more accessible than ever."


Camera identification from retroreflection signatures


In a recent article in Optics Express titled "Watching the watchers: camera identification and characterization using retro-reflections," Seets et al. from the University of Wisconsin-Madison write:

A focused imaging system such as a camera will reflect light directly back at a light source in a retro-reflection (RR) or cat-eye reflection. RRs provide a signal that is largely independent of distance providing a way to probe cameras at very long ranges. We find that RRs provide a rich source of information on a target camera that can be used for a variety of remote sensing tasks to characterize a target camera including predictions of rotation and camera focusing depth as well as cell phone model classification. We capture three RR datasets to explore these problems with both large commercial lenses and a variety of cell phones. We then train machine learning models that take as input a RR and predict different parameters of the target camera. Our work has applications as an input device, in privacy protection, identification, and image validation.

 Link: https://opg.optica.org/oe/fulltext.cfm?uri=oe-32-8-13836&id=548474


Canon releases LI5030 sensor



Canon’s 2.8 MP LI7060 CMOS sensor is equipped with an HDR drive function that achieves a wide 120 dB range at low noise levels. This wide range results in a greater ability to extract usable information even where there is a substantial difference between the lightest and darkest areas of an image. Even when using the sensor during normal drive operation, the sensor can achieve a dynamic range of 75 dB.


The LI5040 and 3U5MGXSBA global shutter image sensors employ an advanced pixel design that introduces drive, readout, and gathering structures which help significantly reduce noise, contributing to a wide dynamic range at a power consumption of 500mW. With a 3.4μm pixel size and all-pixel progressive readout at 120fps, the 2/3” sensor with 5.33 million effective pixels (2592 x 2056) easily allows for applications in machine vision and other industrial environments where smaller size and high performance are required. It is available in RGB, Monochrome, and a specialized RGB‐NIR color filter.


LI5030SA is a CMOS solid-state image sensor with a 35mm full-frame effective pixel array of 19 megapixels. It uses a global shutter function instead of a conventional rolling shutter, enabling simultaneous exposure timing for all 19 megapixels. It can output an effective 5688 x 3334 pixels of video at 57.99 fps with 12-bit output via 24 channels of digital signal output. The LI5030SA series consists of LI5030SAC (color), LI5030SAI (RGBIR), LI5030SAM (monochrome), and LI5030SAN (Naked). LI5030SAN does not have a microlens or color filter.
The high sensitivity, resolution, and global shutter of this sensor along with multiple color filter variations makes the LI5030 a great choice for a wide array of applications such as microscopes, factory automation, traffic surveillance, drone vision, etc.
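A back-of-the-envelope check of the raw output bandwidth implied by the figures above (a sketch; blanking and line-coding overhead are ignored):

```python
# Raw data-rate estimate for the LI5030SA, from the quoted specifications.
width, height = 5688, 3334      # effective output pixels
fps = 57.99                     # frames per second
bits_per_pixel = 12
channels = 24                   # digital output channels

total_bps = width * height * fps * bits_per_pixel   # ~13.2 Gbit/s aggregate
per_channel_bps = total_bps / channels              # ~550 Mbit/s per channel

print(f"Total:       {total_bps / 1e9:.2f} Gbit/s")
print(f"Per channel: {per_channel_bps / 1e6:.0f} Mbit/s")
```

This explains the need for 24 parallel output channels: a single link would have to sustain over 13 Gbit/s of raw pixel data.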


Go to the original article...

Sony IMX900 videos

Image Sensors World        Go to the original article...



This video presents Sony's 1/3.1" global shutter image sensor IMX900 with approx. 3.2 effective megapixels that is compact, high-resolution, and has improved near-infrared sensitivity. Here are its three features.
=================================
Chapters
 =================================
0:00 Opening
0:40 Compact, high resolution (1/3.1" 3.2MP)
1:12 Improved incident light angle dependency
3:00 Enhanced NIR region sensitivity
3:41 Ending
------------------------------------------------------------



This video presents Sony's IMX900 global shutter image sensor, which is ideal for industrial applications such as barcode reading, picking robots, and AMRs (autonomous mobile robots).
Here are the three functions that support optimal imaging for different scenarios.
=================================
Chapters
 =================================
0:00 Opening
0:27 Fast Auto Exposure
1:42 Quad HDR (High Dynamic Range)
2:22 Quad Shutter Control

Go to the original article...

EgisTec to acquire Curious

Image Sensors World        Go to the original article...

Taiwan-based IC design house Egis Technology (EgisTec) has announced plans to acquire Curious, a Japan-based IP and fabless chipmaker, in a share swap transaction valued at NT$525 million (US$16.4 million). 

Link: https://www.digitimes.com/news/a20240402PD213/egistec-inpsytech-ip-licensing-mergers-and-acquisitions.html

Curious designs IP for image sensors:
http://www.curious-jp.com/en/

Egis makes fingerprint sensors including under-display optical sensors, including those used in older Samsung Galaxy S9 and S9+ phones:
https://www.egistec.com/en/



Go to the original article...

X-FAB announces BSI process for next gen image sensors

Image Sensors World        Go to the original article...

Link: https://www.xfab.com/news/details/article/x-fab-enhances-image-sensor-performance-through-back-side-illumination

X-FAB Enhances Image Sensor Performance Through Back-Side Illumination

NEWS – Tessenderlo, Belgium – Apr 03, 2024

Presenting a foundry route to medical, automotive and industrial customers that combines boosted sensitivity, larger pixel size and more extensive sensor area

X-FAB Silicon Foundries SE, the leading analog/mixed-signal and specialty foundry, has just announced a major addition to its optical sensor offering. Aimed at use in next generation image sensor fabrication, the company is now able to provide a back-side illumination (BSI) capability in relation to its popular XS018 180nm CMOS semiconductor process.

Through BSI, imaging devices’ performance characteristics can be significantly enhanced. It means that the back-end process metal layers do not block the incident light from reaching the pixels, increasing fill factors by up to 100%. This is highly beneficial in situations of low-level illumination – as higher pixel light sensitivity can be achieved. BSI also offers the added advantage of significantly reducing the crosstalk between neighboring pixels, due to shorter light paths, leading to better image quality. Though small-pixel BSI solutions for 300mm wafers with high-volume consumer usage are commonplace, there are very few options available for image sensors with stitched large-pixel arrangements for 200mm wafers, especially when additional customizations are required. The new X-FAB BSI capability brings new possibilities, allowing customers with even the most demanding application expectations to be served - such as those involved in X-Ray diagnostic equipment, industrial automation systems, astronomical research, robotic navigation, vehicle front cameras, etc.

Leveraging the XS018 platform, which offers high readout speeds and exhibits low dark currents, image sensors with multiple epi options will be produced. An ARC layer can be added and then tuned in accordance with particular customer requirements. The accompanying X-FAB support package covers a full workflow from initial design through to the shipment of engineering samples, with a comprehensive PDK included.

“BSI technology has become increasingly prevalent in modern imaging devices, thanks to its ability to boost image quality by placing light-sensitive elements closer to the light source and avoiding unwanted circuitry obstructions. This is proving very useful in environments where light is limited,” states Heming Wei, Technical Marketing Manager for Optical Sensors at X-FAB. “Though much of this uptake has been within the consumer electronics sector, there are now numerous opportunities emerging in the industrial, automotive and medical markets. Via access to X-FAB’s BSI foundry solution, it will now be possible for these to be properly attended to, with a compelling offering being provided that brings together heightened sensitivity, enlarged image sensor dimensions and bigger pixel capacities too.”



Go to the original article...

Sony news and videos

Image Sensors World        Go to the original article...

From https://www.sony-semicon.com/en/news/2024/2024032801.html

New Fab Expansion at Sony Device Technology (Thailand) Co., Ltd.

Atsugi, Japan — Sony Semiconductor Solutions Corporation today announced that, starting in February 2024, it has begun operations of several production lines at the new fab built on the premises of Sony Device Technology (Thailand) Co., Ltd. (“SDT”), a production center responsible for semiconductor assembly processes. The opening ceremony was held today, officiated by top executives from the Sony Semiconductor Solutions Group, led by Terushi Shimizu, President and CEO of Sony Semiconductor Solutions; Yoshihiro Yamaguchi, President of Sony Semiconductor Manufacturing; and SDT Managing Director Takeshi Matsuda. It was witnessed by guests including Japan’s Ambassador to Thailand, Mr. Masato Otaka; Mr. Wirat Tatsaringkansakul, BOI Deputy Secretary General; and other VIPs.

SDT serves as a production center for the assembly of the main product line within Sony’s Imaging & Sensing Solutions business. The new fab, dubbed “Building 4,” will be utilized for the assembly of image sensors for automotive applications and display devices, as well as the mass production of laser diodes for data center applications.

Going forward, SDT plans to expand production facilities at Building 4 in line with market trends, while also planning to create approximately 2,000 new jobs with this new operation, thereby contributing to local employment and expanding the semiconductor industry in Thailand.

In addition, SDT has been operating its facilities on 100% renewable energy since fiscal year 2021. In the clean room of Building 4, the air conditioning system controls cleanliness, temperature and humidity by focusing on areas of need, and recycling technology for waste heat and hot water has also been adopted. Furthermore, SDT plans to cover the roof area of Building 4 with solar panels, with operations scheduled by the end of 2024 (calendar year). By accelerating the initiatives to reduce energy consumption and adoption of renewable energy, SDT will continue to run on 100% renewable energy even after Building 4 goes into full operation.

“With the completion of Building 4, we are very pleased to be able to deliver to more customers, a product line-up whose market is expected to expand over the medium to long term,” said Takeshi Matsuda, Managing Director of Sony Device Technology (Thailand) Co., Ltd. “As an overseas manufacturing site of Sony Semiconductor Solutions Group, SDT will contribute to the sustainable evolution of Sony’s business as well as society.”

-----------------------------

Two new videos about IMX900 sensor on YouTube:


This video presents Sony's 1/3.1" global shutter image sensor IMX900 with approx. 3.2 effective megapixels that is compact, high-resolution, and has improved near-infrared sensitivity. Here are its three features.



This video presents Sony's IMX900 global shutter image sensor, which is ideal for industrial applications such as barcode reading, picking robots, and AMRs (autonomous mobile robots). Here are the three functions that support optimal imaging for different scenarios.

Go to the original article...

CIS shipments for smartphones shrank in 2022 and 2023

Image Sensors World        Go to the original article...

 


Go to the original article...

Toppan shifts image sensor production to China

Image Sensors World        Go to the original article...

From Nikkei Asia news: https://asia.nikkei.com/Business/Tech/Semiconductors/Japan-s-Toppan-shifts-image-sensor-component-production-to-China

Japan's Toppan shifts image sensor component production to China

TOKYO -- Japan's Toppan Holdings has moved production of components for CMOS image sensors from Japan to China, aiming to boost local production by 40% as Beijing looks to bolster its supply chains for related technologies.

CMOS -- complementary metal-oxide semiconductor -- sensors convert light captured by camera lenses into electrical signals. Among CMOS components, Toppan produces on-chip color filters (OCF) that colorize captured images and microlenses that increase light-gathering power. Without OCF, CMOS can only detect differences in light level.

Toppan brought related equipment from its plant in Japan's Kumamoto prefecture to a facility in Shanghai and increased production lines from five to seven. They do not fall under U.S. export restrictions targeting China for advanced chipmaking equipment.

The Kumamoto plant will be used for research and development and its approximately 370 employees will be maintained.

The global CMOS market in 2022 was around $21.2 billion, according to French research firm Yole Intelligence. Sony Group leads with a 42% market share, followed by Samsung Electronics with 19% and U.S.-based Omnivision with 11%.

China's presence among top companies is limited to seventh-place GalaxyCore with 4% and eighth-place SmartSens with 2%. While Sony and Samsung manufacture OCF in-house, Chinese manufacturers mainly procure from outside sources.
In China, demand is increasing for CMOS related to automobiles, smartphones, surveillance cameras and other fields. Toppan will strengthen sales to local CMOS sensor manufacturers by producing near areas of demand.

Toppan's move comes as Beijing is spending more than $1.75 billion a year on subsidies to boost domestic semiconductor production, according to the South China Morning Post.
The U.S. has placed restrictions on the export of chipmaking equipment to China out of concerns the technology could be diverted for military purposes, making it difficult for the country to produce advanced chips.

Among chips in practical use, the most advanced level is currently said to be 3 nanometers. In general, the smaller the nanometer level for a logic chip, the more powerful it is.

Amid U.S. restrictions, China is focusing on CMOS sensors, which differ from logic chips in manufacturing method and in the definition of advanced products. Most CMOS sensors can be manufactured using mature technology of 28 nm or greater, with the production equipment falling outside of the U.S. restrictions.
China's share based on production capacity of all 28-nm or greater so-called legacy chips is expected to reach 33% of the world's total in 2027, up 4 percentage points from 2023, according to Taiwan research firm TrendForce.

Beijing announced its "Made in China 2025" high-tech industry development plan in 2015, choosing semiconductors as a priority area. Two large government funds have been set up so far to help boost the domestic chip industry.

Plans for a third phase have recently emerged. Bloomberg reported this month that China was raising more than $27 billion from local governments and state-owned enterprises for a chip fund, the biggest of its kind.

Investment in mature technology other than CMOS is also increasing in China, and Japanese and U.S. manufacturing equipment makers are increasing sales in this field as well.
    
The value of chipmaking equipment shipments to China reached a record high of over $30 billion in 2023, up 6% from 2022 and putting the country first ahead of Taiwan and South Korea in imports, according to industry group SEMI.

Go to the original article...

STMicroelectronics and CEA LETI develop microlenses for SPADs

Image Sensors World        Go to the original article...

In a preprint titled "Metasurface-based planar microlenses for SPAD pixels", J. Vaillant et al. of STMicroelectronics and CEA LETI write:

In this paper we present two design generations of metasurface-based planar microlenses implemented on Front-Side Illumination SPAD pixels. This kind of microlens is an alternative to the conventional reflow microlens. It offers more degrees of freedom in terms of design, especially the capability to design off-axis microlenses to gather light around the SPAD photodiode. The two generations of microlenses have been fabricated on STMicroelectronics SPADs and characterized. We validated the sensitivity improvement offered by the extended metasurface-based microlens. We also confirmed the impact of lithography capability on metasurface performance, highlighting the need to have access to advanced deep-UV lithography.




 

Go to the original article...

Paper on SPADs at the NATO Science & Technology organization meeting

Image Sensors World        Go to the original article...

A paper titled "SPAD Image Sensors for Quantum and Classical Imaging" by Prof. Edoardo Charbon was published in the STO Meetings proceedings in January 2024.

Paper link: https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-IST-SET-198/MP-IST-SET-198-C1-03.pdf

Abstract:
Single-photon avalanche diodes (SPADs) have been demonstrated on a variety of CMOS technologies since the early 2000s. While initially inferior to their counterparts implemented in dedicated technologies, modern CMOS SPADs have recently matched them in sensitivity, noise, and timing jitter. Indeed, high time resolution, enabled by low jitter, has helped demonstrate the most impressive developments in the fields of imaging and detection, including fluorescence lifetime imaging microscopy (FLIM), Förster resonance energy transfer (FRET), fluorescence correlation spectroscopy (FCS), time-of-flight positron emission tomography (TOF-PET), and light detection and ranging (LiDAR), to name just a few. The SPAD's power of detecting single photons in pixels that can be replicated in great numbers, typically in the millions, is currently having a major impact in computational imaging and quantum imaging. These two emerging disciplines stand to take advantage of larger and larger SPAD image sensors with increasingly low jitter and noise, and high sensitivity. Finally, due to the computational power required at the pixel level, power consumption must be reduced; we thus advocate the use of in situ computational engines, which, thanks to CMOS's economy of scale and 3D-stacking, enable vast computation density. Some examples of this trend are given, along with a general perspective on SPAD image sensors.



Go to the original article...

Sony releases 247MP sensors

Image Sensors World        Go to the original article...

Sony recently released a new 247MP rolling shutter CIS available in monochrome and color variants: IMX811-AAMR and IMX811-AAQR.

Go to the original article...

Four new videos about the industry

Image Sensors World        Go to the original article...

Here are a few new videos from image sensor companies.

Two about new hardware built around image sensors:

  • Trinamix-ST under-OLED face recognition camera

 


  • Prophesee AR glasses demo

 


One about new facilities:

  • The official opening of the TSMC-Sony plant in Kumamoto, where Sony will manufacture its new image sensors:

 


And one about a new sensor series:

  • Omnivision presents its new generation of automotive HDR sensors:

 

Go to the original article...

Artilux announces room temperature GeSi SPAD

Image Sensors World        Go to the original article...

 
HSINCHU, Feb. 22, 2024 /PRNewswire/ -- Artilux, the renowned leader of GeSi (germanium-silicon) photonics technology for CMOS (complementary metal-oxide-semiconductor) based SWIR (short-wavelength infrared) sensing and imaging, announced today that the research team at Artilux has made a breakthrough in advancing SWIR GeSi SPAD (single-photon avalanche diode) technology, which has been recognized and published by Nature, one of the world's most prestigious scientific journals. The paper, titled "Room temperature operation of germanium-silicon single-photon avalanche diode," presented the Geiger-mode operation of a high-performing GeSi avalanche photodiode at room temperature, an operation that was previously limited to low temperatures below 200 Kelvin. Nature's rigorous peer-review process ensures that only research of the highest caliber and broadest interest is published, and the acceptance and publication of the paper in Nature is another pivotal mark exemplifying Artilux's leadership in CMOS-based SWIR sensing and imaging.

The research work, led by Dr. Neil Na, CTO of Artilux, has unveiled a CMOS-compatible GeSi SPAD operated at room temperature and elevated temperatures, featuring a noise-equivalent power improvement over previously demonstrated Ge-based SPADs by several orders of magnitude. The paper showcases key parameters of the GeSi SPAD, including dark count rate, single-photon detection probability in the SWIR spectrum, timing jitter, after-pulsing characteristic time, and after-pulsing probability, at a low breakdown voltage and a small excess bias. As a proof of concept, three-dimensional point-cloud images were captured with the direct time-of-flight (TOF) technique using the GeSi SPAD. "When we started the project, there was overwhelming evidence in the literature indicating that room-temperature operation of a GeSi SPAD is simply not possible," said Dr. Na, "and I am proud of our team turning the scientific research into a commercial reality against all odds."

The findings set a new milestone in CMOS photonics. The potential deployment of single-photon sensitive SWIR sensors, imagers, and photonic integrated circuits shall unlock critical applications in TOF sensors and imagers, LiDAR (light detection and ranging), bio-photonics, quantum computing and communication, artificial intelligence, robotics, and more. Artilux is committed to continuing its leadership in CMOS photonics technology, aiming to further contribute to the scientific community and photonics industry.

Abstract of article in Nature (Feb 2024): https://www.nature.com/articles/s41586-024-07076-x
The ability to detect single photons has led to the advancement of numerous research fields. Although various types of single-photon detector have been developed, because of two main factors—that is, (1) the need for operating at cryogenic temperature and (2) the incompatibility with complementary metal–oxide–semiconductor (CMOS) fabrication processes—so far, to our knowledge, only Si-based single-photon avalanche diode (SPAD) has gained mainstream success and has been used in consumer electronics. With the growing demand to shift the operation wavelength from near-infrared to short-wavelength infrared (SWIR) for better safety and performance, an alternative solution is required because Si has negligible optical absorption for wavelengths beyond 1 µm. Here we report a CMOS-compatible, high-performing germanium–silicon SPAD operated at room temperature, featuring a noise-equivalent power improvement over the previous Ge-based SPADs by 2–3.5 orders of magnitude. Key parameters such as dark count rate, single-photon detection probability at 1,310 nm, timing jitter, after-pulsing characteristic time and after-pulsing probability are, respectively, measured as 19 kHz µm−2, 12%, 188 ps, ~90 ns and <1%, with a low breakdown voltage of 10.26 V and a small excess bias of 0.75 V. Three-dimensional point-cloud images are captured with direct time-of-flight technique as proof of concept. This work paves the way towards using single-photon-sensitive SWIR sensors, imagers and photonic integrated circuits in everyday life.
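As a rough illustration of how the reported figures combine, a textbook noise-equivalent-power expression for a Geiger-mode detector, NEP = (hν/PDP)·sqrt(2·DCR), can be evaluated with the numbers quoted in the abstract. This is a sketch only: the 1 µm² active area and the choice of formula are our assumptions, not taken from the paper.

```python
import math

h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

wavelength = 1310e-9     # m, SWIR wavelength quoted in the abstract
pdp = 0.12               # single-photon detection probability at 1310 nm
dcr_per_um2 = 19e3       # dark count rate, Hz per um^2
area_um2 = 1.0           # hypothetical 1 um^2 active area (assumption)

photon_energy = h * c / wavelength                  # J per detected photon
dcr = dcr_per_um2 * area_um2                        # total dark count rate, Hz
nep = (photon_energy / pdp) * math.sqrt(2 * dcr)    # W / sqrt(Hz)

print(f"NEP ~ {nep:.2e} W/Hz^0.5")
```

Under these assumptions the NEP lands in the sub-femtowatt range, consistent with the "orders of magnitude" improvement claim over earlier Ge-based SPADs.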


Go to the original article...

Nikon to acquire RED.com

Image Sensors World        Go to the original article...

From Nikon newsroom: https://www.nikon.com/company/news/2024/0307_01.html

Nikon to Acquire US Cinema Camera Manufacturer RED.com, LLC

March 7, 2024

TOKYO - Nikon Corporation (Nikon) hereby announces its entry into an agreement to acquire 100% of the outstanding membership interests of RED.com, LLC (RED) whereby RED will become a wholly-owned subsidiary of Nikon, pursuant to a Membership Interest Purchase Agreement with Mr. James Jannard, its founder, and Mr. Jarred Land, its current President, subject to the satisfaction of certain closing conditions thereunder.

Since its establishment in 2005, RED has been at the forefront of digital cinema cameras, introducing industry-defining products such as the original RED ONE 4K to the cutting-edge V-RAPTOR [X] with its proprietary RAW compression technology. RED's contributions to the film industry have not only earned it an Academy Award but have also made it the camera of choice for numerous Hollywood productions, celebrated by directors and cinematographers worldwide for its commitment to innovation and image quality optimized for the highest levels of filmmaking and video production.

This agreement was reached as a result of the mutual desires of Nikon and RED to meet the customers’ needs and offer exceptional user experiences that exceed expectations, merging the strengths of both companies. Nikon's expertise in product development, exceptional reliability, and know-how in image processing, as well as optical technology and user interface along with RED’s knowledge in cinema cameras, including unique image compression technology and color science, will enable the development of distinctive products in the professional digital cinema camera market.

Nikon will leverage this acquisition to expand the fast-growing professional digital cinema camera market, building on both companies' business foundations and networks, promising an exciting future of product development that will continue to push the boundaries of what is possible in film and video production.

Go to the original article...

IEEE ICCP 2024 Call for Papers, Submission Deadline March 22, 2024

Image Sensors World        Go to the original article...

Call for Papers: IEEE International Conference on Computational Photography (ICCP) 2024
https://iccp-conference.org/iccp2024/call-for-papers/
Submission Deadline: March 22, 2024 @ 23:59 CET

ICCP is an international venue for disseminating and discussing new scholarly work in computational photography, novel imaging, sensors, and optics techniques. This year, ICCP will take place at EPFL in Lausanne, Switzerland, on July 22-24!

As in previous years, ICCP is coordinating with the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) for a special issue on Computational Photography to be published after the conference.

ICCP 2024 seeks novel and high-quality submissions in all areas of computational photography, including, but not limited to:

  •  High-performance imaging.
  •  Computational cameras, illumination, and displays.
  •  Advanced image and video processing.
  •  Integration of imaging, physics, and machine learning.
  •  Organizing and exploiting photo / video collections.
  •  Structured light and time-of-flight imaging.
  •  Appearance, shape, and illumination capture.
  •  Computational optics (wavefront coding, digital holography, compressive sensing, etc.).
  •  Sensor and illumination hardware.
  •  Imaging models and limits.
  •  Physics-based rendering, neural rendering, and differentiable rendering.
  •  Applications: imaging on mobile platforms, scientific imaging, medicine and biology, user interfaces, AR/VR systems.

Learn more on the ICCP 2024 website, and submit your latest advancements by Friday, 22nd March, 2024.

The call for posters and demos will be published soon, with a deadline at the end of April. It will also be a great opportunity to advertise your work.

 



Go to the original article...

Prophesee Qualcomm demo at Mobile World Congress

Image Sensors World        Go to the original article...

Prophesee and Qualcomm recently showcased their "blur free" mobile photography technology at the Mobile World Congress in Barcelona.

Press release: https://prophesee-1.reportablenews.com/pr/prophesee-s-metavision-image-deblur-solution-for-smartphones-is-now-production-ready-seamlessly-optimized-for-the-snapdragon-8-gen-3-mobile-platform

February 27, 2024 – Paris, France - Prophesee SA, inventor of the most advanced neuromorphic vision systems, today announced that the progress achieved through its collaboration with Qualcomm Technologies, Inc. has now reached production stage. A live demo during Mobile World Congress Barcelona is showcasing Prophesee’s native compatibility with premium Snapdragon® mobile platforms, bringing the speed, efficiency, and quality of neuromorphic-enabled vision to cameras in mobile devices.

Prophesee’s event-based Metavision sensors and AI, optimized for use with Snapdragon platforms, now bring motion blur cancellation and overall image quality to unprecedented levels, especially in the most challenging scenarios faced by conventional frame-based RGB sensors: fast-moving and low-light scenes.

“We have made significant progress since we announced this collaboration in February 2023, achieving the technical milestones that demonstrate the impressive impact our event-based technology has on image quality in mobile devices containing Snapdragon mobile platforms. As a result, our Metavision Deblur solution has now reached production readiness,” said Luca Verre, CEO and co-founder of Prophesee. “We look forward to unleashing the next generation of smartphone photography and video with Prophesee's Metavision.”

“Qualcomm Technologies is thrilled to continue our strong collaboration with Prophesee, joining efforts to efficiently optimize Prophesee’s event-based Metavision technology for use with our flagship Snapdragon 8 Gen 3 Mobile Platform. This will deliver significant enhancements to image quality and bring new features enabled by event cameras’ shutter-free capability to devices powered by Snapdragon mobile platforms,” said Judd Heape, VP of Product Management at Qualcomm Technologies, Inc.

How it works
Prophesee’s breakthrough sensors add a new sensing dimension to mobile photography. They change the paradigm in traditional image capture by focusing only on changes in a scene, pixel by pixel, continuously, at extreme speeds.

Each pixel in the Metavision sensor embeds a logic core, enabling it to act as a neuron. Each pixel activates itself intelligently and asynchronously depending on the number of photons it senses. A pixel activating itself is called an event. In essence, events are driven by the scene’s dynamics, not by an arbitrary clock, so the acquisition speed always matches the actual scene dynamics.

High-performance event-based deblurring is achieved by synchronizing a frame-based and Prophesee’s event-based sensor. The system then fills the gaps between and inside the frames with microsecond events to algorithmically extract pure motion information and repair motion blur.
Learn more: https://www.prophesee.ai/event-based-vision-mobile/
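The pixel behavior described above is commonly modeled by emitting an event whenever a pixel's log-intensity drifts past a contrast threshold since its last event. Here is a toy single-pixel simulation of that model; the threshold and signal values are illustrative, not Prophesee's actual parameters:

```python
import math

def generate_events(samples, threshold=0.2):
    """Emit (index, polarity) events when log-intensity drifts past the contrast threshold."""
    events = []
    ref = math.log(samples[0])           # reference log-intensity at the last event
    for i, intensity in enumerate(samples[1:], start=1):
        delta = math.log(intensity) - ref
        while abs(delta) >= threshold:   # a fast change can trigger several events at once
            polarity = 1 if delta > 0 else -1
            events.append((i, polarity))
            ref += polarity * threshold  # step the reference toward the new level
            delta = math.log(intensity) - ref
    return events

# A pixel that brightens, holds steady, then dims: positive events cluster on the
# rise, negative events on the fall, and no events fire while the scene is static.
signal = [100, 100, 150, 230, 230, 230, 120, 80, 80]
print(generate_events(signal))
```

The absence of events over static stretches is exactly the property the deblurring system exploits: events mark the moments inside an exposure when the scene actually moved.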

Go to the original article...

css.php