New Author Introduction – Atul Ingle

Atul Ingle has kindly agreed to help me publish posts and to share his unique view on image sensors from a computer vision developer's point of view.

Atul Ingle is an Assistant Professor in the Department of Computer Science at Portland State University. His research interests are in the fields of computational imaging, computer vision and signal processing. His current research involves co-design of imaging hardware and algorithms for single-photon image sensors. More broadly, he is interested in both passive and active 3D imaging applications that are severely resource-constrained in terms of power, bandwidth, and compute. Atul holds a PhD in Electrical and Computer Engineering from the University of Wisconsin-Madison.

New Author Introduction – Saleh Masoodian

I'd guess many of you know Saleh Masoodian, CEO of Gigajot. I'm happy to announce that Saleh has kindly agreed to join the authors of the blog.

With Mark and Saleh on board, the blog will offer quite diverse views on the industry.

New Author Introduction – Mark Sapp

Dear Image Sensors World Blog readers,

Let me introduce Mark Sapp, who kindly offered to help with posting image sensor news on the blog. Mark is an electrical engineer based in Austin, Texas, who has worked in the industry for 15 years and is an enthusiast for cutting-edge imaging technology. Mark, welcome to the community!

If somebody else wants to post news on the blog, please let me know and I'd gladly add you to the list of authors. I hope this enriches the blog content and adds more diverse views from the different branches of the industry.

Suspension of the Blog

Due to a heavy workload, I am unable to continue publishing the blog, so it is suspended for the time being.

303-Megaframes-per-Second Image Sensor

MDPI opens its Special Issue on Recent Advances in CMOS Image Sensors with the paper "A Dual-Mode 303-Megaframes-per-Second Charge-Domain Time-Compressive Computational CMOS Image Sensor" by Keiichiro Kagawa, Masaya Horio, Anh Ngoc Pham, Thoriq Ibrahim, Shin-ichiro Okihara, Tatsuki Furuhashi, Taishi Takasawa, Keita Yasutomi, Shoji Kawahito, and Hajime Nagahara from Shizuoka University and Osaka University.

"An ultra-high-speed computational CMOS image sensor with a burst frame rate of 303 megaframes per second, which is the fastest among the solid-state image sensors, to our knowledge, is demonstrated. This image sensor is compatible with ordinary single-aperture lenses and can operate in dual modes, such as single-event filming mode or multi-exposure imaging mode, by reconfiguring the number of exposure cycles. To realize this frame rate, the charge modulator drivers were adequately designed to suppress the peak driving current taking advantage of the operational constraint of the multi-tap charge modulator. The pixel array is composed of macropixels with 2 × 2 4-tap subpixels. Because temporal compressive sensing is performed in the charge domain without any analog circuit, ultrafast frame rates, small pixel size, low noise, and low power consumption are achieved. In the experiments, single-event imaging of plasma emission in laser processing and multi-exposure transient imaging of light reflections to extend the depth range and to decompose multiple reflections for time-of-flight (TOF) depth imaging with a compression ratio of 8× were demonstrated. Time-resolved images similar to those obtained by the direct-type TOF were reproduced in a single shot, while the charge modulator for the indirect TOF was utilized."

Pixel Crosstalk in 2-Layer Sensors

MDPI publishes the paper "Parasitic Coupling in 3D Sequential Integration: The Example of a Two-Layer 3D Pixel" by Petros Sideris, Arnaud Peizerat, Perrine Batude, Gilles Sicard, and Christoforos Theodorou from Université Grenoble Alpes, an extended version of a paper presented at the 10th International Conference on Modern Circuits and Systems Technologies (MOCAST), Thessaloniki, Greece, 5–7 July 2021.

"In this paper, we present a thorough analysis of parasitic coupling effects between different electrodes for a 3D Sequential Integration circuit example comprising stacked devices. More specifically, this study is performed for a Back-Side Illuminated, 4T–APS, 3D Sequential Integration pixel with both its photodiode and Transfer Gate at the bottom tier and the other parts of the circuit on the top tier. The effects of voltage bias and 3D inter-tier contacts are studied by using TCAD simulations. Coupling-induced electrical parameter variations are compared against variations due to temperature change, revealing that these two effects can cause similar levels of readout error for the top-tier readout circuit. On the bright side, we also demonstrate that in the case of a rolling shutter pixel readout, the coupling effect becomes nearly negligible. Therefore, we estimate that the presence of an inter-tier ground plane, normally used for electrical isolation, is not strictly mandatory for Monolithic 3D pixels."

Sony UV Image Sensor Video

Sony publishes a promotional video about its IMX487 UV sensor:

Article about Peter Noble and his Early Image Sensors

The Dorset Echo publishes an article about Emmy Awardee Peter Noble and his early work, including the first TV image produced by his 4096-element MOS sensor 001:


"Currently, Mr Noble is writing an anthology of the origins of image-sensor array with buried-photodiode structure, which features the original papers and includes alternative methods to achieve the same result."

PreAct Announces Software-Definable Automotive Flash LiDAR

Oregon-based PreAct Technologies announces the T30P flash LiDAR, said to be the industry’s first software-definable LiDAR. Vehicles with software-defined architectures require sensor technology that can support over-the-air updates throughout the life of the vehicle, allowing OEMs to generate ongoing revenue by offering powerful new features and functionality.

“We are excited to bring our software-definable flash LiDAR to market, furthering the advancement of autonomous mobility across multiple industries,” said Paul Drysch, CEO of PreAct Technologies. “We’ve spent the last three years creating a solution that fulfills the need of software-defined vehicles, providing the most value for Tier 1s and OEMs over the long term by making any ADAS application relevant for the entire life of the vehicle.”

PreAct’s flash LiDAR architecture is based on modulated waveforms that can be optimized for different applications via over-the-air updates, along with an application stack that resides on the edge. The flexibility of a software-defined LiDAR allows Tier 1 suppliers and OEMs to package one sensor for multiple use cases – everything from true curb detection and gesture recognition to self-parking and automatic door actuation – that can update to meet their changing needs as more user and sensor data become available.

“Near field automotive sensors have either been low-precision and low-cost, or high-precision and high-cost,” said Ian Riches, VP for the Global Automotive Practice at Strategy Analytics. “By bringing a high-precision, low-cost sensor to market, PreAct is enabling a huge range of safety and convenience features. The software-defined characteristics of the T30P will allow these features to improve during the lifetime of the vehicle, unlocking new revenue streams for automakers.”

The T30P, with a frame rate of 200 fps and QVGA resolution, is also the fastest flash LiDAR on the market, making it well suited for ground and air robotics or industrial applications – systems which all share a need for fast, accurate and high-resolution sensors that can reliably define and track objects in all environmental conditions.

PreAct’s T30P Flash LiDAR sensor suite will be available in July 2022.

Counterpoint Forecasts Sony Market Share to Shrink to 39% in 2022

Counterpoint forecasts: 

"The global Camera Images Sensor (CIS) market revenue is expected to grow 7% in 2022 to reach $21.9 billion, largely driven by increasing demand from the smartphone, automotive, industrial and other applications, according to the latest findings by Counterpoint’s Camera Supply Chain Research.

Commenting on the performance of different segments, Associate Director Ethan Qi said, “As the largest CIS end market, the mobile phone segment is expected to contribute 71.4% of the total market revenue in 2022, followed by automobile (8.6%) and surveillance (5.6%).”

Qi added, “With the continued rebound of global smartphone shipments and further upgrades of image sensors, particularly in resolution, the mobile phone segment is expected to see a mid-single-digit YoY increase in CIS revenue. Meanwhile, as vehicles become more intelligent, connected and autonomous, the implementation of view and sensing cameras for ADAS and ADS functions will proliferate, leading to increased CIS content in new vehicles in the coming years. Besides, the surveillance segment is expected to maintain a low-single-digit growth, partially driven by the lasting social distance impact of COVID-19.”

Looking from the vendor perspective, Sony is expected to capture a 39.1% revenue share in 2022, followed by Samsung (24.9%) and OmniVision (12.9%).

Sony has been actively expanding and diversifying its CIS customer base as the largest supplier of image and ToF sensors, both consisting of large-sized pixels, pushing the trend of raising mobile photography to a pro-level DSLR quality. Sony’s CIS revenue is expected to increase 3% YoY in 2022.

Meanwhile, the gap between Sony and Samsung is expected to narrow further as the latter will benefit from its first-mover advantage in providing cost-competitive super-high-resolution image sensors for mid-to-high smartphones and aggressive production capacity expansion.

OmniVision is also expected to see a big jump in CIS revenue in 2022, benefitting from a diversified product portfolio, breakthroughs in super-high-resolution sensors for smartphones and increasing demand from the automobile, surveillance and industrial segments."

Sony in Search for Killer Applications for its ToF Sensor

Sony publishes an interview with its ToF application team members: "Time-of-flight (ToF) image sensor for mobile phone applications revolutionizes mobile entertainment content with its capability to accurately capture not only figures and backgrounds, but also body gestures." A few quotes:

"While the contexts were steadily growing for leveraging the technology, there were no definite killer apps for it which people would put to everyday use. This situation resulted in a chicken or egg situation, that smartphone manufacturers were not keen to integrate ToF image sensors for the lack of killer apps while app developers had little incentive to develop apps for it because it was not adopted in many smartphones.

Given this situation, we thought that we should encourage the development of apps that leveraged ToF image sensors to incentivize both smartphone manufacturers and app developers.

Sony Semiconductor Solutions Group (hereafter “the Group”) faced the challenge and sought for a solution in developing ground-breaking apps for the ToF image sensor for mobile applications. A large-scale project was launched, connecting teams in Japan and four Chinese cities—Shanghai, Beijing, Shenzhen, and Chengdu. We asked what the project aimed to achieve and how the apps were created over the great distances.

There were also obstacles from the development point of view. Laser emission increases power consumption, and so does depth sensing and processing. For the smartphone manufacturers, it also means more space needed to accommodate the sensor. There are, of course, additional advantages ToF image sensors can bring, but these advantages did not add enough value to extend the scope of application to all smartphone models. This resulted in the current situation that the sensor is installed in some high-end models, but not in other, more popular ones.

That is true, but we have smartphone manufacturers who are interested in integrating the ToF image sensor if there are interesting apps to use it. This was our incentive to take up the challenge and develop apps in order to topple the first domino piece to establish and expand an app market for the sensor."

ST Unveils its First iToF Sensor with 0.5MP Resolution

GlobeNewswire: ST announces a new family of high-resolution iToF sensors for smartphones and other devices.

The 3D family debuts with the VD55H1. This sensor maps three-dimensional surfaces by measuring the distance to over half a million points. Objects can be detected up to five meters from the sensor, and even further with patterned illumination. VD55H1 addresses emerging AR/VR market use cases including room mapping, gaming, and 3D avatars. In smartphones, the sensor enhances the performance of camera-system features including bokeh effect, multi-camera selection, and video segmentation. Face-authentication security is also improved with higher resolution and more accurate 3D images to protect phone unlocking, mobile payment, and any smart system involving secure transactions and access control.

“The innovative VD55H1 3D depth sensor reinforces ST’s leadership in Time-of-Flight, and complements our full range of depth sensing technologies,” said Eric Aussedat, ST’s EVP, Imaging Sub-Group GM. “The FlightSense portfolio now comprises direct and indirect ToF products from single-point ranging all-in-one sensors to sophisticated high-resolution 3D imagers enabling future generations of intuitive, smart, and autonomous devices.”

The VD55H1’s pixel leverages in-house 40nm stacked-wafer technology, ensuring low power consumption, low noise, and optimized die area. The die contains 75% more pixels than existing VGA sensors, within a smaller die size.

The VD55H1 sensor is now available for lead customers to sample. Volume production maturity is scheduled for the second half of 2022. A reference design and complete software package are available to help accelerate sensor evaluation and project development.

Featuring a 672 x 804 BSI pixel array for iToF depth sensing, the VD55H1 is able to operate with a modulation frequency of 200MHz with more than 85% demodulation contrast at 940nm. This reduces the depth noise by a factor of two over incumbent sensors that typically operate around 100MHz. In addition, multi-frequency operation, an advanced depth-unwrapping algorithm, low pixel noise floor, and high pixel dynamic range ensure measurement accuracy over long ranging distance. Depth accuracy is better than 1% and typical precision is 0.1% of distance.
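
For context on the multi-frequency point: an iToF sensor measures round-trip phase, which wraps every c/(2·f_mod), only about 0.75 m at 200 MHz, so covering the quoted 5 m requires unwrapping with a second modulation frequency. Depth noise also scales roughly as 1/f_mod for a given phase noise, consistent with the claimed 2x reduction versus ~100 MHz sensors. Below is a hedged sketch of brute-force two-frequency unwrapping; the 200/180 MHz pair and the search procedure are our illustrative assumptions, not ST's disclosed algorithm.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def wrapped_phase(d, f):
    # Round-trip phase an iToF pixel measures for a target at distance d.
    return (4 * np.pi * f * d / C) % (2 * np.pi)

def unwrap_two_freq(phi1, f1, phi2, f2, d_max=5.0):
    # Brute force: test every wrap count at f1, keep the distance hypothesis
    # that best agrees with the phase measured at f2 (illustrative only).
    best_d, best_err = None, np.inf
    for n1 in range(int(2 * f1 * d_max / C) + 1):
        d_cand = (phi1 / (2 * np.pi) + n1) * C / (2 * f1)
        err = abs(np.angle(np.exp(1j * (wrapped_phase(d_cand, f2) - phi2))))
        if err < best_err:
            best_d, best_err = d_cand, err
    return best_d

f1, f2 = 200e6, 180e6    # hypothetical frequency pair; ST's actual values are not public
print("unambiguous range at 200 MHz:", C / (2 * f1), "m")   # ~0.75 m
d_true = 4.32
d_est = unwrap_two_freq(wrapped_phase(d_true, f1), f1,
                        wrapped_phase(d_true, f2), f2)
print(f"true {d_true} m -> unwrapped {d_est:.3f} m")
```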

Other features include a short capture sequence that supports a frame rate up to 120 fps and improves motion-blur robustness. In addition, advanced clock and phase management including spread spectrum clock generator (SSCG) provides multi-device interference mitigation and optimized electromagnetic compatibility.

The power consumption can be reduced to less than 100mW in some streaming modes, to help prolong the runtime of battery-operated devices.

A consumer device form factor reference design for the VD55H1 has been created that includes the illumination system. A supporting fully featured software driver and a library containing an advanced depth-reconstruction image-signal-processing pipeline compatible with Android embedded platforms are also provided.

Sigma Updates on the Next Generation Foveon Sensor Development

Sigma publishes an official statement "Development status of the three-layer image sensor:"

Dear SIGMA customers,

First of all, thank you very much for your continued support and interest in our products.
SIGMA would like to share the development status of the three-layer image sensor as of February 2022, as follows.

The development of the three-layer image sensor is currently underway with the strong leadership of SIGMA’s headquarters in collaboration with research institutes in Japan. The stages of development can be roughly divided into the following:
  • Stage 1: Repeated design simulations of the new three-layer structure to confirm that it will function as intended.
  • Stage 2: Prototype evaluation using a small image sensor with the same pixel size as the product specifications but with a reduced total pixel count to verify the performance characteristics of the image sensor in practice.
  • Stage 3: Final prototype evaluation using a full-frame image sensor with the same specifications as the mass-production product, including the A/D converter, etc.
We believe that these three stages are necessary in the development, and we are currently in the process of creating the prototype sensor for Stage 2.

Based on the evaluation results of the prototype sensor, we will decide whether to proceed to Stage 3 or to review the design data and repeat the Stage 2 prototype. When we proceed to Stage 3, we will verify the mass-producibility of the sensor with research institutes and manufacturing vendors based on the evaluation results, and then make a final decision on whether or not to mass-produce the image sensor.

Although we have not yet reached the stage where we can announce a specific schedule for the mass production of the image sensor, we are determined to do our best to realize a camera that will truly please our customers who are waiting for it, as soon as possible.

Once again, I would like to thank all of you for your continued support of SIGMA.
We will continue to strive for technological development to meet your expectations and trust.

Kazuto Yamaki
Chief Executive Officer, SIGMA Corporation

Vision Sensor-Processor with In-Pixel Memory

KAIST and Samsung Foundry publish a Nature paper "Mnemonic-opto-synaptic transistor for in-sensor vision system" by Joon-Kyu Han, Young-Woo Chung, Jaeho Sim, Ji-Man Yu, Geon-Beom Lee, Sang-Hyeon Kim, and Yang-Kyu Choi.

"A mnemonic-opto-synaptic transistor (MOST) that has triple functions is demonstrated for an in-sensor vision system. It memorizes a photoresponsivity that corresponds to a synaptic weight as a memory cell, senses light as a photodetector, and performs weight updates as a synapse for machine vision with an artificial neural network (ANN). Herein the memory function added to a previous photodetecting device combined with a photodetector and a synapse provides a technical breakthrough for realizing in-sensor processing that is able to perform image sensing and signal processing in a sensor. A charge trap layer (CTL) was intercalated to gate dielectrics of a vertical pillar-shaped transistor for the memory function. Weight memorized in the CTL makes photoresponsivity tunable for real-time multiplication of the image with a memorized photoresponsivity matrix. Therefore, these multi-faceted features can allow in-sensor processing without external memory for the in-sensor vision system. In particular, the in-sensor vision system can enhance speed and energy efficiency compared to a conventional vision system due to the simultaneous preprocessing of massive data at sensor nodes prior to ANN nodes. Recognition of a simple pattern was demonstrated with full sets of the fabricated MOSTs. Furthermore, recognition of complex hand-written digits in the MNIST database was also demonstrated with software simulations."

High-Throughput SPAD Signal Processing

Edinburgh University and ST publish an open access IEEE JSSC paper "A High-Throughput Photon Processing Technique for Range Extension of SPAD-based LiDAR Receivers" by Sarrah M. Patanwala, Istvan Gyongy, Hanning Mai, Andreas Aßmann, Neale A. W. Dutton, Bruce R. Rae, and Robert K. Henderson.

"There has recently been a keen interest in developing LiDAR systems using SPAD sensors. This has led to a variety of implementations in pixel combining techniques and TDC architectures for such sensors. This paper presents a comparison of these approaches and demonstrates a technique capable of extending the range of LiDAR systems with improved resilience to background conditions. A LiDAR system emulator using a reconfigurable SPAD array and FPGA interface is used to compare these different techniques. A Monte Carlo simulation model leveraging synthetic 3D data is presented to visualize the sensor performance on realistic automotive LiDAR scenes."

dToF Tutorial from Edinburgh University and ST

Edinburgh University publishes "Direct Time-of-Flight Single-Photon Imaging" by Istvan Gyongy, Neale A. W. Dutton, and Robert K. Henderson, also published in IEEE TED.

"This article provides a tutorial introduction to the direct Time-of-Flight (dToF) signal chain and typical artifacts introduced due to detector and processing electronic limitations. We outline the memory requirements of embedded histograms related to desired precision and detectability, which are often the limiting factor in the array resolution. A survey of integrated CMOS dToF arrays is provided highlighting future prospects to further scaling through process optimization or smart embedded processing."

Recent Videos: IIT Delhi, ADI, Omnivision, FLIR, Hamamatsu

IIT Delhi publishes a lecture "From light waves to images: Advancing Science with Pictures" by Kedar Khare:

Analog Devices publishes a video on a use case of its ADSD3100 platform based on the Microsoft ToF sensor:

Omnivision publishes a promotional video for its 200MP OVB0B sensor with 0.61um pixels:

Teledyne FLIR demos the usefulness of thermal cameras in automatic emergency braking systems for cars:

Hamamatsu publishes a demo of its 8 x 128 pixel ToF sensor:

Himax Reports 2021 Results

GlobeNewswire: Himax updates on its imaging business in 2021:

"Himax is pleased to report that the company’s ultralow power AI image sensing total solution successfully entered into mass production in Q4 last year for a major tech name over a mainstream application. The company reached this major milestone just one year after it delivered the first samples, a remarkable achievement and an illustration of the robustness of AI solution. [I'd guess that this major customer is Amazon Ring and the product is video doorbell.]

The company is highly encouraged by the early success it has seen with ultralow power AI image sensing business thus far after a leading customer adopted it for a mainstream application. Himax expects to see more design-wins awarded across a broad customer base and a high variety of applications leading to robust sales growth for this new high margin product line.

Himax’s ultralow power AI image sensing total solution incorporates its ultralow power CMOS image sensor, proprietary AI processor and CNN-based AI algorithm. As reported earlier, the sizable order for a top-tier name customer’s mainstream application successfully entered production in Q4 last year, marking another impressive milestone for company’s new AI business within just one year since its initial release. The company will give further details after the end customer’s official announcement. Himax has also made good progress on this mainstream application with other leading vendors where the number of design-in projects is increasing. In addition to the success story, the second application Himax expects to see significant volume is in automatic meter reading (AMR) where AI total solution has been widely adopted by numerous customers across a wide geographical area in China. Himax’s power-saving AI cameras, deployed over the existing installed base of traditional water meters, enable the water meter to automatically collect consumption data with AI operating locally on the edge. The device transmits only byte-sized metadata to the server for billing and in-time detection of abnormal consumption or leakage, eliminating the need for manual reading. The battery pack has a lifetime of over 5 years, greatly outperforming conventional AMR solutions which usually are in a bulky form with large battery packs and, without local AI capability, have to transmit massive image data to the cloud to perform meter reading.

The company is already seeing accelerated deployment of AI solutions to a wide range of applications, including notebook, home appliances, utility meter, automotive, battery-powered surveillance camera, panoramic video conferencing, and medical, among other things. Moreover, new design-in sockets are on the way as it looks to leverage the collaboration with leading cloud service partners, such as Microsoft Azure and Google TensorFlow, on their edge-to-cloud platform to drive further adoption on applications such as smart home, smart office, healthcare, agriculture, retail and factory automation. Last but not least, Himax is seeing numerous design-in activities of AI solution for endoscope, an area the company is extremely excited about that may represent an extraordinary game changer for the health examination industry. Himax will report more detail in due course. Himax is very encouraged by the traction this relatively new product line has generated in a short amount of time and expect to see increasing sales contribution through 2022 and beyond."

Intel Heritage in Image Sensors

It turns out that well before the Tower acquisition, back in the 90s, Intel already manufactured image sensors. Photobit designed them for Intel, Intel manufactured them, and later Intel decided that CMOS image sensors would be a commodity business and got out. Intel was Photobit’s first partner/customer. Intel Capital was an investor in Photobit for strategic purposes.

Yole Predicts that Sony and ST Will Capture 95% of SWIR Imagers Market

Yole Developpement believes that ST and Sony could disrupt the technological landscape with their SWIR imagers:

"In 2021, the SWIR industry’s leading players were SCD, Sensors Unlimited, and Teledyne FLIR, sharing more than 50% of the 11,000 units shipped in the year. These leaders are subsidiaries of leading defense companies that started developing SWIR technology with the support of governments for strategic purposes. They constitute the legacy side of the SWIR industry.

On the other side, STMicroelectronics and Sony, two leaders in the consumer imaging industry started being active players in SWIR with new technologies including quantum dots. Their entrance might be explained by the growing demand from consumer OEM for new integration designs such as under-display 3D sensing in smartphones. If SWIR imagers reach a low price point, shipments could skyrocket to hundreds of millions within a few years. The SWIR industry could emulate the current 3D imaging industry, where STMicroelectronics and Sony share nearly 95% of the 225 million shipments (2020 data)."

Peter Noble, Marvin White, and Northrop Grumman Win 2021 Emmy Awards

Peter Noble and Marvin White win 2021 Technology & Engineering Emmy Awards:
  • Correlated Double Sampling for Image Sensors
    • Marvin H. White
    • Northrop Grumman Mission Systems Group
  • Pioneering Development of an Image-Sensor Array with Buried-Photodiode Structure
    • Peter J. W. Noble


Sony "Sense the Wonder" Day

Sony publishes videos from its "Sense the Wonder" Day:

Omnivision Unveils 0.56um Pixel

BusinessWire: OMNIVISION announces a major pixel technology breakthrough: the world’s smallest 0.56µm pixel with high QE, excellent quad phase detection (QPD) autofocus and low power consumption. This ultra-small pixel technology will address the demand for high-resolution and small pixel pitch image sensors for multi-camera mobile devices.

With a pixel size now smaller than the wavelength of red light, OMNIVISION’s R&D team has validated that pixel shrink is no longer limited by the wavelength of light. The 0.56µm pixel design is enabled by a CIS-dedicated 28nm process node and 22nm logic process node at TSMC, with a new pixel transistor layout and 2x4 shared pixel architecture. The pixel is based on OMNIVISION’s PureCel Plus-S stacking technology, and deep photodiode technology is applied to embed the photodiode deeper into the silicon.

“It takes great R&D innovation to advance pixel technology, especially at this level where we are going beyond the wavelength of light,” said Lindsay Grant, SVP of Process Engineering at OMNIVISION. “We have not compromised high performance with the smaller die size. In fact, we have demonstrated comparable QPD and QE performance to our 0.61µm pixel in the visible light range.”

Grant adds, “OMNIVISION invests heavily in R&D and almost 50 percent of our employees comprise R&D engineers. As a global fabless semiconductor provider, we also work closely with our foundry partners, such as TSMC, to develop new process technology approaches that enable industry-leading innovation like this. This is a remarkable achievement, and I applaud our talented R&D team and our foundry partner for their ability to continuously lead the pixel shrink race.”

“We are pleased with the results of our deep collaboration with OMNIVISION in delivery of the world’s smallest 0.56µm pixel using our industry-leading CIS technology,” said Sajiv Dalal, EVP of Business Management, TSMC North America. “TSMC strives to advance semiconductor manufacturing technologies and services to enable the most advanced, state-of-the-art CIS designs. We look forward to our continued partnership with OMNIVISION to help them achieve high performance, superior resolution, and low power consumption goals and accelerate innovation for their differentiated products.”

The first 0.56µm pixel die will be implemented in 200MP image sensors for smartphones in Q2 2022, with samples targeted for Q3. Consumers can expect to see new smartphones that contain the world’s smallest pixel available on the market in early 2023.

Intel Gets into CIS Foundry Business through the Acquisition of Tower for $5.4B

BusinessWire: Intel and Tower Semiconductor announce a definitive agreement under which Intel will acquire Tower for approximately $5.4 billion.

“Tower’s specialty technology portfolio, geographic reach, deep customer relationships and services-first operations will help scale Intel’s foundry services and advance our goal of becoming a major provider of foundry capacity globally,” said Pat Gelsinger, Intel CEO. “This deal will enable Intel to offer a compelling breadth of leading-edge nodes and differentiated specialty technologies on mature nodes – unlocking new opportunities for existing and future customers in an era of unprecedented demand for semiconductors.”

Tower owns five fabs directly and another three through a joint venture with Nuvoton. Six of them manufacture image sensors, among other products. For some reason, Tower does not mention its BSI processing joint venture with GPixel in China.


Update: Intel's Investor Day presentation already shows CIS in the list of its foundry offerings:

Hybrid ToF (hToF) Image Sensor Paper

Shizuoka University publishes an IEEE Open JSSC paper "Hybrid Time-of-Flight Image Sensors for Middle-Range Outdoor Applications" by S. Kawahito, K. Yasutomi, and K. Mars.

"This paper introduces a new series of time-of-flight (TOF) range image sensors that can be used for outdoor middle-range (10m to 100m) applications by employing a small duty-cycle modulated light pulse with a relatively high optical peak power. This set of TOF sensors is referred to here as a hybrid TOF (hTOF) image sensor. The hTOF image sensor is based on the indirect TOF measurement principle but simultaneously uses the direct TOF concept for coarse measurements. Compared to conventional indirect TOF image sensors for outdoor middle-range applications, the hTOF image sensor has a distinct advantage due to the reduction of capturing ambient light charge. To show the potential of the hTOF image sensor for outdoor middle-range operation, a model of estimating distance precision of hTOF image sensors is built and applied it by using possible sensor specifications to estimate the distance precision of the hTOF range camera in 10m, 20m and 40m measurements under the ambient-light condition of 100klux and its feasibility is discussed. In outdoor 10m-range measurements, the advantage of hTOF image sensors compared to the conventional indirect TOF image sensors is discussed by considering the amount of captured ambient-light charge in pixels."

e2v Lecture on Image Sensor Performance Comparison

Vision Systems Design runs a YouTube channel, Vision Learning, with quite a few interesting presentations. One of the recent ones is Teledyne e2v's 66-minute "Understanding Image Sensor Performance - Interpret Key Parameters and Effective Comparison:"

Adaps dToF Paper

Shenzhen, China-based startup Adaps Photonics publishes an open-access IEEE paper "A 240 x 160 3D Stacked SPAD dToF Image Sensor with Rolling Shutter and In Pixel Histogram for Mobile Devices" by Chao Zhang, Ning Zhang, Zhijie Ma, Letian Wang, Yu Qin, Jieyang Jia, and Kai Zang.

"A 240 x 160 single-photon avalanche diode (SPAD) sensor integrated with a 3D-stacked 65nm/65nm CMOS technology is reported for direct time-of-flight (dToF) 3D imaging in mobile devices. The top tier is occupied by backside illuminated SPADs with 16um pitch and 49.7% fill-factor. The SPADS consists of multiple 16x16 SPADs top groups, in which each of 8 x 8 SPADs sub-group shares a 10-bit, 97.65ps and 100ns range time-to-digital converter (TDC) in a quad-partition rolling shutter mode. During the exposure of each rolling stage, partial histogramming readout (PHR) approach is implemented to compress photon events to in-pixel histograms. Since the fine histograms is incomplete, for the first time we propose histogram distortion correction (HDC) algorithm to solve the linearity discontinuity at the coarse bin edges. With this algorithm, depth measurement up to 9.5m achieves an accuracy of 1cm and precision of 9mm in office lighting condition. Outdoor measurement with 10 klux sunlight achieves a maximum distance detection of 4m at 20 fps, using a VCSEL laser with the average power of 90 mW and peak power of 15 W."

Event Guided Depth Sensing

University of Zurich and ETH Zurich publish a paper "Event Guided Depth Sensing" by Manasi Muglikar, Diederik Paul Moeys, and Davide Scaramuzza.

"Active depth sensors like structured light, lidar, and time-of-flight systems sample the depth of the entire scene uniformly at a fixed scan rate. This leads to limited spatio-temporal resolution where redundant static information is over-sampled and precious motion information might be under-sampled. In this paper, we present an efficient bio-inspired event-camera-driven depth estimation algorithm. In our approach, we dynamically illuminate areas of interest densely, depending on the scene activity detected by the event camera, and sparsely illuminate areas in the field of view with no motion. The depth estimation is achieved by an event-based structured light system consisting of a laser point projector coupled with a second event-based sensor tuned to detect the reflection of the laser from the scene. We show the feasibility of our approach in a simulated autonomous driving scenario and real indoor sequences using our prototype. We show that, in natural scenes like autonomous driving and indoor environments, moving edges correspond to less than 10% of the scene on average. Thus our setup requires the sensor to scan only 10% of the scene, which could lead to almost 90% less power consumption by the illumination source. While we present the evaluation and proof-of-concept for an event-based structured-light system, the ideas presented here are applicable for a wide range of depth-sensing modalities like LIDAR, time-of-flight, and standard stereo. Video is available at https://www.youtube.com/watch?v=Rvv9IQLYjCQ"
