Counterpoint: Close to 5 Billion CIS for Smartphones Will Be Shipped this Year

Image Sensors World

Counterpoint Research estimates that the sales volume of CIS for smartphones increased eightfold over the past decade, reaching more than 4.5 billion units in 2019. Senior Analyst Ethan Qi notes, “Although the strong growth momentum is expected to soften amid the pandemic fallout, thanks to the irreversible trend towards multi-camera setups and the spreading adoption of 3D sensing systems, the smartphone CIS segment will likely still register high-single-digit shipment growth in 2020, hitting an all-time high of close to 5.0 billion units.”

According to the findings of Counterpoint’s Component Tracker, each smartphone shipped in 1Q20 packed more than 3.5 image sensors on average. The growth is primarily driven by the rising penetration of quad-camera designs in high-end to mid-range smartphones, whose share jumped to nearly 20% during the period.
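As a back-of-the-envelope consistency check (assuming, for illustration, annual smartphone shipments of roughly 1.4 billion units, a figure not given in the article), the attachment rate and the shipment forecast line up:

$$3.5\ \tfrac{\text{sensors}}{\text{phone}} \times 1.4\times10^{9}\ \tfrac{\text{phones}}{\text{year}} \approx 4.9\times10^{9}\ \tfrac{\text{sensors}}{\text{year}},$$

which is indeed close to the forecast 5.0 billion units.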

Commenting on the importance of cameras in smartphones, Research Director Tom Kang says, “As camera function has become a key differentiator in smartphones, we expect the quad-camera feature will become a standard moving forward. Leading smartphone brands will continue enriching and enhancing the photography and video capture experiences, as well as exploring AR applications, by leveraging diversified lens and sensor combinations along with the increasing AI computing power.”


Airy3D Talks about Marketing Strategy

Image Sensors World

Paul Gallagher, VP of Strategic Marketing at Airy3D, talks about the challenges and solutions in marketing the company's 3D platform in "Episode 4: Depth Perception" of The Launch podcast.



Insight to Velodyne LiDAR Business

Image Sensors World

It's official now: Velodyne LiDAR becomes a public company through a reverse merger with Graf Industrial.

"Graf Industrial Corp. (NYSE: GRAF). GRAF and Velodyne have successfully raised $150MM in equity from a group of institutional investors, subject to completion of the transaction. This transaction is being structured as a reverse merger where existing Velodyne shareholders will own the majority of the go-forward company. The funds raised will be combined with up to $117M that GRAF had already raised from its existing investors. We are targeting to finalize the combination toward the end of Q3 2020, following approval by GRAF’s shareholders. At that time, Velodyne will become a publicly-traded company and we expect to be listed on the NYSE under a new ticker symbol, VLDR."

Why is this interesting for everybody? Because Velodyne becomes the first pure-play LiDAR company to disclose its financial results every quarter. Now everybody can see what the LiDAR business looks like from the inside, and how the LiDAR market develops. The first public disclosure is below:



Optical Readout Thermal Imager

Image Sensors World

ResearchGate publishes a paper "Design and Optical Simulation of a Sensor Pixel for an Optical Readout-Based Thermal Imager" by Ambali Odebowale and Mohamed Ramy Abdelrahman from King Saud University.

"In this paper, we present an optical design and analysis of a single pixel element detector in an optical readout-based infrared imaging system. The proposed thermal imaging system contains no readout integrated circuitry and thus can be considered as a low cost alternative to typical thermal imaging systems. In this paper, we present the design and optical simulation details for a fabry perot cavity filter (FPCF)-based sensor configuration operating in the transmission mode at 650nm and as a Long Wave Infrared (LWIR) absorber in the 8000nm-12000nm band. The temperature tuning of the FPCF resonant frequency is dependent on the thermo-optic sensitivity of its cavity layer. The performance of the FPCF sensor is considered at different cavity layer thermo-optic coefficients (TOCs) and for different thermal scene temperature variations. The proposed sensor was found to be sensitive to 25mK thermal scene temperature variations."


SWIR Upconverting Camera

Image Sensors World

MDPI paper "Up-Conversion Sensing of 2D Spatially-Modulated Infrared Information-Carrying Beams with Si-Based Cameras" by Adrián J. Torregrosa, Emir Karamehmedović, Haroldo Maestre, María Luisa Rico, and Juan Capmany from Universidad Miguel Hernández, Spain, Universidad de Alicante, Spain, and International University of Sarajevo, Bosnia and Herzegovina, proposes 1550 nm imaging with a Si-based sensor:

"Up-conversion sensing based on optical heterodyning of an IR (infrared) image with a local oscillator laser wave in a nonlinear optical sum-frequency mixing (SFM) process is a practical solution to circumvent some limitations of IR image sensors in terms of signal-to-noise ratio, speed, resolution, or cooling needs in some demanding applications. In this way, the spectral content of an IR image can become spectrally shifted to the visible/near infrared (VIS/NWIR) and then detected with silicon focal plane arrayed sensors (Si-FPA), such as CCD/CMOS (charge-coupled and complementary metal-oxide-semiconductor devices). This work is an extension of a previous study where we recently introduced this technique in the context of optical communications, in particular in FSOC (free-space optical communications). Herein, we present an image up-conversion system based on a 1064 nm Nd3+ : YVO4 solid-state laser with a KTP (potassium titanyl phosphate) nonlinear crystal located intra-cavity where a laser beam at 1550 nm 2D spatially-modulated with a binary Quick Response (QR) code is mixed, giving an up-converted code image at 631 nm that is detected with an Si-based camera. The underlying technology allows for the extension of other IR spectral allocations, construction of compact receivers at low cost, and provides a natural way for increased protection against eavesdropping."


"The system can be miniaturized down to a quasi-monolithic robust architecture around 4 cm3 and built at a low cost with standard commercial components, resulting lightweight, and favoring field-deployable IR eye-safe links, although it is easily extensible to the MWIR and LWIR spectral regions."


Yole Forecasts Gold Rush in Thermal Cameras

Image Sensors World

i-Micronews: "The Covid-19 pandemic has induced a gold rush in the thermal imaging and sensing industry. All over the world, various media outlets, smaller or larger – even media behemoths – have written pieces about this technology.

We thought that it wouldn’t be too outrageous for people to measure their body temperature frequently using a smartphone that happens to be constantly in their hands. At other times, this would sound like a niche smartphone feature. But in the new era during and after the pandemic, it could prove as a helpful tool to have.

Therefore, at the beginning of June 2020, Huawei subsidiary Honor announced the Honor Play 4 smartphone, which integrates an infrared temperature sensor. According to Honor, the infra-red (IR) detector has a measurement range of -20°C to 100°C, which is more than enough to cover the human body’s range of potential temperatures. It promises an accuracy of 0.2°C, considered to be well within fever detection requirements. This looks like a medical-grade sensor. From the photo shown here in Figure 2, we believe that there is a possibility that the detector might be the newest Melexis thermopile sensor MLX90632. The specifications also fit with the product sheet. Or at least, it could be a sensor from another manufacturer that has very similar specs with the Melexis one.

The question however, remains: Is consumerization of thermal imaging/sensing technology imminent? We would dare to answer yes, but only when it’s a simple sensing function, if only temperature is read, for example from the forehead, using a cheap, robust and qualified IR detector. Thermopile technology could work just fine. This wouldn’t differ much from usual forehead thermometers. It’s just that the measurement guidelines are slightly changed by using a smartphone. On the other hand, thermal imaging would take some time. It’s a matter of educating properly consumers on how to interpret and read a thermal image. People might not be ready yet, and costs for this technology to reach the masses for daily use might still be high. Nevertheless, thermal imaging and sensing technology can surely continue to be, among others, one line of defense against Covid-19, regardless of implementation.
"


Smartphone 3D Sensing Modules Comparison

Image Sensors World

SystemPlus Consulting publishes "Smartphone 3D Sensing Modules Comparison 2020."

"The consumer 3D sensing module market is expected to reach $8.1B in 2025 from $2B in 2019, according to the “3D Imaging & Sensing 2020” report from Yole Développement. The main driver technologies are Time-of-Flight (ToF) for photography enhancement and Structured Light (SL) for facial recognition. From 2016 to 2019, a total of 22 smartphones integrating a 3D sensing module have been released, 13 with SL and 9 with ToF.

In this dynamic context, System Plus Consulting provides a deep comparative review of technology and cost of 11 3D sensing modules found in flagship smartphones, with a focus on Vertical Cavity Surface Emitting Lasers (VCSELs) and Near Infra-Red CMOS Image Sensors (NIR CIS).
"


Qualcomm Smartwatch Platform Supports 16MP Camera with 1080p30 Video

Image Sensors World

Qualcomm announces the Snapdragon Wear 4100 smartwatch platform that features a dual ISP supporting a 16MP camera with 1080p30 video:


Assorted News: Brookman, Smartsens, AIStorm, Cista, Prophesee, Unispectral, SiLC, Velodyne, Himax

Image Sensors World

Brookman demos the absence of interference between its 4 pToF cameras working simultaneously:



SmartSens reports it has garnered three awards from the 2020 China IC Design Award Ceremony and Leaders Summit, co-presented by EE Times China, EDN China, and ESMC China. SmartSens won awards in three categories: Outstanding Technical Support: IC Design Companies; Popular IC Products of the Year: Sensors/MEMS; and Silicon 100.


Other imaging companies on the EE Times Silicon 100 list of Emerging Startups to Watch are AIStorm, Cista Systems, Prophesee, Unispectral, and SiLC.


Bloomberg reports that the blank-check company Graf Industrial Corp. is in talks to merge with Velodyne Lidar in a deal that would take Velodyne public. Graf Industrial Corp. was established in 2018 as a blank-check company with the aim of acquiring one or more businesses and assets via a merger, capital stock exchange, asset acquisition, stock purchase, or reorganization. Merging with a blank-check company has become a popular way to go public as the coronavirus pandemic upends the markets.

GlobeNewswire: Himax launches its WiseEye WE-I Plus HX6537-A AI platform that supports Google’s TensorFlow Lite for Microcontrollers.

The Himax WiseEye solution is composed of the Himax HX6537-A processor and a Himax always-on image sensor. With support for TensorFlow Lite for Microcontrollers, developers can take advantage of the WE-I Plus platform, as well as the surrounding TensorFlow Lite for Microcontrollers ecosystem, to develop NN-based edge AI applications for the notebook, TV, home appliance, battery camera, and IP surveillance edge computing markets.

The processor remains in low-power mode until a movement or object is identified by the accelerators. Afterwards, the DSP, running NN inference on the TensorFlow Lite for Microcontrollers kernel, performs the needed CV operations and sends the metadata results over the TLS (Transport Layer Security) protocol to the main SoC and/or a cloud service for application-level processing. The average power consumption for the Google person-detection example inference can be under 5mW. Additionally, the average power consumption of the Himax always-on sensor can be less than 1mW.
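The detect-then-infer-then-report flow described above can be prototyped on a desktop with the regular TensorFlow Lite Python interpreter. A hedged sketch; the model file name, server host, and tensor shapes are assumptions, and the real WE-I Plus runs the C++ TensorFlow Lite for Microcontrollers kernels on its DSP:

```python
# Run a TFLite person-detection model on one frame, then push the resulting
# metadata (not the image) to a server over TLS.
import json, socket, ssl
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="person_detect.tflite")  # assumed file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()                                 # NN inference step
score = interpreter.get_tensor(out["index"])

ctx = ssl.create_default_context()                   # TLS transport, as in the post
with socket.create_connection(("metadata.example.invalid", 443)) as raw:  # assumed host
    with ctx.wrap_socket(raw, server_hostname="metadata.example.invalid") as tls:
        tls.sendall(json.dumps({"person_score": score.tolist()}).encode())
```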

“Himax WE-I Plus, coupled with Himax AoS image sensors, broadens TensorFlow Lite ecosystem offering and provides developers with possibilities of high performance and ultra low power,” said Pete Warden, Technical Lead of TensorFlow Lite for Microcontrollers at Google.


RevoRing filter adapter review

Cameralabs

The RevoRing is a cunning filter adapter from H&Y that saves you from carrying, fitting and swapping multiple step-down rings. A variable mechanism allows it to adapt one larger filter to multiple lenses, and fit it quickly too. Check out our review!…

The post RevoRing filter adapter review appeared first on Cameralabs.


Sony Prepares Subscription Service for its AI-Integrated Sensors

Image Sensors World

Reuters, Bloomberg, Yahoo: Sony plans to sell software by subscription for its data-analyzing sensors with an integrated AI processor, like the recently announced IMX500.

“We have a solid position in the market for image sensors, which serve as a gateway for imaging data,” said Sony’s Hideki Somemiya, who heads a new team developing sensor applications. Analysis of such data with AI “would form a market larger than the growth potential of the sensor market itself in terms of value,” Somemiya said in an interview, pointing to the recurring nature of software-dependent data processing versus a hardware-only business.

“Most of our sensor business today can be explained only by revenues from our five biggest customers, who would buy our latest sensors as we develop,” Somemiya said. “In order to be successful in the solution business, we need to step outside that product-oriented approach.”

Customer support is currently included in the one-time price of Sony sensors. But Somemiya said Sony would provide the service via separate subscription in the future. Made-by-Sony software tools would initially focus on supporting the company’s own sensors and the coverage may later expand to retain customers even if they decide to switch to non-Sony sensors, he added.

“We often get queries from customers about how they can use our exotic products such as polarization sensors, short-wavelength infrared sensors and dynamic vision sensors,” Somemiya said. “So we offer them hands-on support and customized tools.”

Sony will seek business partnerships and acquisitions to build out its software engineering expertise and offer seamless support anywhere in the world. Somemiya said the sensor unit’s subscription offering is a long-term plan and shouldn’t be expected to become profitable anytime soon, at least not at meaningful scale.



LFoundry Data Shows that BSI Sensors are Less Reliable than FSI

Image Sensors World

LFoundry and Sapienza University of Rome, Italy, publish an open-access paper in the IEEE Journal of the Electron Devices Society, "Performance and reliability degradation of CMOS Image Sensors in Back-Side Illuminated configuration," by Andrea Vici, Felice Russo, Nicola Lovisi, Aldo Marchioni, Antonio Casella, and Fernanda Irrera. The data shows that the BSI sensors' lifetime under the specific failure mechanism discussed in the paper is 150-1,000 times shorter than that of FSI sensors. Of course, many other failure sources can mask this huge difference.

"We present a systematic characterization of wafer-level reliability dedicated test structures in Back-Side-Illuminated CMOS Image Sensors. Noise and electrical measurements performed at different steps of the fabrication process flow, definitely demonstrate that the wafer flipping/bonding/thinning and VIA opening proper of the Back-Side-Illuminated configuration cause the creation of oxide donor-like border traps. Respect to conventional Front-Side-Illuminated CMOS Image Sensors, the presence of these traps causes degradation of the transistors electrical performance, altering the oxide electric field and shifting the flat-band voltage, and strongly degrades also reliability. Results from Time-Dependent Dielectric Breakdown and Negative Bias Temperature Instability measurements outline the impact of those border traps on the lifetime prediction."


"TDDB measurements were performed on n-channel Tx at 125C, applying a gate stress voltage Vstress in the range +7 to +7.6V. For each Vstress several samples were tested and the time-to-breakdown was measured adopting the three criteria defined in the JEDEC standard JESD92 [21]. For each stress condition, the fit of the Weibull distribution of the time-to-breakdown values gave the corresponding Time-to Failure (TTF). Then, the TTFs were plotted vs. Vstress in a log-log scale and the lifetime at the operating gate voltage was extrapolated with a power law (E-model [22]).

NBTI measurements were performed on p-channel Tx at 125C, applying Vstress in the range -3 to -4V. Again, several Tx were tested. Following the JEDEC standard JESD90 [23], in this case, lifetime is defined as the stress time required to have a 10% shift of the nominal VT. The VT shift has a power law dependence on the stress time and the lifetime value at the operating gate voltage could be extrapolated.
"


"Noise and charge pumping measurements denoted the presence of donor-like border traps in the gate oxide, which were absent in the Front-Side Illuminated configuration. The trap density follows an exponential dependence on the distance from the interface and reaches the value 2x10e17 cm-3 at 1.8 nm. Electrical measurements performed at different steps during the manufacturing process demonstrated that those border traps are created during the process loop of the Back-Side configuration, consisting of wafer upside flipping, bonding, thinning and VIA opening.

Traps warp the oxide electric field and shift the flat-band voltage with respect to the Front-Side configuration, as if a positive charge centroid of 1.6x10e-8 C/cm2 at 1.7 nm was present in Back-Side configuration, altering the drain and gate current curves.

We found that the donor-like border traps affect also the Back-Side device long term performance. Time Dependent Dielectric Breakdown and Negative Bias Temperature Instability measurements were performed to evaluate lifetime. As expected, the role of border traps in the lifetime prediction is different in the two cases, but the reliability degradation of Back-Side with respect to Front-Side-Illuminated CMOS Image Sensors is evident in any case.
"

Update: here is a comment from Felice Russo:

The following comments intend to clarify the scope of the paper “Performance and reliability degradation of CMOS Image Sensors in Back-Side Illuminated configuration”.

The title reported in the Image Sensor Blog, “LFoundry Data shows that BSI Sensors are Less Reliable than FSI”, leads to a conclusion different from the intent of the authors. The purpose of the paper was to evaluate potential reliability failure mechanisms, intrinsic to a particular BSI process flow, rather than highlighting a general BSI reliability weakness. BSI sensors produced at LFoundry incorporate numerous process techniques to exceed all product reliability requirements.

It is widely accepted [Ref.1-3] that the BSI process is sensitive to charging effects, independent of the specific process flow and production line. It may cause an oxide degradation, mainly related to the presence of additional distributions of donor-like traps in the oxide, located within a tunneling distance from the silicon-oxide interface (border/slow traps) and likely linked to an oxygen vacancy.

The work, published by the University, was based on wafer level characterization data, collected in 2018 using dedicated test structures fabricated with process conditions properly modified to emphasize the influence of the main BSI process steps on the trap generation.

To address these potential intrinsic failure mechanisms, several engineering solutions have been implemented to meet all reliability requirements up to automotive grade. Our earlier published work, [Ref.4], shows that BSI can match FSI TDDB lifetime with properly engineered solutions. Understandably, not all solutions can be published.

Results have been used to further improve the performance of BSI products and to identify subsequent innovative solutions for the future generations of BSI sensors.

References:
[1] J. P. Gambino et al., “Device reliability for CMOS image sensors with backside through-silicon vias,” in Proceedings of the IEEE International Reliability Physics Symposium (IRPS), 2018.
[2] A. Lahav et al., “BSI complementary metal-oxide-semiconductor (CMOS) imager sensors,” in High Performance Silicon Imaging, Second Edition, edited by D. Durini, 2014.
[3] S. G. Wuu et al., “A manufacturing BSI illumination technology using bulk-Si substrate for advanced CMOS image sensors,” in Proceedings of the International Image Sensor Workshop, 2009.
[4] A. Vici et al., “Through-silicon-trench in back-side-illuminated CMOS image sensors for the improvement of gate oxide long term performance,” in Proceedings of the International Electron Devices Meeting, 2018.


Imec Presentation on Low-Cost NIR and SWIR Imaging

Image Sensors World

SPIE publishes the Imec presentation "Image sensors for low cost infrared imaging and 3D sensing" by Jiwon Lee, Epimetheas Georgitzikis, Edward Van Sieleghem, Yun Tzu Chang, Olga Syshchyk, Yunlong Li, Pierre Boulenc, Gauri Karve, Orges Furxhi, David Cheyns, and Pawel Malinowski (available after free SPIE account registration).

"Thanks to state-of-the-art III-V and thin-film (organics or quantum dots) material integration experience combined with imager design and manufacturing, imec is proposing a set of research activities which ambition is to innovate in the field of low cost and high resolution NIR/SWIR uncooled sensors as well as 3D sensing in NIR with Silicon-based Time-of-Flight pixels. This work will present the recent integration achievements with demonstration examples as well as development prospects in this research framework."


1/f and RTS Noise Model

Image Sensors World

The IEEE open-access Journal of the Electron Devices Society publishes a Hong Kong University of Science and Technology paper "1/f Low Frequency Noise Model for Buried Channel MOSFET" by Shi Shen and Jie Yuan.

"The Low Frequency Noise (LFN) in MOSFETs is critical to Signal-to-Noise Ratio (SNR) demanding circuits. Buried Channel (BC) MOSFETs are commonly used as the source-follower transistors for CCDs and CMOS image sensors (CIS) for lower LFN. It is essential to understand the BC MOSFETs noise mechanism based on trap parameters with different transistor biasing conditions. In this paper, we have designed and fabricated deep BC MOSFETs in a CIS-compatible process with 5 V rating. The 1/f Y LFN is found due to non-uniform space and energy distributed oxide traps. To comprehensively explain the BC MOSFETs noise spectrum, we developed a LFN model based on the Shockley-Read-Hall (SRH) theory with WKB tunneling approximation. This is the first time that the 1/f Y LFN spectrum of BC MOSFET has been numerically analyzed and modeled. The Random Telegraph Signal (RTS) amplitudes of each oxide traps are extracted efficiently with an Impedance Field Method (IFM). Our new model counts the noise contribution from each discretized oxide trap in oxide mesh grids. Experiments verify that the new model matches well the noise power spectrum from 10 to 10k Hz with various gate biasing conditions from accumulation to weak inversion."


ST ToF Products Tour

Image Sensors World

ST publishes a nice presentation, "Going further with FlightSense," at the Sensor+Test 2020 virtual exhibition. There is also a short presentation about FlightSense applications.


v2e and Event-Driven Camera Nonidealities

Image Sensors World

ETH Zurich publishes an arXiv.org paper "V2E: From video frames to realistic DVS event camera streams" by Tobi Delbruck, Yuhuang Hu, and Zhe He. The v2e open-source tool is available here.

"To help meet the increasing need for dynamic vision sensor (DVS) event camera data, we developed the v2e toolbox, which generates synthetic DVS event streams from intensity frame videos. Videos can be of any type, either real or synthetic. v2e optionally uses synthetic slow motion to upsample the video frame rate and then generates DVS events from these frames using a realistic pixel model that includes event threshold mismatch, finite illumination-dependent bandwidth, and several types of noise. v2e includes an algorithm that determines the DVS thresholds and bandwidth so that the synthetic event stream statistics match a given reference DVS recording. v2e is the first toolbox that can synthesize realistic low light DVS data. This paper also clarifies misleading claims about DVS characteristics in some of the computer vision literature. The v2e website is this https URL and code is hosted at this https URL."


The paper also explains some of the misconceptions about DVS sensors:

"Debunking myths of event cameras: Computer vision papers about event cameras have made rather misleading claims such as “Event cameras [have] no motion blur” and have “latency on the order of microseconds” [7]–[9], which were perhaps fueled by the titles (though not the content) of papers like [1], [10], [11]. Review papers like [5] are more accurate in their descriptions of DVS limitations, but are not very explicit about the actual behavior.

DVS cameras must obey the laws of physics like any other vision sensor: They must count photons. Under low illumination conditions, photons become scarce and therefore counting them becomes noisy and slow. v2e is aimed at realistic modeling of these conditions, which are crucial for deployment of event cameras in uncontrolled natural lighting.
"


LiDAR News: Trioptics, Blickfeld, Apple

Image Sensors World

Trioptics publishes its presentation from the recent AutoSens Online conference, "From Lab to Fab – Assembly and testing of optical components for LiDAR sensors in prototyping and serial production," by Dirk Seebaum:



Blickfeld publishes a datasheet for its Cube LiDAR, based on MEMS mirror scanning and a SPAD array. The datasheet includes performance in bright sunlight:


BusinessWire: Apple announces iPadOS 14 that features support for iPad Pro LiDAR: "ARKit 4 delivers a brand new Depth API that allows developers to access even more precise depth information captured by the new LiDAR Scanner on iPad Pro®. Developers can use the Depth API to drive powerful new features in their apps, like taking body measurements for more accurate virtual try-on, or testing how paint colors will look before painting a room."


The Hong Kong University of Science and Technology and Cornell University publish the paper "Depth Sensing Beyond LiDAR Range" by Kai Zhang, Jiaxin Xie, Noah Snavely, and Qifeng Chen.


GPixel Announces 103MP, 28fps, 12b Global Shutter Sensor

Image Sensors World

Gpixel announces the GMAX32103, a large-format global shutter CMOS sensor for industrial applications. The sensor is based on a 3.2 µm charge-domain GS pixel, provides a resolution of 11276 (H) x 9200 (V) (103 MP), and supports up to 28fps with 12-bit output. The GMAX32103 is aimed at demanding machine vision applications and aerial imaging.

The 3.2 µm pixel achieves a full-well capacity of 10k e-, read noise of less than 2 e-, and a maximum DR of 66dB. With the implementation of microlens and light-pipe technologies, the sensor provides a peak QE of 65%, a shutter efficiency of 1/15,000, and an excellent angular response. The GMAX32103 offers a large FOV to expand single-shot capabilities and a nearly square aspect ratio (1.27:1), which is optimal for inspection applications.

The GMAX32103 uses 52 pairs of sub-LVDS channels, each running at a maximum speed of 960 MHz. The sensor supports channel multiplexing for lower data-rate implementations and integrates a variety of readout functions, including up to 32 regions of horizontal windowing (regions of interest), subsampling, and image flipping. The GMAX32103 is packaged in a 209-pin uPGA ceramic package with outer dimensions of 49.5 mm x 48.1 mm.
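Assuming the quoted 960 MHz corresponds to 960 Mb/s per sub-LVDS pair (an assumption; the announcement does not spell out the encoding), the interface budget is consistent with the headline frame rate:

$$11276 \times 9200 \times 28\,\mathrm{fps} \times 12\,\mathrm{bit} \approx 34.9\,\mathrm{Gb/s} \quad\text{vs.}\quad 52 \times 0.96\,\mathrm{Gb/s} \approx 49.9\,\mathrm{Gb/s},$$

leaving roughly 30% of the aggregate link capacity for blanking and protocol overhead.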

“We are very thrilled with the introduction of GMAX32103. The further expansion of Gpixel’s line up of extremely high-resolution sensors based on an industry proven and widely accepted platform, empowers our customers to tackle demanding applications and to address the industry’s needs for ever increasing image accuracy and throughput. This product is part of our fast growing GMAX product family, which will be further expanded in the very near future with other exciting products,” says Wim Wuyts, CCO of Gpixel.

GMAX32103 engineering samples are expected in November 2020.


Canon Presents 1MP SPAD Imager Prototype

Image Sensors World

Canon has developed a prototype of what it calls "the world’s first single photon avalanche diode (SPAD) image sensor with signal-amplifying pixels capable of capturing 1-megapixel images."

The SPAD image sensor developed by Canon overcomes the longstanding difficulties of achieving high SPAD pixel counts. By adopting a new circuit technology, Canon was able to realize a digital image resolution of 1MP. Exposure time can be shortened to as little as 3.8ns. In addition, the sensor is capable of up to 24,000 fps with 1 bit output, thus enabling slow-motion capture of fast movement within an extremely short time frame.

The sensor also features a high time resolution as precise as 100 ps. With a high resolution of 1MP and high-speed image capture, it is also able to accurately perform 3D distance measurements.
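The quoted 100 ps time resolution maps directly into single-shot ToF ranging precision via the round-trip relation:

$$\Delta d = \frac{c\,\Delta t}{2} = \frac{3\times10^{8}\,\mathrm{m/s} \times 100\,\mathrm{ps}}{2} = 1.5\,\mathrm{cm},$$

which is why the sensor can double as an accurate 3D distance-measurement device.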

The camera was jointly developed with scientists at the Swiss Federal Institute of Technology in Lausanne and published in OSA Optica.


Miscellaneous News: UMC, Innoviz, Samsung

Image Sensors World

Semiconductor Engineering quotes David Uriu, technical director of product management at UMC, saying that CIS are the drivers at the 65nm and 40nm process nodes. "CIS use 65nm/55nm. Some CIS devices will start to use 40nm, but this is not a significant part of the current CIS volume yet. 40nm will expand for some high-end pixel designs, but it is not expected to be a widely accepted node due to costs."

Electronics360 reports that fabs' investments in image sensor manufacturing equipment will rise 60% in 2020, with another 35% rise in 2021.

Innoviz publishes its webinar comparing LiDAR, camera, and radar in ADAS and AV applications:



Samsung publishes a promotional video of its 50MP, 1.2um pixel ISOCELL GN1 image sensor:



Panasonic Lumix G100 review

Cameralabs

The Panasonic Lumix G100 is a compact mirrorless camera designed for vlogging and creative video, as well as photography. I tried it out for my first-looks review!…

The post Panasonic Lumix G100 review appeared first on Cameralabs.


Stratio Unveils Ge-Based SWIR Camera

Image Sensors World

After 7 years in development, Stratio unveils BeyonSense, said to be "the world’s first germanium-based smartphone-compatible camera." The 11 x 8 pixel BeyonSense Pre camera is expected to go on sale a month from now, if the COVID situation allows. The company says:

"Due to COVID-19, our fabrication facilities in Silicon Valley have been closed for the past few months and there is no clear timeline for when they will reopen.

As this is an incredibly dynamic situation, we can only expect to ship BeyonSense® Pre with 11x8 pixels in a month following the reopening of our facilities. You can be assured our team is working around the clock to make it possible to deliver BeyonSense® to you.
"

"The Stratio idea was conceived by three PhD students in a small corner desk at Stanford University.

As PhD students in Electrical Engineering, they knew about the myriad of advantages with infrared imaging – from material analysis to night vision. However, the technology was prohibitively expensive so that only a few could benefit from it. One day, they discussed how a new sensor material called germanium (Ge) could be responsive to infrared light waves in real life. They began digging deeper, consulted experts, and conducted countless experiments to find out how they would achieve low cost, small size, and low power consumption. It turned out to be a years-long journey, but a fruitful one. Hence Stratio was born, in January 2013.
"


Stratio shows a short demo video of its new camera:


Yole: Image Sensor Market Keeps Growing, Defies Coronavirus Troubles

Image Sensors World

i-Micronews: At the end of 2019, CIS prices rose by nearly 10% because production reached maximum worldwide capacity.

"Even though the COVID-19 lockdown led to a drop in smartphone shipments, the demand for mobile camera modules will maintain a 7% year-over-year (YoY) growth in 2020. In the COVID-19 situation, no evident substantial impact on the CIS supply chain has been identified, including on the purchase of raw materials by giant players. The overall impact will be slower growth this year, with respect to the 25% YoY growth last year.

Demand from mobile devices will keep thriving. The overall attachment rate for CIS cameras per phone will move beyond 3.4 in 2020. Also, the growth rate for CIS attachment is still expected to be over 10% in the automotive space. The short term impact of COVID-19 has led to a substantial decrease of car production in the range of -30%. The end point for 2020 is very uncertain, and the long-term horizon is at best flat. The downturn in car production will be mitigated by increased attachment rates for automotive cameras. Looking at all markets the demand is still growing. The expansion of investment in CIS and capacity transition from DRAM to CIS continues for most players.
"


Event-Based Camera Tutorial

Image Sensors World

ETH Zurich's Robotics and Perception Group publishes a video presentation, "Event Cameras: Opportunities and the Road Ahead (CVPR 2020)," by Davide Scaramuzza:



Demosaicing First or Denoising First?

Image Sensors World

Inner Mongolia University, China, and CNRS, France, publish the paper "A Review of an Old Dilemma: Demosaicking First, or Denoising First?" by Qiyu Jin, Gabriele Facciolo, and Jean-Michel Morel.

"Image denoising and demosaicking are the first two crucial steps in digital camera pipelines. In most of the literature, denoising and demosaicking are treated as two independent problems, without considering their interaction, or asking which should be applied first. Several recent works have started addressing them jointly in works that involve heavy weight neural networks, thus incompatible with low power portable imaging devices. Hence, the question of how to combine denoising and demosaicking to reconstruct full color images remains very relevant: Is denoising to be applied first, or should that be demosaicking first? In this paper, we review the main variants of these strategies and carry-out an extensive evaluation to find the best way to reconstruct full color images from a noisy mosaic. We conclude that demosaicking should applied first, followed by denoising. Yet we prove that this requires an adaptation of classic denoising algorithms to demosaicked noise, which we justify and specify."


Few More iPad LiDAR Pictures

Image Sensors World

SystemPlus Consulting publishes an Apple iPad Pro 2020 LiDAR module reverse engineering report with a few more pictures in addition to the many that have already been published:

"This rear 3D sensing module is using the first ever consumer direct Time-of-Flight (dToF) CMOS Image Sensor (CIS) product with in-pixel connection.

The 3D sensing module includes a new generation of Near Infrared (NIR) CIS from Sony with a Single Photon Avalanche Diode (SPAD) array. The sensor features 10 µm long pixels and a resolution of 30 kilopixels. The in-pixel connection is realized between the NIR CIS and the logic wafer using hybrid Direct Bonding Interconnect technology, which is the first time Sony has used 3D stacking for its ToF sensors.

The LiDAR uses a vertical cavity surface emitting laser (VCSEL) coming from Lumentum. The laser is designed to have multiple electrodes connected separately to the emitter array. A new design with mesa contact is used to enhance wafer probe testing.

A wafer level chip scale packaging (WLCSP), five-side molded driver integrated circuit from Texas Instruments generates the pulse and drives the VCSEL power and beam shape. Finally, a new Diffractive Optical Element (DOE) from Himax is assembled on top of the VCSEL to generate a dot pattern.
"


3D News: MIT, Intel, Sharp

Image Sensors World

MIT's Vivienne Sze gives a presentation on energy-efficient processing that includes a part about low-power ToF imaging:


Intel announces a long-range version of its active stereo 3D camera, the RealSense D455:

"The D455 camera increases the optimal range to 6 meters, making it twice as accurate as the current D400 cameras without sacrificing field of view. The D455 also includes global shutters for the depth and RGB sensors to improve correspondence between the two different data streams and to match the field of view between the depth sensors and the RGB sensor. In addition, this camera also integrates an IMU to allow for refinement of its depth awareness in any situation where the camera moves.

The D455 achieves less than 2% Z-error at 4 meters with several improvements. First, the depth sensors are located 95 millimeters apart, providing greater depth accuracy at a longer range. Second, the depth and RGB sensors are placed on the same stiffener, resulting in an improved alignment of color and depth. Lastly, the RGB sensor has the same field of view as the depth sensors, further improving correlation of depth and color points.
"


A Sharp presentation on its distance-measuring sensors explains their operation:


Sharp also makes SPAD-based ToF distance sensors:
