Infineon Estimates ToF Sensor Market

Image Sensors World        Go to the original article...

Infineon's quarterly Investor Presentation shows the company's forecast of the ToF sensor market.

GigaDevice Develops SWIR ToF Sensor for Smartphones

Sohu (via IFNews), Zaotech: Haolei Liu, GigaDevice's Director of ToF Marketing in the Sensor Division, presents the company's plans to develop ToF sensors and optical fingerprint sensors for smartphones:

"GigaDevice’s innovative ToF solution adopts a special process, has a higher QE, effectively reduces power consumption and system cost, and supports both the 1350nm-1550nm long-wavelength band and the 940nm band. It performs excellently outdoors, which meets the needs of future under-display designs."

Liu Haolei also pointed out: “In our opinion, iToF and dToF will develop in parallel for a period of time. Although resolution is a shortcoming of dToF, in the long run we believe dToF has a lot of room for growth: the dToF solution launched only recently and its supply chain is not yet mature, which also means it has a lot of room for improvement.

We believe that the potential of ToF needs to be unlocked together with supply chain ecosystem partners. It is certain that this technology will place ever higher requirements on hardware, resolution for example. If the resolution of dToF solutions can be significantly improved, their applications will become more and more extensive.”

GigaDevice's new ToF sensor is said to have QVGA resolution and a QE of 65% at 940nm or 50% at 1350nm, "which is nearly double that of the ToF chip based on silicon technology."

MEMSensor: The company also develops a new under-OLED-display optical fingerprint sensor, the GSL7253, based on an α-Si process. It's said to have a sensitive area of 20x30mm², a QE of 80%, and a thickness of only 0.3mm.

Image Sensor Wafers for $25

A number of eBay sellers offer "research" 8-inch image sensor wafers for $25 apiece. They could possibly serve as a nice souvenir.

Galaxycore IPO Approved at $7.5B Valuation

FinanceSecond, NetEase: GalaxyCore's listing on the Science and Technology Innovation Board of the Shanghai Stock Exchange has been approved by the authorities. The IPO includes a 15% share of the company priced at 7,428,830,300 RMB (about $1.124B), valuing the company at $7.5B.

The proceeds will be invested in CIS R&D and the 12-inch BSI wafer processing facility that GalaxyCore built in the Lingang New Area of the China (Shanghai) Pilot Free Trade Zone.

"Through the construction of some 12-inch BSI wafer back-end production lines, 12-inch wafer manufacturing pilot lines, some OCF manufacturing and back grinding and cutting production lines, the company has realized the transition from the Fabless model to the Fab-Lite model."

Velodyne Reports Q3 2020 Results

BusinessWire: As a public company, Velodyne reports its quarterly results, giving good food for thought about the LiDAR market as a whole:

  • Revenue: Total revenue of $32.1M represents a 137% increase year-over-year.
  • Units and ASPs: We shipped 2,235 sensor units with an ASP of approximately $5,600.
  • Gross Profit: GAAP and non-GAAP gross profit totaled $15.0M. Previous public guidance reflected a third quarter benefit from a one-time $11M stocking fee, which positively impacted gross profit.
  • Net Loss: GAAP net loss was $5.3M and non-GAAP net loss was $9.1M.
  • Liquidity: Cash of $298M was on the balance sheet at the end of the third quarter.
  • For the full year 2020, we expect total revenue of approximately $101M, as previously forecasted.
  • For the full year, GAAP operating loss is expected to total between $205M and $208M.

Assorted News: Ambarella, ON Semi, Omnivision

BusinessWire: Ambarella introduces the CV28M camera SoC, the latest in the CVflow family, combining image processing, high-resolution video encoding, and CVflow computer vision processing in a single, low-power design. The CV28M’s efficient AI architecture provides the flexibility to enable a new class of smart edge devices for applications including smart home security, retail monitoring, consumer robotics, and occupancy monitoring.

“All around us, devices are becoming smarter, and with our newest CV28M SoC, our customers can develop a new generation of intelligent sensing cameras for a variety of new applications,” said Chris Day, VP of marketing and business development at Ambarella. “In privacy-sensitive applications—such as monitoring retail stores, workplaces, rental properties, or the elderly at home—edge-based AI processing can support intelligent monitoring and fast decision-making without the requirement to record or stream video to the cloud.”


ON Semi publishes a promotional video about its SiPM use in dToF laser rangefinders.


ResearchInChina reviews the ADAS market design wins for major vision processor companies.


AspenCore Group, the parent company of EETimes, EDN, and many other electronics magazines, announces its Global Electronic Achievement Awards 2020. In the sensors category, the award goes to the OmniVision OV64B sensor.


Digitimes reports that OmniVision is ramping up the volume production of OV64A in Q4 2020 for Xiaomi, Oppo and Vivo in their midrange and high-end smartphones.

As China continues its de-Americanization and self-sufficiency policy in semiconductors, OmniVision is expected to significantly increase its share of the Chinese CIS market, according to Digitimes sources.

Soitec Presents IR Sensitivity Improvements in FSI Sensors

Soitec says that its "Imager-SOI [wafer] product line is designed specifically for fabricating front-side imagers for near-infrared (NIR) applications including advanced 3D image sensors."

The product line is available on 300mm wafers, with a buried oxide (BOX) layer from 15nm to 150nm and an “Epi Ready” top silicon layer from 50nm to 200nm.

Sub-Micron X-Ray Pixels

Waterloo Institute for Nanotechnology is going to design sub-micron pixels for X-Ray ptychography. In regular X-ray sensors, the pixel pitch is 100-200um or more.

CIS Market to Grow 1% in 2020, 12% in 2021

IC Insights forecasts slight growth of the image sensor market this year, followed by 12% growth in 2021.

Samsung Launches 4-Tap iTOF Sensor

Samsung unveils its first iToF product, the ISOCELL Vizion 33D:

"Featuring 4-tap pixels, the Samsung ISOCELL Vizion 33D delivers precise and swift depth sensing capabilities for next-level 3D applications.

Enabling pro-grade shots with bokeh effects or accurate 3D object images, the ToF (Time-of-Flight) sensor is optimized to provide best-in-class photography and AR/VR experiences.

To enable precise depth measurement of fast-moving objects, the ISOCELL Vizion 33D features a 4-tap demodulation system and supports frame rate of up to 120fps. Each pixel in the sensor can receive four phase signals simultaneously (0°, 90°, 180°, and 270°), which means it can generate a depth image with just a single frame. The ISOCELL Vizion 33D can capture moving objects with significant reduction of motion artifacts.

In both indoor and outdoor conditions, the sensor can detect the depth of an object at distances of up to 5m with high accuracy. ISOCELL’s pixel technology, coupled with high resolution, enables the sensor to accurately separate objects from the background with a 3D bokeh effect.

Deep Trench Isolation technology (DTI) maximizes isolation between pixels to reduce crosstalk, while Backside Scattering Technology (BST) enhances the sensor’s quantum efficiency. With high-precision depth images, the ISOCELL Vizion 33D delivers next-level 3D applications, such as facial authentication for payment services.

With a total power consumption of under 400mW for both IR illuminator and the ToF sensor, the 33D makes it possible for users to enjoy powerful 3D features, such as AR games and video bokeh, throughout the day."
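The arithmetic behind 4-tap demodulation is compact: the four phase samples encode the cosine and sine of the round-trip phase delay, and depth follows from the modulation frequency. Below is a minimal sketch of standard iToF depth recovery, not Samsung's actual pipeline; the 20 MHz modulation frequency and sample values are illustrative assumptions:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(a0, a90, a180, a270, f_mod):
    """Depth from the four correlation samples of a 4-tap iToF pixel.

    a0..a270 are the samples at phase offsets 0/90/180/270 degrees;
    f_mod is the modulation frequency in Hz.
    """
    phase = math.atan2(a270 - a90, a0 - a180)  # demodulated phase
    if phase < 0:
        phase += 2 * math.pi                   # wrap into [0, 2*pi)
    return C * phase / (4 * math.pi * f_mod)   # round trip -> distance

# A target at 2.5 m under 20 MHz modulation: the four samples encode
# the cosine and sine of the round-trip phase.
f_mod, true_d = 20e6, 2.5
phi = 4 * math.pi * f_mod * true_d / C
a0, a180 = 1 + math.cos(phi), 1 - math.cos(phi)
a90, a270 = 1 - math.sin(phi), 1 + math.sin(phi)
print(round(itof_depth(a0, a90, a180, a270, f_mod), 3))  # 2.5
```

Because all four samples come from a single frame, a 4-tap pixel can do this at the sensor's full 120fps; note that at 20 MHz the unambiguous range c/(2f) is 7.5m, which is why sensors of this class quote working ranges of a few meters.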

Zeiss Use of MALS is "The Biggest Breakthrough in Microscopy Since the Invention of the Microscope"

Zeiss Visioner 1 digital microscope uses SD Optics' Micro-mirror Array Lens System (MALS) technology to achieve digitally-extended depth of focus up to 69mm:

"ZEISS Visioner 1 revolutionizes the world of optical inspection and documentation. Driven by the unique Micro-mirror Array Lens System (MALS™) technology, it enables, for the first time, real-time all-in-focus imaging.

Using a micro-mirror array lens system (MALS™) enables us to generate “virtual” lenses with distinctly different curvatures, thus different focus planes. This is achieved by changing the orientation of each individual micro-mirror in an orchestrated way.

Re-shaping the curvature of this “virtual” lens at speed enables ultra-fast focusing and real-time all-in focus imaging and documentation.
"

  • Up to 100x more usable Extended Depth of Field
  • Allows for height differences of up to 69mm*
  • Reflective micro-mirror array with variable curvatures, arranged in a flat plane
  • Each micro-mirror is about 100x100 µm
  • Each micro-mirror rotates & translates to form the optical surfaces with variable curvatures
  • No need for Z-stacking or re-focusing
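For context on the "no Z-stacking" claim: conventional digital microscopes produce all-in-focus images by capturing a focal stack and fusing it in software, keeping the sharpest sample per pixel, which MALS instead achieves optically in real time. A toy sketch of that software fusion; the sharpness metric is a deliberately crude stand-in, not SD Optics' or ZEISS' algorithm:

```python
def all_in_focus(stack):
    """Fuse a focal stack by keeping, per pixel, the value from the frame
    with the highest local contrast (absolute difference to the left
    neighbor; a crude stand-in for a real sharpness measure)."""
    rows, cols = len(stack[0]), len(stack[0][0])
    fused = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            def sharpness(frame):
                left = frame[r][c - 1] if c > 0 else frame[r][c]
                return abs(frame[r][c] - left)
            fused[r][c] = max(stack, key=sharpness)[r][c]
    return fused

# Two 2x2 frames, each sharp in a different region:
near = [[0, 10], [0, 0]]
far = [[0, 1], [0, 5]]
print(all_in_focus([near, far]))  # [[0, 10], [0, 5]]
```

A real focus-stacking pipeline would use a Laplacian or variance sharpness measure over a neighborhood; the point here is only that the fusion step, and its capture latency, disappear when the optics sweep focus fast enough.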


Image Sensors at IEDM 2020: Facebook, Samsung, Omnivision, Sony, More…

IEDM publishes its 2020 program with many image sensor-related papers:
  • Sony presents 10um BSI SPAD with 14% PDE @ 940nm, possibly used in Apple iPad/iPhone LiDAR
  • Facebook and Brillnics present a low-power sensor
  • Samsung presents 108MP Nonacell sensor with 0.8um pixels and 18Ke- FWC in 3x3 binning mode
  • OmniVision presents a 64MP sensor with 0.7um pixels and 18Ke- FWC in 2x2 binning mode
  • Imec presents SWIR imager with 1.82um pixels
  • Much more...
16.1 A 4.6µm, 512×512, Ultra-Low Power Stacked Digital Pixel Sensor with Triple Quantization and 127dB Dynamic Range,
Chiao Liu, Lyle Bainbridge, Andrew Berkovich, Song Chen, Wei Gao, Tsung-Hsun Tsai, Kazuya Mori*, Rimon Ikeno, Masayuki Uno*, Toshiyuki Isozaki*, Yu-Lin Tsai**, Isao Takayanagi*, Junichi Nakamura*,
Facebook Inc, *Brillnics Japan Inc., **Brillnics Inc.
A 512x512 digital pixel sensor (DPS) in stacked CIS process for ultra-low power, ultra-wide dynamic range mobile computer vision applications is presented. Each 4.6µm DPS pixel has an ADC and 10-bit SRAM. We introduce a single exposure triple quantization (3Q) scheme to achieve 127dB DR while consuming 5.3mW at 30fps.

16.2 A 0.8 μm Nonacell for 108 Megapixels CMOS Image Sensor with FD-Shared Dual Conversion Gain and 18,000e- Full-Well Capacitance,
Youngsun Oh, Munhwan Kim, Wonchul Choi, Hana Choi, Honghyun Jeon, Junho Seok, Yujung Choi, Jaejin Jung, Kwisung Yoo, Donghyuk Park, Yitae Kim, Kyoung-min Koh, Jesuk Lee, Chang-Rok Moon, JungChak Ahn,
Samsung Electronics Co., Ltd.
A 0.8μm-pitch 108 megapixels ultrahigh-resolution CMOS image sensor has been demonstrated for mobile applications. The Nonacell was developed with an odd-number shared pixel, and the FWC was secured up to 18,000e-. A 3x3 binning mode achieves 12 megapixel resolution, ensuring excellent low- and high-illuminance SNR.

16.3 A 64M CMOS Image Sensor using 0.7um pixel with high FWC and switchable conversion gain,
Y. Jay Jung, Vincent Venezia, Sangjoo Lee, Chun Yung Ai, Yibo Zhu, King W. Yeung, Geunsook Park, Woonil Choi, Zhiqiang Lin, Wu-Zang Yang, Alan Chih-Wei Hsiung, Lindsay Grant,
OmniVision Technologies, Inc.
This paper presents a 64MP backside-illuminated imager using 0.7um pixel-pitch with 7.0ke- FWC. Switchable-conversion-gain was also demonstrated to have high 18.0ke- FWC in 4-Cell mode. Several new processes were implemented to overcome pixel performance degradation. As a result, this high FWC imager achieves low dark-noise and high QE, comparable to 0.8um.

16.4 A Global Shutter Wide Dynamic Range Soft X-ray CMOS Image Sensor with BSI Pinned Photodiode, Two-stage LOFIC and Voltage Domain Memory Bank,
Hiroya Shike, Rihito Kuroda, Ryota Kobayashi, Maasa Murata, Yasuyuki Fujihara, Manabu Suzuki, Taku Shibaguchi*, Naoya Kuriyama*, Jun Miyawaki**, Tetsuo Harada***, Yuichi Yamasaki^, Takeo Watanabe***, Yoshihisa Harada***, Shigetoshi Sugawa, 
*Tohoku University, **LAPIS Semiconductor Co., Ltd., ***The University of Tokyo, ^University of Hyogo
A prototype soft X-ray CMOS image sensor (sxCMOS) with BSI pinned photodiode with a 45µm-thick Si substrate, two-stage LOFIC and voltage domain memory bank with high density capacitors is presented. The fabricated chip demonstrated a high QE toward soft X-ray with a single exposure 129dB dynamic range by global shutter.

16.5 Imaging in Short-Wave Infrared with 1.82 µm Pixel Pitch Quantum Dot Image Sensor
Jiwon Lee, Epimitheas Georgitzikis, Yunlong Li, Ziduo Lin, Jihoon Park, Itai Lieberman, David Cheyns, Murali Jayapala, Andy Lambrechts, Steven Thijs, Richard Stahl, Pawel Malinowski,
imec
High pixel density SWIR image sensor with 1.82 μm pixel pitch is presented. PbS QD photodiode is monolithically integrated on custom CMOS readout. We show through-silicon vision and lens-free imaging (LFI) examples. To our knowledge, this is the smallest pitch SWIR pixel ever reported and the first QD-based LFI system.

16.6 A Back Illuminated 10μm SPAD Pixel Array Comprising Full Trench Isolation and Cu-Cu Bonding with Over 14% PDE at 940nm,
K. Ito, Y. Otake, Y. Kitano, A. Matsumoto, J. Yamamoto, T. Ogasahara, H. Hiyama, R. Naito*, K. Takeuchi*, T. Tada*, K. Takabayashi*, H. Nakayama*, K. Tatani, T. Hirano, and T. Wakano,
Sony Semiconductor Solutions, *Sony Semiconductor Manufacturing
We developed a BI 10um SPAD array sensor using pixel-level Cu-Cu bonding and metal-buried Full Trench Isolation. Using a 7um thick Si layer, a fine-tuned potential and process, over 14% PDE at λ=940nm and the best in class DCR were achieved. Low timing jitter and suppressed X-talk were also demonstrated.

17.1 Portable Multi-Spectral Imaging: Devices, Vertical Integration, and Applications (Invited),
Alberto Valdes-Garcia, Petar Pepeljugoski, Ivan Duran, Jean-Olivier Plouchart, Mark Yeck, Huijuan Liu,
IBM T. J. Watson Research Center
Advances in semiconductor and packaging technologies have downsized sensing devices including visible-domain/IR and mmWave radars. This paper discusses challenges and opportunities associated with portable multi-spectral imaging systems, where data from across the EM spectrum is captured, processed, and displayed simultaneously. A prototype system, experimental data, and potential applications are discussed.

33.1 Low power consumption and high resolution 1280X960 Gate Assisted Photonic Demodulator pixel for indirect Time of flight,
Y. Ebiko, H. Yamagishi, K. Tatani, H. Iwamoto, Y. Moriyama, Y. Hagiwara, S. Maeda, T. Murase, T. Suwa, H. Arai, Y. Isogai, S. Hida*, S. Kameda*, T. Terada*, K. Koiso*, F. T Brady**, S. Han**, A. Basavalingappa**, T. Michiel***, T. Ueno***,
Sony Semiconductor Solutions Corporation, * Sony Semiconductor Manufacturing Corporation, ** Sony Electronics Inc. Image Sensor Design Center, ***Sony Depth Sensing Inc.
A 3.5um square 1.2M pixel indirect time of flight sensor achieves 18,000e- full well capacity and 32% quantum efficiency with diffraction structure. Low power consumption is also achieved, due to low resistance Cu-Cu connection wiring. These device architectures enable high resolution and wide dynamic range 3D depth sensing.

33.2 A 2.8 μm Pixel for Time of Flight CMOS Image Sensor with 20 ke- Full-Well Capacity in a Tap and 36 % Quantum Efficiency at 940 nm Wavelength,
YongHun Kwon, Sungyoung Seo, Sunghyuck Cho, Sung-Ho Choi, Taeun Hwang, Youngchan Kim, Young-Gu Jin, Youngsun Oh, Min-Sun Keel, Daeyun Kim, Myunghan Bae, Yeomyung Kim, Seung-Chul Shin, SunJu Hong, Seok-HaLee, Ho Woo Park, Yitae Kim, Kyoungmin Koh, JungChak Ahn,
Samsung Electronics
A 2.8μm 4-tap global shutter pixel has been realized for a compact and high-resolution time of flight (ToF) CMOS image sensor. A full-well capacity (FWC) of 20,000 e- per tap is obtained by employing a MOS capacitor. 36% quantum efficiency (QE) and 86% demodulation contrast (DC) are achieved.

9.4 Characterization Scheme for Plasma-Induced Defect due to Stochastic Lateral Straggling in Si Substrates for Ultra-Low Leakage Devices,
Yoshihiro Sato, Takayoshi Yamada, Kazuko Nishimura, Masayuki Yamasaki, Masashi Murakami, Keiichiro Urabe*, Koji Eriguchi*,
Panasonic Corporation, *Kyoto University
This study demonstrates a new characterization scheme to assess the density and profile of defects in the lateral direction and to verify their impacts using CMOS image sensor-based structures. We present a 3D (vertical and lateral) defect map as well as possible optimization strategies for ultra-low leakage devices.

3 Year-Old Aeva Goes Public at $2.1B Valuation

PRNewswire: 3 year-old FMCW LiDAR startup Aeva announces a reverse merger with InterPrivate Acquisition Corp. to be listed on the NYSE at a $2.1B valuation. The transaction is to provide up to $363M in gross proceeds, comprised of InterPrivate's $243M held in trust and a $120M fully committed common stock PIPE at $10.00 per share, including investments from Adage Capital and Porsche SE.

The combined company is expected to have an estimated post-transaction equity value of approximately $2.1B and is expected to be listed on the NYSE under the ticker symbol AEVA following the anticipated transaction close in Q1 2021.

Founded in 2017 by former Apple engineers Soroush Salehian and Mina Rezk and having a team of over 100 employees, Aeva is engaged with thirty of the top players in automated and autonomous driving across passenger, trucking and mobility.
  • In 2019, Aeva announced a partnership with Audi's Autonomous Intelligent Driving entity. Aeva has also partnered with multiple other passenger car, trucking and mobility platforms to further adoption of ADAS and autonomous applications.
  • Aeva is in a production partnership with ZF, one of the world's largest automotive Tier 1 manufacturers to top OEMs, to supply the first automotive grade 4D LiDAR from select ZF production plants. The partnership — Aeva's expertise in FMCW LiDAR technology combined with ZF's experience in industrialization of automotive grade sensors — represents a key commitment to accelerate mass production of safe and scalable 4D LiDAR technology.
"From the beginning our vision has been to create a fundamentally new sensing system to enable perception across all devices. This milestone accelerates our journey toward delivering the next paradigm in perception to mass market applications, not just in automotive but consumer and beyond," said Soroush Salehian, Co-founder and CEO at Aeva.

Mina Rezk, Co-founder and CTO at Aeva, said, "From the beginning, we believed that the only way to achieve the holy grail of LiDAR is to be integrated on a chip. Over the last four years, we did it by leveraging Aeva's unique coherent FMCW approach. With today's announcement, we can use our development efforts to expand into new markets that were simply not possible before.


Sony CIS Sales Predicted to Fall by 42% in a Year

BusinessKorea reports: "Sony’s image sensor sales are predicted to fall from 240 billion yen in the second quarter of this year to 130 billion yen in the second quarter of next year.

This is leading to an opportunity for Samsung. The latecomer in the industry has focused on Xiaomi, Vivo and others rather than Huawei.

Samsung is aiming to rise to the top in the global image sensor market by 2030. Last year, Samsung’s share in the market was 18.1 percent and Sony’s was 53.5 percent."

LiDAR News: Livox, Voyant, Aeye, Conti, Luminar, Daimler

Livox announces two new products, the Mid-70 and the AVIA.


Livox demos the 500m detection range of its LiDAR.

Voyant Photonics President Peter Stern talks about Apple LiDAR:

"The iPhone time-of-flight LiDAR, probably built with the same amazing SPAD array used in the iPad, coupled with a VCSEL array for illumination, is an engineering marvel. It’s absolute magic.

After working on LiDAR three decades ago that could detect telephone lines kilometers away from a fast-moving, low-flying helicopter, I have been waiting for this kind of LiDAR magic for a long time.

At Voyant, we have a different approach. No VCSELs, no SPADs. Adapting microscopic optical components from datacom chips to active sensing, we have created a coherent pixel array for LiDAR, similar to the ubiquitous CMOS image sensors found everywhere. Each pixel both transmits and receives light at 1550 nm wavelengths.

We expect the sale price for our initial imaging LiDAR to be less than $500 by 2023 and drop quickly as production ramps up. To put that in perspective, our initial product price for 2023 is less than Velodyne’s plan for 2024."
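For readers comparing the two approaches: a pulsed (dToF) sensor times a photon's round trip directly, while a coherent FMCW pixel like Voyant's infers range from the beat frequency between the outgoing frequency chirp and its echo. A back-of-the-envelope sketch of the FMCW range equation; the chirp parameters are invented for illustration:

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat, chirp_bandwidth, chirp_duration):
    """Range from the beat frequency of a linear FMCW chirp.

    The echo returns delayed by tau = 2R/c and beats against the
    outgoing chirp at f_beat = (B/T) * tau, so R = c * f_beat * T / (2B).
    """
    return C * f_beat * chirp_duration / (2 * chirp_bandwidth)

# Hypothetical chirp: 1 GHz bandwidth swept over 10 us.
B, T = 1e9, 10e-6
tau = 2 * 100.0 / C            # round-trip delay for a 100 m target
print(round(fmcw_range((B / T) * tau, B, T), 6))  # 100.0
```

Because the measurement is a frequency rather than a sub-nanosecond timestamp, the receiver needs no SPADs, and the Doppler shift of the same beat note gives the per-pixel velocity that the "4D" in FMCW marketing refers to.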


AEye swaps its CEO and President: the new CEO, Blair LaCorte, was previously the company's President, while Luis Dussan, AEye founder and former CEO, becomes the new President.

AEye also announces that Continental has invested in the company. By partnering with AEye, Continental complements its existing short-range 3D Flash LiDAR technology, which goes into series production later this year. This start of production of the High-Resolution 3D Flash LiDAR (HFL) is a key milestone for Conti. It is said to be the first high-resolution solid-state LiDAR sensor to go into series production in the automotive market worldwide.

“We now have optimum short-range and world-class long-range LiDAR technologies with their complementary set of benefits under one roof. This puts us in a strong position to cover the full vehicle environment with state-of-the-art LiDAR sensor technology and to facilitate Automated Driving at SAE levels 3 or higher in both passenger cars and commercial vehicle applications,” said Frank Petznick, head of the ADAS business unit at Conti.


Techcrunch, BusinessWire: Daimler’s trucks division has invested in Luminar as part of a broader partnership to produce autonomous trucks. The undisclosed investment by Daimler is in addition to the $170M that Luminar raised as part of its reverse-merger IPO. A year ago, Daimler took a majority stake in Torc Robotics, an autonomous trucking startup that had been working with Luminar for the past two years.

The deal comes just days after Daimler and Waymo announced plans to work together to build an autonomous version of the Freightliner Cascadia truck, and is the latest action by the German manufacturer to move away from robotaxis and shared mobility and instead focus on trucks.

Optical Quantum Random Number Generator

APL Photonics paper "An optical chip for self-testing quantum random number generation" by Nicolò Leone, Davide Rusca, Stefano Azzini, Giorgio Fontana, Fabio Acerbi, Alberto Gola, Alessandro Tontini, Nicola Massari, Hugo Zbinden, and Lorenzo Pavesi from University of Trento, FBK, and University of Geneva describes how photon shot noise-based RNG is built:

"We present an implementation of a semi-device-independent protocol of the generation of quantum random numbers in a fully integrated silicon chip. The system is based on a prepare-and-measure scheme, where we integrate a partially trusted source of photons and an untrusted single photon detector. The source is a silicon photomultiplier, which emits photons during the avalanche impact ionization process, while the detector is a single photon avalanche diode. The proposed protocol requires only a few and reasonable assumptions on the generated states. It is sufficient to measure the statistics of generation and detection in order to evaluate the min-entropy of the output sequence, conditioned on all possible classical side information. We demonstrate that this protocol, previously realized with a bulky laboratory setup, is totally applicable to a compact and fully integrated chip with an estimated throughput of 6 kHz of the certified quantum random bit rate."

RGB Color Error Tested with Hyperspectral Camera

 MDPI paper "How Good Are RGB Cameras Retrieving Colors of Natural Scenes and Paintings?—A Study Based on Hyperspectral Imaging" by João M. M. Linhares, José A. R. Monteiro, Ana Bailão, Liliana Cardeira, Taisei Kondo, Shigeki Nakauchi, Marcello Picollo, Costanza Cucci, Andrea Casini, Lorenzo Stefani, and Sérgio Miguel Cardoso Nascimento from University of Minho, University of Lisbon, Portuguese Catholic University (Portugal), Toyohashi University of Technology (Japan), and Istituto di Fisica Applicata “Nello Carrara” del Consiglio Nazionale delle Ricerche (Italy), describes an interesting experiment:

"RGB digital cameras (RGB) compress the spectral information into a trichromatic system capable of approximately representing the actual colors of objects. Although RGB digital cameras follow the same compression philosophy as the human eye (OBS), the spectral sensitivity is different. To what extent they provide the same chromatic experiences is still an open question, especially with complex images. We addressed this question by comparing the actual colors derived from spectral imaging with those obtained with RGB cameras. The data from hyperspectral imaging of 50 natural scenes and 89 paintings was used to estimate the chromatic differences between OBS and RGB. The corresponding color errors were estimated and analyzed in the color spaces CIELAB (using the color difference formulas ΔE*ab and CIEDE2000), Jzazbz, and iCAM06. In CIELAB the most frequent error (using ΔE*ab) found was 5 for both paintings and natural scenes, a similarity that held for the other spaces tested. In addition, the distribution of errors across the color space shows that the errors are small in the achromatic region and increase with saturation. Overall, the results indicate that the chromatic errors estimated are close to the acceptance error and therefore RGB digital cameras are able to produce quite realistic colors of complex scenarios."
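The ΔE*ab values quoted are CIE76 color differences, i.e. Euclidean distances in CIELAB; CIEDE2000 refines this with perceptual weightings. A quick sketch with invented coordinates:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# "Actual" color from hyperspectral data vs. an RGB camera's estimate
# (coordinates invented for illustration):
actual = (52.0, 42.5, 20.0)   # (L*, a*, b*)
camera = (54.0, 39.5, 23.0)
print(round(delta_e_ab(actual, camera), 2))  # ~4.69
```

Differences around 2.3 are commonly taken as just noticeable, so the study's most frequent error of about 5 sits near the acceptance threshold, consistent with the authors' conclusion that RGB cameras render complex scenes quite realistically.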

SiPM for Time Domain NIR Spectroscopy

IEEE JSSC publishes an open-access paper "Large-Area, Fast-Gated Digital SiPM With Integrated TDC for Portable and Wearable Time-Domain NIRS" by Enrico Conca, Vincenzo Sesta, Mauro Buttafava, Federica Villa, Laura Di Sieno, Alberto Dalla Mora, Davide Contini, Paola Taroni, Alessandro Torricelli, Antonio Pifferi, Franco Zappa, and Alberto Tosi from Politecnico di Milano.

"We present the design and characterization of a large-area, fast-gated, all-digital single-photon detector with programmable active area, internal gate generator, and time-to-digital converter (TDC) with a built-in histogram builder circuit, suitable for performing high-sensitivity time-domain near-infrared spectroscopy (TD-NIRS) measurements when coupled with pulsed laser sources. We used a novel low-power differential sensing technique that optimizes area occupation. The photodetector is a time-gated digital silicon photomultiplier (dSiPM) with an 8.6 mm² photosensitive area, 37% fill-factor, and ~300 ps (20%–80%) gate rising edge, based on low-noise single-photon avalanche diodes (SPADs) and fabricated in 0.35-μm CMOS technology. The built-in TDC with a histogram builder has a least-significant-bit (LSB) of 78 ps and 128 time-bins, and the integrated circuit can be interfaced directly with a low-cost microcontroller with a serial interface for programming and readout. Experimental characterization demonstrated a temporal response as good as 300-ps full-width at half-maximum (FWHM) and a dynamic range >100 dB (thanks to the programmable active area size). This microelectronic detector paves the way for a miniaturized, stand-alone, multi-wavelength TD-NIRS system with an unprecedented level of integration and responsivity, suitable for portable and wearable systems."
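Functionally, the built-in histogram builder is a binning accumulator: each photon timestamp is divided by the 78 ps LSB and the corresponding one of the 128 bins is incremented. A software model of that behavior, not the actual silicon implementation:

```python
def tdc_histogram(arrival_times_ps, lsb_ps=78, n_bins=128):
    """Accumulate photon arrival times (picoseconds) into a TDC-style
    histogram; times outside the ~10 ns window (128 bins x 78 ps) are
    dropped, as they would fall outside the converter's range."""
    hist = [0] * n_bins
    for t in arrival_times_ps:
        b = int(t // lsb_ps)
        if 0 <= b < n_bins:
            hist[b] += 1
    return hist

h = tdc_histogram([10, 80, 85, 400, 9999, 20000])
print(h[0], h[1], h[5], sum(h))  # 1 2 1 4
```

Accumulating the histogram on-chip is what lets the detector talk to a low-cost microcontroller over a slow serial link: only 128 bin counts, not a timestamp per photon, ever leave the chip.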

Hynix Unveils its First ToF Sensor

Korea IT News: SK Hynix introduces its ToF image sensor at the “SEDEX 2020” exhibition this week. The sensor has 10µm BSI pixels and QVGA resolution in a 1/4.5-inch format. It is still in development and its release date has not been disclosed.

The SK Hynix ToF sensor is a part of its plan to grow its image sensor business. The company opened an R&D center in Japan last year that primarily focuses on image sensor technology. It also reorganized its lineup of image sensors for smartphone cameras this year. It supplies sensors with increased pixel counts and smaller pixel sizes to major smartphone manufacturers and is currently working on high-pixel-count sensors such as 48MP and 64MP parts.

SK Hynix: The Latest Technology Trend in CIS is about Functions, not Pixel

SK Hynix Head of CIS ISP Taehyun (Ted) Kim publishes a post "The Visual Evolution & Innovation of Image Sensors." A few quotes:

"...this trend for high pixels in CIS is expected to face technical difficulties soon, and the innovation for a high level of functions centered on the ISP will be in full swing.

This is because CIS pixel miniaturization runs into the diffraction limit. It is possible to reduce the critical dimension of electric circuits to several nanometers with current semiconductor technology; however, since the amount of received light decreases as the pixel size shrinks, the sensitivity and signal level are reduced, resulting in a decline in SNR and degraded image quality.

Currently, SK hynix’s CIS has built-in image processing functions such as phase detection auto focus (PDAF), Quad pixel processing, and high dynamic range (HDR) processing, and new functions are constantly being added to it.

Currently, SK hynix’s CIS, mainly the Black Pearl product line, is widely used in smartphone cameras and the application field is expected to expand to various fields such as bio, security, and autonomous vehicles.

In the future, CIS is expected to evolve into an information sensor that supports advanced additional functions, without being limited to image quality improvement. SK hynix’s stack sensor is already capable of embedding a simple AI hardware engine inside the ISP on the lower substrate, based on the advanced semiconductor process. Based on this, SK hynix is currently developing new machine learning-based technologies such as super resolution, color restoration, face recognition, and object recognition."
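To put numbers on the diffraction limit Kim refers to: the Airy disk diameter for a lens at f-number N is roughly 2.44·λ·N, which for visible light through typical smartphone optics is already a few microns, larger than today's sub-micron pixels. A quick check; the f/2.0 aperture and 550nm wavelength are assumed typical values, not SK Hynix figures:

```python
def airy_disk_diameter_um(wavelength_nm, f_number):
    """Diameter of the Airy disk out to the first dark ring: 2.44 * lambda * N."""
    return 2.44 * wavelength_nm * 1e-3 * f_number

# Green light (550nm) through an f/2.0 lens:
print(round(airy_disk_diameter_um(550, 2.0), 2))  # 2.68 (microns)
```

With the optical spot spanning several 0.7-0.8um pixels, further shrinking pixels yields diminishing resolution returns, which is the argument for shifting innovation toward ISP functions instead.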

Go to the original article...

Fujifilm Uses Polarization Sensor for Multispectral Imaging

Image Sensors World        Go to the original article...

OSA Optics Express paper "Snapshot multispectral imaging using a pixel-wise polarization color image sensor" by Shuji Ono, Fujifilm, uses polarization to separate multispectral filter bands:

"This study proposes a new imaging technique for snapshot multispectral imaging in which a multispectral image was captured using an imaging lens that combines a set of multiple spectral filters and polarization filters, as well as a pixel-wise color polarization image sensor. The author produced a prototype nine-band multispectral camera system that covered from visible to near-infrared regions and was very compact. The camera’s spectral performance was evaluated using experiments; moreover, the camera was used to detect the freshness of food and the activity of wild plants and was mounted on a vehicle to obtain a multispectral video while driving."

Go to the original article...

One More HDR Pixel Paper

Image Sensors World        Go to the original article...

Sensing and Imaging: An International Journal publishes the paper "On Wide Dynamic Range Tone Mapping CMOS Image Sensor" by Waqas Mughal and Bhaskar Choubey from the University of Southampton, UK, and Universität Siegen, Germany.

"The dynamic range of a natural scene often covers over 6 decades of intensity from bright to dark areas. Typical image sensors, however, have limited ability to capture this dynamic range available in nature. Even after designing specific wide dynamic range (WDR) image sensors, displaying them on conventional media with limited ability requires computationally complex tone mapping. This paper proposes a novel CMOS pixel which can capture and perform tone mapping during data acquisition. The pixel requires a reference voltage to generate a tone mapped response. A number of different reference signals are proposed and generated which can perform WDR operation. Nevertheless, fixed pattern noise (FPN) affects the performance of these pixels. A pixel model with a simple parameter extraction procedure is described for a typical tone mapping operator. This model is then used to obtain a simple procedure for pixel calibration leading to reduced FPN. The proposed pixel response is able to capture up to 6 decades of light intensity, and the reported FPN correction procedure produces 1% FPN contrast error."

Go to the original article...

Sony 33-Sensor Concept Car

Image Sensors World        Go to the original article...

NikkeiAsia: "I believe the next megatrend [after mobile phones] will be mobility," said Sony Chairman and President Kenichiro Yoshida as he unveiled the Vision-S concept car at the CES tech show in the U.S. in January.

The Vision-S will have 33 sensors, including image sensors, a Sony specialty. Izumi Kawanishi, Sony's SVP who is shepherding development of the car, said the sensors "give passengers and pedestrians a sense of security thanks to the 360-degree vision it provides."

NikkeiAsia says that Sony controls about 70% of the global market for the image sensors used in smartphone cameras, but its share for automotive image sensors is only 9%. The Vision-S is an exploratory effort by the company as it taps into a market led by ON Semi. According to NikkeiAsia, ON Semi has been producing automotive image sensors for over 50 years (since 1970?) and controls 45% of the market.

Go to the original article...

Sony and Omnivision Receive US License to Supply Sensors to Huawei

Image Sensors World        Go to the original article...

Nikkei reports that Sony and Omnivision have been granted licenses by the U.S. government to resume some shipments to China's Huawei.

"What we learned was that some... image sensor related suppliers are receiving some licenses from the U.S. government as those components are viewed as less related to cybersecurity concerns, and Sony is among those who received approval," an unnamed chip industry executive told Nikkei Asia.

Go to the original article...

Light Launches Clarity, Better than LiDAR

Image Sensors World        Go to the original article...

 Light Co. announces its automotive 3D depth Clarity platform:

"Lidars do a great job, but they don’t do the whole job. Their range is often limited to ~250 meters. Class 8 trucks need at least 400+ meters to come to a complete stop, safely. Lidar as well as monocular camera-based systems can get confused as to whether they’re seeing a person painted on the side of a truck or an actual person.

Clarity is a camera-based perception platform that’s able to see any 3D structures in the road from 10 centimeters to 1000 meters away — three times the distance of the best-in-class lidar with 20 times the detail."

“There is nothing else like the Clarity platform with its combination of depth range, accuracy, and density per second. It enables a new generation of vehicles that can be made safer, without having to compromise on cost, quality, or reliability,” said Prashant Velagaleti, Chief Product Officer of Light. “Rather than only minimizing the severity of a collision, having high fidelity depth allows any vehicle powered by Clarity to make decisions that can avoid accidents, keeping occupants safe as well as comfortable.”
Go to the original article...

Sony Reports 1% Decrease in Image Sensor Sales, Reduces Forecast

Image Sensors World        Go to the original article...

Sony reports its quarterly results and updates on its image sensor business:


  • FY20 Q2 sales decreased slightly year-on-year to 307.1 billion yen and operating income significantly decreased 26.5 billion yen to 49.8 billion yen.
  • FY20 sales are expected to decrease 40 billion yen to 960 billion yen and operating income is expected to significantly decrease 49 billion yen to 81 billion yen.
  • Even accounting for the decrease in operating income in FY20, we expect the difference between the total of operating cash flow and investing cash flow for the segment over the three fiscal years begun April 1, 2018 to be positive.
  • Pursuant to export restrictions announced by the U.S. government on August 17, 2020, we terminated product shipments to a certain major Chinese customer [Huawei - ISW] as of September 15, 2020.
  • The forecast disclosed today for the second half of this fiscal year does not include any shipments to that customer.
  • In addition, the operating income for the quarter includes an approximately 17.5 billion yen write-down of finished goods and work-in-progress inventory for that customer recorded at the end of September.
  • Based on this situation, we are further revising the business strategy, as I explained at the previous earnings announcement, from the perspective of capital expenditures, research and development and customer base.
  • We are further postponing the timing of capital expenditures, with cumulative capital expenditures for the three fiscal years begun April 1, 2018 expected to be reduced 40 billion yen from the approximately 650 billion yen I explained last time.
  • We do not think it is prudent to prematurely reduce research and development spending because we want to meet the needs of a wide range of smartphone customers, as well as maintain and increase our future technological competitive advantage.
  • We have had some success expanding and diversifying our customer base for FY21. The financial impact on our business in FY20 is limited, but we think it is possible to recapture, in FY21, a large portion of the market share, on a unit basis, we lost this fiscal year.
  • However, we expect that it will take a long time for other customers to follow the trend to higher-functionality and larger die-sized smartphone cameras that the Chinese customer [Huawei - ISW] was leading. Thus, we expect the substantial recovery of profitability driven by these high value-added products to take place in the fiscal year ending March 31, 2023 (“FY22”).
  • By recapturing market share in FY21 through an increase in sales of commodity sensors, and by recouping our business profitability in FY22 through more high value-added products, we aim to return the mobile image sensor business to growth.
  • In addition, there is no change to our mid-to long-term strategy of growing our business through expansion of applications that use edge AI and 3D sensing capabilities, as well as through starting up automotive sensors in earnest.
Reuters reports that Huawei was Sony’s second-largest image sensor customer after Apple, accounting for about 20% of its $10b in sensor revenue, according to analyst estimates.

Go to the original article...

Will the iPhone LiDAR Change AR Forever?

Image Sensors World        Go to the original article...

 AWE publishes a panel discussion "AWE Nite NYC: Will the iPhone LiDAR Change AR Forever? With Snap, Niantic, Occipital."

Go to the original article...

Infineon and PMD Announce 10m Long Range ToF Sensor for Smartphones

Image Sensors World        Go to the original article...

BusinessWire: Infineon and pmdtechnologies have developed a 3D ToF sensor which is claimed to outperform other solutions on the market and aims at a wider spectrum of consumer applications. The 3D sensor market for smartphone rear-side cameras is expected to grow to more than 500M units per year by 2024.

“The latest 3D image sensor from Infineon and pmdtechnologies enables a new generation of applications,” says Philipp von Schierstaedt, SVP at Infineon. “It aims to create the most immersive and smartest AR experiences as well as better photography results, with faster autofocus in low-light conditions and more beautiful night-mode portraits based on picture segmentation. This latest chip development is truly setting standards when it comes to improvements of the imager, the driver and the processing, as well as unprecedented ten-meter long-range capability at lowest power.”

The new chip allows the integration into miniaturized camera modules, accurately measuring depth in short and long range for AR while meeting low power consumption requirements with more than 40% power saving on the imager.

Furthermore, seamless augmented reality sensing experiences are achieved, allowing high-quality 3D depth data capture up to a distance of 10m (at reduced resolution) without losing resolution at shorter range. Always-on applications such as mobile AR gaming can greatly benefit from the small power budget required by the new sensor. For applications such as 3D scanning for room and object reconstruction, or 3D mapping for furniture planning and other design applications, the sensor doubles the measuring range of the current solutions on the market.

Volume delivery of this chip starts in Q2 2021; demo kits are already available. The recorded livestream from the official press event is available here: https://livestream.com/infineontechnologies/real3

Go to the original article...

ST Announces 64-Point dToF Sensor

Image Sensors World        Go to the original article...

GlobeNewswire: STMicroelectronics extends its portfolio of FlightSense ToF sensors with a 64-zone device. This first-of-its-kind product comprises a 940nm VCSEL light source, a SoC sensor integrating a VCSEL driver, the receiving array of SPADs, and a low-power 32-bit MCU core and accelerator running firmware. The VL53L5 retains the Class 1 certification of all ST's FlightSense sensors and is fully eye-safe for consumer products.

“The multi-zone VL53L5 FlightSense direct Time-of-Flight sensor uses our most advanced 40nm SPAD production process to offer outstanding 4m ranging performance and up to 64 ranging zones that help an imaging system build a detailed spatial understanding of the scene,” said Eric Aussedat, GM of ST’s Imaging Division. “Delivering 64x more ranging zones than previously available, the VL53L5 offers radical performance improvement in laser autofocus, touch-to-focus, presence detection, and gesture interfaces while helping developers create even more innovative imaging applications.”

With a vertically integrated manufacturing model for its FlightSense sensors, ST builds its SPAD wafers on a 40nm proprietary silicon process in the Company’s 12” wafer plant at Crolles, France before assembling all of the module components in ST’s back-end plants in Asia. This approach delivers exceptional quality and reliability to customers.

Packaged in a 6.4 x 3.0 x 1.5 mm module, the VL53L5 integrates both transmit and receive lenses into the module design and expands the FoV of the module to 61 degrees diagonal. This wide FoV is especially suited to detecting off-center objects and ensuring perfect autofocus in the corners of the image. In the 'Laser Autofocus' use case, the VL53L5 gathers ranging data from up to 64 zones across the full FoV to support "Touch to Focus" and many other features.

Further flexibility is available via the SPAD array, which can be set to favor spatial resolution, where it outputs all 64 zones at up to 15fps, or to favor maximum ranging distance, where the sensor outputs a 4x4 array of 16 zones at a frame rate of 60fps.
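A quick check on the two modes (zone counts and frame rates are from the announcement; the equal-throughput observation is our own arithmetic) shows they trade spatial resolution for frame rate while keeping the zone-measurement throughput constant:

```python
# VL53L5 operating modes: 8x8 zones at 15 fps vs. 4x4 zones at 60 fps.
# Figures are from ST's announcement; the throughput comparison is an
# illustrative observation, not an ST specification.
modes = {
    "full resolution": {"zones": 8 * 8, "fps": 15},
    "long range":      {"zones": 4 * 4, "fps": 60},
}
for name, m in modes.items():
    throughput = m["zones"] * m["fps"]  # zone measurements per second
    print(f"{name}: {m['zones']} zones x {m['fps']} fps = {throughput}/s")
```

Both modes deliver 960 zone measurements per second, consistent with a fixed measurement budget being reallocated between resolution and update rate.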

ST’s architecture can automatically calibrate each ranging zone and direct Time-of-Flight technology allows each zone to detect multiple targets and reject reflection from the cover-glass. 

Customer development with the VL53L5 can build on ST’s strong relationships with key smartphone and PC platform suppliers as ST has pre-integrated the sensor onto these platforms. The VL53L5 is in mass production with millions of units already shipped to leading wireless and computer manufacturers.

Go to the original article...
