AIStorm Wins Frost & Sullivan’s 2019 Technology Innovation Award

Image Sensors World

PRNewswire: California-based startup AIStorm has won Frost & Sullivan's 2019 Technology Innovation Award for its AI-in-Sensor (AIS) technology, which enables real-time processing of sensor data at the edge, without digitization. The AIS technology uses "charge domain processing that controls the electron movement between the storage elements in the chip and uses switch charge circuits for mathematical control over the charge transfer."
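As a rough intuition for what charge-domain multiply-accumulate means, here is a toy Python sketch (our own illustration with made-up numbers, not AIStorm's design): inputs become electron packets, and each weight is the fraction of a packet that a switch transfers onto a shared summing node.

```python
# Toy numeric model of charge-domain multiply-accumulate (MAC), loosely
# inspired by the switched-charge idea quoted above. All values are
# hypothetical; this is not AIStorm's circuit.

def charge_domain_mac(inputs, weights, electrons_per_unit=1000):
    """Model each input as a packet of electrons and each weight as the
    fraction of the packet a switch transfers onto a summing node."""
    accumulated = 0  # electrons on the summing node
    for x, w in zip(inputs, weights):
        packet = round(x * electrons_per_unit)   # input encoded as charge
        transferred = round(packet * w)          # switch passes a fraction
        accumulated += transferred
    return accumulated / electrons_per_unit      # read back in input units

print(charge_domain_mac([0.5, 1.0], [0.2, 0.4]))  # 0.5*0.2 + 1.0*0.4 = 0.5
```

The point of the model: no ADC appears anywhere, which is exactly the digitization step the AIS approach claims to skip.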

AIStorm's chips integrate imager (CIS or LiDAR), voice (MEMS microphone), or waveform (vibration or motion) sensors, and also handle flow, network, memory, power management, and communication tasks. AIStorm's solutions enable "always-on" imaging and audio event-driven capability without polling, using an intelligent AI-based trigger mechanism that eliminates false triggers and draws minimal power while waiting for an event.

Omnivision Promotes its 8.3MP Automotive Sensor

EETimes publishes Junko Yoshida's article on Omnivision's new 8.3MP image sensor with LED flicker mitigation:

"Celine Baron, OmniVision’s staff automotive product manager, noted during an interview with EE Times that LEDs are everywhere, ranging from headlamps and traffic lights to road signs, billboards and bus displays. Given their ubiquity, it’s hard to avoid LED flickering. It can be distracting enough to human eyes, but it could be fatal to an AVs’ machine vision. Human vision can compensate for flickering. AV machine vision can’t."
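The quoted problem is easy to reproduce numerically. The toy Python model below (hypothetical PWM numbers, not from the article) shows how an exposure shorter than an LED's off-time can miss the on-phase entirely, which is exactly what flicker mitigation has to fix.

```python
# Toy model of why short camera exposures can miss a pulse-width-modulated
# LED: the LED is only "on" for part of each period, so an exposure shorter
# than the off-time may capture zero light. Numbers are hypothetical.

def led_captured(exposure_start_ms, exposure_ms, pwm_period_ms=10.0, duty=0.1):
    """Return True if the exposure window overlaps any LED on-phase."""
    on_ms = pwm_period_ms * duty
    t = exposure_start_ms % pwm_period_ms
    end = t + exposure_ms
    k = 0
    # On-phases start at multiples of the PWM period; check each for overlap.
    while k * pwm_period_ms < end:
        if max(t, k * pwm_period_ms) < min(end, k * pwm_period_ms + on_ms):
            return True
        k += 1
    return False

# A 1 ms exposure starting mid-period misses the LED entirely...
print(led_captured(5.0, 1.0))   # False
# ...while an exposure longer than one full PWM period always sees it.
print(led_captured(5.0, 11.0))  # True
```

A bright daytime scene forces short exposures, which is why an LED traffic light can appear dark in some frames without dedicated mitigation.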

LiDAR News: Blickfeld, Aeva, Outsight, Leddartech, First Sensor, Draper, SiLC, Robosense

Blickfeld publishes an article explaining the challenges of automotive MEMS LiDAR:

"In order to capture as much light as possible, a large aperture, i.e. as large a mirror as possible, is required. However, the mirror size is also limited by certain factors – it is therefore necessary to calculate the optimum size on the basis of these factors.

MEMS mirrors oscillate at a certain resonant frequency. The resonant frequency at which a mirror oscillates depends on the size and mounting of the mirror. For this purpose we have developed a proprietary embedding of the mirrors in order to be able to use particularly large mirrors. Due to the unusually large diameter, a large number of photons can be directed onto the scene and back onto the detector, which allows Blickfeld LiDAR sensors to achieve a long range. In addition, thanks to their size, the mirrors are more robust than conventional products, which are only a few millimeters in diameter. Yet, they have a high resonant frequency due to their lightweight construction which ensures that the photons are returned to the detector. If the mirror oscillates too quickly or too slowly, the photons are deflected past the detector due to the coaxial structure."
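The size-versus-frequency trade-off Blickfeld describes follows from basic torsional-oscillator physics. A back-of-the-envelope sketch (generic formulas with a hypothetical spring stiffness, not Blickfeld's actual design):

```python
import math

# Generic torsional-oscillator scaling: f = (1/2*pi) * sqrt(k / I).
# For a disk mirror rocking about a diameter, I = m*r^2/4 and m grows as
# r^2, so I grows as r^4 and a bigger mirror resonates much more slowly
# unless the suspension stiffness k is increased.

def disk_inertia(radius_m, thickness_m, density=2330.0):  # silicon, kg/m^3
    """Moment of inertia of a disk about a diameter axis: I = m*r^2/4."""
    mass = density * math.pi * radius_m**2 * thickness_m
    return 0.25 * mass * radius_m**2

def resonant_freq_hz(k_torsion, inertia):
    """Torsional resonance: f = (1/2*pi) * sqrt(k/I)."""
    return math.sqrt(k_torsion / inertia) / (2.0 * math.pi)

k = 1e-4  # N*m/rad, hypothetical torsional spring stiffness
f_small = resonant_freq_hz(k, disk_inertia(0.5e-3, 50e-6))  # 1 mm mirror
f_large = resonant_freq_hz(k, disk_inertia(1.0e-3, 50e-6))  # 2 mm mirror
# I scales as r^4, so doubling the diameter at fixed stiffness quarters f:
print(round(f_small / f_large, 2))  # 4.0
```

This is why a large mirror needs a stiffer (or otherwise clever) suspension to keep its resonant frequency up, which is the embedding problem Blickfeld says it solved.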


IEEE Spectrum, Reuters, Businesswire: Aeva announces its FMCW LiDAR that integrates all the key elements of a LiDAR sensor onto a photonics chip. Aeva’s 4D LiDAR-on-chip reduces the size and power of the device by orders of magnitude while achieving full range performance of over 300m for low reflective objects and the ability to measure instant velocity for every point. Aeva’s LiDAR-on-chip will cost less than $500 at scale, in contrast to the several tens of thousands of dollars for today’s LiDAR sensors.

“Not all FMCW LiDARs are created equally,” said Mina Rezk, Co-Founder of Aeva. “A key differentiator of our approach is breaking the dependency between maximum range and points density, which has been a barrier for time-of-flight and FMCW LiDARs so far. Our 4D LiDAR integrates multiple beams on a chip, each beam uniquely capable of measuring more than 2 million points per second at distances beyond 300m.”
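For readers unfamiliar with FMCW ranging, the textbook math behind "instant velocity for every point" is compact enough to sketch. These are the standard triangular-chirp relations with hypothetical chirp parameters, not Aeva's implementation:

```python
# Standard FMCW relations: a triangular chirp gives two beat frequencies;
# their mean encodes range and their difference encodes radial velocity
# via the Doppler shift. Chirp slope below is hypothetical.

C = 299_792_458.0     # speed of light, m/s
WAVELENGTH = 1550e-9  # m, typical FMCW LiDAR wavelength
CHIRP_SLOPE = 1e14    # Hz/s, hypothetical (e.g. 1 GHz swept in 10 us)

def beat_frequencies(range_m, velocity_mps):
    f_range = 2 * range_m * CHIRP_SLOPE / C     # range-induced beat
    f_doppler = 2 * velocity_mps / WAVELENGTH   # Doppler shift
    return f_range - f_doppler, f_range + f_doppler  # up-chirp, down-chirp

def solve(f_up, f_down):
    f_range = (f_up + f_down) / 2
    f_doppler = (f_down - f_up) / 2
    return f_range * C / (2 * CHIRP_SLOPE), f_doppler * WAVELENGTH / 2

f_up, f_down = beat_frequencies(300.0, 25.0)  # target at 300 m, 25 m/s
r, v = solve(f_up, f_down)
print(round(r, 6), round(v, 6))  # recovers 300.0 25.0
```

Because velocity falls out of a single chirp pair, every point in the cloud carries its own radial speed, which is the "4D" in Aeva's pitch.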

Aeva promises to unveil its next-generation LiDAR, Aeries, at CES 2020. Aeries features a 120-degree FOV at only half the size of Aeva’s first product, meets the final production requirements for autonomous driving robo-taxis and large-volume ADAS customers, and will be available for use in development vehicles in the first half of 2020.

“We have scanned the market closely and believe Aeva’s 4D LiDAR on a chip technology is the best LiDAR solution on the market, solving a fundamental bottleneck for perception in taking autonomous driving to mass scale,” said Alex Hitzinger, SVP of Autonomous Driving at VW Group and CEO of VW Autonomy GmbH. “Together we are looking into using Aeva’s 4D LiDAR for our VW ID Buzz AV, which is scheduled to launch in 2022/23.”


EIN Presswire: The French startup Outsight announces it has raised $20M in seed funding. Outsight's 3D Semantic Camera includes hyperspectral-based detection of the material composition of objects.

Earlier, Outsight announced a collaboration with Faurecia and Safran. Founded in 2019 by Cedric Hutchings and Raul Bravo, Outsight launched its 3D Semantic Camera in September.

“Our 3D Semantic Camera is not only a new device but a change of paradigm where Situation Awareness becomes plug&play for the first time: we’re creating a new category of solutions that will unleash tremendous business value. We’re proud of having the support of such solid and knowledgeable investors that share our ambition,” said Raul Bravo, President and co-founder of Outsight.


Globenewswire: LeddarTech announces a strategic collaboration with First Sensor AG, which is also joining the Leddar Ecosystem.

LeddarTech, with the support of First Sensor and other industry leaders, is developing the only open and comprehensive LiDAR platform option for OEMs and Tier 1s.

LeddarTech and First Sensor will develop a LiDAR Evaluation Kit, a demonstration tool for Tier 1s and system integrators to develop their own LiDAR based on LeddarEngine technology, First Sensor APDs, and additional ecosystem partners’ technologies, products, and services. The evaluation kit will primarily target automotive front LiDAR applications for high-speed highway driving, such as Highway Pilot and Traffic Jam Assist.


Draper unveils a LiDAR with MEMS beamsteering. Draper’s all-digital switches provide robustness for the harsh automotive environment, an advantage over competing solid-state approaches that rely on analog beamsteering. With Draper’s LiDAR, light is emitted through a matrix of optical switches and collected through the same switches, which allows for a favorable signal-to-noise ratio, since little ambient light is collected.

Draper’s LiDAR is being developed to image at a range of hundreds of meters while providing angular resolution targeted at less than 0.1 degrees, a significant advance over competing LiDAR systems, many of which offer lower range and resolution.

“At Draper, we have experience with differing beamsteering methods, such as optical phased arrays. However, we feel MEMS optical switches provide an elegant simplicity,” said Sabrina Mansur, Draper’s self-driving vehicle program manager. “If we want to image a target at a specified location, we simply enable the corresponding optical switch, whereas other approaches rely on precise analog steering, which is challenging given automotive’s thermal and vibration environment.”
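The "simply enable the corresponding switch" idea can be sketched as a lookup from target angle to a switch index in the matrix. The grid size and FOV below are hypothetical, chosen only so the angular pitch comes out below 0.1 degrees; this is not Draper's geometry.

```python
# Toy sketch of all-digital beamsteering: a grid of optical switches, each
# mapped to a fixed emission angle, so pointing at a target is just
# selecting the nearest switch index (no analog mirror control loop).
# 30 deg / 512 switches ~ 0.059 deg per switch, i.e. below 0.1 degrees.

def switch_for_angle(az_deg, el_deg, n_az=512, n_el=128,
                     fov_az=30.0, fov_el=10.0):
    """Return the (row, col) of the switch whose fixed angle is nearest."""
    col = round((az_deg + fov_az / 2) / fov_az * (n_az - 1))
    row = round((el_deg + fov_el / 2) / fov_el * (n_el - 1))
    if not (0 <= col < n_az and 0 <= row < n_el):
        raise ValueError("target outside field of view")
    return row, col

print(switch_for_angle(0.0, 0.0))  # center of the FOV
```

The robustness argument then falls out naturally: a switch is either on or off, so thermal drift and vibration cannot mis-steer the beam the way they can perturb an analog actuator.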

The new offering, which is available to license, adds to Draper’s LiDAR portfolio an all-weather detection technology, named Hemera, designed to see through dense fog and compatible with most LiDAR systems.


PRNewswire: SiLC Technologies, a developer of integrated single-chip FMCW LiDAR, and Varroc Lighting Systems announce a seamless LiDAR integration into a production automotive headlamp. The Varroc Lighting Systems headlamp is based on a sophisticated production LED design and leverages four of SiLC's silicon photonics FMCW vision chips, providing a full 20 x 80-degree FOV per headlamp.

SiLC's 1550nm LiDAR chip can be inconspicuously embedded anywhere on a vehicle for optimal vision and safety. SiLC's 4D+ Vision Chip integrates all required functionality, such as a coherent light source and optical signal processing, to enable additional information to be extracted from the returning photons before their conversion to electrons. SiLC's vision sensor can detect height, width, distance, reflectivity, velocity, and light polarization of objects. The coherent interferometric sensing approach improves achievable accuracy by orders of magnitude over existing technologies. SiLC's 4D+ Vision Chip can detect low reflectance objects beyond 200m, providing enough time for a vehicle to avoid an obstacle at highway speeds.


Businesswire: RoboSense launches a complete LiDAR perception solution for Robo Taxis (the RS-Fusion-P5) in markets outside China. The RS-Fusion-P5 was first launched in China last month. Equipped with one RS-Ruby and four RS-BPearls, the RS-Fusion-P5 is positioned as an alternative to Waymo's LiDAR solution, further accelerating the development of Robo Taxis.

1/f Noise in CMOS Sensors

A paper "1/f Noise Modelling and Characterization for CMOS Quanta Image Sensors" by Wei Deng and Eric R. Fossum of Dartmouth College belongs to the MDPI Special Issue on the 2019 International Image Sensor Workshop (IISW2019). The paper presents rather surprising results that match the Hooge mobility fluctuation model, largely abandoned by industry and academia:

"This work fits the measured in-pixel source-follower noise in a CMOS Quanta Image Sensor (QIS) prototype chip using physics-based 1/f noise models, rather than the widely-used fitting model for analog designers. This paper discusses the different origins of 1/f noise in QIS devices and includes correlated double sampling (CDS). The modelling results based on the Hooge mobility fluctuation, which uses one adjustable parameter, match the experimental measurements, including the variation in noise from room temperature to –70 °C. This work provides useful information for the implementation of QIS in scientific applications and suggests that even lower read noise is attainable by further cooling and may be applicable to other CMOS analog circuits and CMOS image sensors."
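For reference, the Hooge model the authors fit has a simple closed form. The sketch below uses the generic textbook expression with made-up bias and carrier numbers, not the paper's fitted values:

```python
import math

# Generic Hooge mobility-fluctuation 1/f model (textbook form, not the
# authors' fit): S_V(f) / V^2 = alpha_H / (N * f), where N is the number
# of carriers in the channel and alpha_H is the single adjustable
# Hooge parameter. Values below are illustrative only.

def hooge_voltage_psd(f_hz, v_bias, n_carriers, alpha_h=2e-3):
    """Voltage noise power spectral density, V^2/Hz."""
    return alpha_h * v_bias**2 / (n_carriers * f_hz)

def integrated_noise_vrms(f_lo, f_hi, v_bias, n_carriers, alpha_h=2e-3):
    """Integrating S_V over [f_lo, f_hi] gives alpha*V^2/N * ln(f_hi/f_lo)."""
    var = alpha_h * v_bias**2 / n_carriers * math.log(f_hi / f_lo)
    return math.sqrt(var)

# Few carriers (a tiny source follower) -> more 1/f noise, which is why
# this matters for QIS pixels in particular.
print(integrated_noise_vrms(1.0, 1e6, v_bias=1.0, n_carriers=1e4))
```

Note the single free parameter alpha_H: the paper's claim is that one such constant, plus known device quantities, tracks the measured noise from room temperature down to -70 °C.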

D-ToF LiDAR Model

A paper "Modeling and Analysis of a Direct Time-of-Flight Sensor Architecture for LiDAR Applications" by Preethi Padmanabhan, Chao Zhang, and Edoardo Charbon of EPFL and TU Delft belongs to the MDPI Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"Direct time-of-flight (DTOF) is a prominent depth sensing method in light detection and ranging (LiDAR) applications. Single-photon avalanche diode (SPAD) arrays integrated in DTOF sensors have demonstrated excellent ranging and 3D imaging capabilities, making them promising candidates for LiDARs. However, high background noise due to solar exposure limits their performance and degrades the signal-to-background noise ratio (SBR). Noise-filtering techniques based on coincidence detection and time-gating have been implemented to mitigate this challenge but 3D imaging of a wide dynamic range scene is an ongoing issue. In this paper, we propose a coincidence-based DTOF sensor architecture to address the aforementioned challenges. The architecture is analyzed using a probabilistic model and simulation. A flash LiDAR setup is simulated with typical operating conditions of a wide angle field-of-view (FOV = 40°) in a 50 klux ambient light assumption. Single-point ranging simulations are obtained for distances up to 150 m using the DTOF model. An activity-dependent coincidence is proposed as a way to improve imaging of wide dynamic range targets. An example scene with targets ranging between 8–60% reflectivity is used to simulate the proposed method. The model predicts that a single threshold cannot yield an accurate reconstruction and a higher (lower) reflective target requires a higher (lower) coincidence threshold. Further, a pixel-clustering scheme is introduced, capable of providing multiple simultaneous timing information as a means to enhance throughput and reduce timing uncertainty. Example scenes are reconstructed to distinguish up to 4 distinct target peaks simulated with a resolution of 500 ps. Alternatively, a time-gating mode is simulated wherein the DTOF sensor performs target-selective ranging. Simulation results show reconstruction of a 10% reflective target at 20 m in the presence of a retro-reflective equivalent with a 60% reflectivity at 5 m within the same FOV."
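The coincidence idea analyzed in the paper can be illustrated with a small Monte-Carlo sketch (all rates here are made up): requiring several SPAD detections inside a short window rejects uncorrelated background photons while keeping bunched laser returns.

```python
import random

# Simplified Monte-Carlo sketch of SPAD coincidence detection: the general
# technique the paper analyzes, with invented detection probabilities.
# Background photons fire SPADs independently and rarely per coincidence
# window; a laser echo fires several SPADs at once, so demanding
# >= threshold detections in the window suppresses ambient false triggers.

random.seed(0)

def trials_with_coincidence(p_fire, n_spads, threshold, n_trials=10_000):
    """Fraction of trials where >= threshold of n_spads fire in one window."""
    hits = 0
    for _ in range(n_trials):
        detections = sum(1 for _ in range(n_spads)
                         if random.random() < p_fire)
        if detections >= threshold:
            hits += 1
    return hits / n_trials

bg_only = trials_with_coincidence(0.05, n_spads=4, threshold=3)  # ambient
signal = trials_with_coincidence(0.60, n_spads=4, threshold=3)   # laser echo
print(bg_only < 0.01 < signal)  # True: echoes pass, ambient is rejected
```

The paper's "activity-dependent coincidence" goes further: since a fixed threshold trades off dim targets against bright ones, the threshold itself is adapted to the local photon activity.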

Intel Unveils Indoor MEMS LiDAR

Intel announces the RealSense LiDAR Camera L515, able to generate 23M depth points per second with mm accuracy. The L515 focuses on indoor applications that require depth data at high resolution and high accuracy. It uses a proprietary MEMS mirror scanner, enabling better laser power efficiency compared to other ToF technologies. The new camera has an internal vision processor, motion-blur artifact reduction, and short photon-to-depth latency.

The Intel RealSense LiDAR Camera L515 is priced at $349 and is available for pre-order now.

The main features of the L515 indoor LiDAR:

  • Laser wavelength: 860nm
  • Technology: Laser scanning
  • Depth Field of View (FOV): 70° × 55° (±2°)
  • Maximum Distance: 9m
  • Minimum Depth Distance: 0.25m
  • Depth Output Resolution & Frame Rate: Up to 1024 × 768 depth pixels, 30 fps
  • Ambient Temperature: 0-30 °C
  • Power consumption: less than 3.5W
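A quick sanity check of the spec sheet: the quoted 23M depth points per second is simply the maximum resolution times the frame rate.

```python
# Consistency check of the L515 specs listed above: maximum depth
# resolution times frame rate should reproduce the quoted ~23M points/s.

width, height, fps = 1024, 768, 30
points_per_second = width * height * fps
print(points_per_second)  # 23592960, i.e. ~23.6M depth points per second
```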


Isorg to Demo Full-Screen Fingerprint Sensor for Smartphones

ALA News: Isorg will demonstrate its full-screen Fingerprint-on-Display (FoD) module for improved multi-finger smartphone authentication at CES 2020. It supports up to four fingers simultaneously touching a smartphone display.

Currently available solutions are restricted to single finger identification within a surface area of less than 10mm x 10mm. In contrast, Isorg’s FoD module supports one- to four-finger authentication across the entire dimensions of the 6-inch smartphone display (or even larger). In addition, the module is very thin, less than 0.3mm thick, so integration into smartphones is made easy for OEMs.

“Isorg is excited to demonstrate what could be the future in multi-fingerprint-on-display security to strengthen authentication on smartphones and wearable devices,” said Jean-Yves Gomez, CEO at Isorg. “Our Fingerprint-on-Display module provides OEMs with a complete solution. In addition to the image sensor, it includes other hardware: optimized thin-film optical filters developed in-house and driving electronics, as well as software from our industrial partners covering the interface with the smartphone OS and the matching algorithm. Isorg has achieved a significant milestone in designing a scalable FoD solution that provides excellent performance results; it is compatible with foldable displays and easier to implement than existing technologies.”

Smartphone OEMs will be able to sample Isorg’s Fingerprint-on-Display module in spring 2020.

Assorted News: CIS Fabs Capacity, Espros, Artilux

China Money Network: "As reported previously, the current CIS is mainly divided into mobile phones and security. Among them, mobile phones are basically manufactured using a 12-inch 55nm process, and security chips are manufactured using a 0.11um eight-inch process. In terms of domestic wafer foundries, SMIC, Huahong Grace and XMC are among the big players. Recently, the newly established 12-inch factory, Guangzhou-based Cansemi Technology, has also won the favor of large local customers in CIS, and the company is introducing related product production.

According to a reporter from Semiinsight who learned from friends in the relevant supply chain, with the tight production capacity of these fabs, the wafer delivery time for related CIS chips has been extended to four months, and the time required for packaging has increased by two to three weeks.

In addition, the popularity of under-screen optical fingerprint solutions using the same process as CIS has exacerbated this phenomenon. "Because the die size of the under-screen optical sensor is relatively large, the number of dies cut per wafer is limited. The increasing demand makes the supply of CIS more stretched." "In today's CIS industry, whoever has the fab capacity is the boss," a supply chain insider told the Semiinsight reporter in an interview."
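The die-size effect mentioned in the quote is easy to quantify with the standard die-per-wafer approximation (die sizes below are hypothetical, chosen only to contrast a small security CIS with a large fingerprint die):

```python
import math

# Standard die-per-wafer approximation with an edge-loss correction:
#   dies ~ pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
# where d is the wafer diameter and A the die area. Die sizes are
# hypothetical illustrations, not actual product dimensions.

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    d, a = wafer_diameter_mm, die_area_mm2
    return math.floor(math.pi * (d / 2) ** 2 / a
                      - math.pi * d / math.sqrt(2 * a))

small_cis = dies_per_wafer(200, 25)   # e.g. 5x5 mm security CIS, 8" wafer
big_fod = dies_per_wafer(200, 100)    # e.g. 10x10 mm under-screen sensor
print(small_cis, big_fod)  # the larger die cuts the per-wafer count ~4x
```

Quadrupling the die area cuts yield-per-wafer by more than 4x (the edge loss hits large dies harder), so even modest under-screen-sensor demand eats capacity quickly.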

Espros announces its ToF Developer Conference, to be held in San Francisco on January 28–30, 2020:

"Over the past four conferences, we have trained more than 130 engineers to successfully design TOF camera systems. Due to the high demand, we have decided to continue our TOF Developer Conference.

There is, at least to our knowledge, no engineering school which addresses TOF and LiDAR as a discipline of its own. We at ESPROS decided to fill the gap with a training program called the TOF Developer Conference. The objective is to provide a solid theoretical background, a guideline to working implementations based on examples, and practical work with TOF systems. Thus, the TOF Developer Conference shall become the enabler for electronics engineers (BS and MS in EE) to design working TOF systems. It is ideal for engineers who are, or will be, involved in the design of TOF systems. We hope that our initiative helps to close the gap between the promise of TOF sensors and massively deployed TOF applications."


PRNewswire: Artilux unveils the world's first GeSi wide-spectrum ToF sensor at CES 2020. The demo, shown live for the first time, will include an RGB-D camera for logistics applications and robot vision, and a 3D camera system that can operate at a longer wavelength. The sensor is projected to enter mass production in Q1 2020 and targets applications such as mobile devices, automotive LiDAR, and machine vision.

In contrast to existing 3D sensors, which typically operate at 850nm or 940nm, the GeSi sensor can cover the range from 850nm to 1550nm. By utilizing this capability, the new Explore Series sensor substantially reduces the potential risk of eye damage. According to the most recent findings, the power of the laser can safely be at least 10 times greater at 1200-1400nm than at 940nm, which improves performance without compromising on safety for long range and highly accurate 3D imaging; it also means that the safe minimum distance of the laser from the eye can be further reduced to sub-centimeter, following the international standards IEC 60825-1:2007 and IEC 60825-1:2014.

The use of longer NIR wavelengths also minimizes interference from sunlight and enables better performance in outdoor environments. All these breakthroughs are brought about by a new GeSi technology platform developed by Artilux in cooperation with TSMC, enabling it to be the first CMOS-based ToF solution to work with light wavelengths up to 1.55µm. A paper that addresses the sensor design based on a GeSi platform has recently been accepted by ISSCC 2020. Artilux has also updated its arXiv.org paper from last year with more recent data.

6 Types of Random Telegraph Noise

TSMC, the French Atomic Energy Commission (CEA), and the Institut Supérieur de l'Aéronautique et de l'Espace (ISAE), Toulouse, publish a joint MDPI paper, "Random Telegraph Noises from the Source Follower, the Photodiode Dark Current, and the Gate-Induced Sense Node Leakage in CMOS Image Sensors," by Calvin Yi-Ping Chao, Shang-Fu Yeh, Meng-Hsu Wu, Kuo-Yu Chou, Honyih Tu, Chih-Lin Lee, Chin Yin, Philippe Paillet, and Vincent Goiffon. The paper is part of the MDPI Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"In this paper we present a systematic approach to sort out different types of random telegraph noises (RTN) in CMOS image sensors (CIS) by examining their dependencies on the transfer gate off-voltage, the reset gate off-voltage, the photodiode integration time, and the sense node charge retention time. Besides the well-known source follower RTN, we have identified the RTN caused by varying photodiode dark current, transfer-gate and reset-gate induced sense node leakage. These four types of RTN and the dark signal shot noises dominate the noise distribution tails of CIS and non-CIS chips under test, either with or without X-ray irradiation. The effect of correlated multiple sampling (CMS) on noise reduction is studied and a theoretical model is developed to account for the measurement results."


"Continued improvement of RTN is essential for enhancing CIS performance when the pixel scales down to 0.7 um pitch and beyond. Understanding the RTN behavior and classification of the RTN pixels into different types are the necessary first step in order to reduce RTN through pixel design and minimizing process-induced damage (PID). In this paper, we identified the SF-RTN, the DC-RTN, the TG GIDL-RTN, and the RST GIDL-RTN in active pixels according to their dependence on the PD integration time, the SN charge retention time, the V_DG across the TG device, and the V_SG across the RST device, in CIS and non-CIS chips, with and without X-ray irradiation.

We further studied the effect of CMS as a useful technique for RTN reduction through circuit design. A theoretical model was presented to account for the time-dependence of the effectiveness of CMS, which explained the measured data reasonably well. The process nodes used to manufacture the pixel-array and the ASIC layers in stacked CIS are expected to move down the path of Moore's Law gradually. Extending the study of RTN to high-K metal gate and FinFET technologies is an important goal for our future investigation."
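The sorting criteria in the abstract amount to a small decision table: each RTN type is identified by the single operating parameter the noise responds to. The sketch below paraphrases that mapping as code; it is our illustration, not the authors' analysis software.

```python
# Schematic decision helper mirroring the paper's sorting criteria: each
# RTN type is distinguished by which operating parameter its behavior
# tracks (integration time, retention time, or a gate off-voltage).
# The mapping paraphrases the abstract; names are ours.

def classify_rtn(depends_on):
    """`depends_on` is the parameter the observed noise responds to,
    or None if the noise is insensitive to all of the swept parameters."""
    table = {
        "pd_integration_time": "DC-RTN (photodiode dark current)",
        "tg_off_voltage": "TG GIDL-RTN (transfer-gate induced leakage)",
        "rst_off_voltage": "RST GIDL-RTN (reset-gate induced leakage)",
        "sn_retention_time": "sense-node leakage RTN",
        None: "SF-RTN (source follower)",
    }
    return table.get(depends_on, "unclassified")

print(classify_rtn("tg_off_voltage"))  # TG GIDL-RTN (transfer-gate ...)
```

In practice each pixel is measured while sweeping one parameter at a time, and the parameter that moves the noise tail picks the row of the table.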

NHK Organic 8K Image Sensor

SMPTE publishes NHK presentation "8K Camera Recorder using Organic-photoconductive CMOS Image Sensor & High-quality Codec" by Shingo Sato:

Sony News: TSMC, Third Point, Automotive Sensors

Digitimes reports that TSMC received CIS orders from Sony "and will fabricate the chips using 40nm process technology at Fab 14A in Tainan, southern Taiwan." TSMC has placed equipment orders for additional 40nm process capacity at the fab to fulfill Sony's CIS orders. The new equipment is to be installed in Q2 2020, with pilot runs slated for August 2020.

Taiwan TechNews adds: "With Sony's current production capacity insufficient, Sony released its first order to TSMC for OEM production, which not only added orders for 5G-related products to TSMC but also boosted revenue momentum for its high-end image sensor supply chain.

...although Sony and TSMC had a cooperative relationship in the past, it was limited to the manufacturing of logic products; Sony had not placed orders with TSMC for high-end image sensors. This time, due to insufficient production capacity, the release of these first orders has TSMC actively preparing. This batch of orders is expected to be built at TSMC's Fab 14A on a 40-nanometer process. After TSMC expands the production line, mass production is expected in 2021, reaching a scale of 20,000 wafers per month, and future cooperation on processes at 28nm and below is not ruled out. TSMC did not comment."

SeekingAlpha publishes Third Point's response to Sony's refusal to spin off its CIS business:

"Most investors expected that following a lengthy review, Sony would share some meaningful plans to close the yawning gap between its share price and intrinsic value.... While we did not expect that all our requests, such as the separation of the image sensor business, would be addressed immediately, we did expect that the Company would make some recommendations to address the structural impediments to long‐term value creation for Sony's shareholders.

Instead, Sony revealed that the review's conclusion was to maintain the status quo with no concrete proposals to improve the business. As students and practitioners of Japanese business principles like kaizen, it is difficult for us to imagine that a company of Sony's size and complexity could not find a single concrete action to improve its business and valuation.

We are committed to a continued constructive dialogue with the Company and to creating long‐term value at Sony for all stakeholders. Discussions are ongoing, guided by our view that Sony remains one of the most undervalued large capitalization stocks in the world."


Sony publishes an interview, "Will Sony's automotive CMOS image sensor be a key to autonomous driving?", with its automotive image sensor designers Yuichi Motohashi, Satoko Iida, and Naoya Sato. A few interesting quotes:

"...automotive cameras are difficult to compare and evaluate. Although the performance is good, there is no method established to evaluate them and we can't emphasize our advantages. So, we always consider how we can create a yardstick to prove our superiority.

The image sensor development cycle is two to three years, but it takes longer than other applications for those image sensors to be actually integrated into cars in the market. In fact, the negotiations we're having right now are for cars that will hit the market in five years.

While we emphasize the "low illumination characteristics," the core competence Sony has cultivated over many years, we have developed Sony's original pixel architecture based on the "dynamic range expansion technology with single exposure," which is strongly demanded for automotive image sensors. I think this technology is unbeatable.

...process technologies have become commoditized today, and it has become difficult to differentiate them. It is necessary to make differentiation through pixel architecture and show superior characteristics."

Sony Announces 2×2 On-Chip Lens For Mobile Sensors

SonyAlphaRumors: Sony presents its 2x2 On-Chip Lens (OCL) technology for high-speed focus, high resolution, high sensitivity, and HDR:


"In conventional technologies, the variance in sensitivity per pixel caused by the structure (described below), which places an on-chip lens that spans four pixels, was a major issue. However, we have successfully developed a high-performance image sensor with high image quality through optimization of the device structure and the development of a new signal processing technology."

The main features of the Sony lens structure:
  • Phase differences can be detected across all pixels
  • Improved phase difference detection performance (focus performance)
  • Focus performance at low light intensity
  • Focus performance that does not depend on the object shape or pattern
  • Real-time HDR output
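The phase-detection principle behind an on-chip lens spanning a pixel group can be illustrated on synthetic 1-D data: sub-pixels on opposite sides of the lens see the scene through opposite halves of the pupil, so defocus shows up as a lateral shift between their signals. This is a generic PDAF sketch, not Sony's signal processing.

```python
# Toy illustration of microlens phase detection: estimate the lateral
# shift between "left" and "right" sub-pixel signals by minimizing the
# sum of absolute differences (SAD). Synthetic 1-D data, not Sony's
# algorithm; a real sensor does this in 2-D over many pixel groups.

def best_shift(left, right, max_shift=3):
    """Find the integer shift minimizing the mean SAD between signals."""
    def sad(s):
        pairs = [(left[i], right[i + s]) for i in range(len(left))
                 if 0 <= i + s < len(right)]
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=sad)

scene = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
left = scene
right = scene[2:] + [0, 0]       # defocus shifts the right image by 2
print(best_shift(left, right))   # -2: drive focus until the shift is zero
```

Because every pixel in a 2x2 OCL design sits under a shared lens, every pixel can contribute such left/right (and up/down) pairs, which is what "phase differences can be detected across all pixels" buys over sparse dedicated PDAF pixels.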

Sony Quietly Acquires Insightness

As mentioned in the comments, the Zurich-based event-based sensor startup Insightness is now part of the Sony Semiconductor Solutions Group.


A few slides about Insightness:

Xiaomi Under-Display Selfie Camera Patent Application

CnTechPost noticed Xiaomi patent application US20190369422 "Display Structure and Electronic Equipment" by Zhihui Zeng, Anyu Liu, Lei Tang, Zhongsheng Jiang, Shaoxing Hu, and Chengfang Sun.

"...there is provided a display structure, which includes: a light adjusting component, where an operating state of the light adjusting component includes a light transmitting state and a polarization state, and the light adjusting component includes a first region and a second region which are independently controllable; and a display screen including a plurality of independently controllable pixels. The light adjusting component is located at a light emitting side of the display screen, and when the first region is in the light transmitting state, the pixels that are in the display screen and correspond to the first region are disabled to allow light emitted from the first region to penetrate through the display screen."

On the figures below,
  • the reference numeral 1 indicates a display structure;
  • the reference numeral 11 indicates a display screen;
  • the reference numeral 12 indicates a light adjusting component;
  • the reference numeral 2 indicates a lens.

Go to the original article...

Yole on Disposable Medical Sensor Revolution

Image Sensors World        Go to the original article...

Yole Développement's report "Disposable image sensors: a revolution for microscopy and next-generation sequencing" states that the image sensor market for microscopy & NGS will show impressive growth: +18% CAGR between 2018 and 2024 (in volume).
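For scale, an 18% CAGR sustained over the six years from 2018 to 2024 compounds to roughly a 2.7x increase in volume:

```python
def compound_growth(cagr, years):
    # Total growth multiple implied by a constant annual growth rate
    return (1 + cagr) ** years

# +18% CAGR over 2018-2024 (6 annual steps) ~ 2.7x volume growth
multiple = compound_growth(0.18, 6)  # ~2.70
```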

“Microscopy & NGS markets are undergoing enormous technological changes,” announces Marjorie Villien, Technology & Market Analyst at Yole Développement (Yole). “These innovations are opening the way for new business opportunities, especially within the camera image sensors industry.”

Indeed, one of the most remarkable changes is the introduction of disposable image sensors within the cameras for microscopy & NGS. Cameras are key elements in the microscopy and NGS space.

“The main trend in optical microscopy is to attain higher resolution, as well as faster acquisition and higher sensitivity for quicker and better diagnostics, and real time imaging of living organisms,” explains Marjorie Villien from Yole. “CCD is the main image sensor technology used today, but CMOS is gaining market share due to an increasing need for high-speed image acquisition.”

However, this trend towards better imaging is counterbalanced by another trend – one that leans towards portability and use of microscopy at the point of care. These systems are sleeker and cheaper, and deliver microscopy results directly to the caregiver.

This is also the case for NGS. Two very different trends are discernible: one towards higher throughput with very expensive, bulky equipment; and another towards lower throughput, with cheaper equipment offering a smaller footprint and wide availability.
Illumina, the optical NGS market leader with more than 80% market share, is a good example. The company has a diverse product portfolio of mid- to high-end systems, but recently launched a more affordable, lower-throughput system – the iSeq100. This follows the trend towards commoditization of NGS. The iSeq100 no longer integrates optical systems in the instrument, but uses a disposable image sensor directly inside the flow cell, which is a game-changer in the NGS market. Indeed, this makes the instrument much more affordable, enabling Illumina to place more systems and therefore sell more consumables, which it can make cheaper because of increased volumes.
This trend is also seen with BGI, Illumina’s Chinese competitor, which recently announced a benchtop, optics-free NGS instrument running CMOS chips.

Go to the original article...

OmniVision Unveils 8.3MP Automotive Sensors With LED Flicker Mitigation and 140dB HDR

Image Sensors World        Go to the original article...

PRNewswire: OmniVision announced the first two members of its new automotive sensor platform—the 8MP, front-view OX08A and OX08B. The high-resolution OX08A features HDR, while the pinout-compatible OX08B adds LED flicker mitigation (LFM), enabled by the sensor’s on-chip HALE (HDR and LFM engine) combination algorithm. The new platform also integrates ASIL-C features.

“These new image sensors utilize OmniVision’s dual conversion gain (DCG) technology to achieve 82dB dynamic range on the first exposure, whereas competitors’ image sensors only provide a dynamic range of 60dB or less. Unlike DCG, the competing method, known as staggered HDR, relies on additional passes that introduce motion artifacts and diminish range, especially in low light,” said Celine Baron, staff automotive product marketing manager at OmniVision. “Additionally, OmniVision’s 3D stacking technology allowed us to integrate our unique HALE algorithm into the OX08B. The result is that this sensor platform provides an industry-leading 140dB HDR, along with the best LFM performance and high 8MP resolution for superior front-view captures, regardless of external lighting conditions.”
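Sensor dynamic range in dB is 20·log10 of the ratio between the largest and smallest resolvable signals, so the quoted figures correspond to large linear ratios. A quick check (the electron counts below are illustrative, not OmniVision's actual specifications):

```python
import math

def dynamic_range_db(max_signal, noise_floor):
    # DR in dB = 20 * log10(largest / smallest resolvable signal)
    return 20 * math.log10(max_signal / noise_floor)

# 82 dB corresponds to a ~12,600:1 linear ratio;
# 140 dB corresponds to 10,000,000:1
ratio_82db = 10 ** (82 / 20)    # ~12,589
ratio_140db = 10 ** (140 / 20)  # 10,000,000
```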

The new 1/1.8” optical format sensors have a 2.1um pixel with 4-cell LFM technology in the PureCel Plus-S stacked architecture. The OX08A and OX08B image sensors are both planned to be AEC-Q100 Grade 2 certified.

“Yole Développement’s recent technology and market research ‘Imaging for Automotive 2019’ confirms the accelerated market pull for viewing and ADAS applications at a 13.7% CAGR between 2018 and 2024,” asserted Pierre Cambou, principal analyst at Yole. “OmniVision’s new automotive CIS platform includes key features such as HDR and LFM, and is enabled by a stacked semiconductor approach. Introducing such exciting technology to its automotive lineup allows for on-chip integration that reduces BOM costs while providing a high level of performance and features in a very compact package, indeed much in sync with current market expectations.”

Go to the original article...

Mobile CIS Market Decline Predicted

Image Sensors World        Go to the original article...

IFNews quotes a Sigmaintell forecast saying that the mobile CIS market will start to shrink after peaking in 2021:


"With continuous technology innovation and upgrades and the promotion of 5G networks, multi-camera phones will gain more video shooting functions in 2020. Starting from the second half of 2020, multi-camera + ToF ranging technology is expected to gradually enhance the background-blur effect. In addition, rear ToF combined with AI algorithms has the opportunity to improve the accuracy of indoor navigation, which is also a key focus of technological development. Therefore, Sigmaintell believes that the mainstream rear camera will develop in the direction of wide-angle main camera + video shooting + large telephoto + ToF.

The upgrade to 48M and higher resolutions is accelerating, with a market share of about 9% in the third quarter. Among these, Sony accounts for approximately 41%, Samsung for approximately 56%, and OV for approximately 3%. Due to the accelerated adoption of 48M and insufficient wafer capacity for high-resolution camera sensors, Sigmaintell expects supply shortages from Q4 to Q1 2019.

According to Sigmaintell data, global shipments of 48M camera sensors will exceed 450 million units in 2020.
"

Go to the original article...

Sony Presents its Production Quality Control System

Image Sensors World        Go to the original article...

IFNews quotes Mynavi's review of Sony's invited presentation at the 2019 AEC/APC Symposium Asia on "The manufacturing and future of Sony CMOS Image Sensor."

"Sony aims to become a smart factory and is working on the following seven projects.
  1. Intelligent manufacturing equipment (utilization of edge computing) = monitors the sensor output of all manufacturing equipment, automatically detects small fluctuations by machine learning, and applies feedback and feed-forward control.
  2. Value chain (factory production management based on market surveys, demand forecasts, and collected customer information)
  3. Intelligent MES (Manufacturing Execution System)
  4. Innovative energy system
  5. Integrated IT system
  6. Smart purchasing activities for parts and materials
  7. Smart engineering system (use of big data)
...a number of low-priced surveillance cameras are installed in the manufacturing process, and in combination with AI, efforts are being made to monitor process abnormalities in real time."
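The "automatically detects small fluctuations" idea in item 1 can be as simple as flagging equipment sensor readings that drift outside a rolling statistical baseline. A minimal sketch using a rolling z-score test (an illustration of the general technique, not Sony's actual method):

```python
import numpy as np

def detect_anomalies(readings, window=50, threshold=4.0):
    # Flag samples deviating from the rolling baseline by more than
    # `threshold` standard deviations of the preceding `window` samples.
    readings = np.asarray(readings, dtype=float)
    flags = np.zeros(len(readings), dtype=bool)
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = base.mean(), base.std()
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags
```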


The same Symposium also published a 2015 Sony paper on Vth variations.

Go to the original article...

More About New Snapdragons Imaging Features

Image Sensors World        Go to the original article...

Tech2 publishes photos from the Qualcomm Summit held on Dec. 3-5, 2019 in Maui, Hawaii. A few interesting pictures are below; more are in the presentation by Judd Heape, Qualcomm camera products manager:

Snapdragon 865 ISP development took 3 years:


It is not clear what kind of sub-1mW image processing it can perform:


Now, Spectra ISP supports quad CFA processing at the hardware level. No need to convert it to the Bayer pattern:


A few other slides:


Qualcomm also announces Snapdragon XR2 AR/VR Platform supporting 7 cameras on a headset:

"With XR, for the first time a user can be virtually teleported to a new environment. To do this accurately and efficiently, the Snapdragon XR2 introduces support for seven (7) concurrent cameras and a custom computer vision processor. Multiple concurrent cameras enable real-time and highly accurate tracking of the head, lips and eyes together with 26-point skeletal hand tracking. Computer vision provides highly efficient scene understanding and 3D reconstruction. Together, these features allow users to be transported to new environments where they can intuitively interact within a digital world."


Even 7c and 8c compute platforms have quite impressive camera specs:


Qualcomm's competitor MediaTek announced its 5G platform Dimensity. The platform utilizes a big-little processing approach featuring "the world’s first 5 core ISP in a unique 3 big plus 2 small design. It offers the best performance to power efficiency index by matching the right size sensor to the most appropriate ISP."

Go to the original article...

Polarization Sensitive Thermal Imager Finds Military Use

Image Sensors World        Go to the original article...

US Army Combat Capabilities Development Command's Army Research Laboratory and Polaris Sensor Technologies propose to use polarization sensitive thermal camera to find targets camouflaged in natural clutter.

A camera displays the targets (in green) using conventional LWIR thermal imagery (left), raw polarimetric imagery (center), and the combined thermal and polarimetric imagery (right):
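Polarimetric imagery is typically derived from intensity measurements behind polarizers at several orientations; the degree of linear polarization (DoLP) highlights smooth man-made surfaces against natural clutter, which tends to depolarize light. A textbook sketch of the computation (not Polaris's actual processing pipeline):

```python
import numpy as np

def dolp(i0, i45, i90, i135):
    # Linear Stokes parameters from four polarizer orientations
    s0 = (i0 + i45 + i90 + i135) / 2.0  # total intensity
    s1 = i0 - i90                       # horizontal vs. vertical
    s2 = i45 - i135                     # +45 vs. -45 degrees
    # Degree of linear polarization, in [0, 1]
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
```

Fully polarized light gives a DoLP of 1, unpolarized light gives 0; targets with partially polarizing surfaces stand out in between.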

Go to the original article...

Brookman Demos its Night Vision Sensor

Image Sensors World        Go to the original article...

Brookman publishes a demo video captured by its High Sensitivity BT200C CMOS sensor evaluation board.
  • Sensor : BT200C
  • Optical format : 2/3 Type
  • Pixel size : 5um x 5um
  • Active pixel area : 1920x1080
  • ADC resolution : 19-bit ADC (BT FI-Cyclic type)



Go to the original article...

Samsung CIS Business Data

Image Sensors World        Go to the original article...

Paxnet publishes a Samsung business overview with interesting data on the company's image sensor operation (DDI stands for Display Driver IC, AP for Application Processor):

  • Samsung's Non-Memory Division revenue breakdown by product: Foundry 41%, CIS 19%, DDI 15%, AP 23%
  • Cumulative growth since 2010: Foundry +1,053%, CIS +457%, DDI +60%, AP +229%
  • Samsung's Non-memory Growth Engine: Foundry and CIS.
    Cumulative growth forecast for 2018-2021: Foundry +63%, CIS +87%, DDI -3%, AP +14%
  • CIS Division sales: KRW 453.2 billion in 2010 → KRW 2.5 trillion in 2018 → KRW 4.1 trillion in 2021
  • Samsung's CIS capacity is starting to approach Sony's
  • Samsung is currently producing low and mid-priced CIS on its 8-inch line
  • In addition to the existing 12-inch plant, the company is expanding its market share by expanding the S4, a CIS-only plant.
  • Manufacturing process technology is similar to DRAM, so CIS expansion will continue using aging DRAM equipment
  • Expected to deploy CIS conversion of Line 11 in 2018 and Line 13 in 2020
  • SK Hynix is also expected to expand 12-inch CIS lines in its domestic plants

Go to the original article...

Snapdragon 865 & 765 Imaging Capabilities

Image Sensors World        Go to the original article...

Qualcomm announces Snapdragon 865 mobile platform with upgraded camera features:

"Gigapixel Speed ISP: The Snapdragon 865’s ISP operates at staggering speeds of up to 2 gigapixels per second and provides brand-new camera features and capabilities. You can capture in 4K HDR with over a billion shades of color, capture 8K video, or snap massive 200-megapixel photos [Old Snapdragon 845 announced in 2017, as well as the newer 855 and 855+ platforms already supported 192MP]. You can also take advantage of the gigapixel speeds to slow things down and capture every millisecond of detail with unlimited high-definition slow-motion video capture at 960 fps. And now, for the first time ever on a mobile platform, Dolby Vision for video capture creates brilliant HDR footage that’s primed and ready for the big screen. In tandem with the 5th generation Qualcomm AI Engine, the gigapixel speed ISP can quickly and intelligently identify different backgrounds, people, and objects, so they can be treated individually for a truly customized photo."




Qualcomm also announces a mid-range Snapdragon 765 platform with somewhat weaker but still impressive camera support:

Go to the original article...

PMD to Present its 5um ToF Pixels

Image Sensors World        Go to the original article...

PMD and Infineon are going to unveil their 5um ToF pixel sensor at CES 2020:

"Our new VGA 3D imager IRS2877C is the highest resolution, most flexible and robust depth sensor we have ever developed. With our new 5µm pmd pixel core, we offer VGA resolution to give your applications more detailed 3D data – e.g. for secure FaceID, AR applications, or enhanced photography. We also have incorporated additional on-chip functionalities to enable smaller and cheaper 3D modules."


Thanks to RW for the link!

Go to the original article...

ON Semi 0.3MP Sensor Wins World Electronics Achievement Award

Image Sensors World        Go to the original article...

ON Semi reports that it has won the Innovative Product of the Year award in the sensor category of the World Electronics Achievement Awards (WEAA) 2019 for its recently announced ARX3A0 CMOS sensor.

The 0.3MP sensor features:
  • 1/10th-inch Optical Format
  • Super-Low Power Mode and Motion Detection Function with Smart Wake
  • High Frame Rate of 360fps
  • Ultra-Fast Electronic Rolling Shutter
  • Small Die Size 3.35 mm x 3.40 mm
  • 560 (H) x 560 (V) [1:1] VGA resolution in a square format
  • 2.2 µm BSI non-stacked pixel
  • Monochrome with NIR+ implementation
  • Low Power consumption:
    less than 19 mW at 30fps
    less than 82 mW at 120fps
    less than 140 mW at 360fps
    less than 3.2 mW standby
  • Smart wake motion detection function
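Dividing the quoted power numbers by frame rate shows the energy cost per captured frame, which is what matters for battery-powered always-on use; notably, the 360fps mode spends the least energy per frame.

```python
def energy_per_frame_mj(power_mw, fps):
    # Average energy spent per captured frame, in millijoules
    # (mW / fps = mJ per frame)
    return power_mw / fps

e_30 = energy_per_frame_mj(19, 30)     # ~0.63 mJ/frame
e_120 = energy_per_frame_mj(82, 120)   # ~0.68 mJ/frame
e_360 = energy_per_frame_mj(140, 360)  # ~0.39 mJ/frame
```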


Go to the original article...

Gpixel Announces Red Fox NIR-Enhancing Technology

Image Sensors World        Go to the original article...

Gpixel announces a NIR-enhanced version of its GMAX0505 sensor, a 26MP global shutter CMOS sensor in a square 1.1” optical format. The GMAX0505 and the new NIR-enhanced GMAX0505RF are part of Gpixel’s 2.5 µm pixel family of pin-compatible C-mount sensors for industrial and scientific applications.

Typically, NIR sensitivity is enhanced by increasing the thickness of the sensor’s epitaxial layer. However, this impacts MTF and limits the effective resolution of the sensor. The new Red Fox process modifies the sensor’s sensitive layer to achieve an optimal balance between NIR sensitivity and MTF.

The GMAX0505RF achieves a QE of almost 34% at 850 nm and 14% at 940 nm. In comparison, based on publicly available EMVA test data, some competitive image sensors offer a QE of only 20% at 850 nm. This equates to a 70% improvement in QE performance.
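The 70% figure is the relative improvement of the new QE over the competitor's at 850 nm:

```python
def relative_improvement(new, reference):
    # Fractional improvement of the new value over the reference
    return (new - reference) / reference

# 34% QE vs. a 20% QE competitor at 850 nm -> 70% relative improvement
gain_850 = relative_improvement(34, 20)  # 0.70
```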

As the Red Fox process can be applied to any GMAX product, Gpixel anticipates that other GMAX RF products will be released in the near future. Engineering samples of the GMAX0505RF are available now for evaluation.

Go to the original article...

Image Sensors at ISSCC 2020

Image Sensors World        Go to the original article...

ISSCC 2020, to be held in San Francisco on Feb. 16-20, has published its agenda, with a heavy emphasis on ToF imaging. The biggest surprise is a joint Prophesee and Sony paper on an event-based sensor:
  • A 240×192Pixel 10fps 70klux 225m-Range Automotive LiDAR SoC Using a 40ch 0.0036mm2 Voltage/Time Dual-Data-Converter-Based AFE
    S. Kondo, H. Kubota, H. Katagiri, Y. Ota, M. Hirono, T. T. Ta, H. Okuni, S. Ohtsuka, Y. Ojima, T. Sugimoto, H. Ishii, K. Yoshioka, K. Kimura, A. Sai, N. Matsumoto,
    Toshiba, Japan
  • A 1200×900 6μm 450fps Geiger-Mode Vertical Avalanche Photodiodes CMOS Image Sensor for a 250m Time-of-Flight Ranging System Using Direct-Indirect-Mixed Frame Synthesis with Configurable-Depth-Resolution Down to 10cm
    T. Okino, S. Yamada, Y. Sakata, S. Kasuga, M. Takemoto, Y. Nose, H. Koshida, M. Tamaru, Y. Sugiura, S. Saito, S. Koyama, M. Mori, Y. Hirose, M. Sawada, A. Odagawa, T. Tanaka,
    Panasonic, Nagaokakyo, Japan
  • An Up-to-1400nm 500MHz Demodulated Time-of-Flight Image Sensor on a Ge-on-Si Platform
    C-L. Chen, S-W. Chu, B-J. Chen, Y-F. Lyu, K-C. Hsu, C-F. Liang, S-S. Su, M-J. Yang, C-Y. Chen, S-L. Cheng, H-D. Liu, C-T. Lin, K. P. Petrov, H-W. Chen, K-C. Chu, P-C. Wu, P-T. Huang, N. Na, S-L. Chen,
    Artilux, Hsinchu, Taiwan
  • A Dynamic Pseudo 4-Tap CMOS Time-of-Flight Image Sensor with Motion Artifact Suppression and Background Light Cancelling Over 120klux
    D. Kim, S. Lee, D. Park, C. Piao, J. Park, Y. Ahn, K. Cho, J. Shin, S. M. Song, S-J. Kim, J-H. Chun, J. Choi,
    Sungkyunkwan University, Suwon,
    Ulsan National Institute of Science and Technology, Ulsan, Korea
    Zeeann, Hanam, Korea
  • A 2.1e-Temporal Noise and -105dB Parasitic Light Sensitivity Backside-Illuminated 2.3μm-Pixel Voltage-Domain Global Shutter CMOS Image Sensor Using High-Capacity DRAM Capacitor Technology
    J-K. Lee, S. S. Kim, I-G. Baek, H. Shim, T. Kim, T. Kim, J. Kyoung, D. Im, J. Choi, K. Cho, D. Kim, H. Lim, M-W. Seo, J. Kim, D. Kwon, J. Song, J. Kim, M. Jang, J. Moon, H. Kim, C. K. Chang, J. Kim, K. Koh, H. Lim, J. Ahn, H. Hong, K. Lee, H-K. Kang,
    Samsung Electronics, Hwaseong, Korea
  • A 1/2.65in 44Mpixel CMOS Image Sensor with 0.7µm Pixels Fabricated in Advanced Full-Depth Deep-Trench Isolation Technology
    H. Kim, J. Park, I. Joe, D. Kwon, J. H. Kim, D. Cho, T. Lee, C. Lee, H. Park, S. Hong, C. Chang, J. Kim, H. Lim, Y. Oh, Y. Kim, S. Nah, S. Jung, J. Lee, J. Ahn, H. Hong, K. Lee, H-K. Kang,
    Samsung Electronics, Hwaseong, Korea
  • A 132dB Single-Exposure-Dynamic-Range CMOS Image Sensor with High Temperature Tolerance
    Y. Sakano, T. Toyoshima, R. Nakamura, T. Asatsuma, Y. Hattori, T. Yamanaka, R. Yoshikawa, N. Kawazu, T. Matsuura, T. Iinuma, T. Toya, T. Watanabe, A. Suzuki, Y. Motohashi, J. Azami, Y. Tateshita, T. Haruta,
    Sony Semiconductor, Japan
  • A 0.50erms Noise 1.45μm-Pitch CMOS Image Sensor with Reference-Shared In-Pixel Differential Amplifier at 8.3Mpixel 35fps
    M. Sato, Y. Yorikado, Y. Matsumura, H. Naganuma, E. Kato, T. Toyofuku, A. Kato, Y. Oike,
    Sony Semiconductor Solutions, Atsugi, Japan
  • A 0.8V Multimode Vision Sensor for Motion and Saliency Detection with Ping-Pong PWM Pixel
    T-H. Hsu, Y-K. Chen, J-S. Wu, W-C. Ting, C-T. Wang, C-F. Yeh, S-H. Sie, Y-R. Chen, R-S. Liu, C-C. Lo, K-T. Tang, M-F. Chang, C-C. Hsieh,
    National Tsing Hua University, Hsinchu, Taiwan
  • A 1280×720 Back-Illuminated Stacked Temporal Contrast Event-Based Vision Sensors with 4.86μm Pixels, 1.066GEPS Readout, Programmable Event-Rate Controller and Compressive Data-Formatting Pipeline
    T. Finateu, A. Niwa, D. Matolin, K. Tsuchimoto, A. Mascheroni, E. Reynaud, P. Mostafalu, F. Brady, L. Chotard, F. LeGoff, H. Takahashi, H. Wakabayashi, Y. Oike, C. Posch,
    PROPHESEE, Paris, France,
    Sony Semiconductor Solutions, Atsugi, Japan,
    Sony Electronics, Rochester, NY

Go to the original article...

Samsung 2017-19 Achievements

Image Sensors World        Go to the original article...

Samsung publishes a half-year report listing its recent achievements:

Go to the original article...

LiDAR News: Quanergy, Innoviz

Image Sensors World        Go to the original article...

MarketWatch: Quanergy CEO says that Elon Musk's criticism of Lidar 'makes no sense':



Innoviz decided to put its non-automotive-qualified InnovizPro LiDAR into production. The automotive-qualified version, named InnovizOne, is expected to start sampling next year.

Go to the original article...
