“NIKKOR – The Thousand and One Nights (Tale 84) has been released”

Nikon | Imaging Products        Go to the original article...

Go to the original article...

In-pixel compute: IEEE Spectrum article and Nature Materials paper

Image Sensors World        Go to the original article...

A paper by Dodda et al. from a research group in the Materials Science and Engineering department at Pennsylvania State University was recently published in Nature Materials.

Link: https://www.nature.com/articles/s41563-022-01398-9

Active pixel sensor matrix based on monolayer MoS2 phototransistor array

Abstract:

In-sensor processing, which can reduce the energy and hardware burden for many machine vision applications, is currently lacking in state-of-the-art active pixel sensor (APS) technology. Photosensitive and semiconducting two-dimensional (2D) materials can bridge this technology gap by integrating image capture (sense) and image processing (compute) capabilities in a single device. Here, we introduce a 2D APS technology based on a monolayer MoS2 phototransistor array, where each pixel uses a single programmable phototransistor, leading to a substantial reduction in footprint (900 pixels in ∼0.09 cm2) and energy consumption (100s of fJ per pixel). By exploiting gate-tunable persistent photoconductivity, we achieve a responsivity of ∼3.6 × 107 A W−1, specific detectivity of ∼5.6 × 1013 Jones, spectral uniformity, a high dynamic range of ∼80 dB and in-sensor de-noising capabilities. Further, we demonstrate near-ideal yield and uniformity in photoresponse across the 2D APS array.

 


Fig. 1: 2D APS. a, 3D schematic (left) and optical image (right) of a monolayer MoS2 phototransistor integrated with a programmable gate stack. The local back-gate stacks, comprising atomic layer deposition grown 50 nm Al2O3 on sputter-deposited Pt/TiN, are patterned as islands on top of an Si/SiO2 substrate. The monolayer MoS2 used in this study was grown via an MOCVD technique using carbon-free precursors at 900 °C on an epitaxial sapphire substrate to ensure high film quality. Following the growth, the film was transferred onto the TiN/Pt/Al2O3 back-gate islands and subsequently patterned, etched and contacted to fabricate phototransistors for the multipixel APS platform. b, Optical image of a 900-pixel 2D APS sensor fabricated in a crossbar architecture (left) and the corresponding circuit diagram showing the row and column select lines (right).

Fig. 2: Characterization of monolayer MoS2. a, Structure of MoS2 viewed down its c axis with atomic-resolution HAADF-STEM imaging at an accelerating voltage of 80 kV. Inset: the atomic model of 2H-MoS2 overlayed on the STEM image. b, SAED of the monolayer MoS2, which reveals a uniform single-crystalline structure. c,d, XPS of Mo 3d (c) and S 2p (d) core levels of monolayer MoS2 film. e,f, Raman spectra (e) and corresponding spatial colourmap of peak separation between the two Raman active modes, E12g and A1g, measured over a 40 µm × 40 µm area, for as-grown MoS2 film (f). g,h, PL spectra (g) and corresponding spatial colourmap of the PL peak position (h), measured over the same area as in f. The mean peak separation was found to be ~20.2 cm−1 with a standard deviation of ~0.6 cm−1 and the mean PL peak position was found to be at ~1.91 eV with a standard deviation of ~0.002 eV. i, Map of the relative crystal orientation of the MoS2 film obtained by fitting the polarization-dependence of the SHG response shown in j, which is an example polarization pattern obtained from a single pixel of i by rotating the fundamental polarization and collecting the harmonic signal at a fixed polarization.
 
Fig. 3: Device-to-device variation in the characteristics of MoS2 phototransistors. a, Transfer characteristics, that is, source to drain current (IDS) as a function of the local back-gate voltage (VBG), at a source-to-drain voltage (VDS) of 1 V and measured in the dark for 720 monolayer MoS2 phototransistors (80% of the devices that constitute the vision array) with channel lengths (L) of 1 µm and channel widths (W) of 5 µm. b–e, Device-to-device variation is represented using histograms of electron field-effect mobility values (μFE) extracted from the peak transconductance (b), current on/off ratios (rON/OFF) (c), subthreshold slopes (SS) over three orders of magnitude change in IDS (d) and threshold voltages (VTH) extracted at an isocurrent of 500 nA µm−1 for 80% of devices in the 2D APS array (e). f, Pre- and post-illumination transfer characteristics of 720 monolayer MoS2 phototransistors after exposure to white light with Pin = 20 W m−2 at Vexp = −3 V for τexp = 1 s. g–j, Histograms of dark current (IDARK) (green) and photocurrent (IPH) (yellow) (g), the ratio of post-illumination photocurrent to dark current (rPH) (h), responsivity (R) (i) and detectivity (D*) (j), all measured at VBG = −1 V.

Fig. 4: HDR and spectral uniformity. a–c, The post-illumination persistent photocurrent (IPH) read out using VBG = 0 V and VDS = 1 V under different exposure times (τexp) is plotted against Pin for Vexp = −2 V at red (a), green (b) and blue (c) wavelengths. Clearly, the 2D APS demonstrates HDR for all wavelengths investigated. d–f, However, the 2D APS displays spectral non-uniformity in the photoresponse, which can be adjusted by exploiting gate-tunable persistent photoconductivity, that is, by varying Vexp. This is shown by plotting IPH against Pin for different Vexp at red (d), green (e) and blue (f) wavelengths.

 Fig. 5: Photodetection metrics. a–c, Responsivity (R) as a function of Vexp and Pin for τexp = 100 ms for red (a), green (b) and blue (c) wavelengths. R increases monotonically with the magnitude of Vexp. d, Transfer characteristics of a representative 2D APS in the dark and post-illumination at Vexp = −6 V with Pin = 0.6 W m−2 for τexp = 200 s and VDS = 6 V. e, R as a function of VBG. For VDS = 6 V and VBG = 5 V we extract an R value of ~3.6 × 107 A W−1. f, Specific detectivity (D*) as a function of VBG at different VDS. At lower VBG, both R and Inoise, that is, the dark current obtained from d, are low, leading to lower D*, whereas at higher VBG both R and Inoise are high, also leading to lower D*. Peak D* can reach as high as ~5.6 × 1013 Jones. g, Energy consumption per pixel (E) as a function of Vexp.
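As a back-of-envelope companion to the metrics in Fig. 5, the standard photodetector definitions can be sketched as follows (the numeric values are illustrative, not taken from the paper):

```python
import math

def responsivity(i_ph, p_in, area):
    """R = I_ph / (P_in * A) in A/W: photocurrent per watt of light
    on a pixel of area `area` (m^2) at irradiance p_in (W/m^2)."""
    return i_ph / (p_in * area)

def detectivity(r, area_cm2, bandwidth, i_noise):
    """Specific detectivity D* = R * sqrt(A * df) / I_noise, in Jones
    (cm Hz^0.5 W^-1); area in cm^2 by convention."""
    return r * math.sqrt(area_cm2 * bandwidth) / i_noise

def dynamic_range_db(p_max, p_min):
    """Dynamic range of usable optical input, in dB."""
    return 20 * math.log10(p_max / p_min)

# Four orders of magnitude of input power corresponds to ~80 dB,
# matching the paper's quoted HDR figure:
print(dynamic_range_db(1e4, 1e0))  # ~80 dB
```

This also makes the trade-off in Fig. 5f concrete: D* grows with R but shrinks with the noise (dark) current, so it peaks at an intermediate gate voltage.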

Fig. 6: Fast reset and de-noising. a, After the read out, each pixel can be reset by applying a reset voltage (Vreset) for time periods as low as treset = 100 µs. b, The conductance ratio (CR), defined as the ratio between the conductance values before and after the application of a reset voltage, is plotted against different Vreset. c, Energy expenditure for reset operations under different Vreset. d, Heatmaps of conductance (G) measured at VBG = 0 V from the image sensor with and without Vreset when exposed to images under noisy conditions. Clearly, application of Vreset helps in de-noising image acquisition.
 

This work was covered in the IEEE Spectrum magazine in an article titled "New Pixel Sensors Bring Their Own Compute: Atomically thin devices that combine sensing and computation also save power".

Link: https://spectrum.ieee.org/active-pixel-sensor

In the new study, the researchers sought to add in-sensor processing to active pixel sensors to reduce their energy and size. They experimented with the 2D material molybdenum disulfide, which is made of a sheet of molybdenum atoms sandwiched between two layers of sulfur atoms. Using this light-sensitive semiconducting material, they aimed to combine image-capturing sensors and image-processing components in a single device.

The scientists developed a 2D active pixel sensor array in which each pixel possessed a single programmable phototransistor. These light sensors can each perform their own charge-to-voltage conversion without needing any extra transistors.

The prototype array contained 900 pixels in 9 square millimeters, with each pixel about 100 micrometers across. In comparison, state-of-the-art CMOS sensors from Omnivision and Samsung have reached pixel sizes of about 0.56 µm. However, commercial CMOS sensors also require additional circuitry to detect low light levels, increasing their overall area, which the new array does not.
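The quoted ~100 µm pixel size follows directly from those array numbers; a quick sanity check (simple arithmetic, not from the article):

```python
import math

array_area_mm2 = 9.0   # 900 pixels in ~9 mm^2 (0.09 cm^2)
num_pixels = 900

# Assuming square pixels tiling the array:
pitch_um = math.sqrt(array_area_mm2 / num_pixels) * 1000  # mm -> µm
print(round(pitch_um))  # → 100
```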



Go to the original article...

VoxelSensors and OQmented collaborate on laser scanning-based 3D perception to blend the physical with digital worlds

Image Sensors World        Go to the original article...

https://www.globenewswire.com/news-release/2022/12/20/2576935/0/en/VoxelSensors-and-OQmented-collaborate-on-laser-scanning-based-3D-perception-to-blend-the-physical-with-digital-worlds.html

BRUSSELS, Belgium and ITZEHOE, Germany, Dec. 20, 2022 (GLOBE NEWSWIRE) -- VoxelSensors, the inventor of Switching Pixels®, a revolutionary 3D perception technology, and OQmented, the technology leader in MEMS-based AR/VR display and 3D sensing solutions, have entered a strategic partnership. The collaboration focuses on the system integration and commercialization of a high-performance 3D perception system for AR/VR/MR and XR devices. Both companies will demonstrate this system and their technologies during CES 2023 in Las Vegas.


Switching Pixels® resolves major challenges in 3D perception for AR/VR/MR/XR devices. The solution is based on laser beam scanning (LBS) technology to deliver accurate and reliable 3D sensing without compromising on power consumption, data latency or size. VoxelSensors’ key patented technologies ensure optimal operation under any lighting condition and with concurrent systems. Their new sensor architecture provides asynchronous tracking of an active light source or pattern. Instead of acquiring frames, each pixel within the sensor array only generates an event upon detecting active light signals, with a repetition rate of up to 100 MHz.


This system is enabled through OQmented’s unique Lissajous scan pattern: in contrast to raster scanning which works line by line to complete a frame, the Lissajous trajectories scan much faster and are created very power efficiently. They can capture complete scenes and fast movements considerably quicker and require less data processing. That makes this particular technique essential for the low latency and the power efficiency of the combined perception system.
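For intuition, a Lissajous trajectory is just two resonant sinusoids at slightly different frequencies driving the two mirror axes; a minimal sketch (the frequencies below are assumed for illustration, not OQmented's actual mirror parameters):

```python
import math

def lissajous(t, fx=20_000.0, fy=19_000.0, phase=math.pi / 2):
    """Normalized mirror deflection (x, y) at time t for a Lissajous scan:
    two resonant axes driven at nearby frequencies fx, fy (Hz)."""
    x = math.sin(2 * math.pi * fx * t + phase)
    y = math.sin(2 * math.pi * fy * t)
    return x, y

# Sample the trajectory: unlike a line-by-line raster, it sweeps the
# whole field of view quickly and densifies coverage over time.
points = [lissajous(i * 1e-6) for i in range(1000)]
```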


“The partnership with VoxelSensors is a great opportunity to unlock the potential of Lissajous laser beam scanning for 3D perception in lightweight Augmented Reality glasses,” said Ulrich Hofmann, co-CEO/CTO and co-founder of OQmented. “We are proud to deliver the most efficient scanning solution worldwide which enables the amazing products of our partner, bringing us one step closer to our goal of allowing product developers to build powerful but also stylish AR glasses.”


“At VoxelSensors, we wanted to revolutionize the perception industry. For too long, innovation in our space has focused on data processing, while there is so much efficiency to gain when working on the boundaries of photonics and physics. Combined with OQmented technology, we have the ability to transform the industry, enabling strong societal impact in multiple verticals, such as Augmented and Virtual Reality,” explains Johannes Peeters, founder and CEO of VoxelSensors. “Blending the physical and virtual worlds will create astonishing experiences for consumers and productivity gains in the enterprise world.”


This cooperation between two fabless deep tech semiconductor startups demonstrates Europe’s innovation capabilities in the race to produce next-generation technologies for AR/XR/VR and many other applications. These are crucial to Europe’s strategic objective of increasing its market share in semiconductors through key contributions of EU fabless companies as part of the European Chips Act.

Go to the original article...

ESPROS voted No. 1 optoelectronic company of 2022

Image Sensors World        Go to the original article...

https://www.espros.com/espros-voted-no-1-optoelectronic-company-of-2022/ 

The Swiss company has been voted No. 1 optoelectronic company of 2022 by the influential Semiconductor Review publication, which went so far as to say ESPROS is “shaping a new paradigm of Time of Flight technologies”, citing exceptional performance under full sunlight with moving objects and varying target reflectivity. ESPROS’ unique technology and its ability to help clients analyze an application and offer proven engineering solutions have ensured its growth as a custom ASIC chip manufacturer and 3D TOF module designer.

The company’s true system-on-chip TOF imager enables improved time-delayed imaging and fluorescence lifetime imaging outcomes.

Merging 3D imaging and optical sensors for mass applications requires very fast time-resolving capabilities plus high sensitivity in the NIR; conventional manufacturing processes are not robust enough to deal with background light, movement and varying reflectivity. That is where ESPROS has a major advantage, having developed a backside-illuminated imager that merges CCD and CMOS technology.

The ESPROS approach means expensive peripheral components such as FPGAs and A/D converters are not required, making ESPROS products both more cost-effective and compact. ESPROS Photonics offers a wide range of TOF chips and line imagers as well as sensor modules, using its proprietary OHC15L silicon imager technology. Meanwhile, its off-the-shelf reference-design 3D modules speed up a customer’s time to market.

Full article in Semiconductor Review available here: https://www.semiconductorreview.com/espros-photonics

Go to the original article...

MagikEye to Present Disruptive 3D Sensing with Invertible Light™ Image Sensor Technology at CES

Image Sensors World        Go to the original article...

From Businesswire: https://www.businesswire.com/news/home/20221220005152/en/MagikEye-to-Present-Disruptive-3D-Sensing-with-Invertible-Light%E2%84%A2-Image-Sensor-Technology-at-CES

STAMFORD, Conn.--(BUSINESS WIRE)--Magik Eye Inc. (www.magik-eye.com), an innovative 3D sensing company, will be holding demonstrations of its latest Invertible Light™ Technology (ILT) at the 2023 Consumer Electronics Show in Las Vegas, Nevada. ILT is a patented alternative to older Time of Flight and Structured Light solutions, enabling the smallest, fastest and most power-efficient 3D sensing method. At its essence, ILT uses a patent-protected regular dot projector pattern versus the random dot projection used by current Structured Light. This allows for transformative simplicity of design, compute and form factor. “We see that the simplicity of ILT is driving demand for automotive and smarter home use cases. As we see more use cases opening up for the robotics age that lies ahead, we envision a world where there is 3D everywhere with ILT,” said Takeo Miyazawa, Founder & CEO of MagikEye.

CES 2023 will take place in Las Vegas on Jan. 5-8, 2023. Attendees will experience new technologies from global brands, hear about the future of technology from thought leaders and collaborate face-to-face with other attendees. Live demonstrations of MagikEye’s latest ILT solutions for next-gen 3D sensing solutions will be held from January 5-8 at the Luxor Hotel. Demonstration times are limited and private reservations will be accommodated by contacting ces2023@magik-eye.com.

About Magik Eye Inc. www.magik-eye.com
Founded in 2015, Magik Eye Inc. has a family of 3D depth sensing solutions that support a wide range of applications for smartphones, robotics and surveillance. Magik Eye’s patent-protected technology is based on Invertible Light™, which enables the smallest, fastest & most power-efficient 3D sensing.


Go to the original article...

Yole webinar on SWIR applications for consumer markets

Image Sensors World        Go to the original article...

Yole published a webinar on potential mass-market applications of SWIR imaging.

Go to the original article...

LiDAR News: Quanergy Files for Bankruptcy

Image Sensors World        Go to the original article...

Coverage in Wall Street Journal [paywalled]: https://www.wsj.com/articles/sensor-startup-files-for-bankruptcy-10-months-after-spac-merger-11670975713

From Businesswire

Quanergy to Facilitate Sale of Business Through Voluntary Chapter 11 Process, Announces Leadership Changes

SUNNYVALE, Calif.--(BUSINESS WIRE)--Quanergy Systems, Inc. (OTC: QNGY) (“Quanergy” or the “Company”), a leading provider of LiDAR sensors and smart 3D solutions, today announced that the Company initiated an orderly sale process for its business. To facilitate the sale and maximize value, the Company filed for protection under Chapter 11 (“Chapter 11”) of the U.S. Bankruptcy Code (the “Bankruptcy Code”) in the United States Bankruptcy Court for the District of Delaware (the “Bankruptcy Court”) and intends to pursue a sale of the business under section 363 of the Bankruptcy Code.

Quanergy also announced today that Kevin Kennedy, Chief Executive Officer, will retire effective December 31, 2022, but will continue to serve as non-executive Chair of the Board of Directors. Mr. Kennedy will transition executive leadership to a newly appointed Chief Restructuring Officer and President, Lawrence Perkins.

“It has been my honor to serve as CEO at Quanergy for the past 2.5 years,” said Kevin Kennedy, Chief Executive Officer of Quanergy. “During this time, the company shifted our technology focus towards security and industrial applications which enabled the company to grow revenue by serving customer needs in a new marketplace. The Board and I have agreed that it is an appropriate time for me to transition day-to-day leadership to our capable newly appointed Chief Restructuring Officer. I will continue to provide guidance, continuity, and support as non-executive Board Chair.”

Mr. Perkins is the founder and Chief Executive Officer of SierraConstellation Partners, an interim management and advisory firm, which he founded in 2013. Mr. Perkins has served in a variety of senior-level positions, including interim CEO/President, Chief Restructuring Officer, board member, financial advisor, strategic consultant, and investment banker, to numerous private and public middle-market companies.

Prior to the filing of the Company’s Chapter 11 case, the Board of Directors and management evaluated a wide range of strategic alternatives to maximize value for all stakeholders. The Company also significantly reduced operating expenses and resolved significant patent litigation with Velodyne. Now with the protections afforded by the Bankruptcy Code, the Company intends to broaden its marketing efforts to potential purchasers interested in specific business segments or assets as well as continuing to seek a going concern sale of the business.

The Company expects to continue operations during the Chapter 11 process and seeks to complete an expedited sale process with Bankruptcy Court approval. To help fund and protect its operations, Quanergy intends to use available cash on hand along with normal operating cash flows to fund post-petition operations and costs in the ordinary course.

“Quanergy has made considerable efforts to address ongoing financial challenges stemming from volatile capital market conditions,” said Lawrence Perkins, Chief Restructuring Officer and President of Quanergy. “Despite these challenges, the Company has seen improving demand in the security, smart spaces, and industrial markets, and improvements in supply chain conditions. We are confident that Quanergy’s efforts have positioned the Company for a value-maximizing transaction during the Chapter 11 sale process. During the process, we will continue to prioritize the needs of our customers and I am thankful to the entire Quanergy team for their continued efforts and contributions to the business.”

The Company has filed customary motions with the Bankruptcy Court intended to allow Quanergy to maintain operations in the ordinary course including, but not limited to, paying employees and continuing existing benefits programs, meeting commitments to customers and fulfilling go-forward obligations, including vendor payments. Such motions are typical in the Chapter 11 process and Quanergy anticipates that they will be heard in the first few days of its Chapter 11 case.

For more information about the Company’s Chapter 11 case, including claims information, please visit https://cases.stretto.com/Quanergy or call our hotline at 855-613-0451 (for toll-free U.S. and Canada calls) or 949-889-0181 (for tolled international calls).

Cooley LLP is serving as counsel, Young Conaway Stargatt & Taylor LLP is serving as co-counsel, Raymond James & Associates, Inc. is serving as investment banker, and FTI Consulting is serving as financial advisor to Quanergy.

Go to the original article...

CES 2023 Award for Aeva Aeries II 4D LiDAR

Image Sensors World        Go to the original article...

From Businesswire


MOUNTAIN VIEW, Calif.--(BUSINESS WIRE)--Aeva® (NYSE: AEVA), a leader in next-generation sensing and perception systems, today announced that its Aeries™ II sensor has been named a CES® 2023 Innovation Awards Honoree. The prestigious CES Innovation Awards honor outstanding design and engineering in consumer technology products, and were given in advance of CES 2023.

The CES Innovation Award builds on growing recognition for Aeries II and its innovative 4D LiDAR™ technology, which were recently chosen as one of TIME’s Best Inventions of 2022.

“Our next-generation 4D LiDAR technology goes beyond legacy 3D LiDAR systems because of its unique instant velocity detection and long range performance capabilities, in addition to Ultra Resolution,” said Mina Rezk, Co-Founder and CTO at Aeva. “We are honored that Aeries II continues to receive further recognition with this CES Innovation Award because, put simply, we believe Aeva 4D LiDAR has the potential to change the game for passenger cars, commercial vehicles and robotaxis by making vehicle automation safer and more reliable.”

Aeva’s Aeries II 4D LiDAR sensor delivers breakthrough sensing and perception performance using Frequency Modulated Continuous Wave (FMCW) technology to directly detect the instant velocity of each point, in addition to precise 3D position at long range. Its capabilities go beyond legacy time-of-flight 3D LiDAR sensors to enable the next generation of driver assistance and autonomous vehicle capabilities, including:

  • Instant Velocity Detection: Directly measure velocity for each point of detection, in addition to 3D position, to perceive where things are, and precisely how fast they are moving.
  • Long Range Performance: Detect, classify and track objects such as vehicles, cyclists and pedestrians at long distances.
  • Ultra Resolution™: A real-time camera-level image providing up to 20 times the resolution of legacy time-of-flight LiDAR sensors.
  • Road Hazard Detection: Detect small objects on the roadway with greater confidence at up to twice the distance of legacy time-of-flight LiDAR sensors.
  • 4D Localization™: Per-point velocity data enables real-time vehicle motion estimation with six degrees of freedom to enable accurate vehicle positioning and navigation without the need for additional sensors, like IMU or GPS.
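The instant velocity capability above falls out of the FMCW measurement itself. A sketch of the textbook triangular-chirp relations (not Aeva's implementation; the wavelength and chirp slope below are assumed values):

```python
C = 299_792_458.0      # speed of light (m/s)
WAVELENGTH = 1550e-9   # assumed operating wavelength (m)

def range_and_velocity(f_beat_up, f_beat_down, chirp_slope):
    """Recover range (m) and radial velocity (m/s) from the two beat
    frequencies of a triangular FMCW chirp. A moving target shifts the
    up- and down-chirp beats in opposite directions, so the sum isolates
    range and the difference isolates the Doppler shift f_d = 2*v/lambda.
    chirp_slope is in Hz/s."""
    f_range = (f_beat_up + f_beat_down) / 2    # range-induced component
    f_doppler = (f_beat_down - f_beat_up) / 2  # Doppler-induced component
    rng = C * f_range / (2 * chirp_slope)
    vel = f_doppler * WAVELENGTH / 2
    return rng, vel
```

This is why FMCW gives per-point velocity "for free", whereas time-of-flight LiDAR must infer motion by differencing successive frames.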

Aeries II is the first sensor on the market to integrate Aeva’s unique LiDAR-on-chip technology, which combines all key sensor components including transmitters, receivers and optics onto silicon photonics in a compact module. This design uses no fiber optics, resulting in a highly automated manufacturing process that allows Aeva to scale deployment of its products and lower costs to meet the needs of automotive OEMs and other volume customers.

Detailed information about the CES 2023 Innovation Awards honorees can be found at CES.tech/innovation. In January 2023, Aeva will join other honorees to display their products in the Innovation Awards Showcase area at CES 2023. At the Aeva Booth (#6001, LVCC – West Hall), Aeva will showcase its Aeries II 4D LiDAR sensor alongside its unique LiDAR-on-chip technology that integrates all key LiDAR components onto a silicon photonics chip in a compact module.

Go to the original article...

EETimes article on sensor fusion for neuromorphic vision

Image Sensors World        Go to the original article...

Link: https://www.eetimes.com/improving-sensor-fusion-for-neuromorphic-vision/

Improving Sensor Fusion for Neuromorphic Vision (Nov 21, 2022)

The article links two videos about event cameras. The first is a tutorial about event cameras from 2020. The second shows an example of a commercially available event camera, the DAVIS (made by iniVation AG), which combines a CMOS image sensor with an event sensor and allows sensor fusion, giving the best of both worlds.
The article ends by highlighting two key challenges for wider applicability of event-based image sensors: (1) non-standard processing techniques that differ from conventional RGB data processing pipelines, and (2) the high power requirements of event data processing schemes.

Go to the original article...

Nikon releases the NIKKOR Z 40mm f/2 (SE), a compact and lightweight prime lens for the Nikon Z mount system

Nikon | Imaging Products        Go to the original article...

Go to the original article...

"Burst Vision" using SPAD Cameras

Image Sensors World        Go to the original article...

In a paper titled "Burst Vision Using Single-Photon Cameras", Sizhuo Ma, Paul Mos, Edoardo Charbon and Mohit Gupta from University of Wisconsin-Madison and École polytechnique fédérale de Lausanne write:

Single-photon avalanche diodes (SPADs) are novel image sensors that record the arrival of individual photons at extremely high temporal resolution. In the past, they were only available as single pixels or small-format arrays, for various active imaging applications such as LiDAR and microscopy. Recently, high-resolution SPAD arrays up to 3.2 megapixel have been realized, which for the first time may be able to capture sufficient spatial details for general computer vision tasks, purely as a passive sensor. However, existing vision algorithms are not directly applicable on the binary data captured by SPADs. In this paper, we propose developing quanta vision algorithms based on burst processing for extracting scene information from SPAD photon streams. With extensive real-world data, we demonstrate that current SPAD arrays, along with burst processing as an example plug-and-play algorithm, are capable of a wide range of downstream vision tasks in extremely challenging imaging conditions including fast motion, low light (<5 lux) and high dynamic range. To our knowledge, this is the first attempt to demonstrate the capabilities of SPAD sensors for a wide gamut of real-world computer vision tasks including object detection, pose estimation, SLAM, and text recognition. We hope this work will inspire future research into developing computer vision algorithms in extreme scenarios using single-photon cameras.
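The core idea of processing binary SPAD frames can be sketched with a toy flux estimator: each 1-bit pixel fires with probability 1 − exp(−H), where H is the mean photon count per frame, so averaging an aligned burst and inverting recovers H. This is a simplified stand-in for the paper's burst-processing pipeline, not its actual code:

```python
import math
import random

def estimate_flux(binary_frames):
    """Estimate per-pixel photon flux H from a burst of aligned 1-bit
    SPAD frames by inverting the mean detection rate p = 1 - exp(-H)."""
    n = len(binary_frames)
    num_pixels = len(binary_frames[0])
    flux = []
    for i in range(num_pixels):
        p_hat = sum(frame[i] for frame in binary_frames) / n
        p_hat = min(p_hat, 1 - 1e-9)  # avoid log(0) for saturated pixels
        flux.append(-math.log(1.0 - p_hat))
    return flux

# Simulate one pixel with H = 0.5 photons/frame over a 2000-frame burst.
random.seed(0)
H = 0.5
frames = [[1 if random.random() < 1 - math.exp(-H) else 0]
          for _ in range(2000)]
print(estimate_flux(frames)[0])  # close to 0.5
```

The log-inversion is what gives quanta sensors their dynamic range: even near-saturated pixels (p̂ close to 1) still map back to a finite, distinguishable flux.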


Full paper is available here: https://wisionlab.com/wp-content/uploads/2022/11/burst_vision_wisionlab.pdf

The paper will be presented at the upcoming Winter Conference on Applications of Computer Vision (WACV) conference in January 2023. 

 

Video summary


Dealing with motion blur in extremely low light


Dealing with extreme dynamic range



A large dataset of over 50 million binary burst frames for a wide range of computer vision tasks



Go to the original article...

Global CMOS Image Sensor Market to Grow at 6.32% CAGR, Expected to Reach USD 39.54 Billion by 2031

Image Sensors World        Go to the original article...

Link: https://www.gophotonics.com/news/details/3480-global-cmos-image-sensor-market-to-grow-at-6-32-cagr-expected-to-reach-usd-39-54-billion-by-2031

 

 

Research Nester recently published a report on "CMOS Image Sensor Market Analysis by Technology; and by End Use Industry – Global Supply & Demand Analysis & Opportunity Outlook 2018-2031."

The Global CMOS Image Sensor Market is estimated to grow at a CAGR of 6.32% over the forecast period, i.e., 2022-2031. Rising demand for high-definition image-capturing devices is expected to propel the market growth. For instance, Sony Corporation unveiled the IMX485 type 1/1.2 4K-resolution back-illuminated CMOS image sensor and the IMX415 type 1/2.8 4K CMOS image sensor in June 2019. Sony created these two security camera sensors to address the constantly growing demand for security cameras in a range of monitoring applications, such as anti-theft, disaster warning, and traffic monitoring systems in commercial complexes.
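A quick sanity check of the headline numbers, assuming nine compounding years from 2022 to 2031 (the report does not state the base-year value, so this is only an implied figure):

```python
target_2031 = 39.54   # USD billion, from the headline
cagr = 0.0632
years = 9             # 2022 -> 2031, assuming nine compounding periods

implied_2022 = target_2031 / (1 + cagr) ** years
print(round(implied_2022, 2))  # → 22.78, i.e. roughly USD 22.8 billion today
```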

Furthermore, there has been growing demand for CMOS image sensors in the healthcare industry, where they are commonly used to observe patients during surgery. A recent report by the National Library of Medicine states that a staggering 310 million major procedures are carried out each year around the world, with between 40 and 50 million taking place in the United States and 20 million in Europe.

Global CMOS Image Sensor Market: Key Takeaways

  •  Asia Pacific to hold the largest market revenue
  •  Popularity of smartphones to propel market growth in North America region
  •  Consumer electronics segment to garner the largest revenue


Rising Demand for Security & Surveillance to Drive Market Growth
CMOS image sensors are extensively used for security and surveillance, as they can convert a photoelectrical signal into a digital signal. Owing to increasing instances of theft and crime, more security cameras containing CMOS sensors are expected to be installed, driving market growth; it is estimated that approximately 82% of burglars check for the presence of an alarm system before breaking in.
However, cameras cannot be installed everywhere owing to privacy concerns, so many organizations have come up with innovative ideas that are anticipated to fuel the market. For instance, in December 2021, Canon revealed a brand-new outdoor 4K camera that can be used as both a conventional camera and a security camera, and that makes use of every pixel captured by its 4K UHD CMOS image sensor.

Global CMOS Image Sensor Market: Regional Overview
The global CMOS image sensor market is segmented into five major regions including North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa region.
Government Initiative for Smart Cities to Drive Growth in Asia Pacific Region
The CMOS image sensor market in the Asia Pacific region is anticipated to garner the largest revenue of USD 17,759.3 million by the end of 2031. Government initiatives for smart cities are expected to fuel growth in the market. The Ministry of Electronics and Information Technology in India has tasked ERNET India and IISc with developing the LoRa gateway (pole gateway), a low-cost compute device that can connect to cameras, temperature, humidity, air quality, and other sensors, as part of the Internet of Things (IoT) Management Framework for Smart Cities.

Growing Demand for Consumer Electronics to Favour Growth in North America Region
Further, the North America region is expected to grow by garnering revenue of USD 12,579.0 million by the end of 2031, at a CAGR of 6.14% during 2022-2031. An increase in demand for smartphones is expected to drive market growth: approximately 85% of all mobile users in the US are expected to have a smartphone by 2025. Various electronics items, including smartphones, TVs and wearable gadgets, contain sensors that are in huge demand in this region. Many smartphone manufacturers use image sensors as a selling point; for instance, the Xiaomi 12S Ultra smartphone contains the world's biggest sensor in a smartphone, as part of the new Leica-engineered 12S Series.
The study further incorporates Y-o-Y growth, demand and supply analysis, and forecasts of future opportunity in:

  •  North America (U.S., Canada)
  •  Europe (U.K., Germany, France, Italy, Spain, Hungary, Belgium, Netherlands & Luxembourg, NORDIC [Finland, Sweden, Norway, Denmark], Poland, Turkey, Russia, Rest of Europe)
  •  Latin America (Brazil, Mexico, Argentina, Rest of Latin America)
  •  Asia-Pacific (China, India, Japan, South Korea, Indonesia, Singapore, Malaysia, Australia, New Zealand, Rest of Asia-Pacific)
  •  Middle East and Africa (Israel, GCC [Saudi Arabia, UAE, Bahrain, Kuwait, Qatar, Oman], North Africa, South Africa, Rest of Middle East and Africa).

Global CMOS Image Sensor Market, Segmentation by End Use Industry

  •  Consumer Electronics
  •  Medical
  •  Industrial
  •  Security & Surveillance
  •  Automotive & Transportation
  •  Aerospace & Defense

The consumer electronics segment is estimated to hold the largest revenue of USD 27,010.4 Million by the end of 2031. Increasing demand for CMOS sensors in consumer electronics is expected to boost market growth. CMOS technology is extensively used in smartphones: CMOS sensors are known for low power consumption, so demand for them in smartphones is increasing. Instead of capturing the whole frame in a single instant, a CMOS sensor typically reads the image out row by row in a scanning fashion. Moreover, cameras with CMOS sensors offer better saturation capacity, which is why many manufacturers are installing them in their smartphones. Elsewhere, ON Semiconductor unveiled the newest CMOS image sensor in its XGS series: the XGS 16000, a 16 Mp sensor that offers excellent global shutter imaging for robotics and factory inspection systems. The XGS 16000 delivers high performance at low power, providing the highest resolutions for typical 29 x 29 mm industrial cameras while consuming just 1 W at 65 fps. In North America, the segment is projected to generate the largest revenue of USD 8,576.4 Million by the end of 2031, while in the Asia Pacific, it is projected to register the largest revenue of USD 12,124.3 Million by the end of 2031.
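As a rough sanity check on the XGS 16000 figures quoted above (1 W at 65 fps, 16 Mp), the implied readout energy per pixel can be computed directly. This is a back-of-the-envelope estimate from the stated numbers, not a vendor specification:

```python
# Back-of-the-envelope readout energy per pixel for a sensor consuming
# 1 W while delivering 16 Mpixel frames at 65 fps (figures from the text).
power_w = 1.0             # total sensor power, watts
frame_rate_fps = 65       # frames per second
pixels_per_frame = 16e6   # 16 Mp

pixels_per_second = frame_rate_fps * pixels_per_frame
energy_per_pixel_j = power_w / pixels_per_second
print(f"{energy_per_pixel_j * 1e9:.2f} nJ per pixel")  # ~0.96 nJ
```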

Global CMOS Image Sensor Market, Segmentation by Technology

  •  Front Side Illumination (FSI)
  •  Back Side Illumination (BSI)

The back side illumination (BSI) segment is anticipated to garner the largest revenue by the end of 2031, growing at the highest CAGR of 6.68% over the forecast period. The growth can be attributed to the increasing use of BSI technology in high-quality, higher-pixel-count cameras. Smartphone producers increasingly prefer BSI technology, which is also expected to boost demand. For instance, Sony added a 42-megapixel BSI full-frame sensor to the Sony Alpha A7R Mark II, and the Sony Cyber-shot RX10 II and RX100 IV both have "stacked" sensors that enable even faster continuous shooting and high-speed video recording. In the Asia Pacific, the segment is projected to grow at a CAGR of 7.34% during the forecast period, while in North America, the front side illumination (FSI) segment is projected to grow at a CAGR of 5.41% during the forecast period.
A few of the well-known market leaders in the global CMOS image sensor market profiled by Research Nester are STMicroelectronics International NV, Samsung Electronics America, Inc., Sony Semiconductor Solutions Corporation, ON Semiconductor Components Industries, LLC, Canon, Inc., SK Hynix Inc., OMNIVISION Technologies Inc., Hamamatsu Photonics K.K., Panasonic Industry Co. Ltd., Teledyne Technologies Inc., and other key players.

Recent Developments in the Global CMOS Image Sensor Market

On December 15, 2021, Canon announced the world's highest-resolution 3.2-megapixel SPAD sensor and introduced a breakthrough low-light imaging camera that achieves outstanding colour reproduction even in dimly lit conditions.

On February 14, 2018, Panasonic Corporation revealed that it had created a breakthrough technology that enables simultaneous 450k-electron high saturation, global shutter photography with sensitivity modulation, and 8K high-resolution (36M pixels) imaging using a CMOS image sensor with an organic photoconductive film (OPF).

Go to the original article...

Videos of the day [AMS-OSRAM, ESPROS, Sony]

Image Sensors World        Go to the original article...

The new Mira global shutter image sensor from ams OSRAM advances 2D and 3D sensing with high quantum efficiency at visible and NIR wavelengths. The Mira sensors come in a chip-scale package with an optimized footprint and an industry-leading ratio of size to resolution, enabled by state-of-the-art stacked back-side illumination technology that shrinks the package footprint and gives greater design flexibility to manufacturers of smart glasses and other space-constrained products. The Mira image sensors are very small yet offer superior image quality in low-light conditions, and with their many on-chip operations they open up new possibilities for developers.

 
 

ESPROS Time-of-Flight products were developed for outdoor use and handle background light very well. These outdoor scenes were taken with our TOFcam-660. The TOFcam-660 contains an epc660, which has a resolution of 320x240 pixels and can easily be used for outdoor applications with a lot of ambient light, even in direct sunlight of 100 klux. Thanks to the good resolution, the HDR mode with different integration times, and the outdoor performance already mentioned, various applications that require a clean distance image (depth map) can be developed.
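An HDR mode built on different integration times typically works by replacing pixels that saturate in the long exposure with the short exposure scaled by the ratio of integration times. The sketch below illustrates that idea only; it is not ESPROS's actual pipeline, and the 12-bit saturation level is an assumption:

```python
import numpy as np

def merge_hdr(short_frame, long_frame, t_short, t_long, saturation=4095):
    """Merge two exposures into one HDR frame (illustrative sketch).

    Pixels saturated in the long exposure are replaced by the short
    exposure scaled by the integration-time ratio; real ToF HDR
    pipelines are more involved (per-pixel confidence, noise weighting).
    """
    scale = t_long / t_short
    saturated = long_frame >= saturation
    merged = long_frame.astype(np.float64)
    merged[saturated] = short_frame[saturated] * scale
    return merged

# Example: a pixel saturated in the 10 ms exposure is recovered from the
# 1 ms exposure, scaled by the 10x integration-time ratio.
short = np.array([[100.0]])   # counts at 1 ms integration
long_ = np.array([[4095.0]])  # saturated at 10 ms
print(merge_hdr(short, long_, 1.0, 10.0))  # [[1000.]]
```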





Go to the original article...

Tamron 20-40mm f2.8 Di III review

Cameralabs        Go to the original article...

The Tamron 20-40mm f2.8 Di III is a wide-angle zoom designed for full-frame Sony mirrorless. See how it compares to Sony's FE 16-35mm f2.8 GM and Tamron's 17-28mm f2.8 Di III in my review!…

Go to the original article...

New Canon option for semiconductor lithography system back-end process contributes to 3D advanced packaging technologies, enables mass production of dense circuitry with exposure fields of up to 100 mm x 100 mm

Newsroom | Canon Global        Go to the original article...

Go to the original article...

NIT SWIR Portfolio

Image Sensors World        Go to the original article...

Press release from NIT (New Imaging Technologies) about their wide range of SWIR offerings:

https://new-imaging-technologies.com/news/the-largest-portfolio-of-swir-sensors-of-the-imaging-industry/


NIT is widely known for its large range of SWIR cameras designed for industrial, defense, and medical markets. Less known is that NIT designs and manufactures in-house all the InGaAs sensors embedded into our cameras. We master the design of silicon read-out circuits, InGaAs photodiode arrays, and assembly technologies such as 3D stacking.

Our recent investment in a new clean room facility and back-end process machines will bring our production capacity to several tens of thousands of sensors per year with the highest quality.



Such vertical integration allows us to offer a line of cameras with specific features, all adapted to our customer markets and applications. Our cameras and their performance are unique, as they don't use third-party sensors. The sensitivity, noise level, frame rate, pitches, dynamic range, and pixel counts of our InGaAs sensors make our cameras the best in their class.



 

Go to the original article...

2023 International Solid-State Circuits Conference (ISSCC) Feb 19-23, 2023

Image Sensors World        Go to the original article...

ISSCC will be held as an in-person conference Feb 19-23, 2023 in San Francisco. 

An overview of the program is available here: https://www.isscc.org/program-overview

Some sessions of interest to image sensors audience below:


Tutorial on  "Solid-State CMOS LiDAR Sensors" (Feb 19)
Seong-Jin Kim, Ulsan National Institute of Science and Technology, Ulsan, Korea

This tutorial will present the technologies behind single-photon avalanche-diode (SPAD)-based solid-state
CMOS LiDAR sensors that have emerged to realize level-5 automotive vehicles and the metaverse AR/VR in mobile devices. It will begin with the fundamentals of direct and indirect time-of-flight (ToF) techniques, followed by structures and operating principles of three key building blocks: SPAD devices, time-to-digital converters (TDCs), and signal-processing units for histogram derivation. The tutorial will finally introduce the recent development of on-chip histogramming TDCs with some state-of-the-art examples.
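The direct ToF principle the tutorial covers can be sketched in a few lines: SPAD photon arrival times are accumulated into a histogram, the peak bin gives the round-trip time, and distance follows from d = c·t/2. This is an illustrative sketch of the technique, not the tutorial's code; the bin width, photon counts, and jitter values are assumed for the example:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def estimate_distance(timestamps_s, bin_width_s=100e-12, n_bins=2000):
    """Direct ToF: histogram SPAD arrival times, take the peak bin,
    and convert the round-trip time to distance via d = c * t / 2."""
    hist, edges = np.histogram(timestamps_s, bins=n_bins,
                               range=(0.0, n_bins * bin_width_s))
    peak = np.argmax(hist)
    t_round_trip = (edges[peak] + edges[peak + 1]) / 2  # bin center
    return C * t_round_trip / 2

# Simulated target at 15 m: round trip ~100.07 ns, with 50 ps timing
# jitter on signal photons plus uniformly distributed ambient photons.
rng = np.random.default_rng(0)
true_t = 2 * 15.0 / C
hits = true_t + rng.normal(0.0, 50e-12, size=5000)   # signal photons
noise = rng.uniform(0.0, 200e-9, size=2000)          # ambient photons
d = estimate_distance(np.concatenate([hits, noise]))
print(f"{d:.2f} m")  # ≈ 15 m
```

Histogramming on-chip, as in the TDCs the tutorial describes, applies exactly this reduction before any data leaves the sensor.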

Seong-Jin Kim received a Ph.D. degree from KAIST, Daejeon, South Korea, in 2008 and joined the Samsung Advanced Institute of Technology to develop 3D imagers. From 2012 to 2015, he was with the Institute of Microelectronics, A*STAR, Singapore, where he was involved in designing various sensing systems. He is currently an associate professor at Ulsan National Institute of Science and Technology, Ulsan, South Korea, and a co-founder of SolidVUE, a LiDAR startup company in South Korea. His current research interests include high-performance imaging devices, LiDAR systems, and biomedical interface circuits and systems.


Go to the original article...

ESPROS supplies ToF sensing to Starship Technologies

Image Sensors World        Go to the original article...

ESPROS supplies world leader for delivery robots

Sargans, 2022/11/29

Starship Technologies' autonomous delivery robots implement ESPROS' epc660 Time-of-Flight chip. The epc660 is used by Starship Technologies, a pioneering US robotics technology company headquartered in San Francisco with its main engineering office in Estonia, and the world's leading provider of autonomous last-mile delivery services.
 

What was once considered science fiction is now a fact of modern life: in many countries, robots deliver a variety of goods such as parcels, groceries, and medications. Starship's robots are a common sight on university campuses and in public areas.

Using a combination of sensors, artificial intelligence, machine learning, and GPS to navigate accurately, delivery robots need to operate in darkness as well as in bright sunlight. ESPROS sensors excel in both conditions.

The outstanding ambient-light performance of ESPROS' epc660 chip, together with its very high quantum efficiency, provided the breakthrough that Starship Technologies needed to further increase autonomy in all ambient light conditions. The same level of performance was not achievable with other technologies.

ESPROS' epc660 is able to detect objects over long distances using very low power. This, together with its small size, results in lower system costs. The success of this chip lies in ESPROS' years of development and strong technological know-how. The combination of its unique Time-of-Flight technology with Starship Technologies' position as the leading commercial autonomous delivery service lies at the heart of over 3.5 million commercial deliveries and over 4 million miles driven around the world.

"The future of delivery, today: this is our bold promise," says Lauri Vain (VP of Engineering at Starship), adding, "With a combination of mobile technology, our global fleet of autonomous robots, and partnerships with stores and restaurants, we are helping to make the local delivery industry faster, cleaner, smarter and more cost-efficient, and we are very excited about our partnership with ESPROS and its unique chip technology."




Go to the original article...

IEDM 2022 (International Electron Devices Meeting)

Image Sensors World        Go to the original article...

The IEDM conference will be held December 3-7, 2022 at the Hilton San Francisco Union Square. Starting December 12, the full conference will be available on demand. The full technical program is available here:

https://www.ieee-iedm.org/s/program2022-webiste-rev-002-779a.pdf

There are a couple of sessions of potential interest to the image sensors community.

Session 37: ODI - Silicon Image Sensors and Photonics
Wednesday, December 7, 1:30 p.m.

37.1 Coherent Silicon Photonics for Imaging and Ranging (Invited), Ali Hajimiri, Aroutin Khachturian, Parham Khial, Reza Fatemi, California Institute of Technology
Silicon photonics platform and their potential for integration with CMOS electronics present novel opportunities in applications such as imaging, ranging, sensing, and displays. Here, we present ranging and imaging results for a coherent silicon-imaging system that uses a two-path quadrature (IQ) approach to overcome optical path length mismatches.

37.2 Near-Infrared Sensitivity Enhancement of Image Sensor by 2nd-Order Plasmonic Diffraction and the Concept of Resonant-Chamber-Like Pixel, Nobukazu Teranishi, Takahito Yoshinaga, Kazuma Hashimoto, Atsushi Ono, Shizuoka University
We propose 2nd-order plasmonic diffraction and the concept of a resonant-chamber-like pixel to enhance the near-infrared (NIR) sensitivity of Si image sensors. Optical requirements for deep trench isolation are explained. In simulation, Si absorptance as high as 49% at 940 nm wavelength is obtained for 3.25-µm-thick Si.

37.3 A SPAD Depth Sensor Robust Against Ambient Light: The Importance of Pixel Scaling and Demonstration of a 2.5µm Pixel with 21.8% PDE at 940nm, S. Shimada, Y. Otake, S. Yoshida, Y. Jibiki, M. Fujii, S. Endo, R. Nakamura, H. Tsugawa, Y. Fujisaki, K. Yokochi, J. Iwase, K. Takabayashi*, H. Maeda*, K. Sugihara*, K. Yamamoto*, M. Ono*, K. Ishibashi*, S. Matsumoto, H. Hiyama, and T. Wakano, Sony Semiconductor Solutions, *Sony Semiconductor Manufacturing
This paper presents scaled-down SPAD pixels that prevent PDE degradation under high ambient light. The study is carried out on back-illuminated structures with 3.3, 3.0, and 2.5µm pixel pitches. Our new SPAD pixels can achieve a PDE at λ=940nm of over 20% and a peak of over 75%, even for the 2.5µm pixel.

37.4 3-Tier BSI CIS with 3D Sequential & Hybrid Bonding Enabling a 1.4um pitch,106dB HDR Flicker Free Pixel, F. Guyader, P. Batude*, P. Malinge, E.Vire, J. Lacord*, J. Jourdon, J. Poulet, L. Gay, F. Ponthenier*, S. Joblot, A. Farcy, L. Brunet*, A. Albouy*, C. Theodorou**, M. Ribotta*, D. Bosch*, E. Ollier*, D.Muller, M.Neyens, D. Jeanjean, T.Ferrotti, E.Mortini, J.G. Mattei, A. Inard, R. Fillon, F. Lalanne, F. Roy, E. Josse, STMicroelectronics, *CEA-Leti, Univ. Grenoble Alpes, **Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, Grenoble INP, IMEP-LAHC
A 3-tier CIS combining 3D Sequential Integration for the 2-tier pixel realization and Hybrid Bonding for the logic circuitry connection is demonstrated. Thin-film pixel transistors are built above the photo-gate without congestion. The dual-carrier-collection 3DSI pixel offers an attractive dynamic range (106dB, single exposure) versus pixel pitch (1.4µm) trade-off.

37.5 3-Layer Stacked Voltage-Domain Global Shutter CMOS Image Sensor with 1.8µm-Pixel-Pitch, Seung-Sik Kim, Gwi-Deok Ryan Lee, Sang-Su Park, Heesung Shim, Dae-Hoon Kim, Minjun Choi, Sangyoon Kim, Gyunha Park, Seung-Jae Oh, Joosung Moon, Sungbong Park, Sol Yoon, Jihye Jeong, Sejin Park, Sanggwon Lee, HaeJung Lee, Wonoh Ryu, Taehyoung Kim, Doowon Kwon, Hyuk Soon Choi, Hongki Kim, Jonghyun Go, JinGyun Kim, Seunghyun Lim, HoonJoo Na, Jae-kyu Lee, Chang-Rok Moon, Jaihyuk Song, Samsung Electronics
We developed a 1.8µm-pixel GS sensor which is suitable for mobile applications. Pixel shrink was possible by the 3-layer stacking structure with pixel-level Cu-to-Cu bonding and high-capacity DRAM capacitors. As a result, excellent performances were achieved i.e. -130dB, 1.8e-rms and 14ke- of PLS, TN and FWC, respectively.

37.6 Advanced Color Filter Isolation Technology for Sub-Micron Pixel of CMOS Image Sensor, Hojin Bak, Horyeong Lee, Won-Jin Kim, Inho Choi, Hanjun Kim, Dongha Kim, Hanseung Lee, Sukman Han, Kyoung-In Lee, Youngwoong Do, Minsu Cho, Moung-Seok Baek, Kyungdo Kim, Wonje Park, Seong-Hun Kang, Sung-Joo Hong, Hoon-Sang Oh, and Changrock Song, SK hynix Inc.
The novel color filter isolation technology, which adopts the air, the lowest refractive index material on the earth, as a major component of an optical grid for sub-micron pixels of CMOS image sensors, is presented. The image quality improvement was verified through the enhanced optical performance of the air-grid-assisted pixels.

37.7 A 140 dB Single-Exposure Dynamic-Range CMOS Image Sensor with In-Pixel DRAM Capacitor, Youngsun Oh, Jungwook Lim, Soeun Park, Dongsuk Yoo, Moosup Lim, Joonseok Park, Seojoo Kim, Minwook Jung, Sungkwan Kim, Junetaeg Lee, In-Gyu Baek, Kwangyul Ryu, Kyungmin Kim, Youngtae Jang, Min-SunKeel, Gyujin Bae, Seunghun Yoo, Youngkyun Jeong, Bumsuk Kim, Jungchak Ahn, Haechang Lee, Joonseo Yim, Samsung Electronics Co., Ltd.
A CMOS image sensor with a 2.1 µm pixel for automotive applications was developed. With a sub-pixel structure and a high-capacity DRAM capacitor, a single-exposure dynamic range of 140 dB at 85°C is achieved, supporting LED flicker mitigation and blooming-free operation. SNR stays above 23 dB at 105°C.

Session 19: ODI - Photonic Technologies and Non-Visible Imaging
Tuesday, December 6, 2:15 p.m.

19.1 Record-low Loss Non-volatile Mid-infrared PCM Optical Phase Shifter based on Ge2Sb2Te3S2, Y. Miyatake, K. Makino*, J. Tominaga*, N. Miyata*, T. Nakano*, M. Okano*, K. Toprasertpong, S. Takagi, M. Takenaka, The University of Tokyo, *National Institute of Advanced Industrial Science and Technology (AIST)
We propose a low-loss non-volatile PCM phase shifter operating at mid-infrared wavelengths using Ge2Sb2Te3S2 (GSTS), a new selenium-free widegap PCM. The GSTS phase shifter exhibits a record-low optical loss for a π phase shift of 0.29 dB/π, more than 20 times better than reported so far in terms of figure of merit.

19.2 Monolithic Integration of Top Si3N4-Waveguided Germanium Quantum-Dots Microdisk Light Emitters and PIN Photodetectors for On-chip Ultrafine Sensing, C-H Lin, P-Y Hong, B-J Lee, H. C. Lin, T. George, P-W Li, National Yang Ming Chiao Tung University
An ingenious combination of lithography and self-assembled growth has allowed accurate control over the geometry with high-temperature thermal stability. This significant fabrication advantage has opened up the feasibility of 3D integration of top-SiN-waveguided Ge photonics for on-chip ultrafine sensing and optical interconnect applications.

19.3 Colloidal quantum dot image sensors: a new vision for infrared (Invited), P. Malinowski, V. Pejovic*, E. Georgitzikis, JH Kim, I. Lieberman, N. Papadopoulos, M.J. Lim, L. Moreno Hagelsieb, N. Chandrasekaran, R. Puybaret, Y. Li, T. Verschooten, S. Thijs, D. Cheyns, P. Heremans*, J. Lee, imec,
*KULeuven
Short-wave infrared (SWIR) range carries information vital for augmented vision. Colloidal quantum dots (CQD) enable monolithic integration with small pixel pitch, large resolution and tunable cut-off wavelength, accompanied by radical cost reduction. In this paper, we describe the challenges to realize manufacturable CQD image sensors enabling new use cases.

19.4 Grating-resonance InGaAs narrowband photodetector for multispectral detection in NIR-SWIR region, J. Jang, J. Shim, J. Lim, G. C. Park*, J. Kim**, D-M Geum, S. Kim, Korea Advanced Institute of Science and Technology (KAIST), *Electronics and Telecommunications Research Institute (ETRI), **Korea Advanced Nano Fab Center (KANC)
We proposed grating-resonance narrowband photodetector for the wavelength selection functionality at the range of 1300~1700 nm. Based on parameters designed from the simulation, we fabricated an array of pixels to selectively detect different wavelengths. Our device showed great wavelength selectivity and tunability depending on grating design with a narrow FWHM.

19.5 Alleviating the Responsivity-Speed Dilemma of Photodetectors via Opposite Photogating Engineering with an Auxiliary Light Source beyond the Chip, Y. Zou, Y. Zeng, P. Tan, X. Zhao, X. Zhou, X. Hou, Z. Zhang, M. Ding, S. Yu, H. Huang, Q. He, X. Ma, G. Xu, Q. Hu, S. Long, University of Science and Technology of China
The dilemma between responsivity and speed limits the performance of photodetectors. Here, opposite photogating engineering is proposed to alleviate this dilemma via an auxiliary light source beyond the chip. Based on a WSe2/Ga2O3 JFET, a >10³ times faster speed towards deep ultra-violet has been achieved with negligible sacrifice of responsivity.

19.6 Experimental Demonstration of the Small Pixel Effect in an Amorphous Photoconductor using a Monolithic Spectral Single Photon Counting Capable CMOS-Integrated Amorphous-Selenium Sensor, R. Mohammadi, P. M. Levine, K. S. Karim, University of Waterloo
We directly demonstrate, for the first time, the small pixel effect in an amorphous material, a-Se. The results are also the first demonstration of the transient response of a-Se monolithically combined with a CMOS, with and without SPE, and the first aSe/CMOS PHS results, offering a-Se/CMOS for photon counting applications.

Go to the original article...

Harvest Imaging Forum April 5 and 6, 2023

Image Sensors World        Go to the original article...

https://harvestimaging.com/forum_introduction_2023_new.php

After the Harvest Imaging forums of the last decade, the ninth one will be organized on April 5 & 6, 2023 in Delft, the Netherlands. The basic intention of the Harvest Imaging forum is to have a scientific and technical in-depth discussion on one particular topic that is of great importance and value to digital imaging. The 2023 forum will again be organized in a hybrid form:

  •  You can attend in person and benefit optimally from live interaction with the speakers and audience,
  •  There will also be a live broadcast of the forum, with interaction with the speakers made possible through a chat box,
  •  Finally, the forum can also be watched online at a later date.

The 2023 Harvest Imaging forum will deal with a single topic from the field of solid-state imaging and will have only one world-level expert as the speaker.

Register here: https://harvestimaging.com/forum_registration_2023_new.php

 

"Imaging Beyond the Visible"
Prof. dr. Pierre MAGNAN (ISAE-SUPAERO, Fr)
 

Abstract:
Two decades of intensive and tremendous efforts have pushed imaging capabilities in the visible domain closer to physical limits, but have also extended attention to new areas beyond visible-light intensity imaging. Examples can be found at higher photon energies, with the appearance of CMOS ultraviolet imaging capabilities, or in other light dimensions, with polarization imaging possibilities, both in monolithic form suitable for common camera architectures.

But one of the most active and impressive fields is the extension of interest to the spectral range significantly beyond the visible, into the infrared domain. Special focus is put on the Short Wave Infrared (SWIR), used in the reflective imaging mode, but also on the thermal infrared spectral range used in the self-emissive ‘thermal’ imaging mode in the Medium Wave Infrared (MWIR) and Long Wave Infrared (LWIR). Initially motivated mostly by military and scientific applications, these spectral domains have now met new higher-volume application needs.

This has been made possible thanks to new technical approaches enabling cost reduction, stimulated by the efficient collective manufacturing processes offered by the microelectronics industry. CMOS, even if no longer sufficient on its own to address the non-visible imaging spectral range, is still a key part of the solution.

The goal of this Harvest Imaging forum is to go through the various aspects of imaging concepts, device principles, used materials and imager characteristics to address the beyond-visible imaging and especially focus on the infrared spectral bands imaging.

Emphasis will be put on the materials used for detection:

  • Germanium, Quantum Dots devices and InGaAs for SWIR,
  •  III-V and II-VI semiconductors for MWIR and LWIR
  •  Microbolometers and Thermopiles thermal imagers

Besides the material aspects, attention will also be given to the associated CMOS circuit architectures enabling the implementation of imaging arrays, both at the pixel and the imager level.
A status on current and new trends will be provided.
 

Bio:
Pierre Magnan graduated in E.E. from the University of Paris in 1980. After working as a research scientist in analog and digital CMOS design at French research labs until 1994, he moved in 1995 to CMOS image sensor research at SUPAERO (now ISAE-SUPAERO) in Toulouse, France, an educational and research institute funded by the French Ministry of Defense. There, Pierre was involved in setting up and growing the CMOS active-pixel sensor research and development activities. From 2002 to 2021, as a Full Professor and Head of the Image Sensor Research Group, he was involved in CMOS image sensor research. His team worked in cooperation with European companies (including STMicroelectronics, Airbus Defence & Space, and Thales Alenia Space, as well as the European and French space agencies) and developed custom image sensors dedicated to space instruments, extending the scope of the group in recent years to CMOS design for infrared imagers.
In 2021, Pierre was nominated Emeritus Professor of the ISAE-Supaero Institute, where he now focuses on research within PhD projects, mostly with STMicroelectronics.

Pierre has supervised more than 20 PhDs candidates in the field of image sensors and co-authored more than 80 scientific papers. He has been involved in various expertise missions for French Agencies, companies and the European Commission. His research interests include solid-state image sensors design for visible and non-visible imaging, modelling, technologies, hardening techniques and circuit design for imaging applications.

He has served in the IEEE IEDM Display and Sensors subcommittee in 2011-2012 and in the International Image Sensor Workshop (IISW) Technical Program Committee, being the General Technical Chair of 2015 IISW. He is currently a member of the 2022 IEDM ODI sub-committee and the IISW2023 Technical Program Committee.



Go to the original article...

Himax Technologies, Inc. Announces Divestiture of Emza Visual Sense Subsidiary

Image Sensors World        Go to the original article...

Link: https://www.globenewswire.com/news-release/2022/10/28/2543724/8267/en/Himax-Technologies-Inc-Announces-Divestiture-of-Emza-Visual-Sense-Subsidiary.html

 

TAINAN, Taiwan, Oct. 28, 2022 (GLOBE NEWSWIRE) -- Himax Technologies, Inc. (Nasdaq: HIMX) (“Himax” or “Company”), a leading supplier and fabless manufacturer of display drivers and other semiconductor products, today announced that it has divested its wholly owned subsidiary Emza Visual Sense Ltd. (“Emza”), a company dedicated to the development of proprietary vision machine-learning algorithms. Following the transaction, Himax will continue to partner with Emza. The divestiture will not affect the existing business with the leading laptop customer where Himax continues to be the supplier for the leading-edge ultralow power AI processor and always-on CMOS image sensor.

WiseEye™, Himax’s total solution for ultralow power AI image sensing, includes Himax proprietary AI processors, CMOS image sensors, and CNN-based machine-learning AI algorithms, all featuring unique characteristics of ultralow power consumption. For the AI algorithms, Himax has historically adopted a business model where it not only develops its own solutions through an in-house algorithm team and Emza, a fully owned subsidiary before the divestiture, but also partners with multiple third-party AI algorithm specialists as a way to broaden the scope of application and widen the geographical reach. Moving forward, the AI business model will be unchanged where the Company will continue to develop its own algorithms and work with third-party algorithms partners, including Emza.

The Company continues to collaborate with its ecosystem partners to jointly make the WiseEye AI solution broadly accessible to the market, aiming to scale up adoption in numerous relatively untapped end-point AI markets. Tremendous progress has been made so far in areas such as laptop, desktop PC, automatic meter reading, video conference device, shared bike parking, medical capsule endoscope, automotive, smart office, battery cam and surveillance, among others. Additionally, Himax is committed to strengthening its WiseEye product roadmap while retaining its leadership position in ultralow power AI processor and image sensor. By targeting even lower power consumption and higher AI inference performance that leverage integral optimization from hardware to software, the Company believes it can capture the vast end-point AI opportunities presented ahead.

Go to the original article...

Canon strengthens medical business with establishment of Canon Healthcare USA, INC.

Newsroom | Canon Global        Go to the original article...

Go to the original article...

SK Hynix developing AI powered image sensor

Image Sensors World        Go to the original article...

From: https://www.thelec.net/news/articleView.html?idxno=4281
 

 
SK Hynix is developing a new CMOS image sensor (CIS) that uses neural network technology, TheElec has learned. The South Korean memory giant is planning to embed an AI accelerator into the CIS, sources said. The accelerator itself is based on SRAM combined with a microprocessor, an approach also called in-memory computing.

The AI-powered CIS will be able to recognize information about the subject of an image while the image is being saved as data. For example, the CIS will be able to recognize the owner of a smartphone when used as a front camera. Most current devices keep the CIS and the face-recognition feature separate; having the CIS do it on its own can save time and conserve the device's power. SK Hynix has recently verified the design and a field-programmable gate array (FPGA) implementation of the CIS. The company is also planning to develop an AI accelerator that uses non-volatile memory instead of volatile SRAM.

SK Hynix is a very small player in the CIS field. According to Strategy Analytics, Sony controlled 44% of the market during the first half of the year, followed by Samsung's 30%; OmniVision had a 9% market share, and the remaining companies, which include SK Hynix, together controlled 17%. SK Hynix is currently supplying its high-resolution CIS to Samsung: last year it supplied a 13MP CIS for the Galaxy Z Fold 3, and it is supplying a 50MP CIS for the Galaxy A series this year.

However, CIS companies are focusing on strengthening features other than resolution, as they are reaching the limits of pixel shrinking: when pixels become too small they absorb less light, and the resulting smaller signals degrade image quality.


Go to the original article...

Voigtlander 65mm f2 APO-Lanthar Macro review-so-far

Cameralabs        Go to the original article...

The Voigtländer 65mm f2 APO-Lanthar Macro is designed for full-frame mirrorless cameras and delivers 1:2 magnification. How does it compare to other macro lenses? Find out in my review-so-far!…

Go to the original article...

Sony to make self-driving sensors that need 70% less power

Image Sensors World        Go to the original article...

From: https://asia.nikkei.com/Business/Automobiles/Sony-to-make-self-driving-sensors-that-need-70-less-power

Sony is developing its own electric vehicles. (Asia Nikkei)
July 19, 2022


TOKYO -- Sony Group will develop a new self-driving sensor that uses 70% less electricity, helping to reduce autonomous systems' voracious appetite for power and extend the range of electric vehicles.
The sensor, made by Sony Semiconductor Solutions, will be paired with new software to be developed by Sompo Holdings-backed startup Tier IV with the goal of cutting the amount of power used by EV onboard systems by 70%. The companies hope to achieve Level 4 technology, allowing cars to drive themselves under certain conditions, by 2030.


Electric vehicles will make up 59% of new car sales globally in 2035, the Boston Consulting Group predicts. Over 30% of trips 5 km and longer are expected to be made in self-driving cars, which rely on large numbers of sensors and cameras and transmit massive amounts of data.


Existing autonomous systems are said to use as much power as thousands of microwave ovens, hindering improvements in the driving range of EVs. Combined with the drain from air conditioning and other functions, EVs could end up with a range at least 35% smaller than on paper, according to Japan's Ministry of Economy, Trade and Industry. If successful, Sony's new sensors would limit this impact to around 10%.
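The range figures above can be sanity-checked with simple arithmetic (illustrative only, not METI's actual methodology): if non-propulsion loads consume a given fraction of the battery's energy, the drivable range shrinks by roughly that fraction.

```python
def effective_range_km(nominal_km, overhead_fraction):
    """Range remaining after a fraction of battery energy goes to
    non-propulsion loads (autonomy compute, A/C, etc.)."""
    return nominal_km * (1.0 - overhead_fraction)

nominal = 500  # km, catalogue range (example figure)
print(f"{effective_range_km(nominal, 0.35):.0f} km")  # >=35% loss cited by METI
print(f"{effective_range_km(nominal, 0.10):.0f} km")  # ~10% loss targeted by Sony
```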


Sony plans to lower the amount of electricity needed in self-driving systems through edge computing, processing as much data as possible through AI-equipped sensors and software on the vehicles themselves instead of transmitting it to external networks. This approach is expected to shrink communication lags as well, making the vehicles safer. 

[Thanks to the anonymous blog comment for sharing the article text.]

 

Go to the original article...

Fujifilm instax SQUARE Link review

Cameralabs        Go to the original article...

The instax SQUARE Link is a portable printer that connects to your phone over Bluetooth and prints onto instax square paper. Find out if it's the best wireless printer for you in my review!…

Go to the original article...

InP Market Expanding, Proximity Sensor on iPhone 14, Depth Sensing Issues on iPhone 13

Image Sensors World        Go to the original article...

From Electronics Weekly and Yole:

https://www.electronicsweekly.com/news/business/inp-moving-into-consumer-2022-10/

https://www.yolegroup.com/strategy-insights/apple-and-the-compound-semi-industry-the-story-begins/ 

The InP device market is expanding from traditional datacom and telecom toward consumer applications, reaching about $5.6 billion by 2027, says Yole Développement.



 

Datacom and telecom applications are the traditional markets for InP and will continue to grow, but the biggest growth driver – with a 37% CAGR between 2021 and 2027 – will be consumer applications.
The InP supply chain is fragmented, though it is dominated by two vertically integrated American players: Coherent (formerly II-VI) and Lumentum.


The InP supply chain will need more investment with the rise of the consumer applications.
The migration to higher data rates, lower power consumption within data centres, and the deployment of 5G base stations will drive the development and growth of optical transceiver technology in the coming years.
 

As an indispensable building block for high-speed and long-range optical transceivers, InP laser diodes remain the best choice for telecom & datacom photonic applications.
This growth is driven by high volume adoption of high-data-rate modules, above 400G, by big cloud services and national telecom operators requiring increased fiber-optic network capacity.
 

With that in mind, the InP market, long dominated by datacom and telecom applications, is expected to grow from $2.5 billion in 2021 to around $5.6 billion in 2027.
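As a quick check, the implied overall growth rate from $2.5 billion in 2021 to $5.6 billion in 2027 works out to roughly 14% per year (the 37% CAGR quoted earlier applies to the consumer segment alone, not the whole market):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

overall = cagr(2.5, 5.6, 2027 - 2021)
print(f"Implied overall InP market CAGR: {overall:.1%}")  # ~14.4%
```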
 

Yole Intelligence has developed a dedicated report to provide a clear understanding of the InP-based photonics and RF industries. In its InP 2022 report, the company, part of Yole Group, provides a comprehensive view of the InP markets, divided into photonics and RF sectors. It includes market forecasts, technology trends, and supply chain analysis. This updated report covers the markets from wafer to bare die for photonics applications and from wafer to epiwafer for RF applications by volume and revenue.
 

“There has been a lot of speculation on the penetration of InP in consumer applications,” says Yole’s Ali Jaffal. “The year 2022 marks the beginning of this adoption. For smartphones, OLED displays are transparent at wavelengths ranging from around 13xx to 15xx nm.”
 

OEMs are interested in removing the camera notch on mobile phone screens and integrating the 3D-sensing modules under OLED displays. In this context, they are considering moving to InP EELs to replace the current GaAs VCSELs. However, such a move is not straightforward from cost and supply perspectives.
 

Yole Intelligence noted the first penetration of InP into wearable earbuds in 2021. Apple was the first OEM to deploy InP SWIR proximity sensors in its AirPods 3 family to help differentiate between skin and other surfaces.
 

This has been extended to the iPhone 14 Pro family. The leading smartphone player has changed the aesthetics of its premium range of smartphones, the iPhone 14 Pro family, reducing the size of the notch at the top of the screen to a pill shape.


 


To achieve this new front camera arrangement, some other sensors, such as the proximity sensor, had to be placed under the display. Will InP penetration continue in other 3D sensing modules, such as dot projectors and flood illuminators? Or could GaAs technology come back again with a different solution for long-wavelength lasers?
 

Apple adding such a differentiator to its product significantly affects companies in its supply chain, and vice versa.
 

Traditional GaAs suppliers for Apple’s proximity sensors could switch from GaAs to InP platforms since both materials could share similar front-end processing tools.
 

Yole Intelligence certainly expects to see new players entering the InP business as the consumer market represents high volume potential.
 

In addition, Apple’s move could trigger the penetration of InP into other consumer applications, such as smartwatches and automotive LiDAR with silicon photonics platforms.


In other Apple iPhone related news:

The TrueDepth camera on the iPhone 13 appears to over-smooth depth maps at distances beyond 20 cm:


 


Go to the original article...

Canon holds number one share of press cameras used during Rugby World Cup New Zealand 2021

Newsroom | Canon Global        Go to the original article...

Go to the original article...

CellCap3D: Capacitance Calculations for Image Sensor Cells

Image Sensors World        Go to the original article...

Sequoia's CellCap3D is a software tool specifically designed for the capacitance matrix calculation of image sensor cells. It is fast, accurate and easy to use.
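For background, a cell's capacitance matrix relates the charges on its conductors to their voltages, Q = C·V. A minimal sketch with toy values (not CellCap3D output; the three nodes named below are hypothetical pixel conductors chosen for illustration):

```python
# Toy 3x3 Maxwell capacitance matrix in fF (illustrative values):
# diagonals positive, off-diagonals non-positive, matrix symmetric.
# Rows/cols: photodiode, transfer gate, floating diffusion.
C = [[ 5.0, -1.2, -0.8],
     [-1.2,  4.0, -1.5],
     [-0.8, -1.5,  3.5]]

def induced_charges(C, V):
    """Q = C.V : charge on each conductor for the given node voltages."""
    return [sum(c * v for c, v in zip(row, V)) for row in C]

V = [0.0, 2.8, 1.0]        # node voltages in volts
Q = induced_charges(C, V)  # charges in fC
print(Q)
```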






Please contact SEQUOIA Design Systems, Inc. for further details at info@sequoiadesignsystems.com

Go to the original article...

Videos du jour for Nov 14, 2022

Image Sensors World        Go to the original article...

Graphene Flagship (https://graphene-flagship.eu/) spearhead project AUTOVISION is developing a new high-resolution image sensor for autonomous vehicles, which can detect obstacles and road curvature even in extreme and difficult driving conditions.

 


 

SPAD and CIS camera fusion for high resolution high dynamic range passive imaging (IEEE/CVF WACV 2022)

Authors: Yuhao Liu, Felipe Gutierrez-Barragan, Atul N. Ingle, Mohit Gupta, Andreas Velten (University of Wisconsin-Madison)

Description: Reconstruction of high-resolution extreme dynamic range images from a small number of low dynamic range (LDR) images is crucial for many computer vision applications. Current high dynamic range (HDR) cameras based on CMOS image sensor technology rely on multi-exposure bracketing, which suffers from motion artifacts and signal-to-noise ratio (SNR) dip artifacts in extreme dynamic range scenes. Recently, single-photon cameras (SPCs) have been shown to achieve orders of magnitude higher dynamic range for passive imaging than conventional CMOS sensors. SPCs are becoming increasingly available commercially, even in some consumer devices. Unfortunately, current SPCs suffer from low spatial resolution. To overcome the limitations of CMOS and SPC sensors, we propose a learning-based CMOS-SPC fusion method to recover high-resolution extreme dynamic range images. We compare the performance of our method against various traditional and state-of-the-art baselines using both synthetic and experimental data. Our method outperforms these baselines, both in terms of visual quality and quantitative metrics.
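The dynamic range advantage of SPCs comes from how passive SPAD pixels count photons: each binary frame detects at most one photon, so the per-frame detection probability saturates only logarithmically with incident flux. A sketch of the standard flux estimate from binary SPAD frames (parameter names and values are illustrative, not taken from the paper's code):

```python
import math

def spad_flux_estimate(detections, num_frames, exposure_s, quantum_efficiency=0.25):
    """Invert the binary SPAD detection model p = 1 - exp(-q * phi * t)
    to recover incident photon flux phi (photons/s) from frame counts."""
    p = detections / num_frames  # empirical per-frame detection probability
    if p >= 1.0:                 # every frame fired: flux is unrecoverable
        return float("inf")
    return -math.log(1.0 - p) / (quantum_efficiency * exposure_s)

# A pixel firing in 900 of 1000 frames implies far more than 9x the flux
# of one firing in 100 of 1000 frames -- this compressive response is
# what gives SPADs their extreme dynamic range in passive imaging.
print(spad_flux_estimate(900, 1000, 1e-4))
print(spad_flux_estimate(100, 1000, 1e-4))
```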




System Semiconductor Image Sensor Explained | 'All About Semiconductor' by Samsung Electronics



tinyML neuromorphic engineering discussion forum:

Neuromorphic Event-based Vision
Christoph POSCH
CTO
PROPHESEE


New Architecture for Visual AI, Oculi Technology Enables Edge Solutions At The Speed Of Machines With The Efficiency of Biology
Charbel RIZK
Founder CEO
Oculi Inc.



Roman Genov, University of Toronto
Fast Field-Programmable Coded Image Sensors for Versatile Low-Cost Computational Imaging Presented through the Chalk Talks series of the Institute for Neural Computation (UC San Diego)
08/05/22



Go to the original article...
