Archives for December 2022

News: Xenics acquired by Photonis; Omnivision to cut costs

Image Sensors World        Go to the original article...

Xenics acquired by Photonis

Infrared imager maker Xenics has been acquired by Photonis, a manufacturer of electro-optic components.

Photonis’ components are used in the detection and amplification of ions, electrons and photons for integration into a variety of applications such as night vision optics, digital cameras, mass spectrometry, physics research, space exploration and many others. The addition of Xenics will bring high-end imaging products to Photonis’ B2B customers.

Jérôme Cerisier, CEO of Photonis, said: “We are thrilled to welcome Paul Ryckaert and the whole Xenics team in Photonis Group. With this acquisition, we are aiming to create a European integrated leader in advanced imaging in high-end markets. We will together combine our forces to strengthen our position in the infrared imaging market.”

Xenics employs 65 people worldwide, with its headquarters in Leuven, Belgium.
Paul Ryckaert, CEO of Xenics, said: “By combining its strengths with the ones of Photonis Group, Xenics will benefit from Photonis expertise and international footprint which will allow us to accelerate our growth. It is a real opportunity to boost our commercial, product development and manufacturing competences and bring even more added value to our existing and future customers.” 

[Post title has been corrected as of January 8. Thanks to the commenters for pointing it out. Apologies for the error. --AI]

OmniVision to cut costs 

According to Chinese media reports, Weir Semiconductor, one of the world's top ten IC design companies, and its CMOS image sensor subsidiary OmniVision recently announced a freeze on new hiring, salary cuts for senior management, a work stoppage during the Spring Festival, and the suspension of various bonuses, along with a 20% reduction in 2023 capital expenditure, in response to the impact of the current poor environment on the company's operations.

The report pointed out that, according to internal documents released by OmniVision, the company has announced cost controls with the goal of reducing costs by 20% in 2023. OmniVision said, "The current market conditions are very severe. We are facing great market challenges. Prices, inventories and supply chains are all under great pressure. Therefore, we must carry out cost control, with the goal of reducing costs by 20% in 2023."

In order to achieve this cost-reduction goal, OmniVision announced a series of measures: stopping all recruitment, not replacing departing employees, salary cuts for senior management, suspension of work in all regions of the group during the Spring Festival, suspension of quarterly bonuses and bonuses in any other form, tight controls on spending, and reduced spending on some research and development programs. OmniVision emphasized, "These measures are temporary. We believe that business-level improvements will occur in the second half of 2023, because we have a new product layout in the consumer market, while automotive and emerging markets are rising steadily. We will reassess the situation by the end of the first quarter of 2023."


Videos of the day [TinyML and WACV]

Image Sensors World        Go to the original article...

Event-based sensing and computing for efficient edge artificial intelligence and TinyML applications
Federico CORRADI, Senior Neuromorphic Researcher, IMEC

The advent of neuro-inspired computing represents a paradigm shift for edge Artificial Intelligence (AI) and TinyML applications. Neurocomputing principles enable the development of neuromorphic systems with strict energy and cost reduction constraints for signal processing applications at the edge. In these applications, the system needs to accurately respond to the data sensed in real-time, with low power, directly in the physical world, and without resorting to cloud-based computing resources.
In this talk, I will introduce key concepts underpinning our research: on-demand computing, sparsity, time-series processing, event-based sensory fusion, and learning. I will then showcase some examples of a new sensing and computing hardware generation that employs these neuro-inspired fundamental principles for achieving efficient and accurate TinyML applications. Specifically, I will present novel computer architectures and event-based sensing systems that employ spiking neural networks with specialized analog and digital circuits. These systems use an entirely different model of computation than our standard computers. Instead of relying upon software stored in memory and fast central processing units, they exploit real-time physical interactions among neurons and synapses and communicate using binary pulses (i.e., spikes). Furthermore, unlike software models, our specialized hardware circuits consume low power and naturally perform on-demand computing only when input stimuli are present. These advancements offer a route toward TinyML systems composed of neuromorphic computing devices for real-world applications.
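The on-demand, event-driven computation described above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron that only updates its state when an input spike arrives — a sketch of the general principle, not IMEC's hardware, and all parameter values are illustrative assumptions:

```python
import math

def lif_neuron(spike_times, tau=20.0, threshold=1.0, weight=0.6):
    """Event-driven leaky integrate-and-fire neuron: the membrane potential
    is only touched when an input spike arrives; between events the decay is
    applied analytically, so silence costs no computation at all."""
    v, t_last = 0.0, 0.0
    out_spikes = []
    for t in spike_times:
        v *= math.exp(-(t - t_last) / tau)  # analytic leak since last event
        v += weight                          # integrate the incoming spike
        t_last = t
        if v >= threshold:                   # fire and reset
            out_spikes.append(t)
            v = 0.0
    return out_spikes
```

Nothing happens between spikes — the loop body runs once per input event, which is the software analogue of the on-demand computing the talk attributes to neuromorphic hardware.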

Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help Through Multi-Task Learning

Authors: Abdullah Abuolaim (York University)*; Mahmoud Afifi (Apple); Michael S Brown (York University) 
Many camera sensors use a dual-pixel (DP) design that operates as a rudimentary light field providing two sub-aperture views of a scene in a single capture. The DP sensor was developed to improve how cameras perform autofocus. Since the DP sensor's introduction, researchers have found additional uses for the DP data, such as depth estimation, reflection removal, and defocus deblurring. We are interested in the latter task of defocus deblurring. In particular, we propose a single-image deblurring network that incorporates the two sub-aperture views into a multi-task framework. Specifically, we show that jointly learning to predict the two DP views from a single blurry input image improves the network's ability to learn to deblur the image. Our experiments show this multi-task strategy achieves +1dB PSNR improvement over state-of-the-art defocus deblurring methods. In addition, our multi-task framework allows accurate DP-view synthesis (e.g., ~39dB PSNR) from the single input image. These high-quality DP views can be used for other DP-based applications, such as reflection removal. As part of this effort, we have captured a new dataset of 7,059 high-quality images to support our training for the DP-view synthesis task.
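The multi-task idea in the abstract — one network jointly predicting the deblurred image and the two DP sub-aperture views — can be sketched as a joint loss. The MSE loss form and the weight `w_dp` here are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def psnr(pred, gt, peak=1.0):
    """Peak signal-to-noise ratio in dB, the quality metric quoted above."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def multitask_loss(pred_sharp, gt_sharp, pred_dp, gt_dp, w_dp=0.5):
    """Joint objective: a deblurring term plus a DP-view synthesis term over
    the predicted left/right sub-aperture views (hypothetical weighting)."""
    deblur = np.mean((pred_sharp - gt_sharp) ** 2)
    views = np.mean([np.mean((p - g) ** 2) for p, g in zip(pred_dp, gt_dp)])
    return deblur + w_dp * views
```

Training on the DP-view term acts as an auxiliary supervision signal: the network must learn the defocus disparity between the two views, which is exactly the cue that makes deblurring easier.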


Yole Insights article on a "meh" year for the CIS market

Image Sensors World        Go to the original article...

Original article available here:

CMOS Image Sensor snapshot: not all doom and gloom, good news is also stacking up 

In the CMOS Image Sensor Monitor Q4 2022, Yole Intelligence, part of Yole Group, announces that it expects the CMOS Image Sensor (CIS) industry to show a slight revenue decrease of -0.7% YoY in 2022, with a market value of $21.2B. This estimate takes into account the many events of 2022's first 3 quarters: the downward revision of smartphone sales, the ongoing inventory reduction by most players in the electronics supply chains, and the continued Covid-19-related disruptions in China.

2021 was a year of growth for CIS, reaching an all-time high of $21.3B in revenue with a relatively small annual growth of 2.8%. The key driver was the rebound in sales of smartphones, computer laptops, and tablets during the year amid the reopening of western economies after severe Covid-19-related lockdowns. Our hope for 2022 was a continuation of this improving trend. We knew the Huawei ban contributed to some inventory build-up in 2020, which had to be cleared in 2021 and maybe 2022. Our expectation for the smartphone market in 2022 was, unfortunately, too high, which translated directly into lost revenue for CIS.

In the past, the increase in the number of cameras per phone would more than compensate for smartphone volume sales declines, but not in 2022. Huawei was the actor adding the greatest number of cameras per phone, and losing such a player in the geopolitical battle has flattened the growth statistic of cameras per phone. Does it mean consumers have lost interest in high-quality phone cameras? Not at all!


Video creation using smartphones is at an all-time high due to the short-video craze. The emergence of TikTok, the favored social media of the younger generation, has been quickly copied by large incumbents, resulting in YouTube Shorts and Facebook Reels. This demand for high-quality video hardware was temporarily over-served during the emergence from Covid-19 lockdowns in 2021, and, therefore, the first 3 quarters of 2022 saw slightly less demand. We have seen similar but even more dramatic patterns with computer laptops and tablets, in which cameras played a central role during remote work/school teleconferencing.

Another market with explosive growth right now is automotive CIS. The Covid-19 era signaled a turning point in consumer behavior, with demand switching to Connected, Autonomous, Shared and Electric (CASE) vehicles loaded with semiconductor-based features. Overall, the appetite for cameras remains high, but the dominance of the weakened smartphone market translates into the disappointing -0.7% CIS growth expected for 2022.


The smartphone market is down 10%, but sales of CIS have proven relatively resilient, while other semiconductor products, such as memory, are down 12%. The main reason is technical: we are currently experiencing a limited supply of 90 nm to 40 nm node wafers, the main nodes for CIS, and of supporting logic wafers. The prices of these legacy nodes have increased significantly, and we have therefore observed a continuation of high average selling prices (ASPs) for CIS.

At the same time, we noted a product mix shift toward higher resolution and larger optical formats; this means more silicon per die and higher ASPs. In this respect, the large smartphone OEMs have different approaches: Apple and Xiaomi favor 12Mp to 48Mp resolution with large pixels, which seems to be the approach favored for the ultra-premium segment, while Samsung, Oppo, and Vivo are increasing the resolution to 64Mp and even 108Mp with smaller pixels, which appears to be the approach favored for the mid-range. The market is, therefore, relatively well educated and understands what a good picture means, as described in our publication with DXOMARK, “Ultra-Premium Flagship Smartphones Image Performance: End-User Perspective 2021”.

This year, both Sony and OmniVision have presented products with three-layer stacks. There are two technical reasons for this. First, the “in-pixel connection” allows removing some transistors from the upper wafer layer and moving these to the second wafer layer. This improves the volume of sensing silicon in each pixel. This technology is helpful in optimizing the signal-to-noise ratio (SNR), a critical factor in improving image quality. The second reason is that the triple stack enables high-performance sensing. New uses, such as tiny AR/VR cameras, must go beyond the current rolling-shutter (RS) approach and use either global-shutter (GS), time-of-flight (ToF), or even event-based (EB) cameras. All these require more transistors per pixel than RS approaches, so a second CIS layer is more than welcome in the drive to super compact sensing cameras. The market share of these triple-stack image sensors will grow, which will add again to the increasing silicon content per camera. This trend opens a path for sustained improvement and market growth for CIS.

The 8 leading CIS players – Sony, Samsung, OmniVision, STMicroelectronics, onsemi, SK Hynix, GalaxyCore, and SmartSens – that we have been monitoring every quarter have very different business models. Sony is a hybrid IDM, manufacturing its own 12’’ CIS wafers but outsourcing logic wafers to TSMC, UMC, and possibly also GlobalFoundries (unconfirmed as yet). Samsung, STMicroelectronics, and SK Hynix are IDMs with some open foundry activity. OmniVision, onsemi, GalaxyCore, and SmartSens are fabless with varying degrees of desire for internalization: onsemi now has ownership of the East Fishkill, New York fab, and GalaxyCore is investing the proceeds of its IPO into a brand new 12’’ foundry. All these players have felt pain from their supply chain structure in 2021 and 2022, either from their dependencies on others or their own limited or vulnerable capabilities. The drought and fires that happened in Samsung’s Austin, Texas, fab last year and the similar events that occurred in Taiwan’s TSMC fabs are clear reminders that no one is immune to supply-side issues in the context of climate change and geopolitical uncertainties.

The next few years will be a race to add new industrial capacities, combined with renewed technological capabilities and a high level of consumer demand. Predictions are very difficult, especially if it’s about the future! With our CIS monitor quarterly publication, we make sure to stick to reality and include some accountability in our forecast. In our view, the future is bright for CIS, but large vulnerabilities exist from the economic and geopolitical context. Let us all make this a well-informed journey with the CIS Monitor publications.


“NIKKOR – The Thousand and One Nights (Tale 84) has been released”

Nikon | Imaging Products        Go to the original article...


In-pixel compute: IEEE Spectrum article and Nature Materials paper

Image Sensors World        Go to the original article...

A paper by Dodda et al. from a research group in the Material Science and Engineering department at Pennsylvania State University was recently published in Nature Materials. 


Active pixel sensor matrix based on monolayer MoS2 phototransistor array


In-sensor processing, which can reduce the energy and hardware burden for many machine vision applications, is currently lacking in state-of-the-art active pixel sensor (APS) technology. Photosensitive and semiconducting two-dimensional (2D) materials can bridge this technology gap by integrating image capture (sense) and image processing (compute) capabilities in a single device. Here, we introduce a 2D APS technology based on a monolayer MoS2 phototransistor array, where each pixel uses a single programmable phototransistor, leading to a substantial reduction in footprint (900 pixels in ∼0.09 cm2) and energy consumption (100s of fJ per pixel). By exploiting gate-tunable persistent photoconductivity, we achieve a responsivity of ∼3.6 × 107 A W−1, specific detectivity of ∼5.6 × 1013 Jones, spectral uniformity, a high dynamic range of ∼80 dB and in-sensor de-noising capabilities. Further, we demonstrate near-ideal yield and uniformity in photoresponse across the 2D APS array.


 Fig 1: 2D APS. a, 3D schematic (left) and optical image (right) of a monolayer MoS2 phototransistor integrated with a programmable gate stack. The local back-gate stacks, comprising atomic layer deposition grown 50 nm Al2O3 on sputter-deposited Pt/TiN, are patterned as islands on top of an Si/SiO2 substrate. The monolayer MoS2 used in this study was grown via an MOCVD technique using carbon-free precursors at 900 °C on an epitaxial sapphire substrate to ensure high film quality. Following the growth, the film was transferred onto the TiN/Pt/Al2O3 back-gate islands and subsequently patterned, etched and contacted to fabricate phototransistors for the multipixel APS platform. b, Optical image of a 900-pixel 2D APS sensor fabricated in a crossbar architecture (left) and the corresponding circuit diagram showing the row and column select lines (right).

Fig. 2: Characterization of monolayer MoS2. a, Structure of MoS2 viewed down its c axis with atomic-resolution HAADF-STEM imaging at an accelerating voltage of 80 kV. Inset: the atomic model of 2H-MoS2 overlayed on the STEM image. b, SAED of the monolayer MoS2, which reveals a uniform single-crystalline structure. c,d, XPS of Mo 3d (c) and S 2p (d) core levels of monolayer MoS2 film. e,f, Raman spectra (e) and corresponding spatial colourmap of peak separation between the two Raman active modes, E12g and A1g, measured over a 40 µm × 40 µm area, for as-grown MoS2 film (f). g,h, PL spectra (g) and corresponding spatial colourmap of the PL peak position (h), measured over the same area as in f. The mean peak separation was found to be ~20.2 cm−1 with a standard deviation of ~0.6 cm−1 and the mean PL peak position was found to be at ~1.91 eV with a standard deviation of ~0.002 eV. i, Map of the relative crystal orientation of the MoS2 film obtained by fitting the polarization-dependence of the SHG response shown in j, which is an example polarization pattern obtained from a single pixel of i by rotating the fundamental polarization and collecting the harmonic signal at a fixed polarization.
Fig. 3: Device-to-device variation in the characteristics of MoS2 phototransistors. a, Transfer characteristics, that is, source to drain current (IDS) as a function of the local back-gate voltage (VBG), at a source-to-drain voltage (VDS) of 1 V and measured in the dark for 720 monolayer MoS2 phototransistors (80% of the devices that constitute the vision array) with channel lengths (L) of 1 µm and channel widths (W) of 5 µm. b–d, Device-to-device variation is represented using histograms of electron field-effect mobility values (μFE) extracted from the peak transconductance (b), current on/off ratios (rON/OFF) (c), subthreshold slopes (SS) over three orders of magnitude change in IDS (d) and threshold voltages (VTH) extracted at an isocurrent of 500 nA µm−1 for 80% of devices in the 2D APS array (e). f, Pre- and post-illumination transfer characteristics of 720 monolayer MoS2 phototransistors after exposure to white light with Pin = 20 W m−2 at Vexp = −3 V for τexp = 1 s. g–j, Histograms of dark current (IDARK) (green) and photocurrent (IPH) (yellow) (g), the ratio of post-illumination photocurrent to dark current (rPH) (h), responsivity (R) (i) and detectivity (D*) (j), all measured at VBG = −1 V.

Fig. 4: HDR and spectral uniformity. a–c, The post-illumination persistent photocurrent (IPH) read out using VBG = 0 V and VDS = 1 V under different exposure times (τexp) is plotted against Pin for Vexp = −2 V at red (a), green (b) and blue (c) wavelengths. Clearly, the 2D APS demonstrates HDR for all wavelengths investigated. d–f, However, the 2D APS displays spectral non-uniformity in the photoresponse, which can be adjusted by exploiting gate-tunable persistent photoconductivity, that is, by varying Vexp. This is shown by plotting IPH against Pin for different Vexp at red (d), green (e) and blue (f) wavelengths.

 Fig. 5: Photodetection metrics. a–c, Responsivity (R) as a function of Vexp and Pin for τexp = 100 ms for red (a), green (b) and blue (c) wavelengths. R increases monotonically with the magnitude of Vexp. d, Transfer characteristics of a representative 2D APS in the dark and post-illumination at Vexp = −6 V with Pin = 0.6 W m−2 for τexp = 200 s and VDS = 6 V. e, R as a function of VBG. For VDS = 6 V and VBG = 5 V we extract an R value of ~3.6 × 107 A W−1. f, Specific detectivity (D*) as a function of VBG at different VDS. At lower VBG, both R and Inoise, that is, the dark current obtained from d, are low, leading to lower D*, whereas at higher VBG both R and Inoise are high, also leading to lower D*. Peak D* can reach as high as ~5.6 × 1013 Jones. g, Energy consumption per pixel (E) as a function of Vexp.

Fig. 6: Fast reset and de-noising. a, After the read out, each pixel can be reset by applying a reset voltage (Vreset) for time periods as low as treset = 100 µs. b, The conductance ratio (CR), defined as the ratio between the conductance values before and after the application of a reset voltage, is plotted against different Vreset. c, Energy expenditure for reset operations under different Vreset. d, Heatmaps of conductance (G) measured at VBG = 0 V from the image sensor with and without Vreset when exposed to images under noisy conditions. Clearly, application of Vreset helps in de-noising image acquisition.
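The figures of merit quoted in the abstract and captions above — responsivity, specific detectivity and dynamic range — follow standard photodetector definitions, sketched below. These are the textbook formulas, not the paper's exact extraction procedure (bias conditions and isocurrent thresholds differ in detail):

```python
import math

def responsivity(i_ph, p_in, area):
    """R = photocurrent / optical power incident on the pixel (A/W).
    p_in is irradiance in W/m^2, area is the pixel area in m^2."""
    return i_ph / (p_in * area)

def specific_detectivity(r, area_cm2, bandwidth_hz, i_noise):
    """D* = R * sqrt(A * B) / I_noise, in Jones (cm*sqrt(Hz)/W),
    where A is the area in cm^2, B the bandwidth, I_noise the noise current."""
    return r * math.sqrt(area_cm2 * bandwidth_hz) / i_noise

def dynamic_range_db(p_max, p_min):
    """Optical dynamic range over the detectable irradiance span, in dB.
    The ~80 dB quoted above corresponds to a 10^4 : 1 irradiance ratio."""
    return 20.0 * math.log10(p_max / p_min)
```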

This work was covered in the IEEE Spectrum magazine in an article titled "New Pixel Sensors Bring Their Own Compute: Atomically thin devices that combine sensing and computation also save power".


In the new study, the researchers sought to add in-sensor processing to active pixel sensors to reduce their energy and size. They experimented with the 2D material molybdenum disulfide, which is made of a sheet of molybdenum atoms sandwiched between two layers of sulfur atoms. Using this light-sensitive semiconducting material, they aimed to combine image-capturing sensors and image-processing components in a single device.

The scientists developed a 2D active pixel sensor array in which each pixel possessed a single programmable phototransistor. These light sensors can each perform their own charge-to-voltage conversion without needing any extra transistors.

The prototype array contained 900 pixels in 9 square millimeters, with each pixel about 100 micrometers across. In comparison, pixels in state-of-the-art CMOS sensors from OmniVision and Samsung have reached sizes of about 0.56 µm. However, commercial CMOS sensors also require additional circuitry to detect low light levels, increasing their overall area, which the new array does not.


VoxelSensors and OQmented collaborate on laser scanning-based 3D perception to blend the physical with digital worlds

Image Sensors World        Go to the original article...

BRUSSELS, Belgium and ITZEHOE, Germany, Dec. 20, 2022 (GLOBE NEWSWIRE) -- VoxelSensors, the inventor of Switching Pixels®, a revolutionary 3D perception technology, and OQmented, the technology leader in MEMS-based AR/VR display and 3D sensing solutions, have entered a strategic partnership. The collaboration focuses on the system integration and commercialization of a high-performance 3D perception system for AR/VR/MR and XR devices. Both companies will demonstrate this system and their technologies during CES 2023 in Las Vegas.

Switching Pixels® resolves major challenges in 3D perception for AR/VR/MR/XR devices. The solution is based on laser beam scanning (LBS) technology to deliver accurate and reliable 3D sensing without compromising on power consumption, data latency or size. VoxelSensors’ key patented technologies ensure optimal operation under any lighting condition and with concurrent systems. Their new sensor architecture provides asynchronous tracking of an active light source or pattern. Instead of acquiring frames, each pixel within the sensor array only generates an event upon detecting active light signals, with a repetition rate of up to 100 MHz.

This system is enabled through OQmented’s unique Lissajous scan pattern: in contrast to raster scanning which works line by line to complete a frame, the Lissajous trajectories scan much faster and are created very power efficiently. They can capture complete scenes and fast movements considerably quicker and require less data processing. That makes this particular technique essential for the low latency and the power efficiency of the combined perception system.
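A Lissajous trajectory arises from driving the two mirror axes sinusoidally at different resonant frequencies. The sketch below (frequencies and amplitudes chosen arbitrarily for illustration, not OQmented's actual drive parameters) shows how a near-integer frequency ratio sweeps the whole field quickly and densifies coverage over time:

```python
import numpy as np

def lissajous(fx, fy, t, phase=np.pi / 2, amp=1.0):
    """Lissajous scan: both axes oscillate sinusoidally, so the beam crosses
    the full field on every oscillation instead of building it line by line
    as a raster scan does."""
    x = amp * np.sin(2 * np.pi * fx * t)
    y = amp * np.sin(2 * np.pi * fy * t + phase)
    return x, y

# Nearby (coprime-ish) frequencies make the pattern precess, so coverage of
# the field of view densifies the longer the scan runs.
t = np.linspace(0.0, 1.0, 100_000)
x, y = lissajous(fx=300, fy=307, t=t)
```

Resonant sinusoidal drive is also why the approach is power-efficient: the mirror is excited at its mechanical resonance rather than forced through the sharp turnarounds of a raster scan.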

“The partnership with VoxelSensors is a great opportunity to unlock the potential of Lissajous laser beam scanning for 3D perception in lightweight Augmented Reality glasses,” said Ulrich Hofmann, co-CEO/CTO and co-founder of OQmented. “We are proud to deliver the most efficient scanning solution worldwide which enables the amazing products of our partner, bringing us one step closer to our goal of allowing product developers to build powerful but also stylish AR glasses.”

“At VoxelSensors, we wanted to revolutionize the perception industry. For too long, innovation in our space has focused on data processing, while there is so much efficiency to gain when working on the boundaries of photonics and physics. Combined with OQmented technology, we have the ability to transform the industry, enabling strong societal impact in multiple verticals, such as Augmented and Virtual Reality,” explains Johannes Peeters, founder and CEO of VoxelSensors. “Blending the physical and virtual worlds will create astonishing experiences for consumers and productivity gains in the enterprise world.”

This cooperation between two fabless deep tech semiconductor startups demonstrates Europe’s innovation capabilities in the race to produce next-generation technologies for AR/XR/VR and many other applications. These are crucial to Europe’s strategic objective of increasing its market share in semiconductors through key contributions of EU fabless companies as part of the European Chips Act.


ESPROS voted No. 1 optoelectronic company of 2022

Image Sensors World        Go to the original article... 

The Swiss company has been voted the No. 1 optoelectronic company of 2022 by the influential Semiconductor Review publication, which went so far as to say ESPROS is “shaping a new paradigm of Time of Flight technologies”, with exceptional performance under full sunlight, with moving objects, and with varying target reflectivity. ESPROS’ unique technology and its ability to help clients analyze an application and offer proven engineering solutions have ensured its growth as a custom ASIC chip manufacturer and 3D TOF module designer.

The company’s true system-on-chip TOF imager enables improved time delayed imaging and fluorescent lifetime imaging outcomes.

Merging 3D imaging and optical sensors for mass-market applications requires very fast time-resolving capabilities plus high sensitivity in the NIR; conventional manufacturing processes are not robust enough to deal with background light, movement, and varying reflectivity. That’s where ESPROS has a major advantage, having developed a backside-illuminated imager that merges CCD and CMOS technology.

The ESPROS approach means expensive peripheral components such as FPGAs and A/D converters are not required, which also makes ESPROS products more cost-effective and compact. ESPROS Photonics offers a wide range of TOF chips and line imagers as well as sensor modules, using its proprietary OHC15L silicon imager technology. Meanwhile, its off-the-shelf reference-design 3D modules speed up a customer’s time to market.

Full article in Semiconductor Review available here:


MagikEye to Present Disruptive 3D Sensing with Invertible Light™ Image Sensor Technology at CES

Image Sensors World        Go to the original article...

From Businesswire:

STAMFORD, Conn.--(BUSINESS WIRE)--Magik Eye Inc., an innovative 3D sensing company, will be holding demonstrations of its latest Invertible Light™ Technology (ILT) at the 2023 Consumer Electronics Show in Las Vegas, Nevada. ILT is a patented alternative to older Time of Flight and Structured Light solutions, enabling the smallest, fastest and most power-efficient 3D sensing method. At its essence, ILT uses a patent-protected regular dot projector pattern versus the random dot projection currently used by Structured Light. This allows for transformative simplicity of design, compute and form factor. “We see that the simplicity of ILT is driving demand for automotive and smarter home use cases. As we see more use cases opening up for the robotics age that lies ahead, we envision a world where there is 3D everywhere with ILT,” said Takeo Miyazawa, Founder & CEO of MagikEye.

CES 2023 will take place in Las Vegas on Jan. 5-8, 2023. Attendees will experience new technologies from global brands, hear about the future of technology from thought leaders and collaborate face-to-face with other attendees. Live demonstrations of MagikEye’s latest ILT solutions for next-gen 3D sensing will be held from January 5-8 at the Luxor Hotel. Demonstration times are limited, and private reservations will be accommodated by contacting the company.

About Magik Eye Inc.
Founded in 2015, Magik Eye Inc. has a family of 3D depth sensing solutions that support a wide range of applications for smartphones, robotics and surveillance. Magik Eye’s patent-protected technology is based on Invertible Light™, which enables the smallest, fastest & most power-efficient 3D sensing.


Yole webinar on SWIR applications for consumer markets

Image Sensors World        Go to the original article...

Yole published a webinar on potential SWIR imaging applications for the mass market.


LiDAR News: Quanergy Files for Bankruptcy

Image Sensors World        Go to the original article...

Coverage in Wall Street Journal [paywalled]:

From Businesswire

Quanergy to Facilitate Sale of Business Through Voluntary Chapter 11 Process, Announces Leadership Changes

SUNNYVALE, Calif.--(BUSINESS WIRE)--Quanergy Systems, Inc. (OTC: QNGY) (“Quanergy” or the “Company”), a leading provider of LiDAR sensors and smart 3D solutions, today announced that the Company initiated an orderly sale process for its business. To facilitate the sale and maximize value, the Company filed for protection under Chapter 11 (“Chapter 11”) of the U.S. Bankruptcy Code (the “Bankruptcy Code”) in the United States Bankruptcy Court for the District of Delaware (the “Bankruptcy Court”) and intends to pursue a sale of the business under section 363 of the Bankruptcy Code.

Quanergy also announced today that Kevin Kennedy, Chief Executive Officer, will retire effective December 31, 2022, but will continue to serve as non-executive Chair of the Board of Directors. Mr. Kennedy will transition executive leadership to a newly appointed Chief Restructuring Officer and President, Lawrence Perkins.

“It has been my honor to serve as CEO at Quanergy for the past 2.5 years,” said Kevin Kennedy, Chief Executive Officer of Quanergy. “During this time, the company shifted our technology focus towards security and industrial applications which enabled the company to grow revenue by serving customer needs in a new marketplace. The Board and I have agreed that it is an appropriate time for me to transition day-to-day leadership to our capable newly appointed Chief Restructuring Officer. I will continue to provide guidance, continuity, and support as non-executive Board Chair.”

Mr. Perkins is the founder and Chief Executive Officer of SierraConstellation Partners, an interim management and advisory firm, which he founded in 2013. Mr. Perkins has served in a variety of senior-level positions, including interim CEO/President, Chief Restructuring Officer, board member, financial advisor, strategic consultant, and investment banker, to numerous private and public middle-market companies.

Prior to the filing of the Company’s Chapter 11 case, the Board of Directors and management evaluated a wide range of strategic alternatives to maximize value for all stakeholders. The Company also significantly reduced operating expenses and resolved significant patent litigation with Velodyne. Now with the protections afforded by the Bankruptcy Code, the Company intends to broaden its marketing efforts to potential purchasers interested in specific business segments or assets as well as continuing to seek a going concern sale of the business.

The Company expects to continue operations during the Chapter 11 process and seeks to complete an expedited sale process with Bankruptcy Court approval. To help fund and protect its operations, Quanergy intends to use available cash on hand along with normal operating cash flows to fund post-petition operations and costs in the ordinary course.

“Quanergy has made considerable efforts to address ongoing financial challenges stemming from volatile capital market conditions,” said Lawrence Perkins, Chief Restructuring Officer and President of Quanergy. “Despite these challenges, the Company has seen improving demand in the security, smart spaces, and industrial markets, and improvements in supply chain conditions. We are confident that Quanergy’s efforts have positioned the Company for a value-maximizing transaction during the Chapter 11 sale process. During the process, we will continue to prioritize the needs of our customers and I am thankful to the entire Quanergy team for their continued efforts and contributions to the business.”

The Company has filed customary motions with the Bankruptcy Court intended to allow Quanergy to maintain operations in the ordinary course including, but not limited to, paying employees and continuing existing benefits programs, meeting commitments to customers and fulfilling go-forward obligations, including vendor payments. Such motions are typical in the Chapter 11 process and Quanergy anticipates that they will be heard in the first few days of its Chapter 11 case.

For more information about the Company’s Chapter 11 case, including claims information, please visit or call our hotline at 855-613-0451 (for toll-free U.S. and Canada calls) or 949-889-0181 (for tolled international calls).

Cooley LLP is serving as counsel, Young Conaway Stargatt & Taylor LLP is serving as co-counsel, Raymond James & Associates, Inc. is serving as investment banker, and FTI Consulting is serving as financial advisor to Quanergy.

Go to the original article...

CES 2023 Award for Aeva Aeries II 4D LiDAR

Image Sensors World        Go to the original article...

From Businesswire

MOUNTAIN VIEW, Calif.--(BUSINESS WIRE)--Aeva® (NYSE: AEVA), a leader in next-generation sensing and perception systems, today announced that its Aeries™ II sensor has been named a CES® 2023 Innovation Awards Honoree. The prestigious CES Innovation Awards honor outstanding design and engineering in consumer technology products, and were given in advance of CES 2023.

The CES Innovation Award builds on growing recognition for Aeries II and its innovative 4D LiDAR™ technology, which was recently chosen as one of TIME’s Best Inventions of 2022.

“Our next-generation 4D LiDAR technology goes beyond legacy 3D LiDAR systems because of its unique instant velocity detection and long range performance capabilities, in addition to Ultra Resolution,” said Mina Rezk, Co-Founder and CTO at Aeva. “We are honored that Aeries II continues to receive further recognition with this CES Innovation Award because, put simply, we believe Aeva 4D LiDAR has the potential to change the game for passenger cars, commercial vehicles and robotaxis by making vehicle automation safer and more reliable.”

Aeva’s Aeries II 4D LiDAR sensor delivers breakthrough sensing and perception performance using Frequency Modulated Continuous Wave (FMCW) technology to directly detect the instant velocity of each point, in addition to precise 3D position at long range. Its capabilities go beyond legacy time-of-flight 3D LiDAR sensors to enable the next generation of driver assistance and autonomous vehicle capabilities, including:

  • Instant Velocity Detection: Directly measure velocity for each point of detection, in addition to 3D position, to perceive where things are, and precisely how fast they are moving.
  • Long Range Performance: Detect, classify and track objects such as vehicles, cyclists and pedestrians at long distances.
  • Ultra Resolution™: A real-time camera-level image providing up to 20 times the resolution of legacy time-of-flight LiDAR sensors.
  • Road Hazard Detection: Detect small objects on the roadway with greater confidence at up to twice the distance of legacy time-of-flight LiDAR sensors.
  • 4D Localization™: Per-point velocity data enables real-time vehicle motion estimation with six degrees of freedom to enable accurate vehicle positioning and navigation without the need for additional sensors, like IMU or GPS.
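The instant velocity capability above follows directly from the FMCW measurement principle: mixing the return with the outgoing chirp yields a beat frequency that encodes range, and the Doppler shift between an up-chirp and a down-chirp encodes radial velocity. A minimal sketch in Python, using illustrative chirp parameters (not Aeva's actual specifications):

```python
# Sketch: recovering range and radial velocity from the up-chirp and
# down-chirp beat frequencies of a triangular FMCW LiDAR.
# All parameter values are illustrative, not from Aeva.

C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # assumed operating wavelength, m
BANDWIDTH = 1.0e9     # assumed chirp bandwidth, Hz
T_CHIRP = 10e-6       # assumed chirp duration, s

def range_and_velocity(f_beat_up, f_beat_down):
    """Recover range (m) and radial velocity (m/s) from the two beat
    frequencies. The range term is the average of the two beats; the
    Doppler term is half their difference."""
    slope = BANDWIDTH / T_CHIRP           # chirp slope, Hz/s
    f_range = (f_beat_up + f_beat_down) / 2
    f_doppler = (f_beat_down - f_beat_up) / 2
    distance = C * f_range / (2 * slope)
    velocity = f_doppler * WAVELENGTH / 2
    return distance, velocity
```

With a triangular chirp, the two beat frequencies separate cleanly into a range term (their average) and a Doppler term (half their difference); a time-of-flight sensor, by contrast, has to infer velocity by differencing positions across frames.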

Aeries II is the first sensor on the market to integrate Aeva’s unique LiDAR-on-chip technology which integrates all key sensor components including transmitters, receivers and optics onto silicon photonics in a compact module. This design uses no fiber optics, resulting in a highly automated manufacturing process that allows Aeva to scale deployment of its products and lower costs to meet the needs of automotive OEMs and other volume customers.

Detailed information about the CES 2023 Innovation Awards honorees can be found at In January 2023, Aeva will join other honorees to display their products in the Innovation Awards Showcase area at CES 2023. At the Aeva Booth (#6001, LVCC – West Hall), Aeva will showcase its Aeries II 4D LiDAR sensor alongside its unique LiDAR-on-chip technology that integrates all key LiDAR components onto a silicon photonics chip in a compact module.

Go to the original article...

EETimes article on sensor fusion for neuromorphic vision

Image Sensors World        Go to the original article...


Improving Sensor Fusion for Neuromorphic Vision (Nov 21, 2022)

The article links two videos about event cameras. The first one is a tutorial about event cameras from 2020:


The second video shows an example of a commercially available event camera called Davis camera (made by iniVation AG) which has a CMOS image sensor together with an event sensor and allows sensor fusion, giving the best of both worlds:



The article ends by highlighting two key challenges for wider applicability of event-based image sensors: (1) non-standard processing techniques that are different from conventional RGB data processing pipelines, (2) high power requirements of event data processing schemes.

Go to the original article...

Nikon releases the NIKKOR Z 40mm f/2 (SE), a compact and lightweight prime lens for the Nikon Z mount system

Nikon | Imaging Products        Go to the original article...

Go to the original article...

"Burst Vision" using SPAD Cameras

Image Sensors World        Go to the original article...

In a paper titled "Burst Vision Using Single-Photon Cameras", Sizhuo Ma, Paul Mos, Edoardo Charbon and Mohit Gupta from University of Wisconsin-Madison and École polytechnique fédérale de Lausanne write:

Single-photon avalanche diodes (SPADs) are novel image sensors that record the arrival of individual photons at extremely high temporal resolution. In the past, they were only available as single pixels or small-format arrays, for various active imaging applications such as LiDAR and microscopy. Recently, high-resolution SPAD arrays up to 3.2 megapixel have been realized, which for the first time may be able to capture sufficient spatial details for general computer vision tasks, purely as a passive sensor. However, existing vision algorithms are not directly applicable on the binary data captured by SPADs. In this paper, we propose developing quanta vision algorithms based on burst processing for extracting scene information from SPAD photon streams. With extensive real-world data, we demonstrate that current SPAD arrays, along with burst processing as an example plug-and-play algorithm, are capable of a wide range of downstream vision tasks in extremely challenging imaging conditions including fast motion, low light (<5 lux) and high dynamic range. To our knowledge, this is the first attempt to demonstrate the capabilities of SPAD sensors for a wide gamut of real-world computer vision tasks including object detection, pose estimation, SLAM, and text recognition. We hope this work will inspire future research into developing computer vision algorithms in extreme scenarios using single-photon cameras.
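The burst-processing idea can be illustrated with a toy simulation. A SPAD pixel reports a binary value per frame (photon detected or not), so averaging many binary frames and inverting the exponential detection nonlinearity recovers an estimate of the underlying photon flux. This is the standard quanta-image-sensor model, sketched here with made-up flux values rather than anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a burst of binary SPAD frames for a two-pixel "scene" with
# different photon fluxes (mean photons per pixel per frame).
flux = np.array([0.05, 1.5])              # dim pixel vs bright pixel
n_frames = 20000

# A SPAD pixel fires (1) if at least one photon arrives in the frame:
# P(detection) = 1 - exp(-flux), assuming Poisson photon arrivals.
p_detect = 1.0 - np.exp(-flux)
frames = rng.random((n_frames, flux.size)) < p_detect   # binary burst

# Burst processing: average the binary frames, then invert the
# detection nonlinearity to estimate the underlying photon flux.
mean_binary = frames.mean(axis=0)
flux_est = -np.log(1.0 - mean_binary)

print(flux_est)   # ≈ [0.05, 1.5]
```

With enough frames in the burst, the estimate converges to the true flux for both the dim and the bright pixel, which is what lets a binary sensor cover a very high dynamic range.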

Full paper is available here:

The paper will be presented at the upcoming Winter Conference on Applications of Computer Vision (WACV) in January 2023. 


Video summary

Dealing with motion blur in extremely low light

Dealing with extreme dynamic range

A large dataset of over 50 million binary burst frames for a wide range of computer vision tasks

Go to the original article...

Global CMOS Image Sensor Market to Grow at 6.32% CAGR, Expected to Reach USD 39.54 Billion by 2031

Image Sensors World        Go to the original article...




Research Nester recently published a report on "CMOS Image Sensor Market Analysis by Technology; and by End Use Industry – Global Supply & Demand Analysis & Opportunity Outlook 2018-2031."

The Global CMOS Image Sensor Market is estimated to grow at a CAGR of 6.32% over the forecast period, i.e., 2022-2031. Rising demand for high-definition image-capturing devices is expected to propel the market growth. For instance, Sony Corporation unveiled the IMX485 type 1/1.2 4K-resolution back-illuminated CMOS image sensor and the IMX415 type 1/2.8 4K CMOS image sensor in June 2019. Sony created these two security camera sensors to address the constantly growing demand for security cameras in a range of monitoring applications, such as anti-theft, disaster warning, and traffic monitoring systems, or commercial complexes.
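As a quick sanity check on the headline numbers (our back-calculation, not a figure from the report), a 6.32% CAGR reaching USD 39.54 billion in 2031 over nine compounding years implies a 2022 base of roughly USD 22.8 billion:

```python
# Back-calculate the implied 2022 market size from the report's
# headline figures. The implied base is our own arithmetic, not a
# number stated in the report.

def project(base, cagr, years):
    """Compound a base value forward by `years` at annual rate `cagr`."""
    return base * (1 + cagr) ** years

target_2031 = 39.54          # USD billion, from the report
cagr = 0.0632
years = 2031 - 2022          # nine compounding periods

implied_base_2022 = target_2031 / (1 + cagr) ** years
print(f"Implied 2022 base: USD {implied_base_2022:.2f} billion")
# Compounding that base forward recovers the 2031 target:
print(f"Projected 2031:    USD {project(implied_base_2022, cagr, years):.2f} billion")
```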

Furthermore, there has been growing demand for CMOS image sensors in the healthcare industry, where they are commonly used to observe patients during surgery. A recent report by the National Library of Medicine states that a staggering 310 million major procedures are carried out each year around the world, with between 40 and 50 million taking place in the United States and 20 million in Europe.

Global CMOS Image Sensor Market: Key Takeaways

  •  Asia Pacific to hold the largest market revenue
  •  Popularity of smartphones to propel market growth in North America region
  •  Consumer electronics segment to garner the largest revenue

Rising Demand for Security & Surveillance to Drive Market Growth
CMOS image sensors, which convert photoelectrical signals into digital signals, are extensively used for security and surveillance. Owing to increasing instances of theft and crime, more security cameras with CMOS sensors are expected to be installed, driving market growth; it is estimated that approximately 82% of burglars check for the presence of an alarm system before breaking in.
However, such cameras can’t be installed everywhere owing to privacy concerns, so many organizations have come up with innovative products that are anticipated to fuel the market. For instance, in December 2021, Canon revealed a brand-new outdoor 4K camera, built around a 4K UHD CMOS image sensor, that can be used as both a conventional camera and a security camera.

Global CMOS Image Sensor Market: Regional Overview
The global CMOS image sensor market is segmented into five major regions including North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa region.
Government Initiative for Smart Cities to Drive Growth in Asia Pacific Region
The CMOS image sensor market in the Asia Pacific region is anticipated to garner the largest revenue of USD 17,759.3 Million by the end of 2031. Government initiatives for smart cities are expected to fuel market growth. The Ministry of Electronics and Information Technology in India has tasked ERNET India and IISc with developing the LoRa gateway (pole gateway), a low-cost compute device that can connect to cameras and to temperature, humidity, air quality, and other sensors, as part of the Internet of Things (IoT) Management Framework for Smart Cities.

Growing Demand for Consumer Electronics to Favour Growth in North America Region
Further, the North America region is expected to garner revenue of USD 12,579.0 Million by the end of 2031, growing at a CAGR of 6.14% during 2022-2031, with increasing demand for smartphones driving market growth. Approximately 85% of all mobile users in the US are expected to have a smartphone by 2025. Various electronic items, including smartphones, TVs, and wearable gadgets, contain sensors that are in huge demand in this region. Many smartphone manufacturers use image sensors in their phones; for instance, the Xiaomi 12S Ultra smartphone contains the world's biggest sensor in a smartphone. As part of the new line, Xiaomi has launched the 12S Series, which includes the Leica-engineered Ultra.
The study further incorporates Y-o-Y growth, demand & supply analysis, and forecast opportunity in:

  •  North America (U.S., Canada)
  •  Europe (U.K., Germany, France, Italy, Spain, Hungary, Belgium, Netherlands & Luxembourg, NORDIC [Finland, Sweden, Norway, Denmark], Poland, Turkey, Russia, Rest of Europe)
  •  Latin America (Brazil, Mexico, Argentina, Rest of Latin America)
  •  Asia-Pacific (China, India, Japan, South Korea, Indonesia, Singapore, Malaysia, Australia, New Zealand, Rest of Asia-Pacific)
  •  Middle East and Africa (Israel, GCC [Saudi Arabia, UAE, Bahrain, Kuwait, Qatar, Oman], North Africa, South Africa, Rest of Middle East and Africa).

Global CMOS Image Sensor Market, Segmentation by End Use Industry

  •  Consumer Electronics
  •  Medical
  •  Industrial
  •  Security & Surveillance
  •  Automotive & Transportation
  •  Aerospace & Defense

The consumer electronics segment is estimated to hold the largest revenue of USD 27,010.4 Million by the end of 2031. Increasing demand for CMOS sensors in consumer electronics is expected to boost market growth. CMOS technology is extensively used in smartphones: CMOS sensors are known for using less power, and hence their demand in smartphones is increasing. Instead of capturing the whole image in a single instant, a CMOS sensor captures the image line by line in a scanning fashion. Moreover, cameras with CMOS sensors offer better saturation capacity, owing to which many manufacturers are installing them in their smartphones. For instance, ON Semiconductor unveiled the newest CMOS image sensor in the XGS series, the XGS 16000, a 16 Mp sensor that offers excellent global shutter imaging for robotics and inspection systems in factories. The XGS 16000 delivers great performance at low power while offering the highest resolutions for typical 29 x 29 mm industrial cameras, consuming just 1 W at 65 fps. In North America, the segment is projected to generate the largest revenue of USD 8,576.4 Million by the end of 2031, while in the Asia Pacific, the segment is projected to register the largest revenue of USD 12,124.3 Million by the end of 2031.

Global CMOS Image Sensor Market, Segmentation by Technology

  •  Front Side Illumination (FSI)
  •  Back Side Illumination (BSI)

The back side illumination (BSI) segment is anticipated to garner the largest revenue by the end of 2031, growing at the highest CAGR of 6.68% over the forecast period. The growth can be attributed to the increasing use of BSI technology in high-quality, higher-pixel-count cameras. Smartphone producers’ growing preference for BSI technology is also expected to boost demand. For instance, with the 42-megapixel Sony Alpha A7R Mark II, Sony added a BSI full-frame sensor, and the Sony Cyber-shot RX10 II and RX100 IV both have "stacked" sensors that enable even faster continuous shooting and high-speed video recording. In the Asia Pacific, the segment is projected to grow at a CAGR of 7.34% during the forecast period, while in North America, the front side illumination (FSI) segment is projected to grow at a CAGR of 5.41% during the forecast period.
Few of the well-known market leaders in the global CMOS image sensor market that are profiled by Research Nester are STMicroelectronics International NV, Samsung Electronics America, Inc., Sony Semiconductor Solutions Corporation, ON Semiconductor Components Industries, LLC, Canon, Inc., SK Hynix Inc., OMNIVISION Technologies Inc., Hamamatsu Photonics K.K., Panasonic Industry Co. Ltd., and Teledyne Technologies Inc. and other key players.

Recent Developments in the Global CMOS Image Sensor Market

On December 15, 2021, Canon created the world's highest resolution 3.2 megapixel SPAD sensor and introduced a breakthrough low-light imaging camera that achieves outstanding colour reproduction even in dimly lit conditions.

On February 14, 2018, Panasonic Corporation revealed that it had created a breakthrough technology that enables simultaneous 450k high-saturation electrons, global shutter photography with sensitivity modulation, and 8K high-resolution (36M pixels) imaging using a CMOS image sensor with an organic photoconductive film (OPF).

Go to the original article...

Videos of the day [AMS-OSRAM, ESPROS, Sony]

Image Sensors World        Go to the original article...

The new Mira global shutter image sensor from ams OSRAM advances 2D and 3D sensing with high quantum efficiency at visible and NIR wavelengths. The Mira sensors come in a chip-scale package with an optimized footprint and an industry-leading ratio of size to resolution, enabled by state-of-the-art stacked back-side illumination technology that shrinks the package footprint and gives greater design flexibility to manufacturers of smart glasses and other space-constrained products. The Mira image sensors are very small, offer superior image quality in low-light conditions, and, with their many on-chip operations, open up many new possibilities for developers.


ESPROS Time-of-Flight products were developed for outdoor use and handle background light very well. These outdoor scenes were taken with our TOFcam-660, in which an epc660 is installed. The epc660 has a resolution of 320x240 pixels and can easily be used for outdoor applications with a lot of ambient light, even in direct sunlight of 100 klux. Thanks to the good resolution, the HDR mode with different integration times, and the aforementioned outdoor performance, various applications that require a clean distance image (depth map) can be developed.
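The HDR mode with different integration times mentioned above is commonly implemented by merging a long-integration acquisition (better SNR) with a short one (no saturation). A sketch of one such merge rule follows; the function name, saturation level, and array layout are our own illustration, not ESPROS' actual implementation:

```python
import numpy as np

FULL_WELL = 4000          # hypothetical saturation level, arbitrary DN

def merge_hdr_depth(depth_long, amp_long, depth_short, amp_short,
                    ratio, sat_level=FULL_WELL):
    """Merge two ToF acquisitions taken with different integration times.
    Use the long-integration depth where its amplitude is below the
    saturation level (better SNR); fall back to the short-integration
    depth elsewhere. `ratio` is long/short integration time, used to
    rescale amplitudes onto a common scale."""
    saturated = amp_long >= sat_level
    depth = np.where(saturated, depth_short, depth_long)
    # Effective amplitude on the long-exposure scale, for downstream filtering:
    amplitude = np.where(saturated, amp_short * ratio, amp_long)
    return depth, amplitude
```

A bright target that saturates the long exposure thus still gets a valid depth value from the short exposure, while dim targets keep the lower-noise long-exposure measurement.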

[Read more...]

Go to the original article...

Tamron 20-40mm f2.8 Di III review

Cameralabs        Go to the original article...

The Tamron 20-40mm f2.8 Di III is a wide-angle zoom designed for full-frame Sony mirrorless. See how it compares to Sony's FE 16-35mm f2.8 GM and Tamron's 17-28mm f2.8 Di III in my review!…

Go to the original article...

New Canon option for semiconductor lithography system back-end process contributes to 3D advanced packaging technologies, enables mass production of dense circuitry with exposure fields of up to 100 mm x 100 mm

Newsroom | Canon Global        Go to the original article...

Go to the original article...

NIT SWIR Portfolio

Image Sensors World        Go to the original article...

Press release from NIT (New Imaging Technologies) about their wide range of SWIR offerings:

NIT is widely known for its large range of SWIR cameras designed for industrial, defense, and medical markets. Less known is that NIT designs and manufactures in-house all the InGaAs sensors embedded into our cameras. We master the design of silicon read-out circuits, InGaAs photodiode arrays, and assembly technologies such as 3D stacking.

Our recent investment in a new clean room facility and back-end process machines will bring our production capacity to several tens of thousands of sensors per year with the highest quality. ​

Such vertical integration allows us to offer a line of cameras with specific features adapted to our customer markets and applications. Our cameras and their performance are unique, as they don’t use third-party sensors. The sensitivity, noise level, frame rate, pixel pitch, dynamic range, and pixel count of our InGaAs sensors make our cameras the best in their class.​


Go to the original article...

2023 International Solid-State Circuits Conference (ISSCC) Feb 19-23, 2023

Image Sensors World        Go to the original article...

ISSCC will be held as an in-person conference Feb 19-23, 2023 in San Francisco. 

An overview of the program is available here:

Some sessions of interest to image sensors audience below:

Tutorial on  "Solid-State CMOS LiDAR Sensors" (Feb 19)
Seong-Jin Kim, Ulsan National Institute of Science and Technology, Ulsan, Korea

This tutorial will present the technologies behind single-photon avalanche-diode (SPAD)-based solid-state CMOS LiDAR sensors that have emerged to realize level-5 automotive vehicles and the metaverse AR/VR in mobile devices. It will begin with the fundamentals of direct and indirect time-of-flight (ToF) techniques, followed by structures and operating principles of three key building blocks: SPAD devices, time-to-digital converters (TDCs), and signal-processing units for histogram derivation. The tutorial will finally introduce the recent development of on-chip histogramming TDCs with some state-of-the-art examples.
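The direct and indirect ToF techniques the tutorial covers reduce to two simple formulas: direct ToF converts a measured round-trip pulse time into distance, while indirect ToF converts the phase shift of a modulated carrier. A minimal sketch with illustrative values, not tied to any particular sensor:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dtof_distance(round_trip_time_s):
    """Direct ToF: distance from the measured round-trip time of a pulse."""
    return C * round_trip_time_s / 2

def itof_distance(phase_rad, f_mod_hz):
    """Indirect ToF: distance from the phase shift of a continuous-wave
    modulated signal. Unambiguous only within c / (2 * f_mod)."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

# A 10 m target returns a pulse after ~66.7 ns:
print(dtof_distance(66.7e-9))         # ≈ 10 m
# The same 10 m target at 10 MHz modulation shows a phase shift of
# phase = 4*pi*f*d/c (~4.19 rad):
phase = 4 * math.pi * 10e6 * 10 / C
print(itof_distance(phase, 10e6))     # ≈ 10 m
```

The indirect formula also shows why iToF sensors face a range-ambiguity trade-off: at 10 MHz modulation the unambiguous range is c / (2 · f_mod) ≈ 15 m, and raising the modulation frequency improves precision but shrinks that window.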

Seong-Jin Kim received a Ph.D. degree from KAIST, Daejeon, South Korea, in 2008 and joined the Samsung Advanced Institute of Technology to develop 3D imagers. From 2012 to 2015, he was with the Institute of Microelectronics, A*STAR, Singapore, where he was involved in designing various sensing systems. He is currently an associate professor at Ulsan National Institute of Science and Technology, Ulsan, South Korea, and a co-founder of SolidVUE, a LiDAR startup company in South Korea. His current research interests include high-performance imaging devices, LiDAR systems, and biomedical interface circuits and systems.

[Read more...]

Go to the original article...

ESPROS supplies ToF sensing to Starship Technologies

Image Sensors World        Go to the original article...

ESPROS supplies world leader for delivery robots

Sargans, 2022/11/29

Starship Technologies' autonomous delivery robots implement ESPROS’ epc660 Time-of-Flight chip. Starship Technologies, a pioneering US robotics technology company headquartered in San Francisco with its main engineering office in Estonia, is the world’s leading provider of autonomous last-mile delivery services.

What was once considered science fiction is now a fact of modern life: in many countries robots deliver a variety of goods, such as parcels, groceries, and medications. Starship’s robots are a common sight on university campuses and in public areas.

Using a combination of sensors, artificial intelligence, machine learning and GPS to navigate accurately, delivery robots need to operate in darkness as well as in bright sunlight. ESPROS sensors excel in both conditions.

The outstanding ambient-light performance of ESPROS’ epc660 chip, together with its very high quantum efficiency, provided the breakthrough that Starship Technologies needed to further increase autonomy in all ambient light conditions. It wasn’t possible to achieve the same level of performance with other technologies.

ESPROS’ epc660 is able to detect objects over long distances using very low power. This, together with its small size, results in lower system costs. The success of this chip lies in ESPROS’ years of development and strong technological know-how. The combination of its unique Time-of-Flight technology with Starship Technologies' position as the leading commercial autonomous delivery service lies at the heart of over 3.5 million commercial deliveries and over 4 million miles driven around the world.

"The future of delivery, today: this is our bold promise," says Lauri Vain (VP of Engineering at Starship), adding, "With a combination of mobile technology, our global fleet of autonomous robots, and partnerships with stores and restaurants, we are helping to make the local delivery industry faster, cleaner, smarter and more cost-efficient, and we are very excited about our partnership with ESPROS and its unique chip technology."

Go to the original article...