Russell Kirsch, Inventor of the Pixel, Dies


DigitalTrends, Wikipedia: Russell A. Kirsch, recognized as the inventor of the pixel, has died from a form of Alzheimer’s disease at the age of 91. He invented the pixel in 1957 while working at the National Bureau of Standards, now known as the National Institute of Standards and Technology. The first image was a digitally scanned photograph of his 3-month-old son and had a resolution of 176 x 176 pixels:


Organic Photo-Multiplying PD Thesis


Daegu Gyeongbuk Institute of Science & Technology, Korea, publishes MSc thesis "Fabrication of Ultra-Thin Photo-Multiplication Photodiode by Using a Non Fullerene-Based 2D Planar Small Molecular Semiconductor as an Efficient Optical Sensitizer" by Neethipathi Deepan Kumar.

"In this work, we explore the possibility of using nonfullerene and a planar n-type small molecular semiconductor, 2,2′‐((2Z,2′Z)‐((4,4,9,9‐tetrahexyl‐4,9‐dihydro‐s‐indaceno[1,2‐b:5,6‐b′]dithiophene‐2,7‐diyl)bis(methanylylidene))bis(3‐oxo‐2,3‐dihydro‐1H‐indene‐2,1‐diylidene))dimalononitrile (IDIC) as an optical sensitizer to improve the EQE and reduce the thickness of the photoactive layer to 70 nm. A key idea of this work is utilizing the unique photophysical properties of IDIC with an anisotropic electron transport. As is well known, contrary to spherical PCBMs (PC61BM and PC71BM) with an isotropic charge transport property, the 2D planar IDIC with an inherently anisotropic packing structure tends to hinder the formation of the effective electron percolation pathways. This is a very important requirement for the optical sensitizer of PM-OPDs because it leads to more efficient charge trapping. In addition, IDIC possesses a relatively higher absorption coefficient in the visible range compared to PC71BM, which can contribute to a higher photocurrent. Together with a deeper lowest unoccupied molecular orbital (LUMO) level of IDIC compared to PC71BM, all the mentioned photophysical properties of IDIC can be much more beneficial as optical sensitizers of the PMOPDs. Layer-by-layer deposition of P3HT as a photoactive layer and IDIC as an optical sensitizer enables more effective PM operation, yielding high EQE exceeding 130,000% and specific detectivity over 10^12 Jones at 150-nm-thick active layer. Furthermore, due to more facile spatial confinement of the charge carriers, the photoactive layer thickness was further decreased down to 70 nm while maintaining reasonably high EQE of 60,000% as well as specific detectivity over 1012 Jones. Physical origins of such synergetic effects of using IDIC as an optical sensitizer are fully discussed with various photophysical analyses in the forthcoming sections."


Ouster Explains Details of its SPAD LiDAR Sensor


An Ouster 2018 article explains the design choices behind its 850nm SPAD-based LiDAR. Among other blocks, the article covers the SPAD sensor and its data processing:

A Novel CMOS ASIC with SPAD Detectors

"CMOS SPADs have many practical advantages over the traditional approaches from other lidar manufacturers. Most important is that they can be directly integrated in a CMOS wafer, which makes it possible to incorporate massive amounts of signal processing on the silicon die right next to the detectors.

As lidar resolutions and data rates continue to increase, on-chip signal processing is essential - the current OS1-64 detector is capable of counting and storing over one trillion photons per second into on-chip memory. This is a titanic amount of data, and we’ve included over 100 GMACs per second (1 GMAC = 1 billion multiply accumulate operations) of signal processing logic in over 10 million transistors to ultimately produce the millions of 3D points per second that our customers use to drive cars, map environments, and identify obstacles.

Processing requirements are only going to go up, and Ouster is leading the way with our custom silicon.

SPADs are also moving along a Moore’s Law-esque performance curve. While today’s SPADs might be 2-5% efficient (efficiency here is measured as the percentage of photons that hit the detector that trigger a binary pulse. Most photons travel through the silicon without causing a pulse to trigger which is a bummer), new SPADs are 20-30% efficient (at 850nm) and efficiencies up to 80% may be possible as the technology matures. Increases in SPAD detector efficiency directly increase a sensor’s range and resolution - a 10x efficiency increase makes an OS1 with 640 lines of resolution not just possible but extremely likely. Unlike the legacy detector technologies used in lidar, our SPAD technology already achieves market leading performance but there is more than an order of magnitude improvement still left to go.
"

Operating in the 850nm band requires more attention to sunlight suppression. Ouster approaches this issue by concentrating its VCSEL emitter light into sparse points and blocking sunlight from outside of these points, somewhat similar to the Apple iPad LiDAR:


HDR Pixel with Tone Mapping


The December 2020 issue of the international journal Sensing and Imaging publishes the paper "On Wide Dynamic Range Tone Mapping CMOS Image Sensor" by Waqas Mughal (University of Southampton, UK) and Bhaskar Choubey (Universität Siegen, Germany).

"The dynamic range of a natural scene often covers over 6 decades of intensity from bright to dark areas. Typical image sensors, however, have limited ability to capture this dynamic range available in nature. Even after designing specific wide dynamic range (WDR) image sensors, displaying them on conventional media with limited ability requires computationally complex tone mapping. This paper proposed a novel CMOS pixel which can capture and perform tone mapping during data acquisition. The pixel requires a reference voltage to generate tone mapped response. A number of different reference signals are proposed and generated which can perform WDR operation. Nevertheless, fixed pattern noise (FPN) effects the performance of these pixel. A pixel model with simple parameter extraction procedure is described for a typical tone mapping operator. This model is then used to obtain a simple procedure for pixel calibration leading to reduced FPN. The new proposed pixel response is able to capture upto 6 decades of light intensity and reported FPN correction procedure produces 1% of FPN contrast error."


Night Vision Circa 1974


A Vimeo video "Night Vision R&D 1974 US Army; Research and Development Progress Report No. 53" shows how far imaging has advanced over the last 45 years:



There is also a YouTube version of this video.



Xiaomi Smartphone with Omnivision Sensor in the Main Camera Wins Top DXOMark Score


XDA Developers: Xiaomi announces its new flagship phone, the Mi 10 Ultra. Its main camera uses a "48MP, custom 1/1.32″ sensor (OV48C), 2.4µm pixels after pixel-binning." Most of the previous premium Xiaomi phones used Sony and Samsung sensors in the main camera.

DxOMark reports the highest score ever for the new smartphone:



Media: Huawei Allocates 10,000 Employees to LiDAR Development


cnTechPost, EqualOcean, NaijaTechNews, Observer Network, GizmoChina: Huawei's optoelectronics R&D center in Wuhan, employing 10,000 people, is developing automotive LiDAR, according to Wang Jun, president of the recently established Intelligent Automotive Solutions BU. The short-term goal is to create a 100-line LiDAR. Future plans include cost reduction to $200 or even $100.

Xu Zhijun, Huawei's rotating chairman, says that Huawei will build LiDAR, millimeter-wave radar, and other smart-car core sensors to create a new sensor ecosystem.

With that massive investment, Huawei competes against 130+ smaller LiDAR companies. Combined, these companies probably employ about the same 10,000 people, give or take. So, it is a contest between a well-organized, 10,000-strong army of Huawei designers and 130+ smaller teams scattered across the globe, each exploring different ideas. It would be interesting to see who wins in the end.


History of TV


Mark Schubin publishes a lecture on TV history, including a review of some early image sensing devices:


Axcelis Purion XEmax is Fully Capable of 15MeV Energy


After an initial misunderstanding about the 15MeV and 12MeV statements in Axcelis announcements, I received an official clarification from the company:

Dear Vladimir,

I saw your recent article on our system and reference to energy range and I wanted to clear up any confusion. Our system does indeed have an energy range of 15 MeV. I just forgot to update the chart on slide 21 on our IR presentation when we posted earnings last week. That update to our website posting is in the process, thank you for noticing that. I’ve attached the press release on the product launch and the amended IR presentation for reference.

Sorry about the confusion, and thanks for providing such good coverage on the exciting image sensor market.

Best regards,

Maureen Hart
Director, Corporate Communications
Axcelis Technologies, Inc.



Hamamatsu Builds New Fab


Hamamatsu announces the completion of a new factory building at its Shingai Factory (Shingai-cho, Minami-ku, Hamamatsu City, Japan) to cope with increasing demand for opto-semiconductors, X-ray image sensors, and X-ray flat panel sensors. The new fab will start operations in October this year.


Is Yokogawa Minimal Fab Suitable for Image Sensor Production?


Yokogawa has brought the AIST (Japan)-originated Minimal Fab project into production. It uses 0.5-inch wafers and does not need a clean room to operate:


The Minimal Fab process implements most of the steps needed for image sensor production, including a wafer thinning and bonding option. The most notable omission is high-energy ion implantation: so far, only implantation below 50keV is supported. For higher energies, one needs to process wafers at a large fab.


AIST presents Minimal Fab advantages:


There is similar work by Futrfab in the US promoting a 2-inch wafer fab. However, its progress appears to be slow, and it is far from a market-ready solution for now.


LiDAR News: Lumotive, Robosense, Light, Microvision


Lumotive announces an Early Access Program (EAP) to accelerate adoption of its LiDAR technology. The EAP provides engineering support and early access to Lumotive's software-defined beam-steering technology to enable customized product design and rapid system integration.

“While our 3D-sensing products leverage innovative Liquid Crystal Metasurfaces (LCMs) to provide significant performance and cost advantages in several key markets,” said Lumotive co-founder and CEO, William Colleran. “For companies developing LiDAR products targeting automotive, industrial and consumer markets, our EAP delivers access to Lumotive’s technology -- including our software-defined beam-steering API -- well before general availability. In exchange, Lumotive gains a number of early-adopter partners and valuable insight into their product requirements which drives our own core technology development.”


BusinessWire: RoboSense launches an 80 laser-beam LiDAR ready for customer delivery with an early-bird price of $12,800 and a standard price of $15,800. The performance of the RS-Ruby Lite is close to that of the 128 laser-beam RS-Ruby LiDAR, with a vertical angular resolution of 0.1 degrees and 160m @ 10% ranging ability (with the longest detection range of 230 meters), making it suitable for medium- and high-speed autonomous driving applications.



IEEE Spectrum publishes an article on Light.Co's new plan for a 3D camera for cars, "Will Camera Startup Light Give Autonomous Vehicles Better Vision than Lidar?" The article starts with the Light.Co history of the 16-lens L16 camera and the Nokia 9 smartphone project:

“I think our timing was bad,” Light CEO Dave Grannan says. “In 2019 smartphone sales started to shrink, the product became commoditized, and there was a shift from differentiating on quality and features to competing on price. We had a premium solution, involving extra cameras and an ASIC, and that was not going to work in that environment.”

Light began an R&D program in early 2019 to further refine its algorithms for use in autonomous vehicles. In mid-2019, with the consumer phone and camera market looking sour, the company announced that it was getting out of that business, and pivoted the entire company to focus on sensing systems for autonomous vehicles.

“We can cover a longer range—up to 1000 meters, compared with 200 or so for lidar,” says Grannan. “The systems we are building can cost a few thousand dollars instead of tens of thousands of dollars. And our systems use less power, a key feature for electric vehicles.”

These days, Light is testing the first prototypes, trying different numbers of cameras in the array and a variety of focal lengths, and optimizing the design. So far, Light uses an FPGA for the depth map calculations, but a dedicated ASIC should be available by early 2021.

The prototype is still in stealth. Light Co. expects to unveil it and announce partnerships with AV companies later this year.


Microvision presents its first automotive MEMS LiDAR, capable of a 20M points/s rate, a 200m range in sunlight, and an adaptive FOV:



Pointcloud Inc. Presents Coherent ToF Camera


San Francisco-based startup Pointcloud, Opris Consulting, and the University of Southampton, UK, publish an arXiv paper "A universal 3D imaging sensor on a silicon photonics platform" by Christopher Rogers, Alexander Y. Piggott, David J. Thomson, Robert F. Wiser, Ion E. Opris, Steven A. Fortune, Andrew J. Compston, Alexander Gondarenko, Fanfan Meng, Xia Chen, Graham T. Reed, and Remus Nicolaescu.

"A large-scale two-dimensional array of coherent detector pixels operating as a light detection and ranging (LiDAR) system could serve as a universal 3D imaging platform. Such a system would offer high depth accuracy and immunity to interference from sunlight, as well as the ability to directly measure the velocity of moving objects. However, due to difficulties in providing electrical and photonic connections to every pixel, previous systems have been restricted to fewer than 20 pixels. Here, we demonstrate the first large-scale coherent detector array consisting of 512 (32×16) pixels, and its operation in a 3D imaging system. Leveraging recent advances in the monolithic integration of photonic and electronic circuits, a dense array of optical heterodyne detectors is combined with an integrated electronic readout architecture, enabling straightforward scaling to arbitrarily large arrays. Meanwhile, two-axis solid-state beam steering eliminates any tradeoff between field of view and range. Operating at the quantum noise limit, our system achieves an accuracy of 3.1 mm at a distance of 75 metres using only 4 mW of light, an order of magnitude more accurate than existing solid-state systems at such ranges. Future reductions of pixel size using state-of-the-art components could yield resolutions in excess of 20 megapixels for arrays the size of a consumer camera sensor. This result paves the way for the development of low cost and high performance 3D imaging cameras, enabling new applications from robotics to autonomous navigation."


Samsung et al. Explores Subwavelength-Sized Color Pixel Approaches


Samsung, UCB, and Hong Kong University researchers publish a Nature paper "Subwavelength pixelated CMOS color sensors based on anti-Hermitian metasurface" by Joseph S. T. Smalley, Xuexin Ren, Jeong Yub Lee, Woong Ko, Won-Jae Joo, Hongkyu Park, Sui Yang, Yuan Wang, Chang Seung Lee, Hyuck Choo, Sungwoo Hwang, and Xiang Zhang.

"The demand for essential pixel components with ever-decreasing size and enhanced performance is central to current optoelectronic applications, including imaging, sensing, photovoltaics and communications. The size of the pixels, however, are severely limited by the fundamental constraints of lightwave diffraction. Current development using transmissive filters and planar absorbing layers can shrink the pixel size, yet there are two major issues, optical and electrical crosstalk, that need to be addressed when the pixel dimension approaches wavelength scale. All these fundamental constraints preclude the continual reduction of pixel dimensions and enhanced performance. Here we demonstrate subwavelength scale color pixels in a CMOS compatible platform based on anti-Hermitian metasurfaces. In stark contrast to conventional pixels, spectral filtering is achieved through structural color rather than transmissive filters leading to simultaneously high color purity and quantum efficiency. As a result, this subwavelength anti-Hermitian metasurface sensor, over 28,000 pixels, is able to sort three colors over a 100 nm bandwidth in the visible regime, independently of the polarization of normally-incident light. Furthermore, the quantum yield approaches that of commercial silicon photodiodes, with a responsivity exceeding 0.25 A/W for each channel. Our demonstration opens a new door to sub-wavelength pixelated CMOS sensors and promises future high-performance optoelectronic systems."


"Future directions for this work include integration with CMOS readout circuit arrays (Supplementary Fig. 11), extension of the spectral response to the blue end of the visible spectrum and optimization of the geometry to provide color-sorting for obliquely angled excitation.

As the demand for smaller pixel size and higher resolution in imaging and display technologies increases, our work advances the state-of-the-art by showing for the first time, PIN readout, three-color sorting over a two-dimensional surface without sacrificing responsivity. Furthermore, the sub-wavelength sized pixels are demonstrated based on the principle of AH coupling and fabricated via CMOS-compatible processes into vertical shallow junction PIN nanocylinders that efficiently convert optical energy to a clear electrical readout without crosstalk. Our work promises future compact, small pixelated, high-performance optoelectronic systems.
"


Infineon Expects ToF Sensor Market to Exceed 1.1B Euro in 2023


Infineon's quarterly earnings report presents the company's forecast for the 3D ToF sensor market:


Axcelis Ships its First 12MeV Implanter to CIS Customer for Evaluation


Axcelis reports that it has shipped its first Purion XEmax evaluation system to a large image sensor customer. In a previous version of this announcement, Axcelis claimed support for energies up to 15MeV, but now it just says "more than 12MeV."


Photodiode Array Cross-Section


i-Micronews: SystemPlus publishes a comparison of camera modules in Samsung smartphones. One of the image sensor cross-section pictures has an interesting diagonal line crossing a photodiode array:


Himax Low Power Sensor + AI Processor Solution Attracts Interest from the Industry


GlobeNewswire: Himax updates on its imaging business in Q2 2020 earnings report:

"In order for Himax’s WiseEye technology to reach its maximum potential, the Company has adopted a flexible business model whereby, in addition to total solution where it provides processor, image sensor and AI algorithm, the Company also offers those individually as key parts in order to address the market’s different needs and widen its market coverage. For customers who own their own algorithm and wish to develop their own applications, Himax can provide its ultralow power AI processor and image sensor without algorithm. The customer can piggyback on Himax’s technology and focus their effort on bringing AI to edge devices by transforming a wide range of sensor data, including video, sound, movement, gesture, among others, into actionable information, all with extremely low power consumption. For those customers/partners whose main business is to provide AI processors, Himax can offer its ultralow power image sensors without its AI processor and algorithm.

For the total solution offering, Himax launched a computer vision human detection notebook solution which has been well recognized and is being incorporated into the next generation premium notebook models of key OEMs and ODMs. Himax’s total solutions are also being integrated into a wide range of other applications such as TV, doorbell, door lock, air conditioner, etc. by engaging leading players in those industries. For the other type of business model where Himax only offers key parts, the Company’s strategy is to actively participate in the ecosystems led by the world’s leading AI and cloud service providers. A recent illustration of this strategy is an announcement for the collaboration with Google whereby, running on Google’s TensorFlow Lite for Microcontrollers kernel, Himax provided its AI processor with CNN (convolutional neural network) based SDK (software development kit) for developers to generate deep learning inferences with video and voice commands data to boost overall system performance while consuming extremely low power.

Being an official partner of Google’s TensorFlow, Himax gets to enjoy the enormous network of its ecosystem participants. Just over a month after the announcement, Himax is already receiving inquiries from large corporations and individual AI developers alike with application ideas covering a broad range of industries. The Company is very encouraged by the enthusiastic discussions about possible WiseEye applications that are taking place in various user groups for emerging AI market ideas. Last but not least, Himax is working closely with other leading AI and cloud service providers worldwide to incorporate WiseEye edge AI solution into their ecosystems, in an attempt to reach the broadest market coverage possible. Himax is extremely excited about these developments.

Due to the accelerated adoption of work-from-home and online education, demand for Himax’s CMOS image sensor for notebook and IP camera will remain strong during the third quarter.

The Company’s industry-first 2-in-1 CMOS image sensor has penetrated into the laptop ecosystem for the most stylish super slim bezel design with 3 types of popular application features, namely RGB sensor for video conference, RGB/IR sensor for Windows Hello facial recognition, and/or ultralow power AI computer vision for human presence detection. Himax expects to see small volume in certain premium notebook models in late 2020 with more volume expected in the coming years.

For the traditional human vision segments, Himax also sees strong demand in multimedia applications such as car recorders, surveillance, drones, home appliances, and consumer electronics, among others.
"

Himax shows some use cases for its WiseEye solution:


Strained Si PD Sensitivity Enhanced into SWIR Band


Phys.org: Yonsei University, Korea, researchers manage to enhance the sensitivity of a 10nm-thin Si layer all the way to 1550nm. The Science paper "Breaking the absorption limit of Si toward SWIR wavelength range via strain engineering" by Ajit K. Katiyar, Kean You Thai, Won Seok Yun, JaeDong Lee, and Jong-Hyun Ahn shows the results:

"Silicon has been widely used in the microelectronics industry. However, its photonic applications are restricted to visible and partial near-infrared spectral range owing to its fundamental optical bandgap (1.12 eV). With recent advances in strain engineering, material properties, including optical bandgap, can be tailored considerably. This paper reports the strain-induced shrinkage in the Si bandgap, providing photosensing well beyond its fundamental absorption limit in Si nanomembrane (NM) photodetectors (PDs). The Si-NM PD pixels were mechanically stretched (biaxially) by a maximum strain of ~3.5% through pneumatic pressure–induced bulging, enhancing photoresponsivity and extending the Si absorption limit up to 1550 nm, which is the essential wavelength range of the lidar sensors for obstacle detection in self-driving vehicles. The development of deformable three-dimensional optoelectronics via gas pressure–induced bulging also facilitated the realization of unique device designs with concave and convex hemispherical architectures, which mimics the electronic prototypes of biological eyes."


Edge AI Processing Presentation


A Hailo presentation on edge AI vision processing shows that one needs 80K MAC operations per pixel to achieve an 86% image classification accuracy. This can require quite high power consumption, depending on the image resolution and frame rate:
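To put the 80K MACs/pixel figure in perspective, the Python sketch below multiplies it out for an assumed resolution, frame rate, and processing efficiency; the resolution, frame rate, and 3 TOPS/W figure are illustrative assumptions, not numbers from the Hailo presentation.

def compute_load_and_power(width, height, fps, macs_per_pixel=80_000, tops_per_watt=3.0):
    """Total MAC throughput and a rough power estimate for per-pixel CNN processing.

    macs_per_pixel comes from the quoted Hailo figure; resolution, frame rate,
    and the 3 TOPS/W efficiency are assumed example values."""
    macs_per_s = width * height * fps * macs_per_pixel
    ops_per_s = 2 * macs_per_s                 # 1 MAC counted as 2 ops (multiply + add)
    watts = ops_per_s / (tops_per_watt * 1e12)
    return macs_per_s, watts

macs, watts = compute_load_and_power(1920, 1080, 30)
print(f"{macs/1e12:.1f} TMAC/s, roughly {watts:.1f} W at the assumed efficiency")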


Omdia: Samsung Reduces Market Share Gap with Sony


KoreanInvestors quotes Omdia Research claiming that Samsung is gradually reducing the market share gap with Sony:

"According to market researcher OMDIA, the global market share gap in the CMOS image sensor between Sony and Samsung has narrowed significantly this year.

Sony’s global market share, which rose to as high as 56.2% in the third quarter of 2019, fell to an estimated 42.5% in the second quarter of 2020.

In the same period, the market share of Samsung, the world’s No. 2 image sensor maker, rose to 21.7% from 16.7%, narrowing the gap with the Japanese company to 20.8 percentage points from 39.5 percentage points.

Analysts attribute Samsung’s advancement in the image sensor market to the increase in shipments of its high-end products and a growing client base such as Chinese electronics firm Xiaomi Corp.

“Image sensors are one of the three key products of Samsung in its system chip product lines that can become the world’s number one,” said Inyup Kang, head of System LSI Business & President at Samsung.

The global spread of the Covid-19 coronavirus also played a role in Samsung’s greater market share, as many U.S. consumer electronics companies are delaying the launch of new products, using Sony’s image sensors, while Samsung’s major clients in China have raised their order volumes of image sensors.

Samsung plans to focus on high-end products to raise its market share as the global market of high-resolution image sensors are estimated to rise at an average annual rate of 87% until 2024.

SK Hynix unveiled new CMOS image sensors under the brand “Black Pearl,” targeting a mid-tier market last year. Its market share rose from around 2% in 2019 to 3.4% in the second quarter of this year.
"


Entangled Photon Imaging with SPADs


EPFL and Glasgow University publish an arXiv paper "Quantum illumination imaging with a single-photon avalanche diode camera" by Hugo Defienne, Jiuxuan Zhao, Edoardo Charbon, and Daniele Faccio.

"Single-photon-avalanche diode (SPAD) arrays are essential tools in biophotonics, optical ranging and sensing and quantum optics. However, their small number of pixels, low quantum efficiency and small fill factor have so far hindered their use for practical imaging applications. Here, we demonstrate full-field entangled photon pair correlation imaging using a 100-kpixels SPAD camera. By measuring photon coincidences between more than 500 million pairs of positions, we retrieve the full point spread function of the imaging system and subsequently high-resolution images of target objects illuminated by spatially entangled photon pairs. We show that our imaging approach is robust against stray light, enabling quantum imaging technologies to move beyond laboratory experiments towards real-world applications such as quantum LiDAR."


Brookman ToF Gesture Recognition Demo


Brookman shows how its BT008D pToF sensor excels at finger gesture recognition in comparison with competing ToF solutions. The demo is prepared in collaboration with Toppan Printing Co., Ltd., one of Brookman's major investors.


Synopsys Demos 24Gbps D-PHY/C-PHY IP


Synopsys shows MIPI C-PHY/D-PHY IP performance at 24 Gbps. The MIPI C-PHY/D-PHY IP interoperates with an unspecified image sensor in C-PHY mode at up to 3.5 Gsps per trio and in D-PHY mode at up to 4.5 Gbps per lane. The IP is available in FinFET processes for camera applications:
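For readers wondering how 3.5 Gsps per trio relates to the 24 Gbps headline: MIPI C-PHY encodes 16 bits per 7 symbols (about 2.28 bits per symbol per trio), so three trios at 3.5 Gsps give roughly 24 Gbps. The Python sketch below works out the arithmetic; the 3-trio and 4-lane port widths are assumptions for illustration.

def cphy_bandwidth_gbps(gsps_per_trio: float, n_trios: int) -> float:
    """C-PHY carries 16 bits per 7 symbols, i.e. ~2.28 bits per symbol per trio."""
    return gsps_per_trio * (16.0 / 7.0) * n_trios

def dphy_bandwidth_gbps(gbps_per_lane: float, n_lanes: int) -> float:
    """D-PHY lanes carry bits directly."""
    return gbps_per_lane * n_lanes

print(f"C-PHY, 3 trios @ 3.5 Gsps: {cphy_bandwidth_gbps(3.5, 3):.1f} Gbps")  # ~24 Gbps
print(f"D-PHY, 4 lanes @ 4.5 Gbps: {dphy_bandwidth_gbps(4.5, 4):.1f} Gbps")  # 18 Gbps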


Newsight Imaging eToF Webinar


Newsight Imaging publishes a webinar explaining its NSI1000 eToF sensor operation:


Imaging Beyond the Speed of Light


A YouTube video explains the seemingly impossible effects described in the EPFL and Canon Archive.org paper published a few weeks ago:


More about Sony Quarterly Results


SeekingAlpha publishes the Sony earnings call transcript with some updates on the image sensor business:

"Next is in IS&S, Image Sensing & Solutions. Fiscal '20 quarter one sales decreased 11% year-on-year to ¥206.2 billion, and operating income decreased ¥24.1 billion to ¥25.4 billion.

Fiscal '20 sales are expected to decrease 7% to ¥1 trillion, and operating income is expected to decrease ¥105.6 billion to ¥130 billion. Now I will explain the state of our sensor business. Fiscal '20 sales of image sensors for mobile products are expected to decrease compared to fiscal '19, primarily due to a decrease in end-user product sales by one of our major customers, the deceleration of the smartphone market and a shift to mid-range and moderately priced models in that market resulting from the impact of the spread of COVID-19 and significant reduction in component and finished goods inventory by Chinese customer. Profitability is expected to be impacted by a decrease in gross margins and an increase in depreciation and manufacturing-related costs associated with production equipment we purchased in the previous fiscal year when we expected growth as well as higher research and development costs.

We do not expect to grow sales of mobile sensing products compared to fiscal '19 because adoption by smartphone makers has been slow and sales of flagship models, which already use our products have decreased due to the shift in market conditions. Sales of image sensors to AV have also decreased due to the contraction of the sensor market for digital cameras, resulting from the impact of the spread of COVID-19. We expect the market to contract in 1 year as much as we had previously expected it would contract over the next approximately 3 years.

In order to respond quickly to the changes in the environment, especially for image sensors for mobile products, we will modify our strategy, mainly in the areas of investment, research and development and customer base. We have already significantly reduced investment in capacity to supply demand in the fiscal year ending March 31, 2022, because we can supply that demand by stockpiling strategic inventory through utilization of our excess production capacity this fiscal year.

The forecast for cumulative capital expenditures for the 3 fiscal years began April 1, 2018, which we explained in the past, has been reduced ¥50 billion from approximately ¥700 billion to approximately ¥650 billion. And we are carefully reviewing the timing of planned capital expenditures in fiscal '21 and beyond. We will review the projects and priorities for research and development spending as well to ensure that they fit with the recent trends in the smartphone market and changes in our major customers' needs. However, in order to maintain and increase our future technological competitive advantage, we will not drastically reduce the number of projects or the budget. We intend to more proactively expand and diversify our customer base, which we're cautious to do previously due to production capacity constraints.

Over the mid- to long term, we will work to expand the applications for image sensors and the market overall by introducing edge-sensing products that use senses equipped with AI processing functionality, and we will steadfastly work to grow this business. We plan to complete within approximately 1 year an enhancement of our business model to adapt to the recent changes in the environment, and we expect to return the business to the path of profit growth from the second half of fiscal '21.

....About sensors, changes in the market and how are the changes occurring. For one thing, all over the world, there is poor sense in the market, deterioration of the market, and that is impacting the sensor sales. And also, the higher-priced products, well, it's, you could say, shifting to the moderate -- more moderate-priced models overall. So for our image sensors, especially the high-end image sensors that we sell, the high-end models are decreasing in sales. So that's impacting our business.

But as far as a large trend is concerned, the phones -- smartphones going larger and using multiple lenses, that will continue. The performance of -- for the cameras required for smartphones, for video and the camera photos, the demand for the higher quality will continue. Therefore, we believe the demand should come back sometime in the future.

...regarding image sensor, the capacity and the capacity factor and the second quarter. So the capacity for this quarter, for fiscal 2020 at the end of first quarter, and that's -- that's 133,000 per month at the master price; and also at the end of second -- of the second quarter, 135,000 per month. So we will gradually increase the capacity. That's our plan.

And also, the number of wafers to be input. The first quarter the actual figure is -- the average of 3 months is 126,000 for mobile and also for digital camera, and there were some adjustments made for production. And also for the projection for second quarter for that, the simple average for 3 months is 112,000. So for mobile and digital camera, I think there's going to be more production adjustment.

And then, well, for Sensing segment, the sales is expected to come down, and what is the magnitude of the impact? Well, last year, actual was a little over of ¥230 billion and it's a strong ¥230 billion. So generally, it's like 1/3 of that is the reduction in sensors or sensing products. That's 1/3. So a big point about that is that as of last year, we thought that the growth can be expected. So we made the capital investment and also, we have increased our R&D expenditures. And that has been the impact.
"

In separate news, a Twitter post presents a CIS market share chart from an unidentified Korean source. It shows Q2 2020 market share taken away from both Sony and Samsung by somebody else:


Sony Reports Drop of Image Sensor Sales, Reduces FY Forecast


Sony reports a drop in image sensor sales in its last fiscal quarter, which started on April 1, 2020:

