GPixel Introduces GLUX BSI sCMOS Family with 0.8 e- Noise

Image Sensors World        Go to the original article...

Gpixel presents GLUX9701BSI, a 1” format BSI sCMOS sensor with a resolution of 1.3 MP (1280 x 1024) and large 9.76 μm x 9.76 μm pixels. The sensor is the first in a new family targeting extreme low-light imaging for both surveillance and scientific use.
GLUX9701BSI supports a dual-gain HDR mode, achieving a dynamic range of 90 dB by combining 1.5 e– RMS read noise with a 50 Ke– full well charge. A dedicated low-noise mode further optimizes imaging performance, with read noise of 0.8 e– at a power consumption of 180 mW. Dedicated circuit and process engineering significantly improves the noise uniformity of GLUX over previous sCMOS models, yielding a close-to-Gaussian noise distribution, as can be seen in the figure below.
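The quoted 90 dB figure follows directly from the full well and read noise via the standard definition DR = 20·log10(FWC/read noise). A minimal sketch using only the numbers quoted above:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in dB: ratio of full-well charge to read noise."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Figures quoted for the GLUX9701BSI HDR mode: 50 Ke- full well, 1.5 e- RMS read noise.
print(round(dynamic_range_db(50_000, 1.5), 1))  # 90.5 -- consistent with the claimed 90 dB
```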


The sensor offers two types of data output: 4-channel sub-LVDS and MIPI (CSI-2, D-PHY). The default HDR frame rate of 30 fps can be achieved over either interface, and dedicated operation modes reach frame rates of up to 120 fps.
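As a rough sanity check on the interface load, the raw per-channel data rate over the 4-channel sub-LVDS output can be estimated. This is a back-of-envelope sketch only; the 12-bit pixel depth is an assumption, as the article does not state the output bit depth, and protocol overhead is ignored:

```python
def channel_rate_mbps(width, height, fps, bits_per_px, n_channels):
    """Raw pixel data rate per output channel in Mbit/s (overhead ignored)."""
    total_bps = width * height * fps * bits_per_px
    return total_bps / n_channels / 1e6

# 1280 x 1024 at the 120 fps maximum, assumed 12-bit output, 4 sub-LVDS channels.
print(round(channel_rate_mbps(1280, 1024, 120, 12, 4)))  # 472 Mbit/s per channel
```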

A picture below demonstrates the new sensor's performance at 25 fps with <0.01 lux light intensity (limited by the measurement equipment).


A demo video taken at 25 fps under starlight:





More about Sony Acquisition of Intel Patents

At the end of 2020, there were reports that Sony had acquired a number of Intel patents. The IP analytics blog of LexisNexis publishes its analysis of these patents:

"The respective reassignment document lists 35 simple patent families with active US patents. Looking at the drawings shows that the portfolio comprises various technologies like packaging, semiconductor process, systems, computer memory, and image projection. Surprisingly, there are several first-page drawings which point to gate-all-around transistor (GAA) technologies, the expected next transistor technology.

This finding was not expected: Sony is not known to be active in advanced CMOS transistor technologies like GAA, or at least not obviously known.

...a quick screening shows that 10 patent families relate to gate-all-around transistors and 3 families relate to finFET transistors. At face value, the described structures and methods even resemble gate-all-around structures which are published in the literature.

Sony ranks sixth with a small but strong GAA portfolio. The difference between the portfolio size (number of simple patent families) and the portfolio strength (Patent Asset Index) is remarkable: the portfolio strength is more than 10 times larger than the portfolio size. This ratio demonstrates the high average quality, measured by the PatentSight Competitive Impact, of Sony's gate-all-around patents, as the Patent Asset Index is the product of Competitive Impact and Portfolio Size."
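The quoted ratio can be read through PatentSight's own definition: since Patent Asset Index = Competitive Impact × Portfolio Size, a strength-to-size ratio above 10 directly implies an average Competitive Impact above 10. A toy illustration (the absolute numbers below are invented; only the >10x ratio comes from the article):

```python
def patent_asset_index(competitive_impact, portfolio_size):
    """PatentSight definition: Patent Asset Index = Competitive Impact x Portfolio Size."""
    return competitive_impact * portfolio_size

# Hypothetical portfolio: 10 simple patent families, average Competitive Impact of 13.
size, impact = 10, 13
pai = patent_asset_index(impact, size)
print(pai, pai / size > 10)  # 130 True -- strength >10x the size implies impact >10
```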

BusinessKorea believes that Intel GAA patents will be used in future Sony image sensor designs.

Google Computational Photography Review

Google Research and York University, Canada, publish an Arxiv.org paper "Mobile Computational Photography: A Tour" by Mauricio Delbracio, Damien Kelly, Michael S. Brown, and Peyman Milanfar.

"The first mobile camera phone was sold only 20 years ago, when taking pictures with one's phone was an oddity, and sharing pictures online was unheard of. Today, the smartphone is more camera than phone. How did this happen? This transformation was enabled by advances in computational photography – the science and engineering of making great images from small form factor, mobile cameras. Modern algorithmic and computing advances, including machine learning, have changed the rules of photography, bringing to it new modes of capture, post-processing, storage, and sharing. In this paper, we give a brief history of mobile computational photography and describe some of the key technological components, including burst photography, noise reduction, and super-resolution. At each step, we may draw naive parallels to the human visual system."

RTN in PD Dark Current Thesis

Toulouse University publishes a PhD thesis "Defects in silicon: revisiting theoretical frameworks to guide ab initio characterization" by Gabriela Herrero-Saboya.

"In this thesis, we describe the effect of localized defects on the electronic properties of silicon. After 60 years of silicon devices production, one might expect all details of this material to be fully understood, especially considering that the manufacture of nowadays nanometer-sized transistors requires quasi-atomic accuracy. However, as a direct result of such extreme miniaturization, the accidental creation of even one single trapping center can be sufficient to alter the desired electronic properties of the sample, becoming one of the most feared phenomena in the industry. 

Atomistic numerical simulations in silicon, based on the Density Functional Theory, do however typically target specific defect-properties, not giving a complete theoretical picture of the system, often overlooking previous models and experimental evidence. In the present thesis, we provide new insight into iconic defects in silicon through the quantification of long-established atomistic models, making an explicit link with the characterization techniques. Our detailed exploration of the DFT energy surface of the silicon E-center, guided by a simple Jahn-Teller model, confirmed the observed defect-dynamics at different temperature regimes, allowing us to link the presence of such point-like defect to a burst noise in image sensors.

In section 3.2, we analyse the relevance of the silicon E-center for several technologically relevant processes, like the Dark-Current Random Telegraph Signal in image sensors. The latter might be defined as a burst noise in electronic devices commonly linked to the finite-temperature dynamics of crystallographic defects, motivating an extensive exploration of the potential energy surface at different temperature regimes. Our DFT and NEB calculations, in excellent agreement with EPR spectroscopy, provide new insight into the defect dynamics, and in particular into the vacancy-mediated dopant diffusion mechanism in silicon."

Hamamatsu Compares LiDAR Detector Technologies

Hamamatsu publishes a YouTube video comparing APD, SiPM, and SPAD in LiDAR applications:


ON Semi Celebrates Performance of its PYTHON Sensors in NASA Perseverance Mission

ON Semi is proud to report the good performance of its sensors used in the NASA Perseverance mission:

"The Perseverance mission has a total of 23 cameras, 19 of them are mounted on the rover itself and 16 of those are intended to be used by the rover while it is on the surface. The remaining seven cameras, both on the rover and the entry vehicle, were there largely to support the EDL phase [Entry, Descent, Landing process] of the journey. Of those seven cameras, ON Semiconductor supplied the image sensors for six of them.

Three of the cameras, designated Parachute Uplook Cameras (PUC), were used purely to observe the parachute as it opened on the descent. Two of the other cameras, designated Rover Uplook Camera (RUC) and Rover Downlook Camera (RDC), provided similar insights and operated at 30 fps for the whole of the EDL period.

The sensors used in the PUC, RUC and RDC are the PYTHON1300, a 1.3 megapixel, ½ in CMOS image sensor with 1280 by 1024 pixels. Together, all three cameras captured over 27,000 images during the ‘seven minutes of terror’.

The sixth ON Semiconductor image sensor now residing on Mars is a PYTHON5000, a 5.3 megapixel image sensor with a 1 inch area and pixel array of 2592 by 2048 pixels. This sensor is used in the Lander Vision System and is referred to as the LCAM. This camera has a field of view of 90° by 90° and provides the input to the onboard map localization Terrain-Relative Navigation system."

The PYTHON series of industrial vision sensors has been designed in the company's design centers in Mechelen, Belgium and in Bangalore, India.

Near Sensor CNN Processing

The University of Florida at Gainesville publishes an MDPI paper "Towards An Efficient CNN Inference Architecture Enabling In-Sensor Processing" by Md Jubaer Hossain Pantho, Pankaj Bhowmik, and Christophe Bobda. The paper draws attention to the high power consumption of CNN processing, which limits the possibilities of its integration onto an image sensor:

"The astounding development of optical sensing imaging technology, coupled with the impressive improvements in machine learning algorithms, has increased our ability to understand and extract information from scenic events. In most cases, Convolution neural networks (CNNs) are largely adopted to infer knowledge due to their surprising success in automation, surveillance, and many other application domains. However, the convolution operations’ overwhelming computation demand has somewhat limited their use in remote sensing edge devices. In these platforms, real-time processing remains a challenging task due to the tight constraints on resources and power. Here, the transfer and processing of non-relevant image pixels act as a bottleneck on the entire system. It is possible to overcome this bottleneck by exploiting the high bandwidth available at the sensor interface by designing a CNN inference architecture near the sensor. This paper presents an attention-based pixel processing architecture to facilitate the CNN inference near the image sensor. We propose an efficient computation method to reduce the dynamic power by decreasing the overall computation of the convolution operations. The proposed method reduces redundancies by using a hierarchical optimization approach. The approach minimizes power consumption for convolution operations by exploiting the Spatio-temporal redundancies found in the incoming feature maps and performs computations only on selected regions based on their relevance score. The proposed design addresses problems related to the mapping of computations onto an array of processing elements (PEs) and introduces a suitable network structure for communication. The PEs are highly optimized to provide low latency and power for CNN applications. While designing the model, we exploit the concepts of biological vision systems to reduce computation and energy. 
We prototype the model in a Virtex UltraScale+ FPGA and implement it in Application Specific Integrated Circuit (ASIC) using the TSMC 90nm technology library. The results suggest that the proposed architecture significantly reduces dynamic power consumption and achieves high-speed up surpassing existing embedded processors’ computational capabilities."
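The region-selection idea in the abstract can be sketched in a few lines: score each incoming tile by its temporal change and spend convolution work only on tiles above a relevance threshold. This is a toy NumPy illustration of the general attention-based approach, not the paper's PE-array architecture; the tile size, threshold, and difference-based relevance score are all placeholder choices:

```python
import numpy as np

def selective_conv(frame, prev_frame, kernel, tile=8, thresh=1.0):
    """Convolve only 'relevant' tiles: a tile whose mean temporal change is
    below thresh is skipped (its output stays zero), saving the MACs a full
    convolution would spend on static regions."""
    h, w = frame.shape
    out = np.zeros((h, w))
    skipped = 0
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            cur = frame[y:y+tile, x:x+tile]
            score = np.abs(cur - prev_frame[y:y+tile, x:x+tile]).mean()
            if score < thresh:
                skipped += 1          # static tile: skip / reuse instead of recomputing
                continue
            # naive 3x3 convolution inside the tile (tile borders ignored for brevity)
            for i in range(1, cur.shape[0] - 1):
                for j in range(1, cur.shape[1] - 1):
                    out[y + i, x + j] = np.sum(cur[i-1:i+2, j-1:j+2] * kernel)
    return out, skipped
```

With a mostly static scene, most tiles fall below the threshold, so the dynamic-power saving scales roughly with the fraction of unchanged tiles.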

Velodyne Founder’s Open Letter to Board of Directors

BusinessWire: Velodyne founder David Hall publishes an open letter to the company's BoD:

March 9, 2021

Velodyne Lidar, Inc.
5521 Hellyer Avenue
San Jose, CA 95138
Attn: Board of Directors

Dear Velodyne Lidar Board of Directors (the “Board”):

I am writing to you today to directly refute the statements regarding my resignation from the Board included in Velodyne Lidar’s (the “Company”) recent Form 8-K filing. These statements do not accurately depict why I resigned and instead focus on the Company’s decision to publicly censure Marta Hall and I based on unfounded claims which we strongly refute.

To be completely clear: I chose to resign from the Board because I had numerous concerns about the strategic direction and current leadership of Velodyne Lidar.

As the founder and former Chief Executive Officer of Velodyne Lidar, I oversaw years of growth and success that ultimately laid the groundwork for the Company to go public via a merger with a special purpose acquisition company (“SPAC”) in 2020. Despite serving as the Executive Chairman of the Board following Velodyne Lidar’s successful SPAC merger, it became quickly apparent to me that Jim Graf and Michael Dee – joint founders of the SPAC – wanted to curtail my involvement in the quality and selection of products being developed, the contracts negotiated and integrity of the Company’s business moving forward. These actions, in my view, emboldened Chief Executive Officer Anand Gopalan to disregard my views.

I firmly believe that the Board has fostered an anti-stockholder culture and that Velodyne Lidar’s corporate governance is broken. Perhaps most unsettling was the Board’s decision to rubberstamp an increased compensation package for Mr. Gopalan despite the Company releasing weak Q4 2020 earnings and missing year end forecasts.

The Board also recently attempted to manipulate the Company’s corporate machinery by transitioning Christopher Thomas from a Class I director to a Class II director in an apparent move to avoid having him stand for re-election against my nomination of Eric Singer, a highly-qualified director candidate with significant public board experience.

As a whole, I believe the status quo in Velodyne Lidar’s boardroom is unacceptable. The Board lacks prior public company experience, seems to prioritize its own self-interests over stockholders and has overseen the destruction of significant stockholder value.

It was in light of these serious concerns – as well as the Board’s complete disregard for my decades of experience and input – that made me come to the difficult decision of submitting my resignation last week. Unfortunately, the Board as currently constituted appears to have no respect for the principles, values and culture that I spent years building at Velodyne Lidar. My wife, Marta Hall, will remain on the Board and continue to perform her fiduciary duties to best serve all Velodyne Lidar stockholders.

Sincerely,

David Hall
Founder of Velodyne Lidar, Inc.

Moore’s Law for iToF Pixels

The Microsoft Azure Depth Platform blog publishes a post on iToF pixel scaling. A few quotes:

"Given we are at the upper limits of MC [Modulation Contrast] and QE, as pixel area is reduced by half, the temporal jitter increases by √2. Does that mean smaller pixels are doomed to poor performance? It turns out there is another lever to improve performance and that is modulation frequency.

So, it would seem like you can recover the performance drop that occurs due to reducing pixel size by increasing the frequency (Figure 2). However, this is only possible if the modulation contrast (MC) doesn’t significantly degrade with higher frequencies. Microsoft ToF technology is a leader where MC at high frequencies is concerned (78% at 320 MHz). Also, higher frequency can increase both chip power and laser optical power. We can discuss ways to mitigate this in future blog posts. Stay tuned."
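The scaling argument in the quote can be made concrete with the proportionality jitter ∝ 1/(f_mod · MC · √signal), where signal scales with pixel area. The 320 MHz and 78% MC figures come from the post; the 160 MHz reference point and the relative units below are assumptions for illustration:

```python
import math

def depth_jitter(area_rel, f_mod_hz, mc, base=1.0):
    """Relative iToF depth jitter: ~ 1 / (f_mod * MC * sqrt(signal)),
    with signal proportional to pixel area. 'base' is an arbitrary
    normalization, not a real sensor figure."""
    return base / (f_mod_hz * mc * math.sqrt(area_rel))

j_ref   = depth_jitter(1.0, 160e6, 0.78)   # reference pixel, assumed 160 MHz
j_small = depth_jitter(0.5, 160e6, 0.78)   # half the area: sqrt(2) worse
j_fast  = depth_jitter(0.5, 320e6, 0.78)   # doubled f_mod, MC held at 78%

print(j_small / j_ref)  # ~1.414: the sqrt(2) penalty from the quote
print(j_fast / j_ref)   # ~0.707: higher f_mod more than recovers the loss
```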


PRNewswire: Meanwhile, LG Innotek joins the Microsoft Azure Depth Platform program "to unblock access to 3D vision technology and unleash innovation across multiple industry verticals such as: fitness, healthcare, logistics and retail with LG Innotek's ToF (Time of Flight) technology-based 3D camera modules and Microsoft's Azure Depth Platform."

Spiking Pixel

Science China publishes a letter "A variable threshold visual sensing and image reconstruction method based on pulse sequence" and supplementary materials by Jiangtao XU, Peng LIN, Zhiyuan GAO, Kaiming NIE & Liang XU from Tianjin University and Tianjin Key Laboratory of Imaging and Sensing Microelectronic Technology.

"Pulse-based CMOS image sensors (CIS) outperform classic CIS in terms of data rate and are able to achieve ultra-high-speed imaging. A pulse-based CIS able to capture the movement of a hard disk rotating at 6000 r/min was designed in our previous work.

The structure and working process of the spiking pixel based on variable threshold are shown in Figure 1. The threshold curve consists of periodic ramps. Each ramp consists of many small steps, whose width is set equal to frame period Tu and height is written as Vs. The threshold voltage linearly increases from minimum value Vth0 to maximum value Vth,max by step height Vs every frame cycle during a ramp cycle. Every time a new ramp cycle starts, a ramp pulse is output by a global ramp generator."
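The threshold sequence described above is easy to sketch: a staircase that rises from Vth0 to Vth,max in steps of Vs, advancing one step per frame period Tu, then restarting with a new ramp cycle. The voltages below are illustrative placeholders; the letter gives no numeric values:

```python
def threshold_ramp(vth0, vth_max, vs):
    """Per-frame threshold voltages over one ramp cycle: the threshold
    rises from Vth0 to Vth,max in steps of Vs (one step per frame)."""
    n_steps = round((vth_max - vth0) / vs)
    return [round(vth0 + k * vs, 6) for k in range(n_steps + 1)]

# Hypothetical values: Vth0 = 0.5 V, Vth,max = 0.9 V, Vs = 0.1 V.
print(threshold_ramp(0.5, 0.9, 0.1))  # [0.5, 0.6, 0.7, 0.8, 0.9]
```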

Image Sensor Technologies at CEA-Leti

i-Micronews publishes an interview with Agnès Arnaud, head of the Optics and Photonics Department at CEA-Leti, "CEA-Leti’s involvement in the CMOS Image Sensor ecosystem." A few quotes:

"CEA-Leti has been involved in CIS development since the mid-1990s. In the early 2000s, CEA-Leti had patents on CIS, including Analog-to-Digital Converters (ADCs), demosaicing architectures and compression schemes. Some of these technologies have been transferred to the imaging division of STMicroelectronics. STMicroelectronics and CEA-Leti have cooperated for several years on technologies, leading to a boom in imaging applications for mobile telephones. CEA-Leti provided STMicroelectronics with a Through-Silicon-Via (TSV) technology block and processes to make thinner imagers, boost photon collection efficiency and develop innovative architectures. In 2012, a first scientific publication on a Global Shutter Pixel triggered a long-term collaboration on pixel technologies with the imaging division of STMicroelectronics. The cooperation is still ongoing on advanced concepts, from technologies like the dense 3D interconnect announced at IEDM in 2019 to architectures like the autonomous imagers announced at VLSI 2020.

Leti has been developing bolometric imagers since 1992 and transferred the technology to start-up Ulis in 2002. Ulis, now Lynred, is a world leading bolometer manufacturer.

You must keep in mind that many innovations require 15 to 20 years before one can find them in a commercial product. Our PhD students are currently working on innovations which may show up in a product in 2040.

Detection in the short wave IR (SWIR) band is very attractive for various applications, such as military, security, telecommunications and medical diagnostics. SWIR light presents many advantages compared to visible light. It is invisible to the human eye and is less sensitive to extreme weather conditions such as fog and dust. The use of germanium (Ge) as an active layer in PiN photodiodes presents many advantages, such as its good absorption and its compatibility with the mass-production processes used in the silicon (Si) microelectronics industry. At CEA-Leti, Ge/Si vertical PiN devices have been developed, fabricated, and characterized at room temperature with promising performance, such as a low dark current density and good external responsivities. The segmentation of the NIR/SWIR market will ultimately depend on the evolution of InGaAs costs and the improvement of Ge or organic performance. Ge and organic photodiodes are compatible with 300mm diameter silicon wafer production lines. Ge and organic photodiodes are low-cost solutions. Compliance with IC manufacturing makes them attractive candidates for consumer products. Yet their intrinsic performance, especially in terms of dark current, is still below that of InGaAs detectors. And as you know, CEA-Leti is also involved in InGaAs developments. It is CEA-Leti’s mission to investigate the ultimate potential of Ge or organic detectors to offer a reasoned set of technologies.

At CES 2019, CEA-Leti demonstrated a new bioinspired technology for visible image sensors, IR sensors and microdisplays that replicates the curve of the human retina.

This curved image sensor technology breakthrough, called Pixcurve, has several advantages compared to traditional flat image sensors. The form factor of a digital camera module can be reduced by 60% thanks to the reduction of the number of lens elements.

The overall length of the optical system is also shorter. Curved image sensors reduce the cost of the camera module. A lot of markets could be targeted by the Pixcurve approach, such as high-end photography, automotive, consumer applications or medical."

Omnivision Beats its Own Guinness World Record

BusinessWire: OmniVision announces the OH0TA OVMed medical image sensor—with a package size of just 0.55mm x 0.55mm, featuring a 1.0um pixel and a 1/31” optical format—smaller than the Guinness World Record held by its predecessor for the “Smallest Commercially Available Image Sensor.” The OH0TA also quadruples the RGB image resolution to 400x400, or 160 K Pixels, at 30 fps while reducing the power consumption by 20% to 20mW. This allows designers to add ultra-compact visualization to single-use and reusable endoscopes, as well as catheters and guidewires, with a small outer diameter of 1-2mm. Alternatively, this sensor’s uniquely small size gives medical device OEMs the flexibility to create a larger-diameter scope with a larger working channel.

“The trend toward minimally invasive procedures continues to grow, due to their greater success rates and shorter patient recovery times,” said Ehsan Ayar, medical product marketing manager at OmniVision. “The OH0TA is the world’s first sensor to offer this combination, enabling significant endoscope improvements, especially in comparison to traditional videoscopes made with optical fibers, which have limited resolution, poor imager quality and high cost.”

To achieve this increase in resolution, along with a smaller pixel size and optical format, the OH0TA is built on PureCel Plus-S stacked die technology. The pixel also provides a sensitivity of 3600 mV/lux-sec, along with an SNR of 37.5dB. Additionally, PureCel Plus-S enables the OH0TA’s higher FWC, zero blooming and lower power consumption.

Other key features include a 15.5 degree CRA, enabling the use of lenses with high fields of view and short focus distances. It also supports a 4-wire interface, as well as raw analog data output, both of which can transmit via cables as long as 4 meters with minimal signal noise. For backward compatibility and easy adoption, this sensor interfaces with OmniVision’s existing OV426 ADC bridge chip. Additionally, it is autoclavable for reusable endoscope sterilization.

Samples of the OH0TA are available now, in OmniVision’s hCSP chip scale package with 100 micron thick cover glass and an anti-reflective coating.

BusinessWire: OmniVision also announces the OVMed OCHTA camera module based on the new sensor.

Rumor: Apple AR Headset to Include 15 Cameras

i-Micronews and AppleInsider quote analyst Ming-Chi Kuo saying that Apple’s rumored AR/MR headset is to be presented to the world in 2022 and will include 15 integrated cameras:

"Eight camera modules, supplied mainly by Largan, are expected to be placed around the wearable "helmet" to facilitate pass-through VR, a technology that allows users to "see through" the enclosed device by feeding exterior images onto interior screens. Apple's product is said to utilize high-resolution MicroOLED displays.

Along with the eight cameras dedicated to pass-through VR, six modules will feed "innovative biometrics," Kuo says. It is unclear if the analyst is referencing user security biometrics — like Face ID — or the ability to capture facial features and body movements of others nearby for inclusion in a simulated experience.

Finally, a single camera module will be installed for environmental detection purposes."

Sony Announces Large Format 127.68MP Global Shutter Sensor

PRNewswire: Sony announces the upcoming release of a large-format 56.73mm diagonal CMOS sensor "IMX661" for industrial equipment, with a global shutter and the industry's highest effective pixel count of 127.68MP. Its optical size is nearly 10 times larger than that of the common 1.1-type image sensors compatible with the C mount used in industrial equipment. It also features global shutter "Pregius" pixels and high-speed readout at a data rate nearly four times faster than conventional products. Sampling of the new IMX661 sensor is planned for April 2021.

ON Semi Lays-off 740 Employees

PhoenixBusinessJournal: ON Semiconductor plans to lay off approximately 740 employees across the company and its subsidiaries during the first half of 2021. The company said these terminations are part of its ongoing efforts to refocus on growth drivers and streamline operations.

Here is an official company's SEC disclosure:

"On March 4, 2021, as part of its ongoing efforts to realign its investments to focus on growth drivers and key markets and streamline its operations, ON Semiconductor Corporation (the “Company”) plans to implement certain employee terminations during the first half of 2021 (the “Employment Separations”). The Employment Separations will impact approximately 740 of the Company’s and its subsidiaries’ employees globally. The Company estimates that it will incur between $58 million and $62 million in aggregate costs during the first half of 2021.

The Company plans to reinvest a substantial portion of the savings generated from the Employment Separations into its continuing workforce and certain business initiatives and opportunities. Consequently, the restructuring may not result in a material reduction in the Company’s future operating expenses. The Company intends to continue to evaluate measures to realign its investments to achieve the Company’s strategic and transformational goals."

In December 2020, an activist investor group, NY-based Starboard Value LP, placed two of its preferred directors on the board in an effort to enhance shareholder value.

Isorg Organic Photodiode-Based Fingerprint Sensor Gets FBI Certification

ALANews, BusinessWire: Isorg Fingerprint Acquisition Profile (FAP) 10 module has received FBI certification, the first in this category of organic photodiode (OPD) based optical sensors. The FAP 10 biometrics module is now approved for use in security applications, in particular in mobile device identification for access control at airports and other facilities where the highest security levels are needed.

FAP 10 is manufactured by printing organic photodiodes on a TFT backplane. Isorg is the only manufacturer in the world commissioned to mass produce OPD sensors; it is ready for ramp-up to industrial batches at its state-of-the-art plant in Limoges, France.

“This FBI certification confirms Isorg’s capacity to deliver biometrics modules based on organic electronics that rise to the challenges of the security market and meet its stringent requirements,” said Jean-Yves Gomez, CEO at Isorg. “We are the very first to gain security approval of an OPD fingerprint sensor that assures the high-level image quality, accuracy and robustness that customers need in border control, access control, voter identification, etc. The security market will continue to benefit from our ongoing developments to achieve certification on higher form factors (up to FAP 60) based on the same scalable OPD technology.”

FAP 10 is a complete solution, incorporating an image sensor, dedicated light source, optical filters and driving electronics. To support customer product development, Isorg will provide a reference design with its latest integrated ROIC and software processing for image quality enhancement that is optimized with Isorg’s OPD sensor technology.

The module is a flat, slim design (less than 2mm thick) and robust enough for all outdoor conditions. Isorg offers a roadmap of up to four-finger authentication for a FAP 60 module with scalability for even larger palm-sized surface areas. Anti-spoofing features can also be easily integrated into the hardware and software.

Isorg’s FAP 10 is approved for one finger authentication, with a surface area of 0.5” to 0.65” (1.27 – 1.65cm). The company is planning for FBI certification of larger area biometrics modules, up to four fingers (FAP 60) offering very significant cost advantages for large areas.

Isorg also provides smartphone makers with the slimmest complete solution for large area Fingerprint on Display (FoD) applications. It enables the entire area of the smartphone screen to function as a digital fingerprint scanner.

Assorted Videos: Optasensor, ST, Ouster, NASA

Optasensor publishes a company presentation video:


ST publishes a gesture recognition use case for its ToF sensors:


Ouster publishes a short video explaining its "Digital LiDAR" technology:


NASA publishes "From Pixels to Products: An Overview of Satellite Remote Sensing" video:

TechInsights on Small Pixels

TechInsights held an excellent webinar on image sensor trends and comparisons on March 3, 2021. The webinar is now available for on-demand access. There is a lot of interesting information there. This is the second post, about small pixels:

Nikon 17.8MP 1,000fps Sensor – English Version

Nikon publishes an English version of its February 17th announcement of the 17.8MP 1,000fps stacked sensor presented at ISSCC 2021.

Vision Research Unveils 76,000fps @ 1MP Camera

Ametek Vision Research introduces the Phantom TMX Series, its first high-speed cameras to use BSI sensors, achieving speeds of up to 75 Gpx/sec with improved light sensitivity.

The Phantom TMX Series consists of two models, the TMX 7510 and TMX 6410. The TMX 7510 achieves 76,000 fps at its full 1MP resolution of 1280 x 800, over 300,000 fps at 1280 x 192 resolution, and over 770,000 fps at smaller resolutions. With the export-controlled FAST option, the TMX 7510 reaches 1.75 million fps and a 95ns minimum exposure time, eliminating motion blur.
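The headline numbers are easy to cross-check: pixel throughput is simply width × height × fps. A sketch using only the figures quoted above:

```python
def throughput_gpx(width, height, fps):
    """Pixel throughput in gigapixels per second."""
    return width * height * fps / 1e9

# TMX 7510 at full resolution: 1280 x 800 at 76,000 fps.
print(round(throughput_gpx(1280, 800, 76_000), 1))  # 77.8 -- in the ballpark of the 75 Gpx/sec headline
```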

“We’re excited to introduce this new class of high-speed performance to the market,” says Jay Stepleton, VP and GM of Vision Research. “In bringing BSI technology to high-speed applications in a new, cutting-edge sensor, we continue to advance high-speed capabilities through innovation. We designed the TMX cameras for speed, to support the very high frame rate requirements we see in many new and cutting-edge applications.”



Thanks to TL for the pointer!

iToF Multipath Reduction Thesis

Jonas Gutknecht kindly sent me a video presentation of his MSc Thesis “Multi-Layer-ToF: 3D-ToF-Camera with multiple object distances" from Institute for Signal Processing and Wireless Communications of ZHAW school of engineering in Winterthur, Switzerland, in cooperation with ESPROS Photonics.

Abstract:

"For the three-dimensional acquisition of a scene, 3D cameras based on the Time of Flight (ToF) method have proven themselves for many applications. However, Multi Path Interference (MPI) is widely spread in practice and represents a significant source of error for the ToF method with CW and PN modulation. The aim of this work is not only to suppress this error, but also to separate the individual signal components of real measurements and further determine the distances of the different signal paths.

In a first step, a simulation tool was developed with which CW- and PN-based ToF measurements can be approximated. The simulation takes into account the main properties of a real 3D ToF system and provides reproducible data for any scene. In a second step, besides a closed-form method, the two iterative algorithms OMP and PSO were implemented to separate the different signal paths. These three methods are based on a CW modulation and several measurements with variable modulation frequency. By means of different simulations and measurements, the suitability of these algorithms for practical use was tested and their performance was compared.

With the three implemented methods the MPI influence can be greatly reduced and the individual signal paths separated. However, the closed-form method has a high sensitivity to measurement noise. The PSO method is computationally intensive compared to the other methods and its results show a considerable amount of noise. In contrast, the OMP method has proven itself in practice and has prevailed over the other methods. With an application based on the OMP method, the process can be demonstrated in real time.

With this thesis it could be shown that multipath separation for ToF measurements is possible in practice and that the influence of MPI can be suppressed. For example, errors caused by stray light can be corrected or the contrast of the distance image can be increased. In addition, several distances per pixel can be resolved and thus semi-transparent objects can be measured. The presented method provides reliable data in difficult conditions and extends the application range of the 3D ToF method."

Go to the original article...

ISOCELL 2.0 Technology Explained

Image Sensors World        Go to the original article...

When Samsung announced its 0.7um pixel in September 2020, the PR mentioned that "the new 0.7μm lineup will adopt enhanced pixel technology with boosted light sensitivity, ISOCELL 2.0, later this year." At that time, ISOCELL 2.0 was said to increase the light sensitivity by 12%.

Now, Samsung explains how ISOCELL 2.0 is different from the previous generations:

"Samsung has now introduced next-level ISOCELL 2.0, which further refines the technology by replacing the metallic grid between color filters with a new material. In the first generation ISOCELL pixels, slight optical loss occurred when the metallic grid between the color filters absorbed small portions of the incoming light.

To solve this problem, ISOCELL 2.0 began with the establishment of ISOCELL Plus, the first phase of ISOCELL 2.0’s development. ISOCELL Plus replaced the metal barriers with an optimized new material. However, the barriers still contained some metal that caused the lower parts of the color filter barriers to absorb light, resulting in some optical loss.

Now, the second phase of ISOCELL 2.0 replaced the lower portion of the color filter barriers with a more reflective material. It further reduces optical loss in each pixel and drastically improves light sensitivity, allowing smartphones to produce even more vivid pictures with reduced noise.

By delivering enhanced light sensitivity, ISOCELL 2.0 allows even smaller pixels in the sensor to absorb more light, giving the sensor the tools it needs to produce photographs that are made up of a greater number of pixels. This enables the production of images with very fine detail without compromising the sensor’s vivid color reproduction."

Go to the original article...

Melexis Gets Exclusive Rights to Chronoptics' ToF Multipath and Linearity Error Correction Technology in Automotive Applications

Image Sensors World        Go to the original article...

Melexis enters into a license agreement with Chronoptics. The agreement grants Melexis exclusive rights of use of Chronoptics’ multipath and linearity error correction technologies in automotive applications. This includes ADAS for autonomous vehicles, and interior monitoring and safety systems.

Multipath interference in ToF cameras can lead to inaccurate depth measurements under specific conditions, such as when a wide FoV is used, or when the scene contains highly reflective objects. It is typically caused by stray light and scattering due to bright reflections in the scene. Chronoptics’ patented multipath correction technology recovers the correct depth values to produce accurate and robust point clouds even in the most challenging scenarios. With future vehicle applications set to demand an even wider FoV, the technology enables Melexis’ customers to address and mitigate potential challenges in advance.

Damien Macq, VP & General Manager Sense and Light Business Unit, Melexis, commented: “These new IPs will be available for evaluation in our Time-of-Flight reference design. Our third-generation ToF sensors are a breakthrough in terms of sensor performance, and this innovative technology along with other Melexis’ IPs will further enhance system level performance.”

Richard Conroy, CEO, Chronoptics, said: “We are excited to partner with Melexis to deliver robust depth sensing solutions for the automotive industry. We are experts in tailoring fit-for-purpose 3D cameras that leverage our patented depth pipeline technologies and know-how to deliver clean and accurate 3D data for any application.”


Chronoptics patent application WO2020130855 gives a few more details about the company's approach:

"Binary sequences have been used instead of the pure sinusoidal signals described in the above equations. They have demonstrated the capacity to image through smoke, resolve multi-path interference and enable multiple camera operation with minimum interference. A configurable time of flight camera system and associated data processing system is required to allow for the selection and use of such an arbitrary binary sequence."

Go to the original article...

Hynix to Unveil 0.7um Pixel by the End of 2021

Image Sensors World        Go to the original article...

Korea Economic Daily reports that SK Hynix is developing a 0.7um 64MP image sensor for smartphones and plans to bring it to market by the end of 2021. Hynix also plans to unveil new image sensors for security cameras and biological applications.

“With a variety of products we’re considering, our company aims to raise our presence in the global image sensor market,” said an unnamed SK Hynix official.

SK Hynix’s image sensor market share rose to an estimated 3.2% in 2020 from 2.6% in 2019. Its revenue from the image sensor business has increased by 33.6% to $582.2M from $435.8M in the same period.

SK Hynix is making image sensors at two of its semiconductor plants in Icheon, Gyeonggi Province – its M10 chip plant, where the company uses 12-inch wafers, and its foundry affiliate SK Hynix System IC Inc., which uses 8-inch wafers.

Due to booming DRAM demand, it might be hard for Hynix to allocate enough capacity to image sensor production. “It may not be easy for us to switch our DRAM chip fabrication lines to boost production of image sensors. But we’ll make the right decision in accordance with the market situation,” said an SK Hynix executive.

Go to the original article...

TSR: Market Share Gap Between Sony and Samsung to Shrink to 10% in 2021

Image Sensors World        Go to the original article...

BusinessKorea, Aju.news: Korean media quote interesting data from a recent TSR market report:
  • In 2020, Sony market share was 45.1%, while Samsung's - 19.8%
  • The 2020 gap has shrunk by 5.1-5.2 percentage points in comparison with 2019
  • TSR expects this gap to shrink further to 10% in 2021
  • Aju.news: The global image sensor market is forecasted to increase by 22.1% from $20.4B this year to $24.9B in 2024
  • BusinessKorea: The global image sensor market is forecasted to increase by 11.4% annually from $17.9B this year to $24.8B in 2024
  • Goodix has emerged as a major image sensor market player with a share of 3.6% in 2020

Go to the original article...

French Government Invests in SWIR Sensors Development

Image Sensors World        Go to the original article...

NIT and the French National Research Institute at Sorbonne University have entered into a research partnership, with the aim of producing SWIR sensors using HgTe quantum dot materials deposited on a ROIC.

The Institute of Nano Sciences at Sorbonne University is currently researching and producing HgTe quantum dot materials sensitive in the SWIR to MWIR wavelength range. Preliminary tests of QD deposition on NIT ROICs have shown impressive results.

This strategy is promising for designing low-cost, small-pixel-pitch focal plane arrays, as well as for extending the spectral range of SWIR cameras up to 2.5 µm.

This collaborative program is funded by the French National Research Agency. 

“After 10 years of researching infrared imaging nanocrystal films, we have been able to obtain impressive SWIR images when coupled to a NIT ROIC. Through this project, we now enter a new step of collaboration to bring this proof of concept to a commercially available SWIR camera,” says Emmanuel Lhuillier, CNRS researcher and NITQuantum project principal investigator.

“This is a major breakthrough in the life of NIT, as this partnership will allow us to offer a full line of SWIR sensors and cameras in large volumes and at low prices. This technology benefits from the overall imaging sensor market as it shares common manufacturing platforms. No doubt this novel sensor technology will become the standard in SWIR sensors in 3 to 5 years,” says Pierre Potet, CEO, New Imaging Technologies (NIT).


ALA News: Lynred announces an investment of €2.8M in a project aiming to develop a new generation of infrared detectors. The project was selected among the winners of the Call for Projects (AAP) "Recovery Plan for Industry - Strategic Sectors" launched on August 31, 2020. The French State is to provide a €900,000 subsidy for this 24-month project.

The Lynred project aims to develop a new French supply chain for small-pixel-pitch near-infrared detectors. It will address the needs of the infrared imaging markets for applications in industrial control and spectral imaging, such as the sorting of plastics.

In addition to creating around twenty jobs, this project will reduce dependence on imports by relocating to France part of the development and production activity previously subcontracted abroad - one of the key axes of the Recovery Plan.

"We are proud that our project is one of the winners of this initiative for the resilience of the economy," welcomes Jean-François Delepau, President of Lynred. “Lynred's raison d'être is to supply state-of-the-art infrared detectors worldwide and to guarantee the sustainability of an autonomous and sovereign French infrared sector. The investment made within the framework of the project meets this ambition for the near infrared, and is therefore fully in line with the resilience objective pursued by the French State in the Recovery Plan for the strategic sector of infrared detectors."

Go to the original article...

Report: Price of 13MP Mobile Sensors Falls to $1.55

Image Sensors World        Go to the original article...

News.hqew.com quotes a MoneyDJ report of Nikkei news that mobile CIS prices have declined for the last three quarters. Specifically, the wholesale price of 13MP sensors declined by 3% to $1.55.

Besides the Huawei sanctions, the decline is attributed to attempts to reduce the price of 5G phones by simplifying camera modules.

Another version of this report appears on OFweek site:

"[Samsung and Sony start a price war: CMOS image sensor prices have fallen for three consecutive quarters]

According to a March 2 report in the Nikkei Shimbun, the bulk transaction price of CMOS image sensors for January-March dropped by about 3% compared to October-December 2020. Recently, the two giants of the CMOS image sensor field, Samsung and Sony, have been waging an increasingly fierce price war, and CMOS image sensor prices have fallen for three consecutive quarters.

At present, among the many CMOS image sensor products, the representative one is the 13-megapixel sensor (1/3-inch format). Its price for the October-December 2020 quarter fell by 3%."

Go to the original article...

TechInsights on Voltage-Domain Global Shutter Pixels

Image Sensors World        Go to the original article...

TechInsights held an excellent webinar on image sensor trends and comparisons yesterday; it is now available for on-demand access. There is too much interesting information for one post, so I'll start with the Smartsens and Omnivision voltage-domain global shutters (the Microsoft Azure, Sony, and Samsung pixels use charge-domain shutters):

Go to the original article...

Assorted Videos: Photon Force, NIT, ST, Brookman

Image Sensors World        Go to the original article...

Photon Force publishes a video of its PF32, a 32 x 32 SPAD array with 55 ps time resolution:

New Imaging Technologies (NIT) publishes the company's introductory video:

ST publishes a demo of its GS sensor in driver monitoring applications:


Brookman publishes a demo of its high-speed ToF sensing:

Go to the original article...

Ouster’s Digital LiDAR Pitch for Investors

Image Sensors World        Go to the original article...

Ouster publishes its Investor Day presentation dated Feb. 22, 2021, explaining the advantages of its digital LiDAR:

Go to the original article...
