Sony’s View on Stacked CIS Evolution – English Version


M. kindly sent me an English translation of the Sony paper on stacked sensor evolution. The pdf version of the translated paper is available here.


MagikEye Announces Invertible Light Technology Development Kit


BusinessWire: Structured light 3D sensing company Magik Eye announces its latest Invertible Light Technology (ILT) development kits. “With its high speed, low latency and superior surface visibility at near ranges, developers will now have a great new tool for object detection, guidance and gesture recognition. We are excited that developers can now explore optimal use cases for applications such as robotics that are truly effective for near-range applications and easy to use,” said Takeo Miyazawa, Founder & CEO of MagikEye.

The ILT Development Kit (DK-ILT001) combines a simple IR laser projector with a standard CMOS image sensor to achieve 3D imaging at more than 600 fps, or up to 120 fps with a simple Raspberry Pi host. With ILT’s proprietary algorithm, the DK-ILT001 is able to accurately measure the position and shape of a target with ultra-low processing overhead.

The DK-ILT001 will be released as an entry-level module for easy evaluation of fast, accurate, and responsive 3D sensing technology, as well as for enabling new software application development.
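For readers new to structured-light depth sensing, the underlying geometry of a projector-plus-camera system is plain triangulation: the apparent shift of a projected dot on the image sensor encodes range. The sketch below is a generic textbook illustration with made-up numbers; MagikEye's ILT algorithm itself is proprietary and not described in the announcement.

```python
# Generic structured-light triangulation (illustrative only; not MagikEye's
# proprietary ILT algorithm). A projector casts a dot pattern, and the dot's
# shift (disparity) on the image sensor encodes depth for a projector-camera
# pair separated by a known baseline.

def depth_from_disparity(disparity_px: float, focal_px: float,
                         baseline_m: float) -> float:
    """Depth [m] from dot disparity [pixels], camera focal length [pixels],
    and projector-camera baseline [m]."""
    return focal_px * baseline_m / disparity_px

# Example with assumed numbers: an 800 px focal length and a 5 cm baseline
# put a dot with 40 px of disparity at a range of 1 m.
print(depth_from_disparity(40.0, 800.0, 0.05))  # -> 1.0
```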



Sunny View on Smartphone Camera Trends


Sunny Optical publishes its Investor Day Presentation talking about smartphone camera trends:


Plasmonic Diffracting DTI Enhances Pixel Response at 940nm by 5.3x


OSA publishes a Shizuoka University and University of Hyogo paper "Near-infrared sensitivity improvement by plasmonic diffraction for a silicon image sensor with deep trench isolation filled with highly reflective metal" by Atsushi Ono, Kazuma Hashimoto, and Nobukazu Teranishi.

"We propose a plasmonic diffraction structure combined with deep trench isolation (DTI) filled with highly reflective metal to enhance the near-infrared (NIR) sensitivity of image sensors. The plasmonic diffraction structure has a silver grating on the light-illuminated surface of a typical silicon backside-illuminated CMOS image sensor. The structural parameters of the silver grating were investigated through simulations, and the mechanism of the NIR sensitivity enhancement was clarified. Under the quasi-resonant conditions of surface plasmon polaritons, incident NIR light effectively diffracted as a propagating light to the sensor silicon layer. The diffracted light travelled back and forth between the DTIs. The effective propagation length in silicon was extended to six times by silver-filled DTI, resulting in approximately five times improvement of the 3-µm-thick silicon absorption at a wavelength of 940 nm."


Quanergy Claims to Achieve Major Improvement in its OPA Scanning LiDAR Range


Quanergy's SPAC merger presentation shows a dramatic improvement in its Optical Phased Array (OPA) LiDAR performance over the course of the last year. At the same time, the company admits that its previous designs had a very short range:


Samsung Unveils CornerPixel for Automotive HDR-LFM Applications


BusinessWire: Samsung introduces the ISOCELL Auto 4AC (S5K4AC), an automotive image sensor that offers 120dB HDR and LED flicker mitigation (LFM), aimed especially at surround-view monitors (SVM) and rear-view cameras (RVC) at high-definition resolution (1280 x 960). The new sensor is said to be Samsung’s first imaging solution optimized for automotive applications.

“The new ISOCELL Auto 4AC combines Samsung’s innovative and market-proven image sensor technologies with a unique CornerPixel solution for advanced HDR and LFM capabilities, offering exceptional viewing experiences regardless of lighting conditions,” said Duckhyun Chang, EVP of sensor business at Samsung. “Starting with the ISOCELL Auto 4AC, we plan to expand our automotive sensor lineup to areas such as camera monitor systems (CMS), autonomous driving and in-cabin monitoring.”

The CornerPixel technology features a specialized pixel structure that mitigates flicker from LEDs pulsing at over 90Hz. Within a single pixel area, it embeds two photodiodes: a 3.0µm pixel for low-light imaging and a 1.0µm pixel placed at the corner of the big pixel for brighter environments. With the two photodiodes capturing images at different exposures simultaneously, the sensor offers up to 120dB HDR with minimal motion blur.

To minimize LED flickering, the smaller photodiode’s exposure time can be extended, preventing pulsing LED light from being displayed as flickering on the camera screen.
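As a rough illustration of how a split-pixel pair extends dynamic range, the sketch below linearly merges a large, sensitive photodiode with a small, less sensitive one captured simultaneously. The sensitivity ratio and saturation level are assumptions for illustration only; this is not Samsung's actual CornerPixel pipeline.

```python
import numpy as np

# Generic split-pixel HDR merge (illustrative only, not Samsung's algorithm).
# The large photodiode is more sensitive but saturates first; the small one
# covers bright scene regions. The effective sensitivity ratio folds in both
# pixel area and exposure differences.
SENS_RATIO = 16.0        # assumed large/small effective sensitivity ratio
FULL_WELL_DN = 4095      # assumed 12-bit saturation level of each readout

def merge_split_pixel(large_dn: np.ndarray, small_dn: np.ndarray) -> np.ndarray:
    """Combine large- and small-pixel readouts into one linear HDR signal."""
    small_scaled = small_dn * SENS_RATIO          # rescale to large-pixel units
    use_large = large_dn < 0.9 * FULL_WELL_DN     # large pixel still unclipped?
    return np.where(use_large, large_dn, small_scaled)

# Example: a highlight that clips the large pixel is still resolved.
large = np.array([1000.0, 4095.0])   # second value is saturated
small = np.array([62.0, 3000.0])
print(merge_split_pixel(large, small))   # -> [ 1000. 48000.]
```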

The Samsung ISOCELL Auto 4AC comes in a 1/3.7-inch optical format with 1.2MP resolution and an ISP embedded within the sensor. The 4AC meets AEC-Q100 Grade 2 qualifications, including a -40°C to 125°C operating temperature range, and is currently in mass production.



Sony’s View on Stacked CIS Evolution


IEICE ESS Fundamentals Review publishes a Sony paper "Evolving Image Sensor Architecture through Stacking Devices" by Yusuke OIKE. Unfortunately, there is no English version available, and the pdf is copy-protected, which prevents running the text through an online translation.

"The evolution of CMOS image sensors and the prospects utilizing advanced imaging technologies promise to improve our quality of life. CMOS image sensors now dominate the market of digital cameras with the advent of parallel ADCs and back-illuminated technology. Moreover stacked CMOS image sensors continue to enhance functionality and user experience in mobile devices. We introduce the evolution of image sensor architecture for accelerating drastic performance improvements and for integrating edge computing connected to the sensor layer through stacking technologies. Furthermore, the fine pitch connection between the pixel and logic layers makes pixel parallel circuit architecture practical for the next stage of evolution."


Actlight Continues Work on its Single Photon Sensitivity Technology for "Leading Sensor Company"


PRNewswire: ActLight has signed the second service agreement based on its Single Photon Sensitivity technology with a leading company in the image sensor market.

'This agreement is the natural continuation of the customer project started in 2020 and successfully completed earlier this year. We are very pleased to continue the joint activities with a market player of this caliber and to add value to their technology roadmap,' said Serguei Okhonin, Co-founder and CEO at ActLight. 'The adoption of Single Photon Avalanche Diode (SPAD) array in 3D sensing applications is today a growing reality and the markets are demanding performance improvements (e.g. pixel scaling, higher QE at near IR and higher performance/costs ratio). ActLight technology is developed to address these specific needs and, thanks to its unique features such as tunable sensitivity, low voltage and digital read-out, is perfectly suited to become the sensing solution of reference for 3D systems used in smartphones, cars and other mainstream products.'

EPFL publishes a PhD thesis by Denis Sallin "A low-voltage CMOS-compatible time-domain photodetector, device & front end electronics" explaining Actlight's device operation:


Huawei AV Unit President: Cars will Never Be Fully Autonomous


Reuters: "Our team's goal is to reach true driverless passenger cars in 2025," Wang Jun, a senior executive at Huawei's smart vehicle unit, said at the 2021 World Artificial Intelligence Conference.

WorldStockMarket: "The L5 level of autonomous driving will never be reached. Mainly because the definition of the L5 level of autonomous driving covers all scenarios anytime, anywhere and in any weather, no human driver can handle it, so neither can a car be able to handle it," said Su Qing, President and Chief Architect of Huawei's ADS Intelligent Driving Solutions, at the same conference, which ended last week in Shanghai.

Currently, Huawei's ADS autopilot system is equipped with a total of 34 sensors: 3 LiDARs, 13 cameras, 6 mm-wave radars, and 12 ultrasonic sensors.


AT&S Makes Miniature Package for AMS


AT&S shows its miniature package for AMS Naneye:

"The image sensor not only creates sharp images due to its 100,000-pixel resolution, but it also has low power consumption thanks to our smart connection architecture,” says Markus Maier, Global Account Manager at AT&S. AT&S developed the PCB for the sensor, while the sensor itself was built by ams OSRAM.

The interconnect design was implemented using ECP (Embedded Component Packaging) technology. ECP allows both active and passive components to be embedded in laminate-based substrates, taking up a minimum of space. “Instead of placing the components on the PCB, they are integrated into the PCB. They ‘disappear’ inside the PCB,” Maier says.


Bankrupt CIS Fab is Up for Sale


Yangtze Evening News, OFweek, EET-China: As reported earlier, Huaian Imaging Device Manufacturer (HIDM) went bankrupt, scrapping a total investment of 45 billion yuan and ambitious plans to produce 240,000 12-inch CIS wafers per year.

Now, the entire assets of HIDM are to be auctioned, with a starting price of 1.66 billion yuan. According to the auction list, the assets include 19 buildings, such as production and office buildings, 171,265.90 sq.m of industrial land, as well as machinery and equipment, electronic equipment, vehicles, raw materials, and other items.


Pixart Explains Optical Tracking Sensors Operation


Pixart publishes a video explaining the principles of laser- and LED-based optical tracking sensors:


Sony AI Sensors in Rome


Sony publishes a video about its IMX500 AI sensor use cases on the streets of Rome, Italy:


EMVA 1288 Release 4.0 is Official Now


EMVA officially unveils the new release 4.0 of the EMVA 1288 Standard for objective characterization of industrial cameras. The release takes into account the rapid development of camera and image sensor technology.

Up to the previous Release 3.1, dated December 2016, the application of the EMVA 1288 standard with its simple linear model was limited to cameras with a linear response and without any pre-processing. While this model is continued with some improvements in ‘Release 4.0 Linear’, a new module ‘Release 4.0 General’ has been added in the latest release. With it, a non-linear camera, or a camera with unknown pre-processing, can be characterized even without any model, thanks to the universal system-theoretical approach of the EMVA 1288 standard. Just as with the linear camera model, all application-related quality parameters can be measured in this way. With both modules, ‘Linear’ and ‘General’, the same measurements are performed; depending on the camera characteristics, the proper evaluation according to either the linear or the general model is applied.
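For readers unfamiliar with the linear model, its core is the photon transfer relation: the system gain follows from the means and temporal variances of flat-field and dark frames. Below is a minimal sketch in the spirit of the EMVA 1288 linear method; the variable names and the two-frame variance trick are generic conventions, not text from Release 4.0.

```python
import numpy as np

def emva_linear_gain(flat_a, flat_b, dark_a, dark_b):
    """Photon-transfer estimate of the system gain K [DN/e-] from two
    flat-field and two dark frames captured under identical conditions
    (EMVA 1288 linear-model style). Using frame differences removes spatial
    non-uniformity (FPN) from the temporal variance estimate."""
    mu_y = 0.5 * (flat_a.mean() + flat_b.mean())     # mean flat signal [DN]
    mu_dark = 0.5 * (dark_a.mean() + dark_b.mean())  # mean dark signal [DN]
    var_y = np.var(flat_a - flat_b) / 2.0            # temporal variance [DN^2]
    var_dark = np.var(dark_a - dark_b) / 2.0
    K = (var_y - var_dark) / (mu_y - mu_dark)        # system gain [DN/e-]
    mu_e = (mu_y - mu_dark) / K                      # mean collected electrons
    return K, mu_e
```

Repeating this over an exposure series yields the responsivity, temporal dark noise, saturation capacity, and dynamic range figures that an EMVA 1288 datasheet reports.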

In addition, Release 4.0 includes numerous expansions to characterize the latest generation of image sensors and cameras according to the application. The most important of these are:
  • Extended wavelength range, from UV to SWIR.
  • Raw data of any given image acquisition modality can now be characterized according to the standard.
  • The versatile and universal analysis tools of the EMVA 1288 standard can also be applied to quantities calculated and derived from multiple channels. For polarization image sensors, these are, for example, the degree of polarization and the polarization angle.
  • Inhomogeneities are measured in detail and decomposed into column, row, and pixel variations; a new method determines them at all intensity levels from just two captured images.
  • Optionally, cameras can be measured with their optics, or with illumination geometry matching the exit pupil position of the optics for which the image sensor was designed. Thus, the standard is now also suitable for image sensors with pixels shifted towards the edge.
  • A more suitable measure for the linearity of the characteristic curve is introduced.


Xiaomi, Sinovation Ventures, and Inno-Chip Invest in Prophesee


Prophesee announces a strategic partnership and investment by Sinovation Ventures, Xiaomi, and Inno-Chip, an investment firm which has obtained investment and support from Will Semiconductor, owner of OmniVision. The amount of the new investment has not been disclosed.

“This round of funding takes us another step closer to establishing Prophesee as a clear leader in applying neuromorphic methods to machine vision and AI. The commitment from these new partners is a strategic part of our evolution and growth plan that will give us expanded global access to customers, particularly in China, for our revolutionary event-based vision sensing and processing approach,” noted Luca Verre, CEO and founder of Prophesee. “As we continue to develop strategies to align with targeted market segments like mobile, automotive, AR and industrial automation, involvement from established leaders such as Sinovation, Xiaomi and Inno-Chip will complement our current expertise and business networks, and position us strongly in the relevant ecosystems.”


Apple iPhone Cameras Evolution


SystemPlus publishes a review of Apple iPhone cameras from the iPhone 6 to the iPhone 12 Pro Max:


LargeSense Presents its 140mm x 120mm CMOS Sensor


LargeSense Founder Bill Charbonnet presents a 4x5-format camera featuring a 140mm x 120mm CMOS sensor with 6.7MP resolution:


22 Theorems and 43 Lemmas about Pixel Conversion Gain


The Arxiv.org paper "A novel approach to photon transfer conversion gain estimation" by Aaron Hendrickson, apparently from Johns Hopkins University, spans 122 pages and contains a lot of mathematical statements, including 22 theorems, 43 lemmas, and 17 corollaries. All this is said to be necessary to calculate the conversion gain of a CCD sensor with non-uniform pixels.

"Nonuniformities in the imaging characteristics of modern image sensors are a primary factor in the push to develop a pixel-level generalization of the photon transfer characterization method. In this paper, we seek to develop a body of theoretical results leading toward a comprehensive approach for tackling the biggest obstacle in the way of this goal: a means of pixel-level conversion gain estimation. This is accomplished by developing an estimator for the reciprocal-difference of normal variances and then using this to construct a novel estimator of the conversion gain. The first two moments of this estimator are derived and used to construct exact and approximate confidence intervals for its absolute relative bias and absolute coefficient of variation, respectively. A means of approximating and computing optimal sample sizes are also discussed and used to demonstrate the process of pixel-level conversion gain estimation for a real image sensor."


CAPD iToF Sensors Overview


Vrije Universiteit Brussel and Melexis publish an MDPI paper "An Overview of CMOS Photodetectors Utilizing Current-Assistance for Swift and Efficient Photo-Carrier Detection" by Gobinath Jegannathan, Volodymyr Seliuchenko, Thomas Van den Dries, Thomas Lapauw, Sven Boulanger, Hans Ingelberts, and Maarten Kuijk. This is a type of iToF sensor offered by Sony, TI, and Melexis.

"This review paper presents an assortment of research on a family of photodetectors which use the same base mechanism, current assistance, for the operation. Current assistance is used to create a drift field in the semiconductor, more specifically silicon, in order to improve the bandwidth and the quantum efficiency. Based on the detector and application, the drift field can be static or modulated. Applications include 3D imaging (both direct and indirect time-of-flight), optical receivers and fluorescence lifetime imaging. This work discusses the current-assistance principle, the various photodetectors using this principle and a comparison is made with other state-of-the-art photodetectors used for the same application."


Albert Theuwissen’s Course Available On-Line, 1st Lecture Free


Albert Theuwissen makes his image sensor course available on-line. Eight videos are available as video-on-demand training. The first one is free of charge and serves as an introduction to the remaining 7 videos (all around 30 min each).

The course comes with live sessions during which participants can discuss the topics presented in the videos, as well as the outcome of the exercises and quizzes added at the end of every video. The timing of the live sessions will be adjusted to the geographical location of the participants.
 
This unique concept of video-on-demand + live interaction is now available here.


Image Sensor-based PUF


IEICE Electronics Express publishes a Ritsumeikan University, Japan, paper "Modeling attacks against device authentication using CMOS image sensor PUF" by Hiroshi Yamada, Shunsuke Okura, Masayoshi Shirahata, and Takeshi Fujino.

"A CMOS image sensor physical unclonable functions (CIS PUF) which generates unique response extracted from manufacturing process variation is utilized for device authentication. In this paper, we report modeling attacks to the CIS PUF, in which column fixed pattern noise is exploited in a sorting attack. When the PUF response is generated with pairwise comparison method, unknown responses are predicted with probability over 87.8% with only 0.31% training sample of whole challenge and response pairs."


Inivation Company Introduction


Inivation publishes a company introduction video:


Samsung Explains PDAF Pixel Masking in its 108MP Nonacell Sensors


Electronic Imaging publishes Samsung paper "A new PDAF correction method of CMOS image sensor with Nonacell and Super PD to improve image quality in binning mode" by Yeongheup Jang, Hyungwook Kim, Kundong Kim, Sungsu Kim, Sungyong Lee, and Joonseo Yim.

"This paper presents a new PDAF correction method to improve the binning mode image quality in the world’s first 0.8um 108 mega pixel CMOS Image Sensor with Samsung Nonacell and Super PD technology.

The conventional PDAF correction method is based on bad pixel-correction (BPC), replacing AF pixel to adjacent normal pixels. Recently, in order to suppress AF artifact, sensor provide an advanced algorithm that detect the directionality of the image pattern near the PDAF pixel area, and determine which area to be referred for the correction. For example, when the algorithm detects no directionality in the pattern, the PDAF pixel will be replaced by referring to all of the same color-pixels within the kernel of 7x7 or 9x9 pixels. If Slash pattern is detected, PDAF correction will refer to the pixels which are located in only slash direction as shown in Table 2. In spite of the advanced algorithm, the AF artifact still occur because of a higher AF density, directional misrecognition or exceptional case of direction.

In order to overcome the limitation of conventional method, new correction method, named Dilution mode have been introduced. In Dilution mode, a Nonacell, which contains AF pixels, outputs its own seed value to a binning output and deliver AF information by embedded data, which is a PDAF Tail mode."
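For orientation, the conventional PDAF correction that the paper improves upon is essentially direction-aware defective-pixel interpolation over same-color Bayer neighbors. A simplified generic version is sketched below; border handling is omitted and this is not the paper's Dilution mode.

```python
import numpy as np

def correct_pdaf_pixel(raw: np.ndarray, r: int, c: int) -> float:
    """Direction-aware replacement of a PDAF pixel in a Bayer raw image.

    Same-color neighbors in a Bayer pattern sit 2 pixels apart. The gradient
    along each direction is measured, and interpolation follows the direction
    with the smallest gradient (along an edge rather than across it).
    Generic BPC-style correction; border handling omitted."""
    directions = {
        "horizontal": ((r, c - 2), (r, c + 2)),
        "vertical": ((r - 2, c), (r + 2, c)),
        "slash": ((r - 2, c + 2), (r + 2, c - 2)),
        "backslash": ((r - 2, c - 2), (r + 2, c + 2)),
    }
    best = None
    for (r1, c1), (r2, c2) in directions.values():
        a, b = float(raw[r1, c1]), float(raw[r2, c2])
        grad = abs(a - b)
        if best is None or grad < best[0]:
            best = (grad, 0.5 * (a + b))
    return best[1]
```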


CIS Stacking Patent Invalidated due to PCB-Based Prior Art


Bloomberg: The US Patent Trial and Appeal Board (PTAB) dismisses Cellect's stacked-sensor patent lawsuit against Samsung on the grounds of existing PCB-style prior art from 1995. Cellect's patent US 9,198,565 "Reduced area imaging device incorporated within endoscopic devices" proposes image sensor stacking as a way to reduce the endoscope footprint:


PTAB concludes that a 1995 camera arrangement stacking drivers on a PCB underneath the imager, such as the one described here, is known prior art:


As an unrelated note, the PTAB document quotes a Kodak market review from 1995, in which the company correctly forecast that CMOS sensors would conquer the whole mass market, while CCDs would survive only in low-volume niche applications. As history shows, having the right forecast at the right time did not change the company's fate:


Modeling of SPADs for LiDAR: Hold-off Time, Afterpulsing, Crosstalk, Pile-up Distortions


Politecnico di Milano publishes an MDPI paper "Statistical Modelling of SPADs for Time-of-Flight LiDAR" by Alfonso Incoronato, Mauro Locatelli, and Franco Zappa.

"Time-of-Flight (TOF) based Light Detection and Ranging (LiDAR) is a widespread technique for distance measurements in both single-spot depth ranging and 3D mapping. Single Photon Avalanche Diode (SPAD) detectors provide single-photon sensitivity and allow in-pixel integration of a Time-to-Digital Converter (TDC) to measure the TOF of single-photons. From the repetitive acquisition of photons returning from multiple laser shots, it is possible to accumulate a TOF histogram, so as to identify the laser pulse return from unwelcome ambient light and compute the desired distance information. In order to properly predict the TOF histogram distribution and design each component of the LiDAR system, from SPAD to TDC and histogram processing, we present a detailed statistical modelling of the acquisition chain and we show the perfect matching with Monte Carlo simulations in very different operating conditions and very high background levels. We take into consideration SPAD non-idealities such as hold-off time, afterpulsing, and crosstalk, and we show the heavy pile-up distortion in case of high background. Moreover, we also model non-idealities of timing electronics chain, namely, TDC dead-time, limited number of storage cells for TOF data, and TDC sharing. Eventually, we show how the exploit the modelling to reversely extract the original LiDAR return signal from the distorted measured TOF data in different operating conditions."


Sony Prepares All-Pixel AF HDR Sensor for Smartphones


Sony publishes 3 videos presenting a mobile image sensor with all-pixel AF and HDR. Such a product has not been officially announced yet. However, Sony presented a 48MP 0.8um pixel with all-pixel AF at IEDM 2019.


Isorg Raises 16M Euros in Series C Round


ALA News: Isorg announces a capital increase of €16M in series C financing. Two major industrial investors, Sumitomo Chemical and Mitsubishi Corporation, participated in the round. Greece-based Integrated Systems Development SA and five new French investors represented by fund manager Financière Fonds Privés also joined the round. Legacy shareholders Bpifrance, through its large venture funds, New Science Venture, CEA Investment and Sofimac Group (Limousin Participations) contributed, reaffirming their commitment and confidence in Isorg. The company has raised €47.8M (approx. $58.4M) to date.

“This third fundraising marks Isorg’s maturity and readiness to become the industrial player we set out to be at the very start of our venture 11 years ago,” said Jean Yves Gomez, CEO of Isorg. “The addition of new industrial investors from Japan and Greece, alongside our historical partners, is confirmation of our international ambition, the strength of our business model and product maturity.”

Isorg will use the new proceeds to:
  • Launch the commercial availability of its organic photodiode technology to provide the security market with increased levels of ID authentication and offer new integration opportunities for multiple fingerprint scanners
  • Deploy a global sales and applications engineering workforce
  • Transform operations to support a fully-fledged industrial company
“OPD technology can be used in a variety of applications. We expect the OPD market will soon boom and expand rapidly through Isorg’s technology,” said Isao Kurimoto, executive officer at Sumitomo Chemical Co Ltd.

“Besides integration in smartphones, Isorg’s solutions can be applied in a wide range of applications for different industry domains,” declared Yoshiyuki Watanabe, general manager of the business creation & digital strategy unit at Mitsubishi Corporation.

Over the coming months, Isorg plans to achieve several goals:
  • Open a new location in Asia
  • File for FBI certification of its FAP20 to FAP60 biometric modules for security applications
  • Develop palm size modules
  • Design a vein recognition module based on a client-validated sensor with strong NIR sensitivity


Automotive PHY Battles: A-PHY vs Auto-Serdes, Sony Evaluates A-PHY


PRNewswire: Valens announces that it has started an evaluation with Sony Semiconductor Solutions to develop and integrate MIPI A-PHY technology into next-generation image sensor products.

The A-PHY SerDes standard was released by the MIPI Alliance in September 2020, targeting the integration of cameras, sensors, and displays in vehicles while also incorporating functional safety and security. Valens is the first company to market with A-PHY-compliant chipsets. Recently, Valens announced a SPAC deal to list on the NYSE at a valuation of $1.16B (EETimes).

"It's highly important for Sony to integrate the cutting-edge technology into our image sensors, and A-PHY serializer integration will provide significant benefits for our customers," said Kenji Onishi, General Manager, Automotive Business Department, Sony Semiconductor Solutions. "The MIPI ecosystem is growing quickly, and we're happy to be early adopters of this automotive connectivity standard. Valens is in a leading position with A-PHY, which is why it is so important for us to start this collaboration. We believe future models will have even higher resolutions. In addition, our company is preparing to integrate several features into their next-generation sensors, including metadata output, higher framerate, and wider bit depth – all of which will require an ultra-high-speed, long-reach connectivity solution such as MIPI A-PHY. We will continue to support not only A-PHY but also D-PHY, proprietary interfaces, and open-standard interfaces."


The MIPI A-PHY standard faces competition from the Automotive SerDes Alliance (ASA), which offers a quite similar performance and feature set. SemiEngineering discusses the competition between the two standards:

"The reason for the separate and independent ASA development isn’t publicly clear. In some of their statements and materials, it is positioned as the only standardized alternative to proprietary schemes, without acknowledging the existence of the MIPI/VESA solutions. And some say that, during the A-PHY definition process, the group split, with one side moving to create the new ASA group.

Some further digging revealed that the main concern seems to be that the A-PHY technology comes from one company, Valens, which contributed it to the standard. However, “MIPI A-PHY, like all MIPI specifications, is made available under royalty-free terms,” said [Peter Lefkin, managing director of the MIPI Alliance.]

Still, the issue for ASA members appears to be that only essential patents get a license. Valens has implemented this in a way that includes non-essential patents, and licensees don’t get access to those patents. A statement from the ASA steering committee noted, “There are examples where the solution of one supplier was successfully made a standard, but there are many examples where it did not work.”

The ASA folks are more interested in a process where multiple companies contribute technology without one company dominating. The ASA effort is licensed under FRAND (free or reasonable and non-discriminatory) licensing.

MIPI’s concern is there may be royalty uncertainty prior to acquiring a license. There apparently has been some history in the automotive realm of companies refusing to license essential patents. Regardless, it’s pretty clear that the A-PHY and the ASA PHY will compete head-to-head. How that competition resolves itself is not yet evident."


SPAD Imaging with No Pile-Up


Politecnico di Milano, Italy, publishes an open access paper in Review of Scientific Instruments, "Toward ultra-fast time-correlated single-photon counting: A compact module to surpass the pile-up limit" by S. Farina, G. Acconcia, I. Labanca, M. Ghioni, and I. Rech.

"Time-Correlated Single-Photon Counting (TCSPC) is an excellent technique used in a great variety of scientific experiments to acquire exceptionally fast and faint light signals. Above all, in Fluorescence Lifetime Imaging (FLIM), it is widely recognized as the gold standard to record sub-nanosecond transient phenomena with picosecond precision. Unfortunately, TCSPC has an intrinsic limitation: to avoid the so-called pile-up distortion, the experiments have been historically carried out, limiting the acquisition rate below 5% of the excitation frequency. In 2017, we demonstrated that such a limitation can be overcome if the detector dead time is exactly matched with the excitation period, thus paving the way to unprecedented speedup of FLIM measurements. In this paper, we present the first single-channel system that implements the novel proposed methodology to be used in modern TCSPC experimental setups. To achieve this goal, we designed a compact detection head, including a custom single-photon avalanche diode externally driven by a fully integrated Active Quenching Circuit (AQC), featuring a finely tunable dead time and a short reset time. The output timing signal is extracted by using a picosecond precision Pick-Up Circuit (PUC) and fed to a newly developed timing module consisting of a mixed-architecture Fast Time to Amplitude Converter (F-TAC) followed by high-performance Analog-to-Digital Converters (ADCs). Data are transmitted in real-time to a Personal Computer (PC) at USB 3.0 rate for specific and custom elaboration. Preliminary experimental results show that the new TCSPC system is suitable for implementing the proposed technique, achieving, indeed, high timing precision along with a count rate as high as 40 Mcps."


Omnivision Unveils Disposable 8MP Sensor, More


BusinessWire: OmniVision announces its next-generation OH08A and OH08B CMOS sensors―the first 8MP sensors for single-use and reusable endoscopes. Additionally, the new OH08B is the first medical-grade image sensor to use Nyxel NIR technology.

The medical-grade OH08A image sensor features a 1/2.5-inch optical format, incorporates a 1.4µm PureCel Plus-S pixel, and offers 4K2K resolution in a small 7.1 x 4.6mm package for chip-on-tip endoscopes. The OH08B has a 1/1.8-inch optical format, uses a larger 2.0µm PureCel pixel in an 8.9 x 6.3mm package, and features OmniVision’s Nyxel technology with enhanced NIR sensitivity.

“Our next-generation OH08A/B 8MP image sensors are targeted at endoscopes with a 10-12mm outer diameter, such as gastroscopes, duodenoscopes, amnioscopes, laparoscopes and colonoscopes. They deliver higher image quality, up to 4K2K resolution at 60 fps, greatly improving the doctor’s ability to visualize the human anatomy during these important procedures,” said Richard Yang, senior staff product marketing manager at OmniVision. “In the OH08B, we’ve taken our sensor to the next level by adding Nyxel technology, which offers better performance in color and IR sensitivity, enabling doctors to see sharper video during NIR, fluorescence, chromo-endoscopy and virtual endoscopy procedures. Also, higher sensitivity results in less illumination, thus reducing the heat at the tip of the endoscope.”

The OH08A offers 8MP Bayer still frame or 4K video in real time. It features 4-cell three-exposure HDR with tone mapping for improved HDR output at 1080p60 or native 4K2Kp60 resolution and two-exposure staggered HDR support.

The OH08B's Nyxel technology provides 3x QE improvement at both the 850nm and 940nm wavelengths. It allows the use of lower-power IR illumination, resulting in significantly reduced chip-on-tip power consumption.

Other key features include a 15.5 degree CRA for the OH08A and 11 degree CRA for the OH08B, enabling the use of lenses with large field of view and short focus distance; PWM output LED drivers; and 4 lane MIPI output with raw data. These sensors are stereo ready with frame synchronization to support a host of depth perception applications. Additionally, they are autoclavable for reusable endoscope sterilization.

The OH08A/B image sensors are available for sampling now in a chip scale package.
