Archives for June 2020

Assorted News: Brookman, Smartsens, AIStorm, Cista, Prophesee, Unispectral, SiLC, Velodyne, Himax

Image Sensors World        Go to the original article...

Brookman demos four of its pToF cameras working simultaneously without interference:



SmartSens reports it has garnered three awards at the 2020 China IC Design Award Ceremony and Leaders Summit, co-presented by EE Times China, EDN China, and ESMC China. SmartSens won awards in three categories: Outstanding Technical Support: IC Design Companies, Popular IC Products of the Year: Sensors/MEMS, and Silicon 100.


Other imaging companies on the EE Times Silicon 100 list of Emerging Startups to Watch are AIStorm, Cista Systems, Prophesee, Unispectral, and SiLC.


Bloomberg reports that the blank-check company Graf Industrial Corp. is in talks to merge with Velodyne Lidar in a deal that would take Velodyne public. Graf Industrial Corp. was established in 2018 as a blank-check company aiming to acquire one or more businesses or assets via a merger, capital stock exchange, asset acquisition, stock purchase, or reorganization. Merging with a blank-check company has become a popular way for companies to go public as the coronavirus pandemic upends the markets.

GlobeNewswire: Himax launches the WiseEye WE-I Plus HX6537-A AI platform that supports Google’s TensorFlow Lite for Microcontrollers.

The Himax WiseEye solution is composed of the Himax HX6537-A processor and a Himax Always-on sensor. With support for TensorFlow Lite for Microcontrollers, developers can take advantage of the WE-I Plus platform, as well as the integrated TensorFlow Lite for Microcontrollers ecosystem, to develop NN-based edge AI applications targeting the notebook, TV, home appliance, battery camera, and IP surveillance edge computing markets.

The processor remains in low-power mode until a movement/object is identified by the accelerators. Afterwards, the DSP, running NN inference on the TensorFlow Lite for Microcontrollers kernel, performs the needed CV operations and sends the metadata results over the TLS (Transport Layer Security) protocol to the main SoC and/or a cloud service for application-level operation. The average power consumption for the Google person detection example inference can be under 5mW. Additionally, the average Himax Always-on sensor power consumption can be less than 1mW.
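For illustration, here is a rough host-side sketch of that wake-then-infer flow using the TensorFlow Lite Python interpreter (the WE-I Plus itself runs the C++ TensorFlow Lite for Microcontrollers kernel). The model path, endpoint, and motion threshold are hypothetical placeholders, not Himax parameters:

```python
# Sketch of the wake-on-motion -> NN inference -> metadata-over-TLS flow.
import json
import socket
import ssl

import numpy as np
import tensorflow as tf

MODEL_PATH = "person_detect.tflite"      # hypothetical person-detection model
HOST, PORT = "main-soc.local", 8443      # hypothetical main-SoC/cloud endpoint

interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def motion_detected(prev, frame, threshold=12.0):
    """Crude software stand-in for the hardware motion accelerator."""
    return np.mean(np.abs(frame.astype(np.int16) - prev.astype(np.int16))) > threshold

def infer(frame):
    """Run one NN inference and return a small metadata dict."""
    x = frame.astype(inp["dtype"]).reshape(inp["shape"])
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    score = interpreter.get_tensor(out["index"]).ravel()
    return {"person_score": float(score[0])}

def send_metadata(meta):
    """Ship only the metadata (not the image) to the main SoC over TLS."""
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, PORT)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            tls.sendall(json.dumps(meta).encode())
```

Sending only a few bytes of metadata instead of image frames is what keeps the radio and host SoC asleep most of the time, which is where the sub-5mW average figure comes from.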

“Himax WE-I Plus, coupled with Himax AoS image sensors, broadens TensorFlow Lite ecosystem offering and provides developers with possibilities of high performance and ultra low power,” said Pete Warden, Technical Lead of TensorFlow Lite for Microcontrollers at Google.

Go to the original article...

RevoRing filter adapter review

Cameralabs        Go to the original article...

The RevoRing is a cunning filter adapter from H&Y that saves you from carrying, fitting and swapping multiple step-down rings. A variable mechanism allows it to adapt one larger filter to multiple lenses, and fit it quickly too. Check out our review!…

The post RevoRing filter adapter review appeared first on Cameralabs.

Go to the original article...

Sony Prepares Subscription Service for its AI-Integrated Sensors

Image Sensors World        Go to the original article...

Reuters, Bloomberg, Yahoo: Sony plans to sell software by subscription for data-analyzing sensors with an integrated AI processor, like the recently announced IMX500.

“We have a solid position in the market for image sensors, which serve as a gateway for imaging data,” said Sony’s Hideki Somemiya, who heads a new team developing sensor applications. Analysis of such data with AI “would form a market larger than the growth potential of the sensor market itself in terms of value,” Somemiya said in an interview, pointing to the recurring nature of software-dependent data processing versus a hardware-only business.

“Most of our sensor business today can be explained only by revenues from our five biggest customers, who would buy our latest sensors as we develop,” Somemiya said. “In order to be successful in the solution business, we need to step outside that product-oriented approach.”

Customer support is currently included in the one-time price of Sony sensors. But Somemiya said Sony would provide the service via a separate subscription in the future. Made-by-Sony software tools would initially focus on supporting the company’s own sensors, and the coverage may later expand to retain customers even if they decide to switch to non-Sony sensors, he added.

“We often get queries from customers about how they can use our exotic products such as polarization sensors, short-wavelength infrared sensors and dynamic vision sensors,” Somemiya said. “So we offer them hands-on support and customized tools.”

Sony will seek business partnerships and acquisitions to build out its software engineering expertise and offer seamless support anywhere in the world. Somemiya said the sensor unit’s subscription offering is a long-term plan and shouldn’t be expected to become profitable anytime soon, at least not at meaningful scale.


Go to the original article...

LFoundry Data Shows that BSI Sensors are Less Reliable than FSI

Image Sensors World        Go to the original article...

LFoundry and Sapienza University of Rome, Italy, publish an open-access paper in the IEEE Journal of the Electron Devices Society, "Performance and reliability degradation of CMOS Image Sensors in Back-Side Illuminated configuration" by Andrea Vici, Felice Russo, Nicola Lovisi, Aldo Marchioni, Antonio Casella, and Fernanda Irrera. The data shows that the BSI sensors' lifetime under the specific failure mechanism discussed in the paper is 150-1,000 times shorter than in FSI. Of course, many other failure sources can mask this huge difference.

"We present a systematic characterization of wafer-level reliability dedicated test structures in Back-Side-Illuminated CMOS Image Sensors. Noise and electrical measurements performed at different steps of the fabrication process flow, definitely demonstrate that the wafer flipping/bonding/thinning and VIA opening proper of the Back-Side-Illuminated configuration cause the creation of oxide donor-like border traps. Respect to conventional Front-Side-Illuminated CMOS Image Sensors, the presence of these traps causes degradation of the transistors electrical performance, altering the oxide electric field and shifting the flat-band voltage, and strongly degrades also reliability. Results from Time-Dependent Dielectric Breakdown and Negative Bias Temperature Instability measurements outline the impact of those border traps on the lifetime prediction."


"TDDB measurements were performed on n-channel Tx at 125C, applying a gate stress voltage Vstress in the range +7 to +7.6V. For each Vstress several samples were tested and the time-to-breakdown was measured adopting the three criteria defined in the JEDEC standard JESD92 [21]. For each stress condition, the fit of the Weibull distribution of the time-to-breakdown values gave the corresponding Time-to Failure (TTF). Then, the TTFs were plotted vs. Vstress in a log-log scale and the lifetime at the operating gate voltage was extrapolated with a power law (E-model [22]).

NBTI measurements were performed on p-channel Tx at 125C, applying Vstress in the range -3 to -4V. Again, several Tx were tested. Following the JEDEC standard JESD90 [23], in this case, lifetime is defined as the stress time required to have a 10% shift of the nominal VT. The VT shift has a power law dependence on the stress time and the lifetime value at the operating gate voltage could be extrapolated."


"Noise and charge pumping measurements denoted the presence of donor-like border traps in the gate oxide, which were absent in the Front-Side Illuminated configuration. The trap density follows an exponential dependence on the distance from the interface and reaches the value 2x10e17 cm-3 at 1.8 nm. Electrical measurements performed at different steps during the manufacturing process demonstrated that those border traps are created during the process loop of the Back-Side configuration, consisting of wafer upside flipping, bonding, thinning and VIA opening.

Traps warp the oxide electric field and shift the flat-band voltage with respect to the Front-Side configuration, as if a positive charge centroid of 1.6x10e-8 C/cm2 at 1.7 nm was present in Back-Side configuration, altering the drain and gate current curves.

We found that the donor-like border traps affect also the Back-Side device long term performance. Time Dependent Dielectric Breakdown and Negative Bias Temperature Instability measurements were performed to evaluate lifetime. As expected, the role of border traps in the lifetime prediction is different in the two cases, but the reliability degradation of Back-Side with respect to Front-Side-Illuminated CMOS Image Sensors is evident in any case."

Update: Here is a comment from Felice Russo:

The following comments intend to clarify the scope of the paper “Performance and reliability degradation of CMOS Image Sensors in Back-Side Illuminated configuration”.

The title reported in the Image Sensor Blog, “LFoundry Data shows that BSI Sensors are Less Reliable than FSI”, leads to a conclusion different from the intent of the authors. The purpose of the paper was to evaluate potential reliability failure mechanisms, intrinsic to a particular BSI process flow, rather than highlighting a general BSI reliability weakness. BSI sensors produced at LFoundry incorporate numerous process techniques to exceed all product reliability requirements.

It is widely accepted [Ref.1-3] that the BSI process is sensitive to charging effects, independent of the specific process flow and production line. It may cause an oxide degradation, mainly related to the presence of additional distributions of donor-like traps in the oxide, located within a tunneling distance from the silicon-oxide interface (border/slow traps) and likely linked to an oxygen vacancy.

The work, published by the University, was based on wafer level characterization data, collected in 2018 using dedicated test structures fabricated with process conditions properly modified to emphasize the influence of the main BSI process steps on the trap generation.

To address these potential intrinsic failure mechanisms, several engineering solutions have been implemented to meet all reliability requirements up to automotive grade. Our earlier published work [Ref.4] shows that BSI can match FSI TDDB lifetime with properly engineered solutions. Understandably, not all solutions can be published.

Results have been used to further improve the performance of BSI products and to identify subsequent innovative solutions for the future generations of BSI sensors.

References:
[1] J. P. Gambino et al., "Device reliability for CMOS image sensors with backside through-silicon vias," in Proceedings of the IEEE International Reliability Physics Symposium (IRPS), 2018.
[2] A. Lahav et al., "BSI complementary metal-oxide-semiconductor (CMOS) imager sensors," in High Performance Silicon Imaging, Second Edition, edited by D. Durini, 2014.
[3] S. G. Wuu et al., "A manufacturing BSI illumination technology using bulk-Si substrate for advanced CMOS image sensors," in Proceedings of the International Image Sensor Workshop, 2009.
[4] A. Vici et al., "Through-silicon-trench in back-side-illuminated CMOS image sensors for the improvement of gate oxide long term performance," in Proceedings of the International Electron Devices Meeting, 2018.

Go to the original article...

Imec Presentation on Low-Cost NIR and SWIR Imaging

Image Sensors World        Go to the original article...

SPIE publishes an Imec presentation "Image sensors for low cost infrared imaging and 3D sensing" by Jiwon Lee, Epimetheas Georgitzikis, Edward Van Sieleghem, Yun Tzu Chang, Olga Syshchyk, Yunlong Li, Pierre Boulenc, Gauri Karve, Orges Furxhi, David Cheyns, and Pawel Malinowski (available after free SPIE account registration).

"Thanks to state-of-the-art III-V and thin-film (organics or quantum dots) material integration experience combined with imager design and manufacturing, imec is proposing a set of research activities which ambition is to innovate in the field of low cost and high resolution NIR/SWIR uncooled sensors as well as 3D sensing in NIR with Silicon-based Time-of-Flight pixels. This work will present the recent integration achievements with demonstration examples as well as development prospects in this research framework."

Go to the original article...

1/f and RTS Noise Model

Image Sensors World        Go to the original article...

The IEEE open-access Journal of the Electron Devices Society publishes a Hong Kong University of Science and Technology paper "1/f Low Frequency Noise Model for Buried Channel MOSFET" by Shi Shen and Jie Yuan.

"The Low Frequency Noise (LFN) in MOSFETs is critical to Signal-to-Noise Ratio (SNR) demanding circuits. Buried Channel (BC) MOSFETs are commonly used as the source-follower transistors for CCDs and CMOS image sensors (CIS) for lower LFN. It is essential to understand the BC MOSFETs noise mechanism based on trap parameters with different transistor biasing conditions. In this paper, we have designed and fabricated deep BC MOSFETs in a CIS-compatible process with 5 V rating. The 1/f Y LFN is found due to non-uniform space and energy distributed oxide traps. To comprehensively explain the BC MOSFETs noise spectrum, we developed a LFN model based on the Shockley-Read-Hall (SRH) theory with WKB tunneling approximation. This is the first time that the 1/f Y LFN spectrum of BC MOSFET has been numerically analyzed and modeled. The Random Telegraph Signal (RTS) amplitudes of each oxide traps are extracted efficiently with an Impedance Field Method (IFM). Our new model counts the noise contribution from each discretized oxide trap in oxide mesh grids. Experiments verify that the new model matches well the noise power spectrum from 10 to 10k Hz with various gate biasing conditions from accumulation to weak inversion."

Go to the original article...

ST ToF Products Tour

Image Sensors World        Go to the original article...

ST publishes a nice presentation "Going further with FlightSense" at the Sensor+Test 2020 virtual exhibition. There is also a short presentation about FlightSense applications.

Go to the original article...

v2e and Event-Driven Camera Nonidealities

Image Sensors World        Go to the original article...

ETH Zurich publishes an Arxiv.org paper "V2E: From video frames to realistic DVS event camera streams" by Tobi Delbruck, Yuhuang Hu, and Zhe He. The v2e open-source tool is available here.

"To help meet the increasing need for dynamic vision sensor (DVS) event camera data, we developed the v2e toolbox, which generates synthetic DVS event streams from intensity frame videos. Videos can be of any type, either real or synthetic. v2e optionally uses synthetic slow motion to upsample the video frame rate and then generates DVS events from these frames using a realistic pixel model that includes event threshold mismatch, finite illumination-dependent bandwidth, and several types of noise. v2e includes an algorithm that determines the DVS thresholds and bandwidth so that the synthetic event stream statistics match a given reference DVS recording. v2e is the first toolbox that can synthesize realistic low light DVS data. This paper also clarifies misleading claims about DVS characteristics in some of the computer vision literature. The v2e website is this https URL and code is hosted at this https URL."


The paper also explains some of the misconceptions about DVS sensors:

"Debunking myths of event cameras: Computer vision papers about event cameras have made rather misleading claims such as “Event cameras [have] no motion blur” and have “latency on the order of microseconds” [7]–[9], which were perhaps fueled by the titles (though not the content) of papers like [1], [10], [11]. Review papers like [5] are more accurate in their descriptions of DVS limitations, but are not very explicit about the actual behavior.

DVS cameras must obey the laws of physics like any other vision sensor: They must count photons. Under low illumination conditions, photons become scarce and therefore counting them becomes noisy and slow. v2e is aimed at realistic modeling of these conditions, which are crucial for deployment of event cameras in uncontrolled natural lighting."

Go to the original article...

LiDAR News: Trioptics, Blickfeld, Apple

Image Sensors World        Go to the original article...

Trioptics publishes its presentation at the recent Autosens On-Line conference, "From Lab to Fab – Assembly and testing of optical components for LiDAR sensors in prototyping and serial production," by Dirk Seebaum:



Blickfeld publishes a datasheet for its Cube LiDAR based on MEMS mirror scanning and a SPAD array. The datasheet includes performance in bright sunlight:


BusinessWire: Apple announces iPadOS 14 that features support for iPad Pro LiDAR: "ARKit 4 delivers a brand new Depth API that allows developers to access even more precise depth information captured by the new LiDAR Scanner on iPad Pro®. Developers can use the Depth API to drive powerful new features in their apps, like taking body measurements for more accurate virtual try-on, or testing how paint colors will look before painting a room."


Hong Kong University and Cornell University publish a paper "Depth Sensing Beyond LiDAR Range" by Kai Zhang, Jiaxin Xie, Noah Snavely, and Qifeng Chen.

Go to the original article...

GPixel Announces 103MP, 28fps, 12b Global Shutter Sensor

Image Sensors World        Go to the original article...

Gpixel announces the GMAX32103, a large format Global Shutter CMOS sensor for industrial applications. The sensor is based on a 3.2 µm charge-domain GS pixel, provides 11276 (H) x 9200 (V) resolution (103 MP), and supports up to 28fps with 12-bit output. GMAX32103 is aimed at demanding machine vision applications and aerial imaging.

The 3.2 µm pixel achieves a full well capacity of 10k e-, read noise of less than 2 e-, and a maximum DR of 66dB. With the implementation of microlens and light pipe technologies, the sensor provides a peak QE of 65%, a shutter efficiency of 1/15,000, and excellent angular response. GMAX32103 offers a large FOV to expand single-shot capabilities and a nearly square aspect ratio (1.23:1), which is optimal for inspection applications.

GMAX32103 uses 52 pairs of sub-LVDS channels, each running at a maximum speed of 960 Mbps. The sensor supports channel multiplexing for lower data rate implementations, and integrates a variety of readout functions including up to 32 regions of horizontal windowing (regions of interest), subsampling, and image flipping. GMAX32103 is packaged in a 209-pin uPGA ceramic package with outer dimensions of 49.5 mm x 48.1 mm.
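A quick sanity check confirms the interface can carry the full-rate payload (serializer and protocol overhead are not accounted for here):

```python
# Payload vs. aggregate sub-LVDS bandwidth for the quoted GMAX32103 numbers.
h_pix, v_pix, bits, fps = 11276, 9200, 12, 28
payload = h_pix * v_pix * bits * fps        # bits/s off the pixel array
link = 52 * 960e6                           # 52 pairs at 960 Mbps each
print(f"payload {payload/1e9:.1f} Gb/s, link {link/1e9:.1f} Gb/s, "
      f"utilization {payload/link:.0%}")    # ~34.9 vs ~49.9 Gb/s, ~70%
```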

“We are very thrilled with the introduction of GMAX32103. The further expansion of Gpixel’s line up of extremely high-resolution sensors based on an industry proven and widely accepted platform, empowers our customers to tackle demanding applications and to address the industry’s needs for ever increasing image accuracy and throughput. This product is part of our fast growing GMAX product family, which will be further expanded in the very near future with other exciting products,” says Wim Wuyts, CCO of Gpixel.

GMAX32103 engineering samples are expected in November 2020.

Go to the original article...

Canon Presents 1MP SPAD Imager Prototype

Image Sensors World        Go to the original article...

Canon has developed a prototype of what it calls "the world’s first single photon avalanche diode (SPAD) image sensor with signal-amplifying pixels capable of capturing 1-megapixel images."

The SPAD image sensor developed by Canon overcomes the longstanding difficulties of achieving high SPAD pixel counts. By adopting a new circuit technology, Canon was able to realize a digital image resolution of 1MP. Exposure time can be shortened to as little as 3.8ns. In addition, the sensor is capable of up to 24,000 fps with 1-bit output, thus enabling slow-motion capture of fast movement within an extremely short time frame.

The sensor also features a high time resolution as precise as 100 ps. With a high resolution of 1MP and high-speed image capture, it is also able to accurately perform 3D distance measurements.
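For scale, 100 ps of timing resolution corresponds to roughly 1.5 cm of single-shot depth quantization, since the round trip halves the light-travel distance:

```python
# Converting the quoted timing figures into distances.
c = 299_792_458.0                                 # speed of light, m/s
print(f"100 ps bin -> {c * 100e-12 / 2 * 100:.1f} cm depth resolution")
print(f"3.8 ns gate -> {c * 3.8e-9 / 2:.2f} m light-travel slice")
```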

The camera was jointly developed with scientists at the Swiss Federal Institute of Technology in Lausanne and published in OSA Optica.

Go to the original article...

Miscellaneous News: UMC, Innoviz, Samsung

Image Sensors World        Go to the original article...

Semiconductor Engineering quotes David Uriu, technical director of product management at UMC, saying that CIS are the drivers at the 65nm and 40nm process nodes: "CIS use 65nm/55nm. Some CIS devices will start to use 40nm, but this is not a significant part of the current CIS volume yet. 40nm will expand for some high-end pixel designs, but it is not expected to be a widely accepted node due to costs."

Electronics360 reports that fab investments in image sensor manufacturing equipment will rise 60% in 2020, with a further 35% rise in 2021.

Innoviz publishes its webinar comparing LiDAR, camera, and radar in ADAS and AV applications:



Samsung publishes a promotional video of its 50MP, 1.2um pixel ISOCELL GN1 image sensor:


Go to the original article...

Panasonic Lumix G100 review

Cameralabs        Go to the original article...

The Panasonic Lumix G100 is a compact mirrorless camera designed for vlogging and creative video, as well as photography. I tried it out for my first-looks review!…

The post Panasonic Lumix G100 review appeared first on Cameralabs.

Go to the original article...

Stratio Unveils Ge-Based SWIR Camera

Image Sensors World        Go to the original article...

After 7 years in development, Stratio unveils BeyonSense, said to be "the world’s first germanium-based smartphone-compatible camera." The 11 x 8 pixel BeyonSense Pre camera is expected to be available for sale a month from now, if the COVID-19 situation allows. The company says:

"Due to COVID-19, our fabrication facilities in Silicon Valley have been closed for the past few months and there is no clear timeline for when they will reopen.

As this is an incredibly dynamic situation, we can only expect to ship BeyonSense® Pre with 11x8 pixels in a month following the reopening of our facilities. You can be assured our team is working around the clock to make it possible to deliver BeyonSense® to you."

"The Stratio idea was conceived by three PhD students in a small corner desk at Stanford University.

As PhD students in Electrical Engineering, they knew about the myriad of advantages with infrared imaging – from material analysis to night vision. However, the technology was prohibitively expensive so that only a few could benefit from it. One day, they discussed how a new sensor material called germanium (Ge) could be responsive to infrared light waves in real life. They began digging deeper, consulted experts, and conducted countless experiments to find out how they would achieve low cost, small size, and low power consumption. It turned out to be a years-long journey, but a fruitful one. Hence Stratio was born, in January 2013."


Stratio shows a short demo video of its new camera:

Go to the original article...

Yole: Image Sensor Market Keeps Growing, Defies Coronavirus Troubles

Image Sensors World        Go to the original article...

i-Micronews: At the end of 2019, CIS prices rose by nearly 10% because production reached maximum worldwide capacity.

"Even though the COVID-19 lockdown led to a drop in smartphone shipments, the demand for mobile camera modules will maintain a 7% year-over-year (YoY) growth in 2020. In the COVID-19 situation, no evident substantial impact on the CIS supply chain has been identified, including on the purchase of raw materials by giant players. The overall impact will be slower growth this year, with respect to the 25% YoY growth last year.

Demand from mobile devices will keep thriving. The overall attachment rate for CIS cameras per phone will move beyond 3.4 in 2020. Also, the growth rate for CIS attachment is still expected to be over 10% in the automotive space. The short term impact of COVID-19 has led to a substantial decrease of car production in the range of -30%. The end point for 2020 is very uncertain, and the long-term horizon is at best flat. The downturn in car production will be mitigated by increased attachment rates for automotive cameras. Looking at all markets the demand is still growing. The expansion of investment in CIS and capacity transition from DRAM to CIS continues for most players."

Go to the original article...

Event-Based Camera Tutorial

Image Sensors World        Go to the original article...

ETH Zurich Robotics and Perception Group publishes a video presentation, "Event Cameras: Opportunities and the Road Ahead (CVPR 2020)," by Davide Scaramuzza.


Go to the original article...

Demosaicing First or Denoising First?

Image Sensors World        Go to the original article...

Inner Mongolia University, China, and CNRS, France, publish a paper "A Review of an Old Dilemma: Demosaicking First, or Denoising First?" by Qiyu Jin, Gabriele Facciolo, and Jean-Michel Morel.

"Image denoising and demosaicking are the first two crucial steps in digital camera pipelines. In most of the literature, denoising and demosaicking are treated as two independent problems, without considering their interaction, or asking which should be applied first. Several recent works have started addressing them jointly in works that involve heavy weight neural networks, thus incompatible with low power portable imaging devices. Hence, the question of how to combine denoising and demosaicking to reconstruct full color images remains very relevant: Is denoising to be applied first, or should that be demosaicking first? In this paper, we review the main variants of these strategies and carry-out an extensive evaluation to find the best way to reconstruct full color images from a noisy mosaic. We conclude that demosaicking should applied first, followed by denoising. Yet we prove that this requires an adaptation of classic denoising algorithms to demosaicked noise, which we justify and specify."

Go to the original article...

Few More iPad LiDAR Pictures

Image Sensors World        Go to the original article...

SystemPlus Consulting publishes an Apple iPad Pro 2020 LiDAR module reverse engineering report with a few more pictures in addition to the many that have already been published:

"This rear 3D sensing module is using the first ever consumer direct Time-of-Flight (dToF) CMOS Image Sensor (CIS) product with in-pixel connection.

The 3D sensing module includes a new generation of Near Infrared (NIR) CIS from Sony with a Single Photon Avalanche Diode (SPAD) array. The sensor features 10 µm long pixels and a resolution of 30 kilopixels. The in-pixel connection is realized between the NIR CIS and the logic wafer using hybrid Direct Bonding Interconnect technology, which is the first time Sony has used 3D stacking for its ToF sensors.

The LiDAR uses a vertical cavity surface emitting laser (VCSEL) coming from Lumentum. The laser is designed to have multiple electrodes connected separately to the emitter array. A new design with mesa contact is used to enhance wafer probe testing.

A wafer level chip scale packaging (WLCSP), five-side molded driver integrated circuit from Texas Instruments generates the pulse and drives the VCSEL power and beam shape. Finally, a new Diffractive Optical Element (DOE) from Himax is assembled on top of the VCSEL to generate a dot pattern."

Go to the original article...

3D News: MIT, Intel, Sharp

Image Sensors World        Go to the original article...

MIT professor Vivienne Sze's presentation on energy-efficient processing has a part about low-power ToF imaging:


Intel announces a long-range version of its active stereo 3D camera, the RealSense D455:

"The D455 camera increases the optimal range to 6 meters, making it twice as accurate as the current D400 cameras without sacrificing field of view. The D455 also includes global shutters for the depth and RGB sensors to improve correspondence between the two different data streams and to match the field of view between the depth sensors and the RGB sensor. In addition, this camera also integrates an IMU to allow for refinement of its depth awareness in any situation where the camera moves.

The D455 achieves less than 2% Z-error at 4 meters with several improvements. First, the depth sensors are located 95 millimeters apart, providing greater depth accuracy at a longer range. Second, the depth and RGB sensors are placed on the same stiffener, resulting in an improved alignment of color and depth. Lastly, the RGB sensor has the same field of view as the depth sensors, further improving correlation of depth and color points."


A Sharp presentation on its distance-measuring sensors explains their operation:


Sharp also makes SPAD-based ToF distance sensors:

Go to the original article...

SmartSens Acquires Allchip, Expands into Automotive Market

Image Sensors World        Go to the original article...

PRNewswire: SmartSens Technology has completed its acquisition of and merger with Shenzhen-based, 8-year-old Allchip Microelectronics. SmartSens expects the acquisition and integration of Allchip to further improve its cost structure and competitiveness in the automotive market while accelerating its innovation in smart car CIS solutions. Allchip's products include a series of SoC image sensors that have been widely deployed in automobile cameras and other miniaturized video surveillance applications.

"The increasing adoption of image sensors in automobiles has brought new momentum to the imaging market. According to a projection by research firm Yole Développement, the volume of camera modules in the global automobile market will exceed US$8B by 2025. Our acquisition of Allchip Microelectronics is a strategic move for SmartSens that will significantly broaden our leadership and capacity in addressing this market," said Richard Xu, Founder and CEO of SmartSens. "Our combined advantage -- utilizing shared resources and technologies -- will deliver a true win-win for us and our customers, for years to come."

"We are thrilled to be part of the SmartSens family. We share the same set of core values, which emphasize the pursuit of technology innovation in service of our customers' needs. We look forward to combining Allchip's technical know-how in the automotive industry with SmartSens' excellent business channel to successfully launch class-leading products for Automotive ADAS systems and other smart sensing applications," said Mike Hu, current VP of Technology at SmartSens and former CEO of Allchip. Mr. Hu is a veteran in CMOS image sensor field since his key role in BYD Microelectronics time as the CTO back in ten years ago.

Go to the original article...

Quanta Burst Photography

Image Sensors World        Go to the original article...

University of Wisconsin–Madison: In a dark room or a motion-heavy scene, conventional cameras face a choice: a quick look that freezes movement nicely but turns out dark, or a longer exposure that captures more light but blurs moving parts.

“That’s always been a fundamental trade-off in any kind of photography,” says Mohit Gupta, a University of Wisconsin–Madison computer sciences professor. “But we are working on overcoming that trade-off with a different kind of sensor.”

The researchers are using SPADs for what they call quanta burst photography — taking many images in bursts, and then processing them to squeeze one good picture from a poorly lit or fast-moving subject. The EPFL SwissSPAD array from Edoardo Charbon's group used in the burst photography work is fast enough to record 100,000 single-photon frames per second.

“The result is good image quality in low-light, with reduced motion blur, as well as a wide dynamic range,” says Gupta, whose work is supported by DARPA. “We have had good results even when the brightest spot in view is getting 100,000 times as much light as the darkest.”
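The estimation step at the heart of quanta burst photography is simple for a static scene: each 1-bit frame fires with probability 1 - exp(-H), where H is the per-frame photon exposure, so inverting the mean of many binary frames recovers H linearly over an enormous range. A minimal sketch (the paper's real contribution, aligning and merging the frames under motion, is omitted):

```python
# MLE of photon flux from binary SPAD frames: p = 1 - exp(-H) => H = -ln(1 - p).
import numpy as np

rng = np.random.default_rng(2)
H_true = np.array([0.01, 0.1, 1.0, 5.0])   # photons/pixel/frame, dark to bright
n_frames = 100_000                         # e.g. one second at 100,000 fps

binary = rng.random((n_frames, H_true.size)) < (1 - np.exp(-H_true))
H_hat = -np.log1p(-binary.mean(axis=0))    # invert the saturation curve
print("true:     ", H_true)
print("estimated:", np.round(H_hat, 3))
```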


The paper also compares the algorithms running on SPAD and Jot-based sensors:

Go to the original article...

LiDAR News: Outsight-Velodyne, Aeye, UCB, Livox

Image Sensors World        Go to the original article...

BusinessWire: It appears that Outsight is giving up on its own LiDAR hardware and switching over to Velodyne, as the companies jointly announce a multi-year partnership agreement.

Velodyne’s lidar sensors enable the 3D Semantic Camera solution to capture 3D data and track people and objects in a way that preserves anonymity and trust. The system merges lidar data with RGB color data in an embedded AI processing unit to create a premises-wide, detailed situational understanding of facilities such as airports, shopping malls and train stations.

“Velodyne’s lidar sensors play an essential role in helping our platform capture, process and understand congested environments so operators can work to increase operational efficiency and security,” said Cedric Hutchings, CEO, Outsight. “The Velodyne lidar sensors allow us to track each individual person and object with centimeter-level precision. They enable our solutions to precisely monitor movements, velocity and interactions between all persons and objects in real time.”

AEye publishes "Time of Flight vs. FMCW LiDAR: A Side-by-Side Comparison" whitepaper which is supposed to debunk FMCW companies' claims:


UC Berkeley publishes a PhD thesis on FMCW LiDAR, "FMCW Lidar: Scaling to the Chip-Level and Improving Phase-Noise-Limited Performance," by Phillip Sandborn.

"In this dissertation, I present my work in chip-scale integration of optical and electronic components for application in coherent lidar techniques. First, I will summarize the work to integrate a typically bulky FMCW lidar control system onto an optoelectronic chip-stack. The chip-stack consists of an SOI silicon-photonics chip and a standard CMOS chip. The chip was used in an imaging system to generate 3D images with as little as 10um depth precision at standoff distances of 30cm.

Second, I will summarize my work in implementing and analyzing a new post-processing method for FMCW lidar signals, called "multi-synchronous re-sampling" (MK-re-sampling)."


Forbes publishes an interview-based article "The Big Bend Theory And Beyond- Are We There Yet?"

"I interviewed 4 companies that approach solid-state LiDAR in different ways – technically and from a market perspective. They include Quanergy, Lumotive, Draper Labs and Baraja. I would like to thank them for being transparent and sharing the level of detail they have."

Livox extends the range of its Tele-15 automotive LiDAR to 320m:

"Now, objects with low reflectivity have an increased detection range of 60% from 200 meters to 320 meters at 10% reflectivity, and it will also detect objects at 500m with 50% reflectivity, previously requiring 80% reflectivity at that distance. Additionally, Tele-15 now supports a custom firmware, increasing detection range to a maximum of 1000 meters.

The Livox Tele-15 sensor is available for purchase today for $1,499."

Go to the original article...

Low Light Imaging with CFA 3.0

Image Sensors World        Go to the original article...

Applied Research LLC, Rockville, MD, USA, publishes an MDPI paper "Demosaicing of CFA 3.0 with Applications to Low Lighting Images" by Chiman Kwan, Jude Larkin, and Bulent Ayhan.

"Low lighting images usually contain Poisson noise, which is pixel amplitude-dependent. More panchromatic or white pixels in a color filter array (CFA) are believed to help the demosaicing performance in dark environments. In this paper, we first introduce a CFA pattern known as CFA 3.0 that has 75% white pixels, 12.5% green pixels, and 6.25% of red and blue pixels. We then present algorithms to demosaic this CFA, and demonstrate its performance for normal and low lighting images.

In addition, a comparative study was performed to evaluate the demosaicing performance of three CFAs, namely the Bayer pattern (CFA 1.0), the Kodak CFA 2.0, and the proposed CFA 3.0. Using a clean Kodak dataset with 12 images, we emulated low lighting conditions by introducing Poisson noise into the clean images. In our experiments, normal and low lighting images were used. For the low lighting conditions, images with signal-to-noise (SNR) of 10 dBs and 20 dBs were studied. We observed that the demosaicing performance in low lighting conditions was improved when there are more white pixels.

Moreover, denoising can further enhance the demosaicing performance for all CFAs. The most important finding is that CFA 3.0 performs better than CFA 1.0, but is slightly inferior to CFA 2.0, in low lighting images."
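Low-light emulation with Poisson noise can be sketched as follows. The SNR-to-photon-count mapping here uses one common convention for shot-noise-limited signals (SNR = sqrt(N)); the paper's exact definition may differ:

```python
# Emulate low light: scale to a peak photon count, sample Poisson, scale back.
import numpy as np

def add_poisson_noise(img, snr_db):
    """img: float array in [0, 1]; returns a noisy image at the target SNR."""
    peak = 10 ** (snr_db / 10)          # SNR = sqrt(N)  =>  N = 10^(dB/10)
    return np.random.poisson(img * peak) / peak

clean = np.tile(np.linspace(0, 1, 256), (64, 1))
for snr in (10, 20):                    # the two conditions studied in the paper
    noisy = add_poisson_noise(clean, snr)
    rmse = np.sqrt(np.mean((noisy - clean) ** 2))
    print(f"{snr} dB target -> empirical RMSE {rmse:.3f}")
```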

Go to the original article...

Sigma 100-400mm f5-6.3 DG DN review

Cameralabs        Go to the original article...

The Sigma 100-400mm f5-6.3 DG DN OS is an affordable telephoto zoom designed for full-frame mirrorless cameras and available in Sony e and L-mount versions. Find out how it performs in my review!…

The post Sigma 100-400mm f5-6.3 DG DN review appeared first on Cameralabs.

Go to the original article...

Visionox Unveils Under-OLED Front Camera Solution

Image Sensors World        Go to the original article...

i-Micronews: Chinese OLED screen company Visionox is the first to present an under-display selfie camera solution for mass-produced smartphones, Visionox InV See:

Go to the original article...

Axcelis Ships First 15MeV Implanter to "Leading Image Sensor Manufacturer"

Image Sensors World        Go to the original article...

PRNewswire: Axcelis announces that it has shipped the first Purion XEmax high energy system to a "leading CMOS image sensor manufacturer." The system is a new tool evaluation unit and was shipped in Q2.

"The new Purion XEmax high energy implanter was designed for emerging, high performance image sensor applications," Bill Bintz, Executive Vice President, Product Development explained. "The enhanced beamline features multiple filtration systems to eliminate energetic metal contaminants which can otherwise result in compromised dark current and white pixel count levels. The new system is built on the industry leading Purion XE high energy implant platform and features Axcelis' new patented Boost Technology, which delivers beam energies up to 15 MeV. With innovative new technology, the Purion XEmax is delivering the tightest angle and overall process control to enable higher quality photodiode performance for next generation CIS devices."

Go to the original article...

192MP+ Camera Support Becomes Standard in Qualcomm New Products

Image Sensors World        Go to the original article...

Qualcomm announces the Snapdragon 690 5G platform with an ISP that rivals high-end smartphones from 2 years ago:

"Vivid 4K HDR video recording is new to this series and have fun with up to 192 MP photo capture, both produce professional-quality photos and videos. And now, with 5G connectivity, 4K HDR streaming delivers vibrant, immersive entertainment."


Even the Qualcomm Robotics RB5 platform supports high-resolution imaging:

"Qualcomm Spectra™ 480 Image Signal Processor (ISP) captures fast, professional-quality photos and videos, and can process 2 Gigapixels per second. This Gigapixel speed supports superior camera features, including Dolby Vision video capture, 8K video recording (at 30 FPS) and 200-megapixel photos, and simultaneously captures 4K HDR video (at 120 FPS) and 64 MP photos with zero shutter lag. The hardware accelerator utilizes the Engine for Video Analytics (EVA) to handle all Computer Vision (CV) tasks. Additional ISP features include HEIF photo capture, slow motion video capture, and advanced video capture formats (including Dolby Vision, HDR10, HDR10+, HEVC and HLG). Seven concurrent cameras facilitate simultaneous localization and mapping (SLAM), object detection and classification, autonomous navigation and path planning to efficiently and safely perform tasks in complex indoor and outdoor settings."

Go to the original article...

Gpixel Announces 16.7MP BSI Scientific Sensor with 15um Pixels

Image Sensors World        Go to the original article...

Gpixel announces the further expansion of its GSENSE product family with the GSENSE1516BSI, a large format BSI CMOS sensor for high-end scientific applications. The sensor is designed around Gpixel’s 15 µm rolling shutter pixel, provides 4096 x 4096 resolution (16.7 MP), and supports up to 9fps in dual gain HDR mode.

Like other sensors in the GSENSE family, the GSENSE1516BSI can read out a single exposure with two different gain settings, providing two separate images that, when recombined, achieve up to 90dB intra-scene dynamic range. Using the low-gain channel, the sensor's full well capacity is 134ke–, maximizing signal-to-noise in the bright parts of the image. Through the high-gain channel, the sensor achieves a read noise of 4e–, perfect for the measurement of faint signals.
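The 90dB figure follows directly from the two quoted numbers, as the intra-scene dynamic range is the ratio of the low-gain full well to the high-gain read noise:

```python
# Dual-gain HDR dynamic range from full well capacity and read noise.
import math
dr_db = 20 * math.log10(134_000 / 4)     # FWC 134 ke- over 4 e- read noise
print(f"dynamic range: {dr_db:.1f} dB")  # ~90.5 dB
```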

The 15 µm pixel has 95% peak QE. Engineering samples of the GSENSE1516BSI will be available for evaluation in July 2020.


Go to the original article...
