Archives for July 2020

How many photons does it take to form an image?

Image Sensors World        Go to the original article...

ResearchGate: A University of Glasgow, UK, paper "How many photons does it take to form an image?" by Steven D. Johnson, Paul-Antoine Moreau, Thomas Gregory, and Miles J. Padgett tries to answer a somewhat philosophical question:

"If a picture tells a thousand words, then we might ask ourselves how many photons does it take to form a picture? In terms of the transmission of the picture information, then the multiple degrees of freedom (e.g., wavelength, polarization, and spatial mode) of the photon mean that high amounts of information can be encoded such that the many pixel values of an image can, in principle, be communicated by a single photon. However, the number of photons required to transmit the image information is not necessarily, at least technically, the same as the number of photons required to image an object. Therefore, another equally important question is how many photons does it take to measure an unknown image?

For intensity images, it seems that one detected photon per image pixel is a realistic guide, but this may be reduced by making further assumptions on the sparsity of an image in a chosen basis, such as spatial frequency. In this last respect, the advent of machine learning, knowledge-based reconstruction, and similar techniques alleviates the need for a user to explicitly define the sparse basis, but rather the prior is determined from a library of previously recorded images of a similar type. This machine learnt prior can then potentially be designed into the optimum measurement strategy. It seems likely therefore that future imaging systems will combine state-of-the-art single photon detectors with knowledge-based processing both in the design of the system itself and in the processing of the collected data to yield images or decisions based on these data on the basis of extremely low numbers of photons, potentially well below one photon per image pixel.

While we are on the topic of single-photon imaging, the International SPAD Sensor Workshop (ISSW 2020), held as an online event in June, published a nice SPAD photon-accumulation video of the city of Edinburgh:


Cedar Lane Technologies Sues Huawei over CIS Data Transmission Patents

Image Sensors World        Go to the original article...

Canada-based Cedar Lane Technologies sues Huawei, alleging infringement of 7 US patents, 3 of which describe image sensor data transmission schemes: 6,473,527; 6,972,790; and 8,537,242.


Smartsens Sees Automotive Sensors as its Future Growth Engine

Image Sensors World        Go to the original article...

Smartsens talks about its strategic move into the automotive imaging market:

"Autonomous Driving presents both challenges and new opportunities for the CIS industry in China. The current shipments of automotive chips show that the gap between domestic and foreign semiconductor companies remains wide and presents an ongoing challenge for Chinese companies. It is, however, an opportunity for SmartSens.

We believe that SmartSens’ strengths in the field of security systems can create an advantage in moving into the automotive industry by providing superior night vision imaging performance combined with other in-vehicle electronics technologies such as LED flicker suppression technology and PixGain HDR technology, just to name a few. In addition, SmartSens recently acquired Shenzhen-based Allchip Microelectronics, positioning us perfectly in research and development for the next-generation automotive sensor technology."

“In the past, the semiconductor business in China relied heavily on overseas technology and research. With the rise of local semiconductor development and the maturity of domestic CIS technology in recent years, however, we are seeing a seismic shift towards China and Asia,” said James Ouyang, the newly appointed Deputy General Manager at SmartSens.


Blackmagic Announces 80MP 60fps Camera

Image Sensors World        Go to the original article...

BusinessWire: Blackmagic Design announces the URSA Mini Pro 12K digital film camera with a 12,288 x 6,480 (12K) Super 35 image sensor, 14 stops of dynamic range, and a 60 fps frame rate in 12K at 80MP per frame.

"The Blackmagic URSA Mini Pro 12K features a revolutionary new sensor with a native resolution of 12,288 x 6480, which is an incredible 80 megapixels per frame. The Super 35 sensor has a superb 14 stops of dynamic range and a native ISO of 800. The new 12K sensor has equal amounts of red, green and blue pixels and is optimized for images at multiple resolutions. Customers can shoot 12K at 60 fps or use in-sensor scaling to allow 8K or 4K RAW at up to 110 fps without cropping or changing their field of view."

Brand ambassador John Brawley shares details on the cinematographers’ mailing list:
  • Brand new sensor, 3 years in the making.
  • 79 MP.
  • Native ISO 800.
  • 14 stops (that’s probably a bit conservative; they haven’t been able to check it properly because the measurement models are based on Bayer sensors).
  • It’s not Bayer, but it has a very small pixel pitch of 2.2 microns (the Alexa’s is 8).
  • Instead of Bayer’s 2x2 GRBG grid, it has a 6x6 grid: 6G, 6B and 6R plus 18 W pixels.
  • The W are clear or “white” pixels. This overcomes the reduced sensitivity issue of the 2.2 micron pitch.
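
Blackmagic has not published the exact mosaic layout, but the pixel counts above (6 each of R, G and B plus 18 W per 6x6 tile) can be sanity-checked against a toy pattern. The layout below, with white pixels on a checkerboard and RGB cycling over the remaining cells, is purely a hypothetical arrangement for illustration, not the actual sensor's CFA:

```python
import numpy as np

def make_tile():
    """Build a hypothetical 6x6 RGBW tile: W on a checkerboard (18 cells),
    R/G/B cycling over the remaining 18 cells (6 each)."""
    tile = np.empty((6, 6), dtype="<U1")
    rgb = "RGB"
    k = 0
    for i in range(6):
        for j in range(6):
            if (i + j) % 2 == 0:
                tile[i, j] = "W"       # clear/white pixel
            else:
                tile[i, j] = rgb[k % 3]  # cycle R, G, B over non-W cells
                k += 1
    return tile

tile = make_tile()
counts = {c: int((tile == c).sum()) for c in "RGBW"}
print(counts)  # 6 R, 6 G, 6 B, 18 W per 6x6 tile
```

Any layout with these per-tile counts reproduces the quoted ratio of half clear pixels to half color pixels; the checkerboard is just one plausible choice.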

The sensor is shown in the Blackmagic presentation video:

The new sensor's readout speeds are:

12K (full FOV) : ~15.5ms
8K/4K (full FOV) : ~8.5ms*
6K crop : ~7.8ms
4K crop : ~4.25ms*

*Blackmagic hopes to improve this slightly in an update
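
Assuming the ~15.5ms full-FOV figure covers all 6,480 rows of the 12K frame (an assumption; Blackmagic does not state the row count read per mode), the implied per-row readout time is about 2.4µs, and the readout just fits inside the 16.7ms frame period needed for 60fps:

```python
# Back-of-the-envelope check on the quoted 12K readout figures.
rows_12k = 6480              # vertical resolution of the 12K frame
readout_ms = 15.5            # quoted full-FOV readout time
frame_period_ms = 1000 / 60  # 60 fps frame period, ~16.7 ms

line_time_us = readout_ms * 1000 / rows_12k
print(f"{line_time_us:.2f} us per row")   # ~2.39 us
print(readout_ms < frame_period_ms)       # True: readout fits in a 60 fps frame
```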

Thanks to PF and others for the pointer!


Interview with Omnivision on Disposable Endoscopy

Image Sensors World        Go to the original article...

Yole Developpement publishes an interview with Tehzeeb Gunja, Director of Medical Marketing at OmniVision. A few quotes and slides:

"With more than 500 customers and approximately 600 active projects, OmniVision possesses deep knowledge of the medical industry, and strong connections to all leading ecosystem partners and end customers globally.

Technological advancements will also continue to drive disposable adoption. CMOS imagers continue to shrink, which will allow endoscopes with smaller ODs to be designed using chip-on-tip technology. Wafer-level modules will also support large optical format imagers, thus enabling disposable, 1080p resolution for the larger OD endoscopes used in gastrointestinal and laparoscopic procedures. Additionally, there is a growing trend toward multimodal imaging and diagnosis, where the imager is used to position an ultrasound or OCT probe inside the body.

The extremely small size of newer imagers makes it feasible to be integrated directly into endoscopic tools, allowing direct line-of-sight visualization. Additionally, there is growing interest in a range of applications beyond white-light endoscopy, including the use of ultraviolet and near infrared light for fluorescence, chromo-endoscopy and virtual endoscopy. There are also novel endoscopic applications that are moving toward mainstream adoption, including narrow band imaging, multispectral imaging and light polarized imaging, among others.


Sensor with AI-Controlled Per-Pixel Exposure

Image Sensors World        Go to the original article...

Stanford University, University of Manchester, and IBM Research in Zurich publish a paper "Neural Sensors: Learning Pixel Exposures for HDR Imaging and Video Compressive Sensing With Programmable Sensors" by Julien N.P. Martel, Lorenz K. Mueller, Stephen J. Carey, Piotr Dudek, and Gordon Wetzstein.

"Camera sensors rely on global or rolling shutter functions to expose an image. This fixed function approach severely limits the sensors’ ability to capture high-dynamic-range (HDR) scenes and resolve high-speed dynamics. Spatially varying pixel exposures have been introduced as a powerful computational photography approach to optically encode irradiance on a sensor and computationally recover additional information of a scene, but existing approaches rely on heuristic coding schemes and bulky spatial light modulators to optically implement these exposure functions. Here, we introduce neural sensors as a methodology to optimize per-pixel shutter functions jointly with a differentiable image processing method, such as a neural network, in an end-to-end fashion. Moreover, we demonstrate how to leverage emerging programmable and re-configurable sensor–processors to implement the optimized exposure functions directly on the sensor. Our system takes specific limitations of the sensor into account to optimize physically feasible optical codes and we demonstrate state-of-the-art performance for HDR and high-speed compressive imaging in simulation and with experimental results."
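
As a toy illustration of the spatially varying exposure idea (not the paper's learned codes): a per-pixel exposure mask lets short- and long-exposure pixels coexist in one frame, so highlights that saturate the long-exposure pixels survive in their short-exposure neighbours. The 2x2 tiled mask, exposure ratio, and full-well value below are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic HDR scene: irradiance spanning ~4 decades.
scene = 10.0 ** rng.uniform(-1, 3, size=(8, 8))

# Spatially varying per-pixel exposures, tiled like a checkerboard:
# long (1.0) and short (1/64) shutters interleaved.
mask = np.where((np.add.outer(np.arange(8), np.arange(8)) % 2) == 0, 1.0, 1.0 / 64)

full_well = 100.0                       # pixel saturates at this level
captured = np.minimum(scene * mask, full_well)

# Naive recovery: divide out the exposure where the pixel did not saturate.
valid = captured < full_well
recovered = np.where(valid, captured / mask, np.nan)

# Short-exposure pixels stay unsaturated even in the brightest parts of this scene.
print("saturated long-exposure pixels:", int((~valid & (mask == 1.0)).sum()))
print("saturated short-exposure pixels:", int((~valid & (mask < 1.0)).sum()))
```

The paper's contribution is to learn such exposure codes jointly with the reconstruction network rather than hand-picking them as done here.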

Thanks to PD for the link!


Yole on Coronavirus Impact on CIS Market

Image Sensors World        Go to the original article...

Yole Developpement publishes "The CMOS image sensor market stands firm during the pandemic – Live Market Briefing" by Pierre Cambou:


LiDAR News: Benewake, Ouster, Quanergy, Ibeo, Conti

Image Sensors World        Go to the original article...

Benewake LiDAR is used to check toilet occupancy in an airport:

"In order to save time for passengers to use the toilet and reduce congestion in public toilets, some time ago, "Urumchi Diwopu International Airport" in China adopted Benewake LiDAR (TF-Luna). The use of TF-Luna can detect the toilet traffic and remaining squatting space. Both data can be displayed on the screen outside the toilet. This system solution not only relieves the congestion of public toilets, but also saves the time for users to select toilets, and plays a significant role in improving the utilization rate of public toilets and passenger satisfaction."

EETimes reporter Junko Yoshida publishes an article about Ouster LiDAR internals:

"In an interview with EE Times last week, Ouster’s founder and CEO Angus Pacala boasted that his company has already picked up 700 design wins over 15 different industries in 50 countries.

Impressive, but where’s Ouster’s advantage?

Pacala said, “We chose technology designed to work in many markets.” Ouster has developed a lidar platform built on “all-CMOS semiconductors.” That makes Ouster’s products “digital lidars,” according to Pacala.

Ouster’s competitors, including Velodyne and Waymo, deploy hundreds of off-the-shelf discrete components to make their spinning lidars work. In contrast, Ouster has developed tightly integrated custom vertical-cavity surface-emitting laser (VCSEL) arrays and an ASIC that incorporates single-photon avalanche diode (SPAD) arrays.

Ouster’s platform also includes a Xilinx FPGA, responsible for processing massive amounts of data."

Quanergy unveils the MQ-8 3D LiDAR and perception software, which are part of Quanergy’s Flow Management platform. Designed with a new smart beam configuration, the MQ-8 delivers up to 140 m continuous tracking range, enabling up to 15,000 m2 of coverage with a single sensor for flow management applications in security, smart city, social distancing, and smart space industries.

EPIC Online Technology Meeting on ADAS and Autonomous Driving has Ibeo and Continental LiDARs presentations:


Trinamix Beam Profile Analysis for 3D Imaging and Material Detection

Image Sensors World        Go to the original article...

trinamiX introduces a novel technology called Beam Profile Analysis to measure distance and, simultaneously, obtain material features from projected laser spots. At the core of the technology is a new class of algorithms, which provides features derived from the analysis of the two-dimensional intensity distribution of each projected spot. These features correlate with distance and material properties and can be further processed by machine-learning approaches on mobile, embedded, or PC-type platforms. A Beam Profile Analysis module can be built from components available at scale and consists of a standard CMOS camera and a dot projector.

"Beam Profile Analysis uses that spot shape using physically inspired features to directly measure distance and material information. Among the most important physical properties are the specifics of diffuse scattering (Lambertian scattering, volume scattering), laser speckles and lens convolutional properties (for example, several kinds of aberrations). In other words, Beam Profile Analysis consists of a recipe for specific periodic laser projection grids and a collection of specifically derived filter kernels and functions thereof to extract both distance and material classes."


Holst Centre Non-spoofable Biometric ID Sensor can be Integrated in Smartphones

Image Sensors World        Go to the original article...

Researchers at Holst Centre have combined the organic NIR PD with an oxide thin-film transistor backplane and a focusing lens to create an NIR image sensor measuring 2.4 x 3.6 cm, large enough to image the palm of a hand or multiple fingers at a distance. Its 500-ppi resolution is state-of-the-art for biometric image sensors, enabling high-quality images of the vein pattern. In addition, the sensor achieves an external QE (EQE) of 40% at 940 nm and a dark current of around 10^-6 mA/cm2.
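
The quoted area and resolution imply an array of roughly 0.33 megapixels, assuming (as a back-of-the-envelope check, not a stated spec) 500 ppi in both directions:

```python
# Implied pixel count of a 2.4 x 3.6 cm sensor at 500 ppi.
ppcm = 500 / 2.54                 # pixels per cm (1 inch = 2.54 cm)
width_px = round(2.4 * ppcm)      # ~472 pixels across the short side
height_px = round(3.6 * ppcm)     # ~709 pixels across the long side
print(width_px, height_px, width_px * height_px)  # roughly a 0.33 MP array
```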

"Together with a NIR light source, the prototype image sensor opens the door to contactless biometric security through vein pattern detection. Our thin-film technologies make for extremely thin and potentially flexible sensors that could be easily integrated into existing displays and things like mobile phones or cash machine screens, eliminating the need for separate ID and credit cards," says Daniel Tordera, Senior Scientist at Holst Centre.

Having demonstrated the potential of large-area NIR sensors for vein detection, Holst Centre is continuing to refine the technology and push its sensitivity deeper into the NIR region. With PDs efficient up to 1100 nm, these latest developments could enable new applications such as eye tracking, quality control in food production, condition monitoring of pipes and non-invasive in-body medical imaging including large area oxygen saturation (SpO2) measurements, conformable optical brain scans and cuffless blood pressure monitoring.


Fujifilm Develops Multispectral Camera Based on Polarization-Sensing CIS

Image Sensors World        Go to the original article...

Fujifilm develops a new multispectral camera based on polarization-sensing image sensor:

  • The high-performance multispectral camera system is equipped with a lens fitted with newly-developed filters, a polarization image sensor that can capture images of specific polarization directions, and a cutting-edge image processing function. The system can simultaneously record images of different wavelength ranges in high definition and present them in real time.
  • The newly-developed filters serve as “polarizer” that lets light in a specific direction of polarization pass through as well as “optical bandpass filter” that passes light of a specific wavelength range. The system uses three filters to split light into up to nine wavelength bands, while also polarizing the light of each wavelength band into a specific oscillation direction. (Figure 1)
  • The polarization information of light in each wavelength band that has passed through the filters is recorded by the polarization image sensor and applied with the cutting-edge image processing function for visual presentation in high resolution and at a high frame rate (Figure 2). The system also allows users to choose an optical bandpass filter of the optimum wavelength band for their monitoring object.
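
The principle behind the bullets above can be sketched numerically: if each wavelength band leaves the filter stack polarized at a known angle, the polarization sensor's measurements behind several analyzer orientations follow Malus's law, and the per-band intensities can be recovered by solving a small linear system. The band count and angles below are illustrative assumptions, not Fujifilm's actual design:

```python
import numpy as np

# Each band is polarized at a known angle by the filter stack (assumed here).
band_angles = np.deg2rad([0, 60, 120])          # polarization angle per band
analyzer_angles = np.deg2rad([0, 45, 90, 135])  # typical 4-way polarization sensor

# Measurement matrix from Malus's law: A[k, b] = cos^2(analyzer_k - band_b)
A = np.cos(analyzer_angles[:, None] - band_angles[None, :]) ** 2

true_bands = np.array([3.0, 1.0, 2.0])  # unknown per-band intensities
measured = A @ true_bands               # what the 4 polarization pixels record

# Recover the band intensities by least squares (A has full column rank).
recovered, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(np.round(recovered, 3))  # ≈ [3, 1, 2]
```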

While we are on the topic of polarization-sensing devices, OSA Image of the Week shows a nice visualization of mechanical stresses in plastic cutlery:


Cameras and LiDARs in ADAS/AD Systems

Image Sensors World        Go to the original article...

ResearchInChina publishes its summary of ADAS/AD approaches of different car manufacturers. Some of them rely mostly on cameras and radars, while others use many LiDARs:


NHK Develops 3-Layer Organic Sensor

Image Sensors World        Go to the original article...

NHK has developed a three-layer color image sensor using organic films that detect only blue and only green light, layered vertically over a CMOS image sensor that detects red light.

"Incident light passes the first organic layer, which absorbs only the blue light component and converts to an electrical signal, and is transparent to the green and red components. The second organic layer absorbs only the green component, and the red component is detected by the CMOS image sensor. The organic layers are combined with transparent thin-film transistors, and the signals output from each of the layers can be combined to reproduce a color image.

This structure enables all color information of red, green and blue to be obtained within a single pixel, achieving a high-resolution image sensor that uses light more efficiently. We will continue to work on reducing the pixel size and increasing the number of pixels, and accelerate R&D toward realizing a compact, high-resolution, single-chip camera."


Assorted News: Kospet, Corephotonics, LG Innotek, Toshiba

Image Sensors World        Go to the original article...

GizmoChina quotes a rumor that the next generation Kospet Prime 2 smartwatch will have a 3D camera. "The new sensor will improve the smartwatch’s camera support and will aid in measuring the size of any object more accurately."

The current generation Kospet Prime smartwatch released earlier this year already has a dual 5MP+2MP camera with software upscaling to 8MP:

TheElec updates on Samsung-Corephotonics patent infringement lawsuit against Apple camera module supplier LG Innotek:

"At their latest hearing on Friday at Seoul Central District Court, when asked by the judge whether they have an outside appraiser to suggest, an LG InnoTek representative said they had someone in mind from overseas but not in South Korea and said they will look for one. LG also said that the patent invalidation judgment was expected at the end of August and asked for the next hearing to be set in September.

The trial has been effectively put on hold for the past eight months due to the issue of choosing an appraiser to evaluate the patent in question.

Corephotonics filed the lawsuit back in November 2018, alleging that LG InnoTek violated its patents related to a small field-of-view lens assembly. LG countered with its own request to South Korea’s patent office to invalidate the patents in question in June 2019."

Toshiba proposes a 3D camera based on the difference in blur between objects closer and farther than the focal plane:

"Until now, it was considered theoretically too difficult to measure distance based on the shape of the blur, which is the same for objects both near and far when they are equidistant from the focal point (Figure 3). However, analytical results revealed a substantial difference between the blur shapes of near and far objects, even when equidistant from the focal point (Figure 4). With that, Toshiba successfully analyzed blur data from captured images by a deep learning module trained with a deep neural network model.

When light passes through a lens, the shape of the blur created is known to change depending on the light's wavelength and its position in the lens. In the developed network, position and color data are processed separately to properly perceive changes in blur shape, and then, after passing through a weighted attention mechanism, are used to control where on the brightness gradient to focus in order to correctly measure the distance (Figure 5). Through learning, the network is then updated to reduce any error between the measured distance and actual distance.

Using this AI module, Toshiba has confirmed that a single image captured with a commercially available camera achieves the same distance measurement accuracy as stereo cameras.

Toshiba will confirm the versatility of the system with commercially available cameras and lenses and speed up the image processing, aiming for public implementation in fiscal year 2020.


Nikon Z 24-200mm f4-6.3 VR review so far

Cameralabs        Go to the original article...

The Nikon Z 24-200mm f4.0-6.3 VR is the first super-zoom designed for Nikon’s mirrorless cameras with Z-mount and corrected for full-frame sensors. Check out our review-so-far!…

The post Nikon Z 24-200mm f4-6.3 VR review so far appeared first on Cameralabs.


Toshiba Announces 200m-range Flash LiDAR Prototype

Image Sensors World        Go to the original article...

Toshiba announces a high-resolution, long-range flash LiDAR technology, probably the next version of the one presented at ISSCC 2020. At its heart is Toshiba’s compact, high-efficiency silicon photomultiplier (SiPM).

In general, SiPMs are suitable for long-range measurement as they are highly light-sensitive. However, the light-receiving cells that make up a SiPM require recovery time after being triggered, and in strong ambient light conditions a large number of cells is needed, since reserve cells must be available to react to the reflected laser light.

Toshiba’s SiPM applies a transistor circuit that reboots the cells to reduce the recovery time. The cells function more efficiently and fewer are needed, securing a smaller SiPM, as shown in Figure 1. This realizes a higher-resolution SiPM array while maintaining high sensitivity, as shown in Figures 2 and 3.

Field trials with a LiDAR prototype, shown in Figure 4, using commercially available lenses from wide-angle to telephoto, have demonstrated the system’s effectiveness over a maximum distance of 200m (Figure 5).


Will Samsung and Hynix Close the Gap with Sony?

Image Sensors World        Go to the original article...

SK Hynix publishes an article "Accelerated Multi-Camera Competition in Smartphone Market: Will Mobile-Centric CIS Demand Continue to Grow?" written by Yuak Pak, analyst at KIWOOM Securities. The analyst presents his view on the competition with Sony, among other things:

"For CIS, more demand for 12-inch wafers is being seen as focus shifts from 8-inch wafers. In addition, as the number of pixels increases to more than 40 million, the process started to move from 90nm to 32nm or less.

In particular, the manufacturing process of CIS is very similar to that of DRAMs, and the trench technology of DRAMs is applied to the process of high-pixel products. Therefore, it is highly likely that DRAM manufacturers will have a cost advantage over time.

SK hynix is actually applying DRAM trench process technology to eliminate light interference between pixels, while several experiments are underway to prevent the absorption of photons when using metal partition walls. In addition, ISOCELL of Samsung Electronics has also adopted DRAM process technology and is making efforts to refine their process to a 32nm level.

SK hynix and Samsung Electronics, two of the world’s leading semiconductor memory manufacturers, are expected to close the technological gap with leading CIS players such as Sony thanks to their superior DRAM technology.

SK hynix is currently operating 8- and 12-inch CIS lines. This year’s capacity of the 12-inch CIS line has increased by more than 60% compared to last year. In addition, since some lines are being redeployed for CIS, actual performance is expected to become more visible from the end of this year.

Samsung Electronics is also expanding the capacity of its 12-inch line in addition to its existing 8-inch CIS line, mainly by utilizing old DRAM lines. Starting with 11 lines in 2018, the company is planning to convert 13 lines into a CIS line in 2020.

The industry’s number one CIS manufacturer Sony continues to increase its 12-inch capacity in Japan, which is expected to intensify the competition for market share from the second half of this year. While Sony has to construct a new line, SK hynix and Samsung Electronics are converting and deploying existing DRAM lines, which would make their products highly cost-competitive, giving them an advantage in securing market share."

The article is complemented by the CIS market data from KIWOOM Securities:

"As of 2018, the share of demand for CIS by major industries shows the mobile industry dominating at an unbeatable 68% – followed by compute (9%), consumer (8%), security (6%), automotive (5%), and industrial (4%). In the future, this demand is expected to grow mainly within the mobile, automotive, and industrial sectors, while the market’s size – which was valued around USD 13.7 billion in 2018 – is expected to increase to USD 19 billion by 2022.

In addition to this, packaging technology integrating CIS, ISP, and DRAM is now being introduced to ultra-high-speed cameras, which is proving to be a beneficial change for companies producing both DRAMs and CIS in the mid to long term.
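
The market sizes quoted above imply a compound annual growth rate of roughly 8.5% over 2018-2022, a quick back-of-the-envelope check on the KIWOOM figures:

```python
# Implied CAGR from the quoted CIS market sizes (USD billions).
start, end, years = 13.7, 19.0, 2022 - 2018
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~8.5%
```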


Image Algorithmics on RGBW Color Filter Misconceptions

Image Sensors World        Go to the original article...

Image Algorithmics kindly sent me a presentation with the company view on RGBW CFA advantages:

"There is a strong preconception in the market that RGBW does not work well. This is understandable given the failure of previous attempts. What's worse, many engineers now believe that it fundamentally cannot work well. I am attaching a slide deck to address this misconception.

We have tested our algorithms on 0.8u, 1.0u, 1.12u and 2.8u RGBW sensors. RGBW has a 6dB+ SNR advantage over Bayer in low-light, read-noise-limited conditions and a 3dB+ SNR advantage over Bayer in bright-light, shot-noise-limited conditions. RGBW also has a 6dB dynamic range advantage."
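
The quoted figures are consistent with a ~2x sensitivity gain from the white pixels (an assumption for this sketch, not a number from the deck): in the read-noise-limited regime SNR scales with the signal, so doubling the signal gains 20·log10(2) ≈ 6dB, while in the shot-noise-limited regime SNR scales with the square root of the signal, gaining only ≈ 3dB:

```python
import math

sensitivity_gain = 2.0  # assumed RGBW-vs-Bayer signal ratio for this sketch

# Read-noise limited: SNR = S / sigma_read, proportional to the signal S.
read_limited_db = 20 * math.log10(sensitivity_gain)

# Shot-noise limited: SNR = S / sqrt(S) = sqrt(S), proportional to sqrt(S).
shot_limited_db = 20 * math.log10(math.sqrt(sensitivity_gain))

print(f"read-noise limited gain: {read_limited_db:.1f} dB")  # 6.0 dB
print(f"shot-noise limited gain: {shot_limited_db:.1f} dB")  # 3.0 dB
```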


In-Pixel Temperature Sensors with an Accuracy of ±0.25 °C

Image Sensors World        Go to the original article...

Delft University of Technology and Harvest Imaging publish an MDPI paper "In-Pixel Temperature Sensors with an Accuracy of ±0.25 °C, a 3σ Variation of ±0.7 °C in the Spatial Domain and a 3σ Variation of ±1 °C in the Temporal Domain" by Accel Abarca and Albert Theuwissen.

"This article presents in-pixel (of a CMOS image sensor (CIS)) temperature sensors with improved accuracy in the spatial and the temporal domain. The goal of the temperature sensors is to be used to compensate for dark (current) fixed pattern noise (FPN) during the exposure of the CIS. The temperature sensors are based on substrate parasitic bipolar junction transistor (BJT) and on the nMOS source follower of the pixel. The accuracy of these temperature sensors has been improved in the analog domain by using dynamic element matching (DEM), a temperature independent bias current based on a bandgap reference (BGR) with a temperature independent resistor, correlated double sampling (CDS), and a full BGR bias of the gain amplifier. The accuracy of the bipolar based temperature sensor has been improved to a level of ±0.25 °C, a 3σ variation of ±0.7 °C in the spatial domain, and a 3σ variation of ±1 °C in the temporal domain. In the case of the nMOS based temperature sensor, an accuracy of ±0.45 °C, 3σ variation of ±0.95 °C in the spatial domain, and ±1.4 °C in the temporal domain have been acquired. The temperature range is between −40 °C and 100 °C."


LiDAR News: Waymo, Velodyne, Luminar, Aeye, Innoviz, Livox, Aurora

Image Sensors World        Go to the original article...

EETimes publishes an article about Waymo Laser Bear Honeycomb LiDAR:

"It’s been well over 16 months since Waymo announced a plan to license its lidar, called Laser Bear Honeycomb, to non-automotive companies.

Waymo is promising that its perimeter lidars, placed at four points around a vehicle, offer “an unparalleled field of view including up to 95° vertical field of view, and up to 360° horizontal field of view.”

This translates into fewer sensors for AVs to see more area. Waymo also claims that its lidars suffer little interference regardless of proximity; they are able to detect and avoid objects at very close range.

Leading AV companies — Waymo, GM Cruise and Argo AI — have either already acquired lidar technology companies or have developed lidars internally. Even Mobileye, an Intel company, is crafting its own lidar tech, Amnon Shashua, Mobileye’s CEO, acknowledged in a recent interview with EE Times.

Most industry analysts agree that many of the 70-plus lidar startups that have sprung up in the past several years are unlikely to survive in the Covid-19 economy. The public health crisis exacerbates the reality that the arrival of commercial AVs is no longer as imminent as once predicted.

TheVerge: A number of US-based LiDAR companies use the federal Paycheck Protection Program:
  • Velodyne, the top LIDAR manufacturer in the US, received a loan in the range of $5M to $10M to retain 450 jobs.
  • Luminar, an Orlando-based company that is making LIDAR laser sensors for Volvo, Toyota, and other automakers working on autonomous vehicles, got a loan between $5M and $10M to retain 341 jobs.
  • Aeye got a loan in the $2M to $5M range to save 85 jobs.

Innoviz announces sample availability of its automotive-qualified Innoviz One product:

SystemPlus Consulting publishes a reverse engineering of Hamamatsu edge-emitting laser diode and a photodiode inside Livox Horizon LiDAR:

"LiDARs are manufactured around four main components: the pulsed laser diode, avalanche photodiodes, an opto-mechanical system (to scan the environment in front of the car), and the processor.

System Plus Consulting proposes an analysis of the pulsed laser and the photodiode in the Horizon LiDAR from Livox: a Chinese company that sells a LiDAR system for automotive ADAS.

The LiDAR sensing module includes a custom six-photodiode array die from Hamamatsu, specifically developed for this LiDAR application. The design is particularly optimized to increase the sensitivity of the six avalanche photodiodes. The photodiode dies are assembled in a package with a 905nm narrow bandpass filter.

This LiDAR uses six edge-emitting lasers, each designed with three epitaxially stacked emitters. The six laser dies are assembled horizontally with an inclined mirror that redirects the light perpendicularly. Thermal management is performed by a sophisticated substrate."

Aurora announces its FirstLight Lidar, based on technology acquired with Blackmore and intended for Aurora’s next-generation test vehicles:


Canon RF 85mm f2 Macro review

Cameralabs        Go to the original article...

The Canon RF 85mm f2 Macro IS STM is a short telephoto lens for the full-frame EOS R mirrorless system, and one of the most affordable in the system to date. Perfect for portraits and close-ups, find out why it's a no-brainer for EOS R owners in my review!…

The post Canon RF 85mm f2 Macro review appeared first on Cameralabs.


Canon RF 100-500mm f4.5-7.1L review

Cameralabs        Go to the original article...

The Canon RF 100-500mm is a telephoto zoom for the full frame EOS R mirrorless system. Physically it’s only 15mm longer and an impressive 200g lighter than the earlier EF 100-400, while zooming 100mm further. In my full review I'll compare them both!…

The post Canon RF 100-500mm f4.5-7.1L review appeared first on Cameralabs.


Canon EOS R6 review

Cameralabs        Go to the original article...

The Canon EOS R6 is an upper mid-range full-frame mirrorless camera with 20 Megapixels, 4k 60p video, built-in stabilisation, bursts up to 20fps and a fully-articulated touchscreen. Find out how it performs for photography in my latest update!…

The post Canon EOS R6 review appeared first on Cameralabs.

Go to the original article...

Canon RF 600mm / 800mm f11 review

Cameralabs        Go to the original article...

The Canon RF 600mm and 800mm f11 are very compact super-telephoto lenses for the full-frame EOS R mirrorless system. Find out why they're more usable than you'd think in my full review!…

The post Canon RF 600mm / 800mm f11 review appeared first on Cameralabs.

Go to the original article...

First Camera Based on Sony InGaAs Stacked SWIR Sensor

Image Sensors World        Go to the original article...

Vision System Design: Aval Data, a new Japan-based camera company, announces the ABA-013VIR, the first camera based on the Sony IMX990 sensor.

Thanks to TL for the link!

Go to the original article...

IniVation announces Event-Based Sensor Eye Tracker

Image Sensors World        Go to the original article...

iniVation introduces the Foveator eye tracking technology. Foveator uses AI-enabled neuromorphic technology to follow your eye movements at high speed, with high accuracy and near-zero latency. Working like a miniature version of the retina and visual system, Foveator supports tracking rates up to 1 kHz with latency below 3 ms.
Foveator technology enables next-generation VR and AR experiences, including:
  • Foveated rendering
    Better graphics and huge improvements in battery life
  • Foveated streaming
    Save >50% of bandwidth across 4G/5G networks
  • Foveated graphics transport
    Reduce graphics bandwidth needs
  • Ultra-low-power human interaction
    Lower speeds for lightweight, all-day AR battery life
Foveator is powered by the iniVation neuromorphic Dynamic Vision Platform.

Go to the original article...

Jenoptik Acquires Trioptics

Image Sensors World        Go to the original article...

JENOPTIK AG acquires TRIOPTICS GmbH, known for its active alignment and lens testing technology. Both parties to the contract have agreed not to disclose the purchase price.

Go to the original article...

X-Fab Foundry Shut Down by Cyber Attack

Image Sensors World        Go to the original article...

BusinessWire: On July 5, 2020, X-FAB Group was the target of a cyber security attack. Following the advice of leading security experts engaged by X-FAB, all IT systems have been immediately halted. As an additional preventive measure, production at all six manufacturing sites has been stopped.

X-FAB has promptly engaged with the relevant authorities to investigate the unprecedented incident. In addition, a team of internal and external security experts has been put in place to resolve the problem and to recover all systems. X-FAB also decided to immediately start the temporary fabrication facility shutdowns that were initially planned to take place later in the third quarter in the context of X-FAB's Covid-19 cost-saving initiative.

At this stage, it cannot be estimated for how long and to which degree X-FAB's operations will be disrupted. It is also too early to assess if there will be any financial impact.

Due to the current unavailability of X-FAB's IT systems, this press release was distributed via Business Wire only and is available on the Company's website.

The X-FAB foundry manufactures CMOS image sensors, among other products. As far as I know, this is the first time a semiconductor fab has been halted by hackers.

Thanks to BB for the link!

Go to the original article...

Sony FE 12-24mm f2.8 GM review

Cameralabs        Go to the original article...

The Sony FE 12-24mm f2.8 G Master is an ultra-wide zoom with a constant f2.8 aperture, designed for Alpha full-frame mirrorless cameras. It becomes the widest full-frame f2.8 zoom and means Sony now offers focal lengths from 12 to 200mm with a constant f2.8 aperture. Find out more in my review!…

Go to the original article...

TSMC Builds a Dedicated 28nm Fab for Sony Orders

Image Sensors World        Go to the original article...

IFNews reports that TSMC will build an OEM production line exclusively for Sony to produce high-end CIS on a 28nm process at the Nanke 14B fab. TSMC is also planning to build new CIS packaging capacity in Zhunan. The plan will officially start at the beginning of July 2020; it is expected to be completed in mid-2021 at a cost of NTD 300B. In the past, the 14B fab in Nanke manufactured 12nm and 16nm chips for HiSilicon, Nvidia, and Mediatek.

The Chinese-language UDN site writes (translated):

"Japanese sources familiar with the situation reveal that TSMC and Sony have kept this cooperation highly confidential. The two sides signed their first agreement at the beginning of this year, under which TSMC's Nanke 14A fab would build a CIS foundry line for Sony with a monthly output of 20,000 wafers. With the cooperation going smoothly, Sony went further and asked TSMC to provide an exclusive fab area of equal size, with monthly orders several times larger.

To accommodate this very large order, TSMC is actively pursuing a project dubbed "Cage for Birds," which includes purchasing a plant adjacent to its Nanke site and separating the shared facilities of the 14A and 14B fabs as quickly as possible, so that the 14B fab can become a dedicated fab for Sony's higher-end CIS.

The sources add that Sony was already a TSMC customer, previously mainly for logic chips. Early this year, Sony broke with its all-in-house CIS manufacturing strategy and for the first time outsourced CIS production to TSMC, relying on TSMC's capacity to capture market share, a move that shocked the industry.

Sony's earlier CIS order is produced at TSMC's Nanke 14A fab on a 40nm process. TSMC has purchased new equipment designated by Sony for this purpose and is currently installing it; trial production is scheduled for August, with mass production in the first quarter of next year at an initial capacity of 20,000 wafers per month.

The cooperation between the two sides now extends to 28nm, with a foundry area exclusive to Sony to be built at TSMC's Nanke 14B fab, where order volumes will be several times those of the 14A fab.

Facing Samsung's close pursuit, Sony has decided to expand its partnership with TSMC, hoping to win a 60% share of the global CIS image sensor market by 2025."

Go to the original article...