Archives for June 2019

QNAP TVS-472XT NAS review

Cameralabs        Go to the original article...

The QNAP TVS-472XT is a four-bay network attached storage device (NAS), sporting 10 Gigabit Ethernet as well as dual Thunderbolt 3 ports, allowing not only quicker network access than most NAS units but also direct connections that are fast enough for video editing. Find out how it’s fitted into my workflow in my 6-month review!…

The post QNAP TVS-472XT NAS review appeared first on Cameralabs.

Go to the original article...

In Defense of Sony Semiconductor Spin-off

Image Sensors World        Go to the original article...

BusinessWire: Activist investor Third Point publishes an explanation of its proposal to spin off Sony’s image sensor business, opens a dedicated web site, A Stronger Sony, promoting the idea, and publishes a 102-page presentation with a lot of interesting market data, primarily from TSR.

The presentation gives a nice definition of image sensor on slide 27:

"Image sensor is an analog semiconductor with average price of $2 that processes light from the outside world and transforms those wavelengths into 1s and 0s."

Go to the original article...

Omnivision Unveils its First 0.8um Pixel Product

Image Sensors World        Go to the original article...

PRNewswire: OmniVision announces the OV48B, its first 48MP image sensor featuring a 0.8um pixel built on OmniVision's PureCel Plus stacked die technology.

"Consumers continue to demand ever-higher resolution in high end and mainstream mobile phones," said Arun Jayaseelan, senior marketing manager at OmniVision. "The OV48B features premium resolution and image quality that is ideal for both high end and mainstream smartphones. Its 0.8 micron pixels provide 48 MP resolution in the industry's smallest die size, enabling a 1/2" optical format."

The OV48B integrates an on-chip 4-cell color filter array and hardware re-mosaic, which provides high quality, 48 MP Bayer output in real time. In low light conditions, this sensor can use near-pixel binning to output a 12 MP image with four times the sensitivity.
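
As a rough software illustration of what 4-cell binning does (the OV48B performs this on-chip, and its exact pipeline is not public), here is a minimal Python/NumPy sketch:

    import numpy as np

    def bin_quad_bayer(raw):
        """Sum each 2x2 same-color group of a 4-cell (quad-Bayer) raw frame.

        In a 4-cell mosaic every 2x2 block shares one color filter, so summing
        each block yields a conventional Bayer frame at quarter resolution with
        roughly 4x the signal per output pixel.
        """
        h, w = raw.shape
        blocks = raw[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
        return blocks.sum(axis=(1, 3))

    # Example: a 48 MP-style frame (8000 x 6000) bins down to 12 MP (4000 x 3000).
    raw = np.random.randint(0, 1024, size=(6000, 8000), dtype=np.uint16)
    print(bin_quad_bayer(raw.astype(np.uint32)).shape)  # (3000, 4000)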

This new sensor also features both DPHY and CPHY MIPI interfaces, which enables fast frame rates using fewer pins. Additionally, the OV48B includes 2x2 microlens phase detection autofocus (ML-PDAF) to boost accuracy, especially in low light.

Output formats include 48 MP at 10 fps with remosaic, 12 MP with 4-cell binning at 30 fps, 4K2K video at 60 fps, 1080p video at 240 fps and 720p video at 480 fps. Pad locations on the left and right of the image sensor, in combination with the industry's smallest 48 MP die size, provide smartphone camera module designers with additional flexibility.

OV48B samples are available now, and volume production is expected in Q4 2019.

Go to the original article...

Panasonic Lumix S PRO 70-200mm f4 review so far

Cameralabs        Go to the original article...

The Panasonic Lumix S PRO 70-200mm f4 OIS is a telephoto zoom designed for full-frame mirrorless cameras employing the L-mount. Delivering a classic range for portrait shooters or nearby sports and wildlife, it’s the most affordable telephoto for the L-system to date. Find out how it performs in my review-so-far!…

The post Panasonic Lumix S PRO 70-200mm f4 review so far appeared first on Cameralabs.

Go to the original article...

LiDAR News: Innovusion, Ouster, Bad Weather Performance

Image Sensors World        Go to the original article...

BusinessWire: Innovusion announces the availability of its Cheetah system. Based on its innovative rotating polygon optical architecture, Innovusion has melded together proprietary detector electronics, advanced optics and software algorithms.

Key features:
  • Detection range of 200 meters on objects with 10% reflectivity
  • Clearly detects objects out to 280 meters
  • 1550nm laser
  • Resolution of 300 vertical pixels while maintaining a frame rate of 10Hz
  • Picture-like resolution of 0.13 deg over a 40 deg vertical FOV and 0.14 deg over a 100 deg horizontal FOV (a rough consistency check of these figures follows this list)
  • Power consumption under 40W, said to be the most energy-efficient of any high-performance LiDAR currently available
  • Sensor head dimensions of 112 mm (h) x 145 mm (w) x 105 mm (d), with a roadmap to substantially reduce the footprint and form factor
  • Single-unit price of $35,000 for small quantities
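
The resolution and field-of-view figures above are mutually consistent, as a quick Python back-of-the-envelope check shows (assuming a uniform full raster across the stated fields of view; Innovusion does not publish a point rate in this announcement):

    # Rough consistency check of the Cheetah figures above (assumption: a
    # uniform full raster across the stated fields of view).
    v_fov_deg, v_res_deg = 40.0, 0.13
    h_fov_deg, h_res_deg = 100.0, 0.14
    frame_rate_hz = 10

    v_lines = v_fov_deg / v_res_deg    # ~308, close to the stated 300 vertical pixels
    h_columns = h_fov_deg / h_res_deg  # ~714 columns per frame
    points_per_second = v_lines * h_columns * frame_rate_hz
    print(round(v_lines), round(h_columns), f"{points_per_second:.2e}")  # ~2.2e6 points/s upper bound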


PRNewswire: Frost & Sullivan recognizes Ouster OS-1:64 LiDAR with the 2018 Price/Performance Value Leadership Award for its enhanced capabilities, durability, and power efficiency, at affordable price points and compact size.

The OS-1:64 lidar sensor's simplified architecture and 7th-generation custom silicon design can measure 1.3M points per second using less than 17W of power, a feat that was previously difficult for high-performance lidar. Additionally, it has the range to detect objects up to a distance of 120m despite being 30 times smaller than competing solutions.

"Ouster's sensors are unique among high-resolution lidar sensors as they operate at the near-infrared 850nm wavelength. Its patented light filtering technology allows it to use the 2x signal found at 850nm, while avoiding the penalty of the 5x noise that is typical to this range," said Mariano Kimbara, senior industry analyst. "The 850nm wavelength has been shown to have lower atmospheric water vapor absorption and more consistent operation compared to other available lidar operating wavelengths. This translates to an operating wavelength that is not as absorbed by humid air or fog, and a heightened sensitivity of its low-cost silicon complementary metal-oxide semiconductor (CMOS) technology."


HAL archive paper "LIDAR sensor simulation in adverse weather condition for driving assistance development" by Mokrane Hadj-Bachir and Philippe de Souza gives some performance degradation data in different weather conditions:

Go to the original article...

Z2 FET Promises High Photodetection Gain

Image Sensors World        Go to the original article...

An early access paper to be published in IEEE Journal on Electron Devices promises high gain in a light-controlled FET: "Dynamic coupling effect in Z2 FET and its application for photodetection" by J. Liu, XY. Cao, BR. Lu, YF. Chen, A. Zaslavsky, S. Cristoloveanu, J. Wan from Fudan University, China, Brown University, USA, and INP-Grenoble/Minatec, France.

"In this paper, the application of the Z-FET (zero subthreshold swing and zero impact ionization FET) for photodetection is studied with TCAD simulation. Dynamic coupling effect is utilized to form carrier injection barriers in the partially depleted silicon-on-insulator (PD-SOI) film. Photoelectron accumulation at the front gate interface lowers the hole injection barrier and modulates the turn-on voltage. The light-triggering threshold of the device can be tuned by the front gate voltage, which controls the injection barrier height. We explore two operation modes suited to different applications, and demonstrate the operation of a one-transistor active pixel sensor array. Unlike other image sensors that utilize only one type of carrier, the Z-FET photodetector uses photo-generated holes to induce high electron currents through internal amplification, leading to a high sensitivity of up to 1.8e5 e-/(lux∙s)."

Go to the original article...

ST Promotes Automotive HDR Sensor + ISP Combo

Image Sensors World        Go to the original article...

ST promotes the VG6768 sensor + STV0971 ISP combo for e-mirror and ADAS applications:

Go to the original article...

Activist Investor Calls on Sony to Spin-off Image Sensor Business

Image Sensors World        Go to the original article...

Nikkei, Reuters, Financial Times: Daniel Loeb, the activist investor behind hedge fund Third Point, which has invested $1.5b in Sony, is calling on the company to spin off its semiconductor business.

In the year ended in March, the semiconductor unit accounted for 16% of Sony’s total operating profit of roughly $8b.

"When you think of Sony, you think of the Walkman, you think of the consumer electronics business, you know they own a movie studio and some music, but you don't think of them as a Japanese national champion in technology, with a $20 billion going to $35 billion valuation business in sensors," Loeb told the Financial Times. As of now, the whole Sony company is valued at about $62b.

Reuters also quotes valid reasons to reject the spin-off demand, though. "For one, some 90% of Sony’s chips revenue comes from smartphones, Macquarie analysts reckon. That makes the business vulnerable as a tech cold war between Washington and Beijing escalates. Huawei, the Chinese titan that has been banned from working with U.S. tech firms, is a major Sony customer, for instance. Analysts at Jefferies recently slashed their operating profit forecast for the Japanese chips unit by 45% as a result.

The longer-term outlook looks shinier. Global smartphone sales are falling, but the number of cameras per phone is going up. Sony will also benefit as automakers and consumers embrace autonomous driving, which will require cars to be equipped with ever more image sensors.
"

Go to the original article...

Yole on ISP and Vision Processor Trends

Image Sensors World        Go to the original article...

Yole Developpement report "Image Signal Processor and Vision Processor Market and Technology Trends 2019" tells:

"AI has completely disrupted hardware in vision systems, and has had an impact on entire segments, as Mobileye has in automotive, for example. Image analysis adds a lot of value. Image sensor builders are therefore increasingly interested in integrating a software layer to their system in order to capture it. Today, image sensors must go beyond taking images – they must be able to analyze them.

However, to run these types of software, high power computing and memory are necessary, which led to the creation and development of vision processors. The ISP market offers a steady compound annual growth rate (CAGR) from 2018 to 2024 of 3%, making the total market worth $4.2B in 2024. Meanwhile, the vision processor market is exploding, with an 18% CAGR from 2018 to 2024, making the market worth $14.5B in 2024!

It is important to note that historical players have struggled to react to AI’s arrival. That has allowed other companies to get into the business, including smartphone companies like Apple and Huawei, startups like Mobileye, and companies in other segments, like NVIDIA in automotive applications. However, because the trend is towards low-power, low-consumption, always-on computing hardware, the historical players are coming back into the game.
"

Go to the original article...

2019 IISS Exceptional Lifetime Achievement Award goes to James R. Janesick

Image Sensors World        Go to the original article...

Jim Janesick is a Distinguished Engineer at Sarnoff Inc., developing high-performance CMOS imagers for various scientific and government projects. At the beginning of his career, Jim was with the Jet Propulsion Lab for 22 years, where he was group leader of the Advanced CCD Sensors Development Group with a focus on scientific CCD test and characterization. He pioneered scientific CCD and support electronics designs for several NASA space-borne imaging systems. Jim authored the textbooks Scientific Charge-Coupled Devices and Photon Transfer.

He received the NASA Exceptional Engineering Achievement Medal in 1982 and 1992. Over his career, he has had a great impact on characterization methodology of image sensors, particularly for scientific devices but applicable to nearly every CCD and CMOS imager.

For example, while at JPL, Jim developed the Photon Transfer Curve (PTC), world famous among image sensor technologists. This characterization method for image sensors makes it possible to characterize an imager without knowing particular details of the device. The technique is used in academia as well as in industry, and many devices are tested daily around the world making use of the PTC method.
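
For readers unfamiliar with the method, a minimal Python sketch of a PTC measurement is shown below; the essential point is that the conversion gain falls out of the slope of variance versus mean, with no knowledge of the device internals required. The helper names are illustrative, not taken from Janesick's books.

    import numpy as np

    def ptc_point(flat_a, flat_b):
        """One PTC point from a pair of flat-field frames at the same exposure.

        Differencing the two frames cancels fixed-pattern noise; the difference
        variance divided by 2 is the single-frame temporal noise variance.
        """
        mean_signal = 0.5 * (flat_a.mean() + flat_b.mean())                       # DN
        temporal_var = np.var(flat_a.astype(float) - flat_b.astype(float)) / 2.0  # DN^2
        return mean_signal, temporal_var

    def conversion_gain(means, variances):
        """In the shot-noise-limited region, variance = mean / K, so the slope
        of variance vs. mean is 1/K, with K the conversion gain in e-/DN."""
        slope, _intercept = np.polyfit(means, variances, 1)
        return 1.0 / slope

    # Usage: sweep exposure, collect one (mean, variance) pair per exposure,
    # then K = conversion_gain(means, variances).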

The International Image Sensor Society is pleased to recognize Jim’s contribution to the imaging technology field by presenting him with the 2019 IISS Lifetime Achievement Award at the 2019 IISW at Snowbird in June.

Congratulations and thank you Jim!

Go to the original article...

Microlens Fabrication by Laser Catapulting

Image Sensors World        Go to the original article...

EurekAlert points to OSA Optical Materials Express paper "Geometry-controllable micro-optics with laser catapulting" by Salvatore Surdo, Alberto Diaspro, and Martí Duocastella from Istituto Italiano di Tecnologia, Genova.

"Individual or arrayed microlenses offer remarkable opportunities in optics and photonics. However, their usage is currently limited by the lack of manufacturing technologies capable of tailoring the lens geometry to target devices. Here, we demonstrate how laser catapulting (LCP), a recent laser-based additive manufacturing technique, enables the preparation of microlenses with controlled geometry and curvature. LCP exploits single laser pulses to catapult polymeric microdisks into user-selectable positions on a substrate, which are converted into microlenses following a thermal reflow treatment. By shaping the irradiance distribution of the incident laser beam, we obtained arrays of circular, triangular, and cylindrical microlenses with a radius between 50-250 µm and 100% fill-factor. The good quantitative agreement between beam shape and microlens geometry, combined with the in-situ fabrication capabilities and high-throughput of LCP, can help the consolidation of laser additive methods for micro-optics in scientific and industrial applications."

Go to the original article...

e2v and TowerJazz Start Production of 67MP APS-C Sensor with Global Shutter Pixels

Image Sensors World        Go to the original article...

GlobeNewswire: A year after the first announcement, Teledyne e2v and TowerJazz say that Teledyne e2v’s Emerald 67M image sensor is now available.

The sensor is a member of Teledyne e2v’s Emerald family and features TowerJazz’s smallest 2.5µm low-noise global shutter pixel, built on TowerJazz’s 65nm platform at its 300mm manufacturing facility in Uozu, Japan. The pixel is integrated with a unique light pipe technology, offering a wide angular response, more than 80dB shutter efficiency, and extremely low noise of 1e-, all in a small pixel size.

Special features include HDR modes with up to 120dB DR and also an ROI mode which allows multiple ROIs to be captured under different exposure conditions, further improving the DR of an image.

Emerald 67M has a square shape with 8k resolution per side, enabling 95% utilization of the image area for the next generation of display manufacturing, gen 10.5. The high resolution optimizes vision system movements in large product inspection, reducing system complexity and removing instabilities. It is available in two different speed grades (ultra-high speed, 60fps and high speed, 30fps).
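
A quick sanity check of those numbers in Python, assuming "8k per side" means 8192 x 8192 active pixels (Teledyne e2v's exact array dimensions may differ slightly):

    # Geometry implied by the announcement (assumption: 8k per side = 8192 pixels).
    pixels_per_side = 8192
    pixel_pitch_um = 2.5
    side_mm = pixels_per_side * pixel_pitch_um / 1000.0
    print(f"{pixels_per_side**2 / 1e6:.1f} MP, active area ~{side_mm:.1f} x {side_mm:.1f} mm")
    # -> 67.1 MP, active area ~20.5 x 20.5 mm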

Rafael Romay, VP of Professional Imaging at Teledyne e2v, said “The Emerald 67M is the first sensor of its kind and is enjoying strong interest in inspection applications due to its ultra-high resolution and fast speed. It also shares many of the smart features within other Emerald sensors making it easier to deploy within vision systems. The great technology innovation and support from TowerJazz allowed us to bring to market this best-in-class solution, customized to product and application needs.”

Avi Strum, TowerJazz SVP and GM of the Sensor BU said, “Teledyne e2v’s design and supply chain along with TowerJazz’s state of the art 65nm process have again brought a game changing sensor to machine vision applications. We are proud to have developed this together and look forward to further collaboration with Teledyne e2v to serve the machine vision market.”

Go to the original article...

Emberion to Launch Graphene-Based Linear Sensor at the End of June

Image Sensors World        Go to the original article...

NovusLight, IMV Europe: Emberion is launching a VIS-SWIR graphene photodetector at Laser World of Photonics, to be held on June 24-27 in Munich, Germany. The technology has been shortlisted for an innovation award at the trade fair.

The linear array’s wide spectral range spans from 400nm in the visible into the SWIR, up to 1,800nm. Emberion estimates that replacing a system using silicon and InGaAs sensors with its graphene photodetector would result in a 30% cost reduction.

“Graphene’s unique properties and its compatibility to combine with other nanomaterials allowed us to create this cost-effective array for spectrometers,” explained Tapani Ryhänen, CEO of Emberion. “Providing broad spectrum capabilities, without the expense of traditional InGaAs sensors, the VIS-SWIR graphene photodetector provides a digital output using Emberion’s inhouse designed read out circuits (ROIC) — without the need to translate analogue data with additional components.

“With sensitivity and noise levels comparable to InGaAs detectors, our graphene-based sensors are a more affordable option for businesses who were previously faced with costly products.”


The detector cannot compete with silicon CMOS for visible light detection, Dr Vuokko Lantz, product manager at Emberion, told IMV Europe in an article about hyperspectral imaging. But to extend into the shortwave infrared, the graphene photodetector becomes ‘a very interesting alternative’, Lantz said.

Lantz added: ‘Our detector can offer similar performance to InGaAs in the near and shortwave infrared region, and we outperform InGaAs in the visible region.’


InVision publishes Emberion article: "The first commercial product utilizing this technology is a 512×1 linear array sensor with a 16bit digital output. The pixel geometry (25×500µm) follows the conventional requirements of the spectrometer systems. The VIS-SWIR linear array sensor will be available in September 2019. The linear array will be followed by a VGA (512×640) image sensor product available in spring 2020. The image sensor has versatile applications, e.g. in machine/night vision and hyperspectral imaging."

Go to the original article...

Sense Photonics Raises $26M in A-Round

Image Sensors World        Go to the original article...

PRNewswire, Techcrunch: San Francisco-based flash LiDAR developer Sense Photonics raises $26M in A-round. The company's CEO and co-founder Scott Burroughs says “It starts with the laser emitter. We have some secret sauce that lets us build a massive array of lasers — literally thousands and thousands, spread apart for better thermal performance and eye safety.

“We can go as high as 90 degrees for vert which I think is unprecedented, and as high as 180 degrees for horizontal. And that’s something auto makers we’ve talked to have been very excited about.”


The second innovation is that "the sensor, normally part and parcel with the lidar unit, can exist totally separately from the emitter, and is little more than a specialized camera. That means that while the emitter can be integrated into a curved surface like the headlight assembly, the tiny detectors can be stuck in places where there are already traditional cameras: side mirrors, bumpers, and so on.

The camera-like architecture is more than convenient for placement; it also fundamentally affects the way the system reconstructs the image of its surroundings. Because the sensor they use is so close to an ordinary RGB camera’s, images from the former can be matched to the latter very easily.

Our LiDAR system operates similarly to a camera by using a solid-state, high-powered, laser emitter coupled with a solid-state sensor with many pixels. Our system does not scan in any way—no spinning, no rotating mirrors, no pivoting MEMS mirrors—no mechanical movement of any kind.

Solid-state technology simplifies manufacturing, enhances reliability and eliminates concerns about vibrations or recalibration during the product’s lifetime.
"

Go to the original article...

Smartsens Announces Cost-Effective 4K Sensor

Image Sensors World        Go to the original article...

PRNewswire: SmartSens announces the SC8238 as the company's latest product in its SmartClarity technology line. The SC8238 supports 4K consumer video applications in a 1/2.7" optical format. Most 4K image sensors are said to underperform in low light, consume a lot of power, and carry a high price.

With its cost-effectiveness and superior product performance, SC8238 enables many use cases at lower cost and lower power consumption for such applications as smart home surveillance, security devices, video-conferencing, and sports cameras.

The 8MP device uses a 1.5um BSI pixel and has the lowest power consumption among products of the same type. The product can reach a sensitivity of 1160mV/lux-s with a maximum SNR of 36dB. Additionally, the sensor's DR reaches up to 100dB in HDR mode and 70dB in linear mode. The SC8238's operating temperature range spans from -30C to over 85C.
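
For reference, the dB figures convert to linear ratios as follows (using the 20·log10 convention customary for image sensor SNR and dynamic range):

    # dB figures from the announcement expressed as linear ratios.
    def db_to_ratio(db):
        return 10 ** (db / 20.0)

    for label, db in [("max SNR", 36), ("linear-mode DR", 70), ("HDR-mode DR", 100)]:
        print(f"{label:15s} {db:3d} dB  ->  ~{db_to_ratio(db):,.0f}:1")
    # max SNR ~63:1, linear-mode DR ~3,162:1, HDR-mode DR ~100,000:1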

"The cost-effectiveness and utility for consumer video applications makes SC8238 the most powerful device in our SmartClarity product line to date," said William Ma, COO of SmartSens Technology. "The sensor's capabilities make it ideal for providing unparalleled image quality while being able to satisfy the demands of multiple consumer segments at a competitive price point."

SmartSens will begin the mass production of SC8238 this June.

Go to the original article...

How to Shrink Raman Spectrometer to CIS Chip Size

Image Sensors World        Go to the original article...

Imec article "How to shrink a Raman spectroscope and analyze complex samples on-the-go?" proposes a way to fit a spectrometer onto CIS chip:

"The spectrometer chip is based on SiN-based waveguide photonics, implemented on top of a CMOS image sensor (CIS) used for electrical readout. The spectrometer chip as used in the current design consists of an array of massively parallel evanescently-coupled Fabry-Perot interferometers, varying in length in the range 2.2-152.8µm (in linear steps of 0.2µm). Incident light is coupled into the waveguide structures using a grating based in-coupler (GC). Sloped metal output mirrors are used to couple the light from the waveguide to the readout pixels of the CIS (see also insert with a crosssection of the chip in figure above). An illustration of the layout showing a top view of the F-P resonators together with the grating incoupler and sloped metal output mirror is shown in the figure below.

Go to the original article...

Sony FE 200-600mm f5.6-6.3 G OSS review so far

Cameralabs        Go to the original article...

The Sony FE 200-600mm f5.6-6.3 G OSS is a super-telephoto zoom for Alpha mirrorless cameras aimed at sports, aviation and wildlife photography without breaking the bank. I went hands-on with the new lens, trying it out for football, birds and jetskis. Find out how I got on in my review-so-far!…

The post Sony FE 200-600mm f5.6-6.3 G OSS review so far appeared first on Cameralabs.

Go to the original article...

Sony FE 600mm f4 GM OSS review so far

Cameralabs        Go to the original article...

The Sony FE 600mm f4 GM OSS is a super-telephoto lens for the Alpha mirrorless system aimed at professional sports and wildlife photographers. It’s the 10th G Master lens and becomes the longest focal length in the native e-mount catalogue. I tried it out for my hands-on review-so-far!…

The post Sony FE 600mm f4 GM OSS review so far appeared first on Cameralabs.

Go to the original article...

Innoviz Closes $170M C-Round

Image Sensors World        Go to the original article...

PRNewswire: Innoviz, an Israeli LiDAR startup, announces that it has expanded its Series C funding round by another $38M, bringing it to $170M. The initial $132M in Series C funding was announced in March 2019. The close of the Series C round brings Innoviz's total funding to $252M:

Go to the original article...

Yole on Automotive Cameras

Image Sensors World        Go to the original article...

Yole Developpement article on ADAS and automotive camera adoption is mostly based on its "Imaging for Automotive 2019" report:

"Cameras are now standard equipment for automobiles, with 124M image sensors shipped in 2018. Automotive camera modules have reached $3B and are expected to grow at an 11% CAGR, reaching $5.7B by 2024.

Over the past five years, viewing applications have been at the core of market growth – with rearview, surround-view, and black box becoming ubiquitous. Advanced Driver Assistance Systems (ADAS) cameras, which currently represent 40% of the business, will provide additional growth for years to come thanks to growing adoption rates.

Interestingly, the eruption of imaging in automotive has not fully benefited big incumbents like Bosch, Denso, Sony, and Samsung, which now must double their efforts in order to get back in the game. Intel and Sony will certainly use their respective strengths and #1 overall positions to gain market share in automotive, but already-established players like Xilinx, Toshiba, ON Semiconductor, and Omnivision definitely have an edge in this conservative, price-sensitive market.
"

Go to the original article...

Sony Introduces 4.2MP 2.9um Pixel Sensor

Image Sensors World        Go to the original article...

Sony adds the 1/1.8-inch IMX347LQR sensor to its Starvis security and surveillance lineup. The new sensor features 2.9um pixels and has 4.17MP resolution with a 16:9 aspect ratio. There is nothing unusual in this new sensor; it just fills a small gap in the company's expansive portfolio:

Go to the original article...

Panasonic ToF Module Reverse Engineering

Image Sensors World        Go to the original article...

SystemPlus publishes a reverse engineering report on the Panasonic ToF module found in the Vivo Nex Dual Display smartphone:

"For 3D depth sensing, three approaches have been considered in consumer applications: active stereo vision (AS), structured light (SL) and Time-Of-Flight (ToF) sensing. SL was developed by Apple, which brought it to the market for the first time in 2017. It’s based on a complex system requiring several components, including a Global Shutter (GS) image sensor and a dot projector. The latter has been considered difficult and expensive to make due to the precision required. The ToF approach could be less complex and less expensive. You just need a ToF Image sensor and a flood illuminator to bring depth sensing to a system. In this field, only three known companies have solutions. In 2016, Infineon was the first to bring out its 3D ToF image sensor, developed with pmd, for the Google Tango Project. Today, Sony has the major share of the market with several design wins starting from low-end smartphones, such as the Oppo RX17 Pro in 2018, to high-end ones, such as the Samsung S10 5G and Huawei P30 Pro in 2019. This year, Panasonic has surprised the market with a new ToF Image Sensor in the Vivo Nex Dual Display."

Go to the original article...

2018 International SPAD Sensor Workshop Presentations

Image Sensors World        Go to the original article...

This is year-old news, but I missed it at the time. Thanks to EF for pointing to it. The 2018 International SPAD Sensor Workshop (ISSW) presentations are published on the IISS site. There is a lot of interesting stuff, for example:
  • AMS SPAD update
  • ST 40nm SPAD process
  • Fastree 3D LiDAR
  • Entangled photon imaging
  • Much more...

Go to the original article...

Analog Devices ToF Sensor Development

Image Sensors World        Go to the original article...

Analog Devices posts a video of the presentation of its ToF imaging solution at Embedded World 2019 held in Nuremberg, Germany, in March:



Analog Garage, ADI's internal research unit for new ideas, posts more info on its development of the ToF sensor, first presented at CES 2019:

"ADI is among a short list of companies with the expertise to enable high-performance, cost-effective ToF solutions and the Analog Devices solution has been in the market for several years. However, when faced with the challenge of operating multiple ToF cameras together – at least 10 with a roadmap to 64 cameras operating simultaneously in close proximity – the marketing and applications team turned to Analog Garage.

Atulya Yellepeddi, research scientist at Analog Garage, added, “There was a clear constraint on the solution side in that we knew we had to use the sensor and camera module with the ADI AFE. But we needed to similarly constrain the problem to be solved. Following the Analog Garage process provided the structure our teams needed to effectively collaborate.”

According to Pat O’Doherty, vice president of emerging business and head of Analog Garage, “The interference reduction algorithms that resulted from the ToF AGI are a shining example of going beyond silicon to create and capture value. It’s important to note that the innovation was the result of combining the BU’s deep understanding of the application and the device architecture, and the algorithm expertise of the Analog Garage scientists.”



Analog Devices is also working on automotive LiDAR platform, presented in another video:




Interestingly, the platform does mention Vescent Photonics OPA beam steering technology that ADI acquired in 2016.

Go to the original article...

Qualcomm on Nokia 9 Image Fusion

Image Sensors World        Go to the original article...

Qualcomm article gives some details on Nokia-Light Co. 5-camera smartphone image processing:

"The Nokia 9 powered by the Qualcomm Snapdragon 845 Mobile Platform is a major achievement in computational photography. Thanks to a collaboration with HMD, Light, and Qualcomm Technologies, the Nokia 9 is the world’s first smartphone to feature a five-camera array. Every time you take a photo, the cameras collectively capture and process up to 240 megapixels of data, which is then used to create one stunning 12-megapixel photo and a corresponding 12-megapxiel depth map. This is the largest amount of photography data ever captured and processed by a Snapdragon 845 device.

The five-camera modules that make this possible are nearly the same. They contain the same image sensors, which have 1.2μm size pixels, and lenses, which have 28mm focal length with bright f/1.8 apertures. The only difference among the cameras is that two of the image sensors can collect color via the color filter, while the other three cameras use the same sensors without color filters. These are typically called “monochrome” image sensors and are capable of capturing 3X more light. So each camera captures the same image, but at varying exposures so you can shoot both very bright and dark imagery. These images are then merged with color images to build one master photo containing contrasting bright, dark, and color details.

The Qualcomm Adreno GPU inspects the 240 megapixels of data to render a 12-megapixel depth map. Most smartphones create a depth map that is a megapixel or less containing three to seven focal planes, but the Adreno GPU creates a massive 12-megapixel depth map filled with up to 1,500 focal planes. This level of depth enables photos with extremely realistic looking Bokeh and massive control over the blur-intensity of the Bokeh in the background and foreground. It also provides an astounding number of regions to shift focus to, so you can shoot now and focus later.

The depth maps are created as “Gdepth” files – Google’s official file format for depth maps, which can be used and stored in Google Photos.
"

Go to the original article...

Imec Low Cost THz and IR Imagers

Image Sensors World        Go to the original article...

Imec Future Summits publishes a short video about its quantum dot IR sensors:



Another video presents low cost THz imaging:

Go to the original article...

2019 IISS Exceptional Service Award Goes to Savvas Chamberlain

Image Sensors World        Go to the original article...

The International Image Sensor Society (IISS) Exceptional Service Award is presented for exceptional service to the image sensor specialist community. In 2019, this award will be presented to Dr. Savvas Chamberlain for his “Contributions to the organization of what is now the International Image Sensor Workshop”. The award ceremony will be held during the 2019 IISW at Snowbird.

After the 1986 and 1990 Workshops organized by Prof. Eric Fossum, Savvas chaired the 1991 IEEE CCD Workshop and 1993 IEEE Workshop on CCD and Advanced Image Sensors in Waterloo, Canada. Until the Workshop name transitioned to International Image Sensor Workshop, the name of “IEEE Workshop on CCD and Advanced Image Sensors” had been used. He also served as an organizer for the 1995 Workshop. During this period, the Workshop grew substantially and the workshop organization was established.

Savvas is well-known as the founder of DALSA. He started DALSA as an image sensor design and image sensor/camera manufacturing business in 1980 and led its growth to a 1,000-employee company. His bio is posted here and here.

Thank you very much for your big contribution to our community, Savvas.

Go to the original article...

Megapixel Photon-Counting Color Imager

Image Sensors World        Go to the original article...

OSA Optics Express publishes a paper "Megapixel photon-counting color imaging using quanta image sensor" by Abhiram Gnanasambandam, Omar Elgendy, Jiaju Ma, and Stanley H. Chan from Purdue University and Gigajot. The new paper is a version of the previously published one in Arxiv.org.

"Quanta Image Sensor (QIS) is a single-photon detector designed for extremely low light imaging conditions. Majority of the existing QIS prototypes are monochrome based on single-photon avalanche diodes (SPAD). Passive color imaging has not been demonstrated with single-photon detectors due to the intrinsic difficulty of shrinking the pixel size and increasing the spatial resolution while maintaining acceptable intra-pixel cross-talk. In this paper, we present image reconstruction of the first color QIS with a resolution of 1024 × 1024 pixels, supporting both single-bit and multi-bit photon counting capability. Our color image reconstruction is enabled by a customized joint demosaicing-denoising algorithm, leveraging truncated Poisson statistics andvariance stabilizing transforms. Experimental results of the new sensor and algorithm demonstrate superior color imaging performance for very low-light conditions with a mean exposure of as low as a few photons per pixel in both real and simulated images."

Go to the original article...

3D Imaging at 150m with QVGA SPAD Camera

Image Sensors World        Go to the original article...

Nature paper "Long-range depth imaging using a single-photon detector array and non-local data fusion" by Susan Chan, Abderrahim Halimi, Feng Zhu, Istvan Gyongy, Robert K. Henderson, Richard Bowman, Stephen McLaughlin, Gerald S. Buller, and Jonathan Leach from Heriot-Watt University, University of Edinburgh, and University of Bath uses 670nm wavelength to improve the SPAD sensor QE:

"In LIDAR (light detection and ranging) applications, single-photon sensitive detection is an emerging approach, offering high sensitivity to light and picosecond temporal resolution, and consequently excellent surface-to-surface resolution. The use of large format CMOS (complementary metal-oxide semiconductor) single-photon detector arrays provides high spatial resolution and allows the timing information to be acquired simultaneously across many pixels. In this work, we combine state-of-the-art single-photon detector array technology with non-local data fusion to generate high resolution three-dimensional depth information of long-range targets. The system is based on a visible pulsed illumination system at a wavelength of 670 nm and a 240 × 320 array sensor, achieving sub-centimeter precision in all three spatial dimensions at a distance of 150 meters. The non-local data fusion combines information from an optical image with sparse sampling of the single-photon array data, providing accurate depth information at low signature regions of the target."

Go to the original article...

LIDAR News: Quanergy, Velodyne, Cepton, Ouster, Blickfeld, Koito

Image Sensors World        Go to the original article...

BusinessWire: Quanergy's legal battle with Velodyne goes on with no end in sight:

Quanergy plans to appeal the ruling by the Patent Trial and Appeal Board (PTAB) regarding the validity of Velodyne’s US Patent 7,969,558. Quanergy additionally announced that it is considering enforcement options of its intellectual property against Velodyne.

The ‘558 patent describes and claims a device and a process that is obvious to a person of ordinary skill in the art, according to Quanergy. Spinning electromagnetic sensors with emitters and receivers have been around for decades. Quanergy provided the PTAB with prior art sufficient to invalidate the relevant claims of the ‘558 patent. Quanergy believes the PTAB ruling will be overturned on appeal.

Quanergy is currently considering assertion of one or more of its patents against Velodyne. Prior to seeing Quanergy’s innovative design, Velodyne’s LiDARs all included a spinning external housing that included the optical components, such as the HDL-64E and HDL-32E models. These LiDARs proved unreliable for continued use and manufacturing, says Quanergy. The main design change that allowed Velodyne to switch from its original design and make a significantly more reliable and manufacturing-worthy puck-type product line came about through implementation of Quanergy’s intellectual property.

“We are fully confident that Quanergy will prevail in this battle, as we are the true innovators and veterans in the space,” said Louay Eldada, CEO and co-founder of Quanergy. “We will not rest until our intellectual property based on decades of innovation and hard work is respected, and we receive the financial damages resulting from any infringement. We have seven issued patents that we intend to use to examine all LiDAR competitors’ products and protect our intellectual rights.”

“Velodyne Lidar Inc. is the inventor of the surround view lidar and we were confident that our patent would be upheld,” said Marta Hall, President of Velodyne. “The ruling was not a surprise because real-time surround view lidar was invented by our Founder, David Hall, and the company holds a number of foundational patents relative to this technology. We are an invention-based company and will always be inventing and innovating technologies, so we take protecting our hard-earned intellectual property seriously. In response to the ruling, we’ll be evaluating our enforcement options moving forward.”

BusinessWire: Cepton unveils the SORA-P60 LiDAR. Based on Cepton’s Micro-Motion Technology (MMT), the lidar provides 1,200 scan lines per second for scanning fast-moving objects. In combination with Cepton’s edge-compute hardware, the SORA-Edge, it becomes a powerful, mobile object classification and volumetric measurement device which can send its data over Ethernet, Wi-Fi or LTE to a central processing server.

“Cepton’s SORA-P60’s three scan lines, each scanning at 400Hz, enables accurate scanning for advanced classification of objects traveling at highway speeds,” said Jerone Floor, Cepton’s Head of Product. “To put it in perspective, 400Hz translates to a scan line every five centimeters for an object traveling at 50 miles per hour. This means you can measure the size of a tow hitch and trailer on a vehicle traveling on a highway in real time.”

“This new and unique technology has the potential to revolutionize the automated road tolling industry. Deploying Cepton’s high-speed scanning lidar as the prime sensor can reduce the cost of system installation by using fewer ground loops,” said Neil Huntingdon, Cepton’s VP of Business Development. “The SORA-P60 can complement automatic number-plate recognition (ANPR) systems by pinpointing the location of a vehicle license plate, reducing the computing power required by traditional computer vision ANPR systems.”


Ouster introduces a two-week lead time guarantee for its older OS-1 series of LiDARs:

"Unfortunately, the lidar industry has a reputation for long lead times for delivery, and we often hear customers assuming that it will take months to get their hands on our sensors.

Today, we’re changing that and announcing the Ouster Lead Time Guarantee.

In the same way we’ve opened up the lidar industry with transparent pricing and honest specs, we’re upping the ante again by guaranteeing that if you purchase at least one OS-1-16 or -64 sensor from us or our distributors, we’ll ship your first two sensors in two weeks or less. No more waiting around for your sensors to ship. Your projects are too important for that.
"

Blickfeld and Koito announce that they will explore technologies to develop a LiDAR sensor that can be fully integrated into a headlight. The integration of Blickfeld’s LiDAR into Koito headlamps will enable automobile manufacturers to possess LiDAR technology in which the sensor is fully integrated into the vehicle.

Before Blickfeld, Koito had announced partnerships with Quanergy and Cepton, but these announcements can no longer be found on the Koito site. Hopefully, the new partnership with Blickfeld will not go the same way.

Go to the original article...
