Image Sensors World Go to the original article...
Chinese smartphone maker Doogee publishes a Youtube video demoing 3D face unlock in its Mix 2 smartphone:

BBC on Fast Cameras
The BBC Click channel on Youtube publishes a report from the Ishikawa Watanabe Lab at the University of Tokyo:

ON Semi Hayabusa Sensors Feature Super-Exposure Capability
BusinessWire: ON Semiconductor announces the Hayabusa CMOS sensor platform for automotive applications such as ADAS, mirror replacement, rear and surround view systems, and autonomous driving. The Hayabusa platform features a 3.0um BSI pixel design that delivers a charge capacity of 100,000e-, said to be the highest in the industry, on-chip Super-Exposure capability for HDR with LED flicker mitigation (LFM), real-time functional safety, and automotive-grade qualification.

“The Hayabusa family enables automakers to meet the evolving standards for ADAS such as European NCAP 2020, and offer next-generation features such as electronic mirrors and high-resolution surround view systems with anti-flicker technology. The scalable approach of the sensors from ½” to ¼” optical sizes reduces customer development time and effort for multiple car platforms, giving them a time-to-market advantage,” said Ross Jatou, VP and GM of the Automotive Solutions Division at ON Semiconductor. “ON Semiconductor has been shipping image sensors with this pixel architecture in high-end digital cameras for cinematography and television. We are now putting this proven architecture into new sensors developed from the ground up for automotive standards.”
The high charge capacity of this pixel design enables every device in the Hayabusa family to deliver Super-Exposure capability, which results in 120dB HDR images with LFM without sacrificing low-light sensitivity. With the widespread use of LEDs for front and rear lighting as well as traffic signs, the LFM capability of the platform makes certain that pulsed light sources do not appear to flicker, which can lead to driver distraction or, in the case of front facing ADAS, the misinterpretation of a scene by machine vision algorithms.
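A 100,000e- full well by itself gives roughly 94dB of linear dynamic range, so reaching the claimed 120dB relies on the multi-exposure Super-Exposure scheme rather than a single readout. A quick sketch of the arithmetic (the 2e- read-noise floor below is an illustrative assumption of mine, not an ON Semi spec):

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Linear dynamic range of a pixel in dB: the ratio of full-well
    capacity to the read-noise floor, expressed as 20*log10."""
    return 20 * math.log10(full_well_e / read_noise_e)

# 100,000 e- full well (the Hayabusa figure) with an assumed 2 e- read noise:
dr = dynamic_range_db(100_000, 2.0)
print(f"single-exposure DR: {dr:.1f} dB")   # ~94 dB
```

The ~26dB gap to 120dB is what the on-chip multi-exposure capture has to cover.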
The first product in this family, the 2.6MP AR0233 CMOS sensor, is capable of running 1080p at 60 fps. Samples are available now to early access customers.
Digitimes: Sony to Allocate More Resources to Automotive Sensors
Digitimes sources say that Sony's efforts to penetrate the automotive imaging market are starting to bear fruit in 2018:

"Sony is looking to allocate more of its available production capacity for CIS for advanced driver assistance systems (ADAS) and other automotive electronics applications in a move to gradually shift its focus from smartphones and other mobile devices, said the sources. Currently, about half of Sony's CIS capacity is being reserved by the world's first-tier handset vendors.
Mobile devices remain the largest application for CIS, but self-driving vehicles have been identified by CIS suppliers as the "blue ocean" and will overtake mobile devices as the leading application for CIS, the sources indicated. CIS demand for ADAS will be first among all auto electronics segments set to boom starting 2018, the sources said."
News from Australia
BusinessWire: Renesas, Australian Semiconductor Technology Company Pty Ltd (ASTC), and VLAB Works, a subsidiary of ASTC, announce the joint development of the VLAB/IMP-TASimulator virtual platform (VP) for Renesas' R-Car V3M, an automotive SoC for ADAS and in-vehicle infotainment systems. The VP simulates the image recognition and cognitive IPs in the R-Car V3M SoC and enables embedded software development on a PC alone, which shortens development time and improves software quality.

BrainChip Holdings announces that it has shipped its first BrainChip Accelerator card to a major European automobile manufacturer.
As the first commercial implementation of a hardware-accelerated spiking neural network (SNN) system, BrainChip Accelerator will be evaluated for use in ADAS and Autonomous Vehicle applications.
BrainChip Accelerator is said to increase the performance of object recognition provided by BrainChip Studio software and algorithms. The low-power accelerator card can detect, extract and track objects using a proprietary SNN technology. It provides a 7x improvement in images/second/watt, compared to traditional convolutional neural networks accelerated by GPUs.
Bob Beachler, BrainChip’s SVP for Marketing and Business Development, said: “Our spiking neural network provides instantaneous “one-shot” learning, is fast at detecting, extracting and tracking objects, and is very low-power. These are critical attributes for automobile manufacturers processing the large amounts of video required for ADAS and AV applications. We look forward to working with this world-class automobile manufacturer in applying our technology to meet its requirements.”
A few slides from the recent company presentation:
Omnivision Unveils First Nyxel Product for Security Applications
PRNewswire: OmniVision introduces the OS05A20, the first image sensor to implement OmniVision's Nyxel NIR technology. This 5MP color image sensor leverages both the PureCel pixel and Nyxel technology and achieves a significant improvement in QE compared with OmniVision's earlier-generation sensors. However, no QE numbers have been released so far.

Nyxel technology combines thick-silicon pixel architectures with extended DTI and careful management of wafer surface texture to improve QE up to 3x at 850nm and up to 5x at 940nm, while maintaining all other image-quality metrics.
Sony Releases Stacked Automotive Sensor Meeting Mobileye Spec
PRNewswire: Sony releases the IMX324, a 1/1.7-inch stacked CMOS sensor with 7.42MP resolution and an RCCC (Red-Clear-Clear-Clear) color filter for forward-sensing cameras in ADAS. The IMX324 is expected to offer compatibility with the "EyeQ 4" and "EyeQ 5" image processors currently being developed by Mobileye, an Intel Company. Until now, Mobileye reference designs relied mostly on ON Semi sensors.

Sony will begin shipping samples in November 2017. The IMX324 is said to be the industry's first automotive-grade stacked image sensor meeting the quality standards and functions required for automotive applications.
This image sensor is capable of approximately three times the horizontal resolution of conventional products (IMX224MQV), enabling image capture of objects such as road signs up to 160m away (with a 32° FOV lens). The sensor's pixel binning mode achieves a low-light sensitivity of 2666 mV. The sensor is also equipped with a unique function that alternately captures dark sections at high sensitivity and bright sections at high resolution, enabling high-precision image recognition when combined with post-signal processing.
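The 160m figure can be sanity-checked with pinhole geometry. Assuming ~3840 horizontal pixels for the 7.42MP array (my own assumption; the release doesn't give the exact array size) and a 0.7m-wide road sign:

```python
import math

def pixels_on_target(target_m, dist_m, fov_deg, h_pixels):
    """Approximate horizontal pixels subtended by a target of width
    target_m at distance dist_m, assuming a simple pinhole model."""
    angle_deg = math.degrees(2 * math.atan(target_m / (2 * dist_m)))
    return angle_deg / fov_deg * h_pixels

# A 0.7 m sign at 160 m, seen through a 32-deg lens on a ~3840 px row:
px = pixels_on_target(0.7, 160.0, 32.0, 3840)
print(f"{px:.0f} px across the sign")   # ~30 px
```

Roughly 30 pixels across the sign is plausibly enough for recognition, which is consistent with the claimed range.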
Smartphone Imaging Market Reports
InstantFlashNews quotes Taiwan-based Isaiah Research on 3D-imaging-capable smartphones, saying that the market will remain dominated by Apple until 2019, when the iPhone is expected to account for 55% of the total 290M units:

Isaiah Research also increases its forecast for dual-camera phones to 270M units in 2017:
The earlier Isaiah reports from July 2016 gave significantly lower numbers of 170-180M units:
Gartner reports a decrease in the global image sensor revenue from 2015 to 2016: "The CMOS image sensor market declined 1.7% in 2016, mainly because of a price drop and saturation of smartphones, especially high-end smartphones." The top 5 companies accounted for 88.9% of global CIS revenue in 2016, with the top 3 companies holding 78.9% of the market:
3 Layer Color and Polarization Sensitive Imager
A group of researchers from the University of Illinois at Urbana-Champaign, Washington University in St. Louis, and the University of Cambridge, UK published a paper "Bio-inspired color-polarization imager for real-time in situ imaging" by Missael Garcia, Christopher Edmiston, Radoslav Marinov, Alexander Vail, and Viktor Gruev. The image sensor is said to be inspired by the mantis shrimp's vision, although it reminds me more of the Foveon approach:

"Nature has a large repertoire of animals that take advantage of naturally abundant polarization phenomena. Among them, the mantis shrimp possesses one of the most advanced and elegant visual systems nature has developed, capable of high polarization sensitivity and hyperspectral imaging. Here, we demonstrate that by shifting the design paradigm away from the conventional paths adopted in the imaging and vision sensor fields and instead functionally mimicking the visual system of the mantis shrimp, we have developed a single-chip, low-power, high-resolution color-polarization imaging system.
Our bio-inspired imager captures co-registered color and polarization information in real time with high resolution by monolithically integrating nanowire polarization filters with vertically stacked photodetectors. These photodetectors capture three different spectral channels per pixel by exploiting wavelength-dependent depth absorption of photons."
"Our bio-inspired imager comprises 1280 by 720 pixels with a dynamic range of 62 dB and a maximum signal-to-noise ratio of 48 dB. The quantum efficiency is above 30% over the entire visible spectrum, while achieving high polarization extinction ratios of ∼40 on each spectral channel. This technology is enabling underwater imaging studies of marine species, which exploit both color and polarization information, as well as applications in biomedical fields."
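From co-registered polarization channels like these, recovering the linear polarization state is straightforward. A sketch of the standard division-of-focal-plane computation, assuming the common 0°/45°/90°/135° nanowire filter orientations (the paper's exact filter layout may differ):

```python
import math

def stokes_linear(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarization-filtered
    intensities (0/45/90/135 deg), then degree and angle of
    linear polarization."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # +45 vs -45
    dolp = math.hypot(s1, s2) / s0               # degree of linear polarization
    aop = 0.5 * math.degrees(math.atan2(s2, s1)) # angle of polarization
    return dolp, aop

# Fully linearly polarized light oriented at 0 degrees:
dolp, aop = stokes_linear(1.0, 0.5, 0.0, 0.5)
print(dolp, aop)   # 1.0 0.0
```

The sensor performs this per pixel and per spectral channel, which is what "co-registered color and polarization" buys you.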
A Youtube video shows nice pictures coming out of the camera:
Infineon, Autoliv on Automotive Imaging Market
Infineon publishes an "Automotive Conference Call" presentation dated Oct. 10, 2017, with a few interesting slides showing camera and LiDAR content in cars of the future:

An Autoliv CEO presentation dated Sept. 28, 2017 gives a bright outlook on automotive imaging:
Trinamix Distance Sensing Technology Explained
BASF spin-off Trinamix publishes a nice technology page with Youtube videos explaining its depth sensing principle, which it calls "Focus-Induced Photoresponse" (FIP):

"FIP takes advantage of a particular phenomenon in photodetector devices: an irradiance-dependent photoresponse. The photoresponse of these devices depends not only on the amount of light incident, but also on the size of the light spot on the detector. This phenomenon allows to distinguish whether the same amount of light is focused or defocused on the sensor. We call this the “FIP effect” and use it to measure distance.
The picture illustrates how the FIP effect can be utilized for distance measurements. The photocurrent of the photodetector reaches its maximum when the light is in focus and decreases symmetrically outside the focus. A change of the distance between light source and lens results in such a change of the spot size on the sensor. By analyzing the photoresponse, the distance between light source and lens can be deduced."
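The dependence of spot size on source distance that FIP exploits can be illustrated with plain thin-lens geometry. This is only a geometric sketch with parameter values I picked for illustration; trinamiX's actual FIP signal model and calibration are not public:

```python
def blur_spot_diameter(z_m, f_m, aperture_m, sensor_m):
    """Geometric blur-spot diameter on the sensor for a point light
    source at distance z_m, using the thin-lens model."""
    z_img = 1.0 / (1.0 / f_m - 1.0 / z_m)   # thin-lens image distance
    return aperture_m * abs(sensor_m - z_img) / z_img

# Example: f = 25 mm lens, 10 mm aperture, sensor placed to focus at 1 m.
f, D = 0.025, 0.010
s = 1.0 / (1.0 / f - 1.0 / 1.0)             # sensor distance for 1 m focus
for z in (0.5, 1.0, 2.0):
    # Spot is smallest (zero in this ideal model) at the focus distance
    # and grows as the source moves nearer or farther.
    print(f"source at {z} m -> spot {blur_spot_diameter(z, f, D, s) * 1e6:.0f} um")
```

Inverting this spot-size-vs-distance curve (via the photoresponse it induces) is, in essence, what the FIP distance measurement does.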
Trinamix also started production of Hertzstück PbS SWIR photodetectors with PbSe ones to follow:
Invisage Acquired by Apple?
Reportedly, there is some sort of acquisition deal reached between Invisage and Apple. A part of Invisage employees joined Apple. Another part is looking for jobs, apparently. While the deal has never been officially announced, I got unofficial confirmations of it from 3 independent sources.

Update: According to 2 sources, the deal was closed in July this year.
Somewhat old Invisage Youtube videos are still available and show the company's visible-light technology, although Invisage worked on IR sensing in more recent years:
Update #2: There are a few more indications that Invisage has been acquired. Nokia Growth Partners (NGP), which participated in the 2014 investment round, shows Invisage in its exits list:
InterWest Partners also invested in 2014 and now lists Invisage among its non-current investments:
Samsung VR Camera Features 17 Imagers
Samsung introduces the 360 Round, a camera for developing and streaming high-quality 3D content for VR experiences. The 360 Round uses 17 lenses—eight stereo pairs positioned horizontally and one single lens positioned vertically—to livestream 4K 3D video and spatial audio, and create engaging 3D images with depth.

With such cameras getting widely adopted, this segment can easily become a major market for image sensors:
Google Pixel 2 Smartphone Features Stand-Alone HDR+ Processor
Ars Technica reports that the Google Pixel 2 smartphone features a separate Google-designed image processor chip, the "Pixel Visual Core." It's said "to handle the most challenging imaging and machine learning applications," and the company is "already preparing the next set of applications" designed for the hardware. The Pixel Visual Core has its own CPU (a low-power ARM A53 core), DDR4 RAM, eight IPU cores, and PCIe and MIPI interfaces. Google says the company's HDR+ image processing can run "5x faster and at less than 1/10th the energy" than it currently does on the main CPU. The new core will be enabled in the forthcoming Android Oreo 8.1 (MR1) update.

The new IPU cores are intended to use the Halide language for image processing and TensorFlow for machine learning. A custom Google-made compiler optimizes the code for the underlying hardware.
Google also publishes an article explaining the HDR+ and portrait modes that the new core is supposed to accelerate, as well as a video explaining the Pixel 2 camera features:
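At its core, HDR+ aligns and merges a burst of identical short exposures; averaging N frames cuts noise by roughly √N. A minimal sketch of just this averaging principle (the real pipeline does tile-based alignment and robust Fourier-domain merging, which this deliberately omits):

```python
import random

def merge_burst(frames):
    """Average-merge an already-aligned burst of frames
    (each frame is a flat list of pixel values)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Simulate 8 noisy short exposures of a flat gray patch:
random.seed(0)
truth = 100.0
frames = [[truth + random.gauss(0, 10) for _ in range(1000)] for _ in range(8)]
merged = merge_burst(frames)
noise = (sum((p - truth) ** 2 for p in merged) / len(merged)) ** 0.5
print(f"per-frame noise 10.0 -> merged noise {noise:.1f}")  # ~10/sqrt(8) ≈ 3.5
```

Running this merge fast and at low power on a burst of full-resolution frames is exactly the workload the dedicated IPU cores are built for.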
Basler Compares Image Sensors for Machine Vision and Industrial Applications
Basler presents EMVA 1288 measurements of the image sensors in its cameras. It's quite interesting to compare CCD with CMOS sensors, and Sony with other companies, in terms of QE, Qsat, dark noise, etc.:

5 Things to Learn from AutoSens 2017
EMVA publishes "AutoSens Show Report: 5 Things We Learned This Year" by Marco Jacobs, VP of Marketing, Videantis. The five important things are:

- The devil is in the detail. Sort of obvious; see some examples in the article.
- No one sensor to rule them all. Different image sensors and LiDARs, each optimized for a different sub-task.
- No bold predictions. That is, nobody knows when autonomous driving will arrive on the market.
- Besides the drive itself, what will an autonomous car really be like?
- Deep learning is a must-have tool for everyone. Sort of a common statement, although the approaches vary: some put the intelligence into the sensors, others keep sensors dumb and concentrate the processing in a central unit.
DENSO and Fotonation Collaborate
BusinessWire: DENSO and Xperi's FotoNation start joint technology development of cabin sensing based on image recognition. DENSO expects to significantly improve the performance of its Driver Status Monitor, an active safety product used in trucks since 2014. Improvements to such products will also be used in next-generation passenger vehicles, including a system to help drivers return to driving mode during Level 3 autonomous driving.

Using FotoNation's facial image recognition and neural network technologies, detection accuracy will be increased remarkably by detecting many more features instead of using the conventional detection method based on the relative positions of the eyes, nose, mouth, and other facial regions. Moreover, DENSO will develop new functions, such as those to detect the driver's gaze direction and facial expressions more accurately, to understand the state of mind of the driver in order to help create more comfortable vehicles.
“Understanding the status of the driver and engaging them at the right time is an important component for enabling the future of autonomous driving,” said Yukihiro Kato, senior executive director, Information & Safety Systems Business Group of DENSO. “I believe this collaboration with Xperi will help accelerate our innovative ADAS product development by bringing together the unique expertise of both our companies.”
“We are excited to partner with DENSO to innovate in such a dynamic field,” said Jon Kirchner, CEO of Xperi Corporation. “This partnership will play a significant role in paving the way to the ultimate goal of safer roadways through use of our imaging and facial analytics technologies and DENSO’s vast experience in the space.”
AutoSens 2017 Awards
The AutoSens conference, held on Sept. 20-21 in Brussels, Belgium, publishes its Awards. Some of the image sensor relevant ones:

Most Engaging Content
- First place: Vladimir Koifman, Image Sensors World (yes, this is me!)
- Highly commended: Junko Yoshida, EE Times
Hardware Innovation
- First place: Renesas
- Highly commended: STMicroelectronics
Most Exciting Start-Up
- Winner: Algolux
- Highly commended: Innoviz Technologies
LG, Rockchip and CEVA Partner on 3D Imaging
PRNewswire: CEVA partners with LG to deliver a high-performance, low-cost smart 3D camera for consumer electronics and robotic applications.

The 3D camera module incorporates a Rockchip RK1608 coprocessor with multiple CEVA-XM4 imaging and vision DSPs to perform biometric face authentication, 3D reconstruction, gesture/posture tracking, obstacle detection, AR and VR.
"There is a clear demand for cost-efficient 3D camera sensor modules to enable an enriched user experience for smartphones, AR and VR devices and to provide a robust localization and mapping (SLAM) solution for robots and autonomous cars," said Shin Yun-sup, principal engineer at LG Electronics. "Through our collaboration with CEVA, we are addressing this demand with a fully-featured compact 3D module, offering exceptional performance thanks to our in-house algorithms and the CEVA-XM4 imaging and vision DSP."
Ambarella Loses Key Customers
The Motley Fool publishes an analysis of Ambarella's performance over the last year. The company lost some of its key customers (GoPro, Hikvision, and DJI), while the new Google Clips camera also opted for a non-Ambarella processor:

"Faced with shrinking margins, GoPro needed to buy cheaper chipsets to cut costs. It also wanted a custom design which wasn't readily available to competitors like Ambarella's SoCs. That's why it completely cut Ambarella out of the loop and hired Japanese chipmaker Socionext to create a custom GP1 SoC for its new Hero 6 cameras.
DJI also recently revealed that its portable Spark drone didn't use an Ambarella chipset. Instead, the drone uses the Myriad 2 VPU (visual processing unit) from Intel's Movidius. DJI previously used the Myriad 2 alongside an Ambarella chipset in its flagship Phantom 4, but the Spark uses the Myriad 2 for both computer vision and image processing tasks.
Google also installed the Myriad 2 in its Clips camera, which automatically takes burst shots by learning and recognizing the faces in a user's life.
Ambarella needs the CV1 to catch up to the Myriad 2, but that could be tough with the Myriad's first-mover's advantage and Intel's superior scale.
To top it all off, Chinese chipmakers are putting pressure on Ambarella's security camera business in China."
Pikselim Demos Low-Light Driver Vision Enhancement
Pikselim publishes a night-time Driver Vision Enhancement (DVE) video using its low-light CMOS sensor behind the windshield of the vehicle with the headlights off (the sensor is operated in the 640x512 format at 15 fps in global shutter mode, using f/0.95 optics and off-chip de-noising):

Yole on Automotive LiDAR Market
Yole Developpement publishes its AutoSens Brussels 2017 presentation "Application, market & technology status of the automotive LIDAR." A few slides from the presentation:

Sony Announces Three New Sensors
Sony added three new sensors to its flyers table: the 8.3MP, 2um-pixel IMX334LQR, and the 2.9MP IMX429LLJ and 2MP IMX430LLJ, both based on a 4.5um global shutter pixel. The new sensors are said to have high sensitivity and are aimed at security and surveillance applications.

Yole Image Sensors M&A Review
IMVE publishes the article "Keeping Up With Consolidation" by Pierre Cambou, Yole Developpement image sensor analyst. There is a nice chart showing the large historical mergers and acquisitions:

"For the source of future M&A, one should rather look toward the decent number of machine vision sensor technology start-ups, companies like Softkinetic, which was purchased by Sony in 2015, and Mesa, which was acquired by Ams, in 2014. There are a certain number of interesting start-ups right now, such as PMD, Chronocam, Fastree3D, SensL, Sionyx, and Invisage. Beyond the start-ups, and from a global perspective, there is little room for a greater number of deals at sensor level, because almost all players have recently been subject to M&A."
Waymo Self-Driving Car Relies on 5 LiDARs and 1 Surround-View Camera
Alphabet Waymo publishes a Safety Report with some details on its self-driving car sensors - 5 LiDARs and one 360-deg color camera:

LiDAR (Laser) System
LiDAR (Light Detection and Ranging) works day and night by beaming out millions of laser pulses per second—in 360 degrees—and measuring how long it takes to reflect off a surface and return to the vehicle. Waymo’s system includes three types of LiDAR developed in-house: a short-range LiDAR that gives our vehicle an uninterrupted view directly around it, a high-resolution mid-range LiDAR, and a powerful new generation long-range LiDAR that can see almost three football fields away.
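The round-trip arithmetic behind that range is simple: distance is c·t/2, since each pulse travels out and back. A sketch, taking "almost three football fields" as roughly 300 m (my own reading of the figure, not a Waymo spec):

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_s):
    """Range from pulse time-of-flight: the pulse travels out and
    back, so the one-way distance is c * t / 2."""
    return C * round_trip_s / 2.0

# Round-trip time for a 300 m target:
t = 2 * 300.0 / C
print(f"round trip for 300 m: {t * 1e6:.1f} us")  # ~2.0 us
```

At millions of pulses per second, each return must therefore be timed with nanosecond precision to resolve centimeters.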
Vision (Camera) System
Our vision system includes cameras designed to see the world in context, as a human would, but with a simultaneous 360-degree field of view, rather than the 120-degree view of human drivers. Because our high-resolution vision system detects color, it can help our system spot traffic lights, construction zones, school buses, and the flashing lights of emergency vehicles. Waymo's vision system comprises several sets of high-resolution cameras, designed to work well at long range, in daylight and low-light conditions.
Half a year ago, Bloomberg published an animated gif image showing the cleaning of Waymo 360-deg camera:
Chronocam Partners with Huawei
French sites L'Usine Nouvelle, InfoDSI, and Chine report that Chronocam partners with Huawei. Huawei is said to cooperate with Chronocam on face recognition technology for its smartphones, similar to Face ID in the iPhone X.

Hynix Proposes TrenchFET TG
SK Hynix patent application US20170287959 "Image Sensor" by Pyong-su Kwag, Yun-hui Yang, and Young-jun Kwon leverages the company's DRAM trench technology:

Omron Improves Its Driver Monitoring System
The OMRON driver monitoring system uses three barometers to judge whether the driver is capable of focusing on driving responsibilities: (1) whether the driver is observing the vehicle's operation (Eyes ON/OFF); (2) how quickly the driver will be able to resume driving (Readiness High/Mid/Low); and (3) whether the driver is behind the wheel (Seating ON/OFF). Additionally, the company's facial image sensing technology, OKAO Vision, now makes it possible to sense the state of the driver even when wearing a mask or sunglasses - something that had previously not been possible.

Magic Leap Seeks $1b Funding on $6b Valuation
Reuters reports that AR glasses startup Magic Leap files in SEC that it's seeking to raise $1b on a $6b valuation. The filing does not indicate the amount that Magic Leap had so far secured from investors. It may end up raising less than $1b.

Compressed Sensing Said to Save Image Sensor Power
Pravir Singh Gupta and Gwan Seong Choi from Texas A&M University publish an open-access paper, "Image Acquisition System Using On Sensor Compressed Sampling Technique." They say that "Compressed Sensing has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23%-65%."

The proposed sensor architecture implementing this claim is given below:
"Now we demonstrate the reconstruction results of our proposed novel system flow. We use both binary and non-binary block diagonal matrices to compressively sample the image. The binary block diagonal (ΦB) and non-binary block diagonal (ΦNB) sampling matrices are mentioned below."
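As a rough illustration of what binary block-diagonal compressive sampling does on-sensor, here is a toy sketch. The random 0/1 pattern, the block size, and the image below are my own stand-ins for the paper's ΦB matrix, not a reproduction of it:

```python
import random

def block_diagonal_sample(image_rows, block, rng):
    """Compressively sample an image with a binary block-diagonal
    matrix: each `block`-pixel segment of a row is reduced to one
    measurement by summing the pixels selected by a random 0/1 pattern."""
    measurements = []
    for row in image_rows:
        for i in range(0, len(row), block):
            pattern = [rng.randint(0, 1) for _ in range(block)]
            measurements.append(
                sum(p * x for p, x in zip(pattern, row[i:i + block])))
    return measurements

rng = random.Random(42)
image = [[float((r + c) % 4) for c in range(8)] for r in range(4)]  # 4x8 toy image
y = block_diagonal_sample(image, 4, rng)
print(len(y))   # 8 measurements from 32 pixels: 4x compression
```

Because each measurement touches only one block, the summation can happen in the readout chain itself, which is where the claimed ADC power savings come from; reconstruction then solves for the image from the reduced measurement set.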