Image Sensors World Go to the original article...
Autosens publishes videos from its Detroit conference held in May 2017. Here are a few of them talking about image sensing.

Qualcomm and Himax Collaborate on 3D Sensing
PRNewswire, GlobeNewsWire: Qualcomm and Himax jointly announce a collaboration on a low power active 3D depth sensing camera for use cases such as biometric face authentication, 3D reconstruction, and scene perception for mobile, IoT, surveillance, automotive and AR/VR.

Qualcomm's expertise in computer vision and algorithms is combined with Himax's technologies in wafer optics, sensing, drivers, and module integration in a fully integrated SLiM (Structured Light Module) 3D solution. The SLiM is a turn-key 3D camera module that delivers real-time depth sensing and 3D point cloud generation with high resolution and high accuracy in indoor and outdoor environments. Qualcomm and Himax will commercialize the SLiM for a wide array of markets and industries, with mass production targeted for Q1 2018.
"Our 3D sensing solution will be a game changing technology for smartphones, where we will enable the Android ecosystem to provide the next generation of mobile user experience," said Jordan Wu, President and CEO of Himax. "Our two companies have worked together for more than four years to design the SLiM 3D sensing solution to meet growing demands for enhanced computer vision capabilities that will enable amazing new features and use cases in a broad range of markets and applications. We are pleased to partner with Qualcomm Technologies to put together an ecosystem and to enable the revolutionary computer vision solutions for our customers globally in a timely fashion."
Apple Testing Different LiDARs and Cameras
Macrumors publishes images of a Lexus car carrying an Apple-installed set of LiDARs and cameras. Apparently, Apple is testing different configurations and comparing the quality of the data coming from them.

Thanks to OB for the link!
Yole on Automotive Sensor Market
Yole Developpement releases its "Automotive sensing: a mature yet highly dynamic market" report. Image sensors of all sorts are a significant part of this market:

"Despite just 3% growth in the volume of cars sold expected through to 2022, Yole expects an average growth rate in sensors sales volumes above 8% over the next five years, and above 14% growth in sales value. This is thanks to the expanding integration of high value sensing modules like RADAR, imaging and LiDAR.
Among all sensing technologies located in the car, three main sensors will drastically change the landscape: imaging, RADAR and LiDAR sensors. Imaging sensors were initially mounted for ADAS purposes in high-end vehicles, with deep learning image analysis techniques promoting early adoption. It is now a well-established fact that vision-based AEB is possible and saves lives. Adoption of forward ADAS cameras will therefore accelerate.
Growth of imaging for automotive is also being fueled by the park assist application, and 360° surround view camera volumes are skyrocketing. While it is becoming mandatory in the US to have a rear view camera, that uptake is dwarfed by 360° surround view cameras, which enable a “bird’s eye view” perspective. This trend is most beneficial to companies like Omnivision at sensor level and Panasonic and Valeo, which have become the main manufacturers of automotive cameras.
LiDAR remains the “Holy Grail” for most automotive players, allowing 3D sensing of the environment. In this report Yole's analysts highlight the different potential usages of this technology, which will transform the transportation industry completely.
“We expect tremendous growth of the LiDAR market within the next five years, from being worth US$300 million in 2017 to US$4.4 billion by 2022,” details Guillaume Girardin from Yole. LiDAR is expected to be a key technology, but sensing redundancy will still be the backbone of the automotive world where security remains the golden rule."
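As a sanity check on the quoted figures (a quick calculation of my own, not taken from the report), the growth rate implied by a market going from US$300 million in 2017 to US$4.4 billion in 2022 can be computed as:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Yole's LiDAR market forecast: US$300M (2017) -> US$4.4B (2022)
lidar_cagr = cagr(300e6, 4.4e9, years=5)
print(f"Implied LiDAR CAGR 2017-2022: {lidar_cagr:.1%}")  # roughly 71% per year
```

An implied CAGR above 70% per year puts in perspective how aggressive this forecast is compared to the 8-14% growth quoted for automotive sensors overall.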
AI News: Movidius Myriad X, Cambricon Raises $100M in Series A Round
TomsHardware: Intel Movidius releases the Myriad X Vision Processing Unit (VPU) with a dedicated Neural Compute Engine. The SoC includes vision accelerators, a Neural Compute Engine, imaging accelerators, and 16 SHAVE vector processors delivering a total of up to 4 TOPS (Trillion Operations Per Second) of performance in a 1.5W power package.

Medium, China Money Network: China-based AI startup Cambricon raises $100M at a reported $1B valuation. Cambricon was founded by Prof. Chen Tianshi in 2016 as a spin-off from the Institute of Computing Technology of the Chinese Academy of Sciences. Its processor, the Cambricon-1A, is claimed to be the first commercial chip for deep learning applications and can be used in robotics, drones, autonomous vehicles and consumer electronics. It is said to have better performance when running mainstream AI algorithms, to be more energy efficient, and to have a higher integration density.
The total revenue of the AI-related deep learning chip market is forecast to rise from $500M in 2016 to $12.2B in 2025, at a CAGR of over 40%, according to Tractica market research.
Update: Digitimes reports that Cambricon expects to roll out hundreds of millions of AI SoCs for smart terminal devices and servers in the next three years, CEO Chen Tianshi said. Cambricon will license its processor instruction set to more enterprises engaged in AI applications. In this regard, Huawei's HiSilicon Kirin 970 chipset, to be released in September, is to carry the instruction set IP for the Cambricon-1A co-processor.
Corephotonics Shows iPhone 7+ Zoom Jumps, Presents its Smooth Zoom Solutions
Corephotonics posts a nice video showing the iPhone 7 Plus optical zoom jumping in video mode and compares it with its own zoom solution.

The company also presents its dual camera solutions for mobile applications:
Yole Hypotheses on iPhone 8 3D Technology
Yole Developpement analyst Pierre Cambou posts his guesses on what technology might be used in the Apple iPhone 8 3D camera:

"Until earlier this year everyone thought 3D cameras would appear on the rear of Apple’s new iPhones, as is the case in the Lenovo Phab 2 Pro, in order to develop Augmented Reality applications. This changed radically when Ming Chi Kuo, an analyst from KGI Securities, confirmed in April 2017 that there would be a front 3D camera... Most analysts speculate that this camera would exploit “structured light”, because of PrimeSense’s involvement. However, Yole maintains its original idea of a Time of Flight (ToF) camera on the front due to the nature of the application, which is to be a new revolutionary user interface."
Xintec Reports Losses
Digitimes comments on the ongoing losses of CSP packaging company Xintec:

"Xintec used to rely on orders from OmniVision. However, the acquisition of OmniVision by a group of China-based investors led by state-owned Hua Capital Management in 2016 has had an adverse impact on Xintec's business in the image sensor field, the observers indicated. Xintec is also facing increased competition from its China-based rivals in the fingerprint sensor sector.
Nevertheless, Xintec is reportedly among the companies joining Apple's and Qualcomm's 3D sensing supply chains, which also include its largest shareholder, Taiwan Semiconductor Manufacturing Company (TSMC). Xintec may begin to see the light at the end of the tunnel as early as 2018, the observers noted."
From the earlier Xintec report:
Intel Comments on Qualcomm 3D Camera
One of the Intel RealSense group members publishes his comments, in subtitles, on the Qualcomm 3D camera video:

KGI Compares Apple and Qualcomm 3D Camera Technologies
Macrumors and IF News quote KGI Securities analyst Ming-Chi Kuo saying that Apple's 3D camera technology is 1.5-2 years ahead of Qualcomm's. Apple is expected to use a front-side 3D camera in one of its upcoming iPhones for face recognition and user interface.

"According to Kuo, Qualcomm is dealing with immature algorithms and an unfavorable hardware reference design for smartphones due to form factor design and thermal issues. Qualcomm may also be impacted by Apple's choice of suppliers. Many key component suppliers have already allocated resources to Apple, so Qualcomm has to find different suppliers in order to obtain sufficient resources."
So far, only Xiaomi's 2018 flagship phone is expected to use the Qualcomm 3D solution, with volumes of less than 10M units, according to Kuo. Even this might be cancelled, depending on the new iPhone's success.
Omron Vision Platform
Omron shows its HVC-P2 vision platform capabilities:

Sony Promotes 4K Slow Motion Imaging
Sony video shows 8x slow motion 4K video attributed to the recent progress in its CMOS sensors:

Yole and WCP Mobile Depth Sensing Report
Yole Developpement and Woodside Capital Partners publish a "Smartphone Depth Sensing" report dated July 2017. A few slides from the report:

Entry Level Smartphone SoCs Support Dual Camera
PRNewsWire: Spreadtrum announces mass production of its LTE SoC platforms: the SC9853I, manufactured in Intel's 14nm foundry process with an octa-core 64-bit Intel Airmont processor, and the Spreadtrum SC9850 series. Both the SC9853I and the SC9850 series emphasize dual-camera processing capability. Both include built-in 3DNR to improve night shooting, and features like refocusing, real-time face beauty, 3D modeling and AR. The SC9853I supports a 16MP dual camera, while the SC9850 supports a 13MP dual configuration.

Sony Proposes BCMD with Reduced Dark Current
Sony patent application US20170229493 "Pixel circuit and imaging apparatus" by Kouichi Harada and Toshiyuki Nishihara proposes dark current reduction in a bulk charge modulated device (BCMD):

"...the above image sensor needs to read 1000 times in one frame, for example, and has problems of increase in read voltage and increase in read time. Also, as the number of reads per frame increases, the dark current of the FD increases proportionally. As a result, the dark current of the FD becomes the main component of the dark current of the pixel. The dark current of the FD is unable to be reduced easily, and thus even if the conversion efficiency can be set to 600 μV/e−, the accuracy of detecting one photon is reduced. If there is no FD, the accuracy of detecting one photon is improved.
...According to the present technology, it is possible to obtain an excellent effect that the dark current of the FD in the image sensor is eliminated, and that the conversion efficiency of converting the electric charge to a voltage can be improved."
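The scaling the patent describes is easy to illustrate. The sketch below uses hypothetical dark current and timing values (only the 600 μV/e− conversion gain is from the application) to show how FD dark charge grows linearly with the read count while the single-photon signal stays fixed:

```python
CONVERSION_GAIN_UV_PER_E = 600.0   # uV per electron, as quoted in the application
FD_DARK_RATE_E_PER_S = 50.0        # hypothetical FD dark current, electrons/s
READ_TIME_S = 10e-6                # hypothetical time charge spends on the FD per read

def fd_dark_electrons(reads_per_frame):
    """FD dark charge accumulated in one frame; grows linearly with read count."""
    return FD_DARK_RATE_E_PER_S * READ_TIME_S * reads_per_frame

single_photon_signal_uv = 1 * CONVERSION_GAIN_UV_PER_E   # one photo-electron -> 600 uV

for n_reads in (1, 100, 1000):
    dark_e = fd_dark_electrons(n_reads)
    print(f"{n_reads:5d} reads/frame: FD dark charge ~{dark_e:.2f} e- "
          f"vs. 1-photon signal of {single_photon_signal_uv:.0f} uV (1 e-)")
```

With these illustrative numbers, at 1000 reads per frame the accumulated FD dark charge becomes comparable to a single photo-electron, which is exactly why the application argues for eliminating the FD from the read path.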
TrinamiX 3D Sensor Paper
BASF spinoff trinamiX publishes an arxiv.org paper "Focus-Induced Photoresponse: a novel optoelectronic distance measurement technique" by Oili Pekkola, Christoph Lungenschmied, Peter Fejes, Anke Handreck, Wilfried Hermes, Stephan Irle, Christian Lennartz, Christian Schildknecht, Peter Schillen, Patrick Schindler, Robert Send, Sebastian Valouch, Erwin Thiel, and Ingmar Bruder:

"Here we introduce Focus-Induced Photoresponse (FIP), a novel method to measure distances. In a FIP-based system, distance is determined by using the analog photoresponse of a single pixel sensor. This means that the advantages of high-density pixelation and high-speed response are not necessary or even relevant for the FIP technique. High resolution can be achieved without the limitations of pixel size, and detectors selected for a FIP system can be orders of magnitude slower than those required by ToF based ones. A system based on FIP does not require advanced sensor manufacturing processes to function, making adoption of unusual sensors more economically feasible.
In the FIP technique, a light source is imaged onto the photodetector by a lens. The size of its image depends on the position of the detector with respect to the focused image plane. FIP exploits the nonlinearly irradiance-dependent photoresponse of semiconductor devices. This means that the signal of a photodetector not only depends on the incident radiant power, but also on its density on the sensor area, the irradiance. This phenomenon will cause the output of the detector to change when the same amount of light is focused or defocused on it. This is what we call the FIP effect."
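The effect can be sketched with a toy model (hypothetical parameters and a deliberately simplified response law, not the paper's formulation): a thin lens images a point source, the blur spot radius follows from the lens equation, and a sub-linear photoresponse S ∝ E^γ integrated over the spot makes the total signal depend on spot size, and hence on source distance, even at constant received power:

```python
import math

F_MM = 10.0        # hypothetical lens focal length
APERTURE_MM = 2.0  # hypothetical aperture radius
DETECTOR_MM = 10.5 # hypothetical lens-to-detector distance (fixed)
GAMMA = 0.7        # hypothetical sub-linear photoresponse exponent (FIP effect)

def fip_signal(source_distance_mm, power=1.0):
    """Toy FIP signal: total response of a single pixel to a defocused spot."""
    image_dist = 1.0 / (1.0 / F_MM - 1.0 / source_distance_mm)  # thin-lens equation
    # Geometric blur radius at the detector plane (clamped to a small minimum spot)
    blur = max(APERTURE_MM * abs(DETECTOR_MM - image_dist) / image_dist, 1e-3)
    area = math.pi * blur ** 2
    irradiance = power / area
    return irradiance ** GAMMA * area   # sub-linear response integrated over the spot

# Same optical power, different source distances -> different signals
for d in (150.0, 250.0, 500.0):
    print(f"source at {d:5.0f} mm: signal = {fip_signal(d):.4f}")
```

Note that in this model the signal scales as P^γ · area^(1−γ), so it still depends on the source power; the paper's actual system design handles that ambiguity, and this sketch shows only the single-pixel focus dependence.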
Qualcomm on VR Motion Tracking Setup
Qualcomm presentation "On-device motion tracking for immersive mobile VR" discusses the 4-camera setup in the company's VR headset:

First Report on CIS Reproducibility, Variability and Reliability
Albert Theuwissen releases the first report on "Reproducibility, Variability and Reliability of CIS" in a 5-year series. The 175-page report contains 118 figures and 98 tables with data on QE, FPN, DSNU, Qsat, DR, SNR, DC, and more.

Coolpad CEO Quantifies Dual Camera Advantages
Coolpad, one of the large China-based smartphone makers using a 13MP monochrome + 13MP RGB dual camera in its devices, describes the benefits of that configuration:

"The two lenses may look the same, but they have very different functions. One shoots in RGB to produce a color image, while the other takes care of the monochrome images. The monochrome lens brings out the detail and captures more light than the RGB lens in low-light conditions, while the RGB lens takes care of the colors. The Dual Camera 2.0 technology in Cool Dual actually enhanced the overall clarity of the image by 20%, helped reduce image noise by 8% and improved brightness by 20%. “With these, we believe the real dual 13MP cameras bring us smart framing and the 6P lens gives customers the best quality of pictures”, said Jeff Liu, Coolpad Group CEO."
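A common way such mono+RGB pairs are fused (a generic textbook-style illustration, not Coolpad's actual pipeline) is to keep the chrominance of the color frame and replace its luminance with the cleaner, more detailed monochrome frame:

```python
import numpy as np

# BT.601 luma weights; assumes the two frames are already aligned
LUMA = np.array([0.299, 0.587, 0.114])

def fuse_mono_rgb(rgb, mono):
    """Replace the RGB frame's luminance with the monochrome frame's, keep color."""
    rgb = np.asarray(rgb, dtype=np.float64)
    mono = np.asarray(mono, dtype=np.float64)
    y = rgb @ LUMA                           # per-pixel luminance of the color frame
    gain = mono / np.maximum(y, 1e-6)        # luminance ratio mono / rgb
    return np.clip(rgb * gain[..., None], 0.0, 1.0)

# Toy example: a soft, dim color pixel and a brighter, lower-noise mono pixel
rgb = np.array([[[0.4, 0.2, 0.1]]])
mono = np.array([[0.5]])
fused = fuse_mono_rgb(rgb, mono)
print(fused)   # same hue as rgb, luminance scaled to match mono
```

The hue of the output comes entirely from the RGB frame, while the brightness and detail come from the monochrome frame, which is the intuition behind the claimed noise and brightness improvements.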
Digitimes on Automotive LiDAR Adoption
Digitimes Research comes up with its analysis of LiDAR adoption in the car industry, forecasting the first LiDAR-equipped production cars appearing this year:

"...Audi will take the initiative to launch car models equipped with LiDAR sensors in 2017 and Mercedes Benz, Cadillac, Ford and Volvo are expected to follow suit in 2018-2019...
...only one million LiDAR sensors worth US$200 million will be shipped globally in 2019. However, the global shipment value of LiDAR sensors will grow quickly to US$500 million in 2024, along with decreasing cost and increasing adoption.
For Level 2 self-driving... LiDAR sensors are required to detect objects as far as 100-150 meters ahead. For Level 3, the required range is 200-300 meters...
...Velodyne LiDAR and Quanergy Systems, and Germany-based Ibeo Automotive Systems are the main vendors globally of LiDAR sensors, with the former two focusing on solid-state models to reduce LiDAR sensor sizes."
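The ranges quoted above directly set the pulse timing budget of a ToF LiDAR. A quick back-of-the-envelope calculation (my own, not from the Digitimes analysis) of the round-trip time and the maximum unambiguous pulse rate:

```python
C_M_PER_S = 299_792_458.0  # speed of light

def round_trip_time_us(range_m):
    """Round-trip time of a LiDAR pulse to a target at the given range."""
    return 2.0 * range_m / C_M_PER_S * 1e6

for level, rng in (("Level 2", 150.0), ("Level 3", 300.0)):
    t = round_trip_time_us(rng)
    print(f"{level}: {rng:.0f} m max range -> {t:.2f} us round trip, "
          f"max unambiguous pulse rate ~{1e6 / t / 1e3:.0f} kHz")
```

Doubling the required range from 150 m to 300 m doubles the per-pulse wait time (about 1 μs to 2 μs), halving the maximum unambiguous pulse rate a single-beam scanner can use.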
Qualcomm Unveils Structured Light Depth-Sensing Platform
Qualcomm announces an expansion of the Qualcomm Spectra Module Program to incorporate biometric authentication and high-resolution depth sensing for a broad range of mobile devices and head mounted displays (HMD). The module program is built on the 2nd generation Spectra embedded ISP family.

Now, the camera module program is being expanded to include new camera modules capable of utilizing active sensing for biometric authentication, and structured light for a variety of computer vision applications that require real-time, dense depth map generation and segmentation.
The low-power, high-performance motion tracking capabilities of the Qualcomm Spectra ISP, in addition to optimized simultaneous localization and mapping (SLAM) algorithms, are designed to support new extended reality (XR) use cases for VR and AR applications that require SLAM.
It also features multi-frame noise reduction for superior photographic quality, along with hardware-accelerated motion compensated temporal filtering (MCTF), and inline electronic image stabilization (EIS) for superior camcorder-like video quality.
The Spectra family of ISPs and new Spectra camera modules are expected to be part of the next flagship Snapdragon Mobile Platform.
Qualcomm Emerging Vision Technologies presentation gives some use cases for its 3D depth sensing module.
Qualcomm also publishes a Youtube demo of its structured light depth sensing module:
Pinnacle Introduces HDR ISP Core
Pinnacle Imaging Systems launches the Denali-MC HDR ISP IP core, said to preserve a scene’s color fidelity and full contrast range throughout the tone mapping process, all without producing halos, color shifts, or undesired motion artifacts.

Denali-MC provides a 16-bit data path capable of producing 100 dB, or 16 EV steps, of dynamic range. Denali-MC HDR IP completely eliminates halo artifacts and color shifts, and mitigates the ghost artifacts and transition noise often seen when merging multiple exposures. This allows Denali-MC to capture up to four exposure frames from 1080p video at 120 fps, while merging and tone mapping at 30 fps in real time. For applications requiring faster output frame rates, Denali-MC also supports a two-frame merge mode exporting at 60 fps. Furthermore, Denali-MC can support up to 29 different CMOS sensors, including 9 Aptina/ON Semi, 6 Omnivision and 11 Sony sensors, and 12 different pixel-level gain and frame-set HDR methods, and is said to be easily ported to the most widely used logic platforms.
HDR-Specific Features:
- Advanced motion compensation algorithms virtually eliminate HDR merge artifacts and transition noise
- Proprietary Locally Adaptive Tone Mapping technology preserves color fidelity through the entire tonal range without creating halo artifacts or color shifts
- Automatic EV bracketing
- Automatic or manual contrast adaptation for global or local video correction
- React concurrent still frame and video capture feature non-destructively extracts four source LDR Bayer images, merged Bayer HDR, tone mapped Bayer, or HDMI RGB still frames without interrupting video
- Ability to capture separate HDR and tone mapped output video streams concurrently (ideal for ADAS applications)
- Two or four frame multiple exposure merge (with Sony IMX290 implementation)
- HDR + low illumination capabilities with the Sony IMX290 sensor enable 24/7 video capture for any contrast and lighting condition
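For context on what a multi-exposure merge like this does (a generic textbook-style sketch, not Pinnacle's proprietary algorithm), a minimal merge in the linear domain weights each frame by how well exposed each pixel is and divides out the exposure time:

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge linear-domain LDR frames (values in [0, 1]) into an HDR radiance map."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # "hat" weight: trust mid-tones most
        w = np.maximum(w, 1e-4)             # avoid zero weight at the extremes
        num += w * (img / t)                # radiance estimate from this frame
        den += w
    return num / den

# Two exposures of the same scene, 4x apart in exposure time
scene = np.array([[0.02, 0.2, 0.9]])        # "true" radiance (arbitrary units)
short = np.clip(scene * 1.0, 0, 1)          # t = 1.0
long_ = np.clip(scene * 4.0, 0, 1)          # t = 4.0 (bright pixel saturates)
hdr = merge_exposures([short, long_], [1.0, 4.0])
print(hdr)   # recovers radiance close to the "true" scene values
```

The ghost artifacts and transition noise mentioned above arise when the frames disagree (motion between exposures), which is what the motion compensation listed in the feature set is meant to handle; this sketch assumes a static scene.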
“At Fairchild Imaging, we have been very impressed with the Denali-MC ISP's state-of-the-art locally adaptive tone mapping (LATM) functionality,” said Vern Klein, Director of Sales and Marketing at Fairchild Imaging. “I’ve seen first-hand how they can make a great sensor perform even better in its native WDR mode. Camera manufacturers will benefit from this technology, which provides high quality HDR functionality without requiring companion chips or additional hardware cost to support the algorithms.”
“Pinnacle’s new Denali-MC HDR ISP is a significant achievement addressing HDR video requirements in surveillance, monocular camera automotive markets and machine learning with its customization, artifact compensation, color accuracy and quantifiable high dynamic range of 100 dB,” said Paul Gallagher, image sensor industry veteran and futurist. “Camera system developers in these markets would benefit from utilizing these attributes of the Denali-MC ISP as a standalone ISP or integrate Pinnacle’s HDR IP blocks within their existing ISP.”
Pinnacle also publishes a video demo of its HDR capabilities (short version, long version):
Technavio Low-Light Imaging Market Report
BusinessWire: Technavio comes up with another unusual report, "Global Low Light Level Imaging Sensors Market 2017-2021." A few quotes:

"In 2016, the night vision devices segment accounted for close to 55% of the total revenues due to high adoption of low light level imaging sensors by the defense sector. The increasing focus on reducing road accidents drives the demand for night vision systems. Night vision systems are offered as built-in systems by Audi, BMW, and Toyota. The market is expected to grow at a CAGR of close to 16% during the forecast period.
The global low light level imaging sensors market by cameras contributed 25% of the total revenues in 2016. These sensors are used in applications such as home security cameras, small business monitoring, and infrastructure security.
In 2016, the global low light level imaging sensors market by optic lights accounted for around 13% of the total revenue. These are widely used in lighting, decorations, and mechanical inspections of obscure things. Optic lights save space and provide superior lighting, and are therefore used in vehicles. Low light level imaging sensors are crucial components of optic lights as these sensors intensify the range of light."
SMIC Reports Rising CIS Sales
SeekingAlpha publishes the SMIC Q2 2017 earnings call transcript, in which the foundry says it sees significant growth in its CIS process volume:

"By device, most of our year-on-year revenue growth came from CMOS image sensors, NOR Flash, application processors and power ICs.
And for the existing customer, for... the CMOS imager applications, we also see the recovery, yes."
Sony Kumamoto Earthquake Video
Imaging Resource publishes a video on the Sony Kumamoto fab earthquake damage and the recovery effort that brought the fab back to life in such a short time:

LG Announces Smartphone Camera with F1.6 Lens
BusinessWire: The upcoming LG V30 smartphone is said to have a camera module with an F1.6 lens, said to be the brightest in smartphones. I wonder what the effective aperture of the pixels in the sensor is, and whether it makes use of such a bright lens.

Sony Introduces 2.9um Pixel 1080p60 HDR Sensors
Sony introduces the IMX307 and IMX327 sensors featuring 2.9um pixels. The 1080p60 sensors support multiple-exposure and DOL-HDR and have a 12b ADC.

ESPROS CEO Interview
Autosens publishes an interview with ESPROS CEO Beat De Coi. A few quotes:

Q: Why did you decide to buck the trend in semiconductors to have your own foundry?
A: Simply, there was (or is) no silicon CMOS technology available which offered high quantum efficiency in NIR in combination with high performance CCD. Our backside illumination technology OHC15L offers what is needed for powerful LiDAR, ToF and, in general, ultrafast gated imagers.
Q: Your session on time-of-flight sensors is about next generation technology – what’s different about it from existing ToF?
A: - Quantum Efficiency is greater than 75% @905nm
- active ambient light suppression
- single pulse operation vs. continuous light modulation in previous generation systems
Q: What is the timescale of this technology coming to the market?
A: Early 2018
Stacking Technologies Overview
Phil Garrou's IFTLE 346 overviews image sensor stacking history since Sony introduced it in 2013:

Grenoble-Lyon Imaging Valley Guide
EETimes publishes a useful article, "Engineer's Guide to Imaging Valley," by Junko Yoshida. Most of France's imaging companies are concentrated in the Grenoble/Lyon area:

The article gives a very nice overview of the history and contemporary state of the French Imaging Valley.
Thanks to JD for the link!