Image Sensors World Go to the original article...
XDA Developers: Qualcomm has updated the specs of its Spectra 250, 280, and 380 ISPs in its Snapdragon 855, 845, 710, 675, and 670 SoCs. The new specs state that they all support 192MP image sensors, Multi-Frame Noise Reduction (MFNR), and Zero Shutter Lag (ZSL).

Thesis on Mechanical Stress Influence on Image Sensors
As somebody mentioned in comments to the curved sensor post, the University of Duisburg-Essen published a 2014 PhD thesis "Photodiodes and Image Sensors on Mechanically Flexible Ultra-Thin Silicon Chips-in-Foil" by Georgios Dogiamis. However, the experimental results show very little stress influence on dark current, much smaller than what the Marseille University group has measured.
2-sided Lensless Sensor
It still remains to be seen whether lensless image sensors can find a market beyond under-display fingerprint applications. Meanwhile, Tokyo Institute of Technology and Shizuoka University propose a 2-sided version of a lensless imager in a preprint paper "Super Field-of-View Lensless Camera by Coded Image Sensors" by Tomoya Nakamura, Keiichiro Kagawa, Shiho Torashima, and Masahiro Yamaguchi:

"A lensless camera is an ultra-thin computational-imaging system. Existing lensless cameras are based on the axial arrangement of an image sensor and a coding mask, and therefore, the back side of the image sensor cannot be captured. In this paper, we propose a lensless camera with a novel design that can capture the front and back sides simultaneously. The proposed camera is composed of multiple coded image sensors, which are complementary-metal-oxide-semiconductor (CMOS) image sensors in which air holes are randomly made at some pixels by drilling processing. When the sensors are placed facing each other, the object-side sensor works as a coding mask and the other works as a sparsified image sensor. The captured image is a sparse coded image, which can be decoded computationally by using compressive-sensing-based image reconstruction. We verified the feasibility of the proposed lensless camera by simulations and experiments. The proposed thin lensless camera realizes super field-of-view imaging without lenses or coding masks, and therefore can be used for rich information sensing in confined spaces. This work also suggests a new direction in the design of CMOS image sensors in the era of computational imaging."
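The compressive-sensing decoding step described in the abstract can be sketched with a toy 1D model: a random binary code (standing in for the drilled air-hole pattern) multiplexes a sparse scene into a short measurement, and an l1-regularized solver recovers it. Everything here — the sizes, the ISTA solver, and the sparsity level — is an illustrative assumption, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D coded-sensor measurement: y = Phi @ x, where Phi is a random
# binary code (the "air holes" pattern) and x is a sparse scene.
n, m = 64, 40                                   # scene pixels, measurements
Phi = rng.integers(0, 2, size=(m, n)).astype(float)
Phi -= Phi.mean()                               # remove the DC component, as is
                                                # common in lensless reconstruction

x_true = np.zeros(n)                            # sparse scene: 3 point sources
x_true[[5, 20, 47]] = [1.0, 0.7, 0.4]
y = Phi @ x_true                                # the coded (lensless) image

# ISTA: iterative soft-thresholding for min 0.5*||y - Phi x||^2 + lam*||x||_1
lam = 0.05
L = np.linalg.norm(Phi, 2) ** 2                 # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - Phi.T @ (Phi @ x - y) / L           # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print(sorted(np.flatnonzero(np.abs(x) > 0.1)))  # recovered source positions
```

With enough measurements relative to the scene's sparsity, the soft-thresholding iterations converge to the sparse scene, which is the property a coded-sensor camera of this kind relies on.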
Silicon Valley LiDAR Companies
Silicon Valley Business Journal publishes a gallery of local LiDAR company CEOs together with each company's data:

"Seven venture-backed startups from the Bay Area are among the more than 70 companies working on automotive lidar today. Two of them have the highest valuations in the world, according to PitchBook Data and others."
Sony Unveils 1-inch Sensor for 360 deg Cameras
Sony's 9MP IMX533 sensor has a 1:1 aspect ratio for the emerging market of 360 deg consumer cameras and camcorders. The new device features a 3.76um BSI pixel supporting the wider CRA of fish-eye lenses, a 14b ADC, and is fast enough to be useful in a video camera.

Update: It appears that Sony already announced the IMX533 sensor 1.5 years ago. Not sure why it is marked as a new announcement and "preliminary info" on the Sony site.
Curving Improves Image Sensor Dark Current
Arxiv.org paper "Curved detectors for astronomical applications: characterization results on different samples" by Lombardo Simona, Behaghel Thibault, Chambion Bertrand, Caplet Stephane, Jahn Wilfried, Hugot Emmanuel, Muslimov Eduard, Roulet Melanie, Ferrari Marc, Gaschet Christophe, and Henry David from CEA-Leti, Marseille and Grenoble Universities, France, and Caltech reports dark current reduction in curved sensors:

"The development of curved detectors allows to enhance the performances of the optical system used (telescope or astronomical instrument), while keeping the system more compact. We describe here a set of five curved CMOS detectors developed within a collaboration between CEA-LETI and CNRS-LAM. These fully-functional detectors 20 Mpix (CMOSIS CMV20000) have been curved to different radii of curvature and spherical shapes (both convex and concave) over a size of 24x32 mm^2. Before being able to use them for astronomical observations, we assess the impact of the curving process on their performances. We perform a full electro-optical characterization of the curved detectors, by measuring the gain, the full well capacity, the dynamic-range and the noise properties, such as dark current, readout noise, pixel-relative-non-uniformity. We repeat the same process for the flat version of the same CMOS sensor, as a reference for comparison. We find no significant difference among most of the characterization values of the curved and flat samples. We obtain values of readout noise of 10e− for the curved samples compared to the 11e− of the flat sample, which provides slightly larger dynamic ranges for the curved detectors. Additionally we measure consistently smaller values of dark current compared to the flat CMOS sensor. The curving process for the prototypes shown in this paper does not significantly impact the performances of the detectors. These results represent the first step towards their astronomical implementation."
ON Semi Automotive Sensors in Low Light Tests
ON Semi publishes a demo video of its automotive sensor performance in low light.

Xperi on Use Cases of 3D Camera in Smartphone
TechTheLead publishes an interview with Greg DeCamp, Xperi Fotonation's FaceSafe Product Manager, on the software used in the LG G8 ThinQ smartphone with a PMD ToF camera. Another application for the 3D camera is 3D Portrait:
Teledyne-e2v Presents 1.3MP TOF Sensor
e2v's BORA ToF sensor has 10um pixels and 1.3MP resolution:
- A single sensor enabling a 1.3MP 2D high definition image and several 3D depth resolutions
- Binning allows VGA and QVGA 3D depth maps
- HDR for lower noise and better accuracy at long distance
- Long distance range: (0.5m to 5m) or (5m to 50m)
- 3D detection of fast moving objects
Melexis Announces ToF Sensor That Uses Sony BSI Process
Melexis announces the MLX75027, the industry's first single-chip automotive-grade VGA ToF image sensor for applications such as in-car and exterior monitoring.

Apparently, the MLX75027 stacks a Sony back-illuminated pixel array manufactured in a 90nm process on top of a Melexis-designed SoC. "This is the next and perhaps most significant release in our product portfolio targeting automotive applications using ToF technology," commented Gualtiero Bagnuoli, Marketing Manager Optical Sensors. "Not only does it reinforce Melexis' leadership in this space, it provides Tier 1 and Tier 2 manufacturers with a single-chip solution that really could help revolutionise the automotive industry."
Sampling of the MLX75027 will start in July 2019.
Some of the key features of the new sensor:
- 1/2" optical format
- VGA (640 x 480) pixel array
- 10 x 10 µm DepthSense pixels
- Integrated microlenses
- Backside illumination (BSI) technology
- External QE 44.3% (850nm)
- External QE 25.5% (940nm)
- High distance accuracy due to programmable modulating frequencies up to 100 MHz
- AC Demodulation contrast >85 % (50 MHz)
- AC Demodulation contrast >69 % (100 MHz)
- Differential light source control with phase delay feedback loop
- Full resolution distance frame rate of max. 135 FPS (4 phases, Tint 300µs, 4-lane MIPI configuration @ 960 Mbps)
- 1.5ms phase readout time
- Up to 8 raw phases (or quads) per frame
- Per-phase statistics & diagnostics
- Continuous or triggered operation mode(s)
- CSI-2 serial data output, MIPI D-PHY, 1 clock lane, 2 or 4 data lanes (< 960 Mbps/lane)
- Built-in temperature sensor
- ROI selection
- Support for binning (2x2, 4x4, 8x8)
- Ambient operating temperature range of -40°C to +105°C
- AEC-Q100 qualified (grade 2)
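For context on how a CW ToF sensor of this kind turns raw phase samples into distance, here is the textbook 4-phase demodulation calculation. The formula is the generic one, not Melexis-specific, and the sample values below are simulated:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(a0, a90, a180, a270, f_mod):
    # Standard 4-phase (quad) demodulation: the four correlation samples
    # behave like cos(phi - theta_k), so phase and distance follow from:
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

def unambiguous_range(f_mod):
    # beyond this distance the measured phase wraps around
    return C / (2 * f_mod)

# Ideal samples for a target at 1.0 m with 50 MHz modulation
f_mod, d_true = 50e6, 1.0
phi = 4 * math.pi * f_mod * d_true / C
a0, a90, a180, a270 = (math.cos(phi - k * math.pi / 2) for k in range(4))
print(round(tof_distance(a0, a90, a180, a270, f_mod), 3))  # 1.0
print(round(unambiguous_range(f_mod), 3))                  # 2.998 m at 50 MHz
```

The unambiguous range c/(2·f_mod) is why programmable modulation frequency matters: 100 MHz gives only about 1.5 m of unambiguous range but finer depth resolution, while lower frequencies trade accuracy for reach.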
Hynix to Manufacture CIS in China
Digitimes reports that Hynix's 8-inch fab construction in Wuxi, China nears completion. The fab is aimed at non-memory products, including image sensors. The new fab capacity is 100,000 wafers per month.

The image is from the TheElec site.
LiDAR News: Waymo Offers its LiDAR for Sale, Innoviz Raises $100M
Venturebeat and Bloomberg report that Waymo has decided to offer its LiDAR for sale. This is a nice opportunity to learn more about the Google/Waymo internal design:

"Our Laser Bear Honeycomb is a best-in-class perimeter sensor. For those familiar with our self-driving cars, it’s the same sensor around the bumper of the vehicles. It features:"
- Wide field of view: Where some 3D lidar have a vertical field of view (FOV) of just 30°, the Honeycomb has a vertical FOV of 95°, plus a 360° horizontal FOV. That means one Honeycomb can do the job of three other 3D sensors stacked on top of one another.
- Multiple returns per pulse: When the Honeycomb sends out a pulse of light, it doesn’t just see the first object the laser beam touches. Instead, it can see up to four different objects in that laser beam’s line of sight (e.g., it can see both the foliage in front of a tree branch and the tree branch itself). This gives a rich and more detailed view of the environment, and uncovers objects that might otherwise be missed.
- Minimum range of zero: The Honeycomb has a minimum range of zero, meaning it can see objects immediately in front of the sensor. This enables key capabilities such as near object detection and avoidance.
CTech reports that Innoviz is in the process of raising a new round of around $100M bringing the total investment into the company to $182M. The company's valuation in this round is $500M, according to Reuters.
WSAU: Reuters publishes its view in "A chaotic market for one sensor stalls self-driving cars."
Sigma 40mm f1.4 Art review
Cameralabs Go to the original article...
The Sigma 40mm f1.4 Art is a short standard lens designed for full-frame sensors and available in Nikon, Canon, Sigma, Sony E and Leica L mounts. The 40mm length falls between the more common 35mm and 50mm models, but is actually closer to ‘normal’ coverage, with a natural perspective that’s ideal for general use. Thomas compares it to rivals in his in-depth review!…
The post Sigma 40mm f1.4 Art review appeared first on Cameralabs.
PMD CEO Interview
TechTheLead publishes an interview with PMD CEO Bernd Buxbaum on the advantages of ToF in the context of the LG G8 ThinQ phone announcement with a PMD module inside.

Ultrafast Imaging with Slow Camera
Phys.org: At the 2019 American Physical Society Meeting on March 4, 2019 in Boston, John Kolinski of EPFL in Lausanne, Switzerland, presents a new imaging technique known as the virtual frame technique. Developed with colleagues Samuel Dillavou and Shmuel Rubinstein of Harvard University, it enables ordinary digital cameras to capture millions of frames per second for several seconds while maintaining high spatial resolution. He also participates in a press conference describing the work.

The virtual frame technique uses a camera sensor's bit depth, the amount of information the sensor can obtain, to dramatically increase frame rate. Cracking and many other physical processes are binary; for example, material is either cracked or not cracked. Thus, only two intensity levels are needed to image a crack. An image sensor with a bit depth of 16 bits has more than 65,000 color or grayscale values, meaning it is possible to produce thousands of virtual frames during a single exposure. Using precise camera timing and a short pulse of intense light can increase frame rates even further. "In a recent study using the virtual frame technique, we obtain virtual frame rates exceeding 60 million per second using precise time-gating and a camera sensor with substantial bit-depth," Kolinski said.
Using the virtual frame technique, virtually any camera can directly image dynamic cracks as they form. Additionally, it can be used to study other fast physical processes that happen at interfaces between solids and fluids such as wetting that occurs when a liquid drop hits a material surface. The only requirement is that the solid be opaque, whether it's a construction material or soft substance such as a polymer. "Essentially any material could be imaged with the virtual frame technique," Kolinski said.
Arxiv.org paper "Virtual Frame Technique: Ultrafast Imaging with Any Camera" by Sam Dillavou, Shmuel M. Rubinstein, and John M. Kolinski reveals more details:
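The thresholding idea behind the virtual frame technique can be sketched for a monotonic binary process, i.e. a front that only turns pixels on. The crack scenario, timings, and gain below are invented for illustration, not taken from the paper:

```python
import numpy as np

# One exposure of a binary process: a "crack" front sweeps across 16 pixels
# during a 100-step exposure. Pixel i turns bright at time turn_on[i] and
# stays bright, so its integrated value is (n_steps - turn_on[i]) * gain.
n_steps, gain = 100, 100
turn_on = np.arange(10, 90, 5)                      # front arrival time per pixel
exposure = ((n_steps - turn_on) * gain).astype(np.uint16)

def virtual_frame(img, t, n_steps=100, gain=100):
    """Binary state of the scene at time t, decoded from one exposure.

    A pixel that turned bright at time <= t has accumulated at least
    (n_steps - t) * gain counts, so a simple threshold recovers the frame.
    """
    return img >= (n_steps - t) * gain

frame_20 = virtual_frame(exposure, 20)   # early in the exposure
frame_80 = virtual_frame(exposure, 80)   # late in the exposure
print(int(frame_20.sum()), int(frame_80.sum()))  # crack length grows: 3 15
```

Each threshold level slices the single exposure at a different effective time, which is why a 16-bit exposure can, in principle, yield thousands of such virtual frames.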
Smartphone Market News
IFNews quotes IHS Markit forecasting that Sony is expected to supply 150M sensors with 0.8um pixels. Currently, most of the 0.8um pixel supply comes from Sony and Samsung, with Omnivision expected to be ready in Q3 2019.

IHS Markit compares different triple-camera configurations found in modern smartphones:
IFNews posts Credit Suisse comparison of optical under-display fingerprint sensors with ultrasound ones:
Credit Suisse writes: "We also believe the proliferation of front-facing 3D-sensing for biometric identification will remain slow in 2019-20, given higher BOM cost and lack of other uses besides unlocking the device. Nevertheless, there will still be few new smartphone models adopting front-3D-sensing in 2019, as well as potential adoption for non-smartphone applications such as cleaning robot, smart doorbell, refrigerator, etc.
For the rear-3D-sensing with ToF camera, our checks indicate several Android smartphone brands have been developing this feature and could see a few flagship models adopting this in 2019, although there is doubt regarding the cost and end-application. However, we do not expect iPhone to adopt ToF camera in 2H19 models (it’s more likely to happen in 2H20), based on our checks on equipment delivery, supplier qualification, and capacity-build plans. We also do not expect 2H19 iPhone to adopt under-display fingerprint-sensing as there has been limited activity in the supply chain."
SiOnyx vs Hamamatsu Lawsuit
It came to my attention that, some time ago, SiOnyx sued Hamamatsu for using and patenting its technology:

"Plaintiff SiOnyx, LLC alleges that it approached defendant Hamamatsu Photonics K.K. (“HPK”) concerning a potential business partnership involving the technology. The parties entered into a nondisclosure agreement and SiOnyx provided HPK with certain technical information.
SiOnyx alleges that after the approach proved unsuccessful, HPK violated the nondisclosure agreement, obtained patents on SiOnyx’s technology without naming SiOnyx personnel as inventors, and infringed other patents held by SiOnyx. HPK contends that its engineers independently developed the technology contained in its patents and practiced by its products, and that it does not infringe SiOnyx’s patents."
The court granted SiOnyx's claims in part and denied them in part.
Why Velodyne LiDARs are Expensive
Silicon Valley Business Journal publishes a few photos from the Velodyne manufacturing line showing quite complex and labor-intensive calibration procedures. The rumor is that the high-end Velodyne LiDARs require about 90 hours of calibration and alignment during and after assembly. Assuming a technician labor cost of about $50 per hour (am I correct? is this a typical number for Silicon Valley?), that sets a floor for the unit price at about $4,500.

However, it appears that the Velodyne CEO disagrees with me:
Velodyne opened its San Jose "megafactory" in 2017 and now employs about 400 people there. Founder David Hall said he plans to make his products there for the foreseeable future.
"Most of the cost of our lidars are in the parts themselves and not the labor to assemble them," he said. "San Jose has a large and available skilled labor force that, while not price competitive with anywhere in Asia, does a higher quality job than we would get by assembling the units elsewhere."
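For reference, the back-of-the-envelope calculation from the post above; both numbers are the post's assumptions (a rumored calibration time and a guessed labor rate), not Velodyne figures:

```python
cal_hours = 90       # rumored calibration + alignment time per unit
hourly_rate = 50     # assumed Silicon Valley technician cost, $/hour
labor_floor = cal_hours * hourly_rate
print(labor_floor)   # 4500: calibration-labor floor on the unit price, $
```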
[Photo: Velodyne LiDAR puck mounted on a robotic arm for testing]
[Photo: Velodyne employees work on laser and detector alignment for VLS-128 LiDARs]
[Photo: Optical and mechanical accuracy checks for every component]
Update: Silicon Valley Business Journal publishes another article about Velodyne, "Velodyne LiDAR, the inventor: ‘We aren’t a one-trick pony’" (can be viewed on a mobile phone).
Injection of Nanoantennas into Eye Extends Vision to 980nm
Cell journal publishes the paper "Mammalian Near-Infrared Image Vision through Injectable and Self-Powered Retinal Nanoantennae" by Yuqian Ma, Jin Bao, Yuanwei Zhang, Zhanjun Li, Xiangyu Zhou, Changlin Wan, Ling Huang, Yang Zhao, Gang Han, and Tian Xue from the University of Science and Technology of China, Hefei, Anhui, the Chinese Academy of Sciences, and the University of Massachusetts:

"Mammals cannot see light over 700 nm in wavelength. This limitation is due to the physical thermodynamic properties of the photon-detecting opsins. However, the detection of naturally invisible near-infrared (NIR) light is a desirable ability. To break this limitation, we developed ocular injectable photoreceptor-binding upconversion nanoparticles (pbUCNPs). These nanoparticles anchored on retinal photoreceptors as miniature NIR light transducers to create NIR light image vision with negligible side effects. Based on single-photoreceptor recordings, electroretinograms, cortical recordings, and visual behavioral tests, we demonstrated that mice with these nanoantennae could not only perceive NIR light, but also see NIR light patterns. Excitingly, the injected mice were also able to differentiate sophisticated NIR shape patterns. Moreover, the NIR light pattern vision was ambient-daylight compatible and existed in parallel with native daylight vision. This new method will provide unmatched opportunities for a wide variety of emerging bio-integrated nanodevice designs and applications."
Unfortunately, the upconversion photon efficiency is quite low, between 1e-5 and 1e-6, depending on the 980nm source power.
The researchers publish a video explaining their paper in plain language:
Techinsights Image Sensor Slides
Techinsights publishes an "Image Sensor Subscription: Example Content" presentation with many interesting slides.

Sigma-Foveon New Sensor to be Manufactured by TSI Semiconductors
L-Rumors quotes Sigma CEO Kazuto Yamaki saying that the new full-frame Foveon sensor will be manufactured by the TSI Semiconductors foundry in Roseville, CA.

TechInsights Estimates Samsung Galaxy S10+ Camera Cost at 13.5% of BOM
Techinsights publishes a teardown report of the just-announced Samsung flagship phone, the Galaxy S10+. The cost of the 3 rear cameras and 2 front cameras is estimated at $56.50 out of a total BOM of $420.

SPAD Imagers at High Illumination
Arxiv.org publishes the University of Wisconsin-Madison 27-page paper "High Flux Passive Imaging with Single-Photon Sensors" by Atul Ingle, Andreas Velten, and Mohit Gupta:

"We propose passive free-running SPAD (PF-SPAD) imaging, an imaging modality that uses SPADs for capturing 2D intensity images with unprecedented dynamic range under ambient lighting, without any active light source. Our key observation is that the precise inter-photon timing measured by a SPAD can be used for estimating scene brightness under ambient lighting conditions, even for very bright scenes. We develop a theoretical model for PF-SPAD imaging, and derive a scene brightness estimator based on the average time of darkness between successive photons detected by a PF-SPAD pixel. Our key insight is that due to the stochastic nature of photon arrivals, this estimator does not suffer from a hard saturation limit. Coupled with high sensitivity at low flux, this enables a PF-SPAD pixel to measure a wide range of scene brightness, from very low to very high, thereby achieving extreme dynamic range. We demonstrate an improvement of over 2 orders of magnitude over conventional sensors by imaging scenes spanning a dynamic range of 10^6:1."
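The darkness-time estimator described in the abstract can be illustrated with a small simulation: a free-running SPAD with dead time produces inter-detection times equal to the dead time plus an exponential wait, and inverting the mean darkness time recovers the flux even deep in saturation. The dead time and photon counts below are arbitrary illustrative choices, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_spad_estimate(flux, dead_time=150e-9, n_photons=20_000):
    """Estimate photon flux (photons/s) from a free-running SPAD trace.

    After each detection the SPAD is blind for `dead_time`; the next
    detection then comes after an exponential wait with mean 1/flux.
    The mean *darkness* time (inter-detection time minus dead time)
    equals 1/flux with no hard saturation limit, so inverting it
    recovers scene brightness even when the detection rate itself is
    pinned near 1/dead_time.
    """
    darkness = rng.exponential(1.0 / flux, n_photons)
    inter_detection = dead_time + darkness
    return 1.0 / (inter_detection.mean() - dead_time)

# Fluxes spanning five orders of magnitude, same pixel, same estimator
for flux in (1e4, 1e6, 1e9):
    print(f"true {flux:.0e} -> estimated {pf_spad_estimate(flux):.2e}")
```

At 1e9 photons/s the detection rate is saturated near 1/dead_time (about 6.7e6 counts/s here), yet the estimator still reads the flux correctly, which is the paper's key point.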
Smartsens on its ISSCC 2019 Presentation
PRNewswire: SmartSens has presented a research paper, "A Stacked Global-Shutter CMOS Imager with SC-Type Hybrid-GS Pixel and Self-Knee Point Calibration Single-Frame HDR and On-Chip Binarization Algorithm for Smart Vision Applications," at ISSCC 2019 in San Francisco. SmartSens CEO Richard Xu was the first presenter in ISSCC's image sensor technology session, which attracted over 200 attendees from leading organizations in industry and academia.

Known as the "integrated circuit Olympics," ISSCC is said to be one of the most important global forums in the IC industry. "We are proud to be selected to present our paper at ISSCC. It's a tremendous honor for SmartSens as the committee recognizes our achievements in the field of CMOS image sensors," said Xu. "Behind this success is our commitment to developing cutting-edge image sensing technology and products for the emerging applications in the era of 5G, AI and machine vision."
In this paper, SmartSens unveils a new BSI global shutter sensor that has performance advantages such as high sensitivity, low noise, high shutter efficiency, HDR with improved PRNU performance and self-knee point calibration. The sensor integrates an ISP using stacked technology. The new sensor is said to be most suitable for smart vision applications, such as face identification, machine vision, 3D imaging and AI.
TowerJazz GS Pixels on 65nm Platform
TowerJazz blog post describes the advantages of its 65nm platform, including small global shutter pixels:

"World’s previous smallest GS pixel, also achieved by TowerJazz, was 2.8 µm in 2016 and was based on its 110nm platform. Using the 65nm technology node and optimization of our light pipe technology enabled a further reduction of pixel size to 2.5um while achieving these excellent performance characteristics."
Yole CCM Industry Report
Yole Développement publishes a report "Status of the Camera Module Industry 2019 – Focus on Wafer Level Optics." A few quotes:

"The camera module industry has reached a new stage in its development. With $27.1B of global revenues generated in 2018, Yole Développement expects it to maintain a 9.1% Compound Annual Growth Rate (CAGR) for the next 5 years. This industry, which covers image sensors, lenses, voice coil motors, illuminators and camera assemblies, will therefore reach $45.7B by 2024. The overall growth is a combination of mega trends. The main upward driver is the increasing number of cameras in products such as smartphones and cars. 3D sensing cameras are part of this trend, invading mobile devices, computing and automotive industries. If the nature of camera module making is unchanged with 3D sensing, illuminator submodules create a new market area. This brings new technologies, such as wafer level optics (WLO), along with it. The market for devices involved in illumination for 3D sensing accounted for $720M in 2018 and will expand ninefold within five years, reaching $6.1B by 2024. This is helping compensate for the shipment volume slowdown in smartphones, computers, tablets and digital cameras.

While the complexity and cost of each individual camera is still increasing on average, reaching $5.5 per unit, we are now seeing more diversity. In recent years the distribution of resolution, optical format and camera type was only heading towards uniformly high specifications. But in 2018 the smartphone market has evolved quite dramatically. In an attempt to work around the increasing cost of imaging, mid-range phones have been implementing 2-5Mp formats that were previously fading away.
This new equilibrium between volume, cost and specification is lowering Yole Développement’s forecast with respect to the previous 2017 report, but overall the direction of the industry remains highly attractive."
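As a sanity check on the quoted numbers, $27.1B growing at a 9.1% CAGR over the six years from 2018 to 2024 indeed lands on Yole's figure:

```python
revenue_2018 = 27.1          # $B, camera module industry in 2018 (Yole)
cagr = 0.091                 # 9.1% compound annual growth rate
revenue_2024 = revenue_2018 * (1 + cagr) ** 6   # 2018 -> 2024 is 6 years
print(round(revenue_2024, 1))                   # 45.7, matching the report
```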
Facial Recognition Explained
Youtube Function channel publishes a nice video explaining how facial recognition fuses traditional algorithms with neural networks.

Other interesting videos on this channel cover eye-tracking observations of soccer players, artists and kids, people on a first date, etc.
IISW 2019 Pre-Registration Begins
2019 International Image Sensor Workshop (IISW) pre-registration starts now! The Workshop is to be held in Snowbird, Utah on June 23-27.

Registration is limited to approximately 160 attendees on a first-come, first-served basis. Registration will be guaranteed for presenters, but they are still required to register. Past experience shows that registration is often filled to capacity within a few days’ time. Registration forms are available on the web.
OIS with InvenSense-TDK and Cambridge Mechatronics Devices
BusinessWire: TDK-owned InvenSense announces the launch of the industry’s first Cambridge Mechatronics SMA (Shape Memory Alloy) OIS software controller specifically designed to leverage the Qualcomm Sensor Execution Environment for Android flagship smartphones. The solution leverages TDK SMA actuators and TDK’s latest 6-axis CORONA premium motion sensors to provide a software controller based on CML’s innovative SMA image stabilization designs and control logic.

“Unlike other OIS solutions that require a dedicated OIS controller chip, the advantage of TDK’s SMA software controller is that it avoids the need for such a dedicated OIS controller chip,” said Lars Johnsson, senior director of Product Marketing, InvenSense, a TDK Group Company. “This reduces the cost and complexity of developing and commercializing SMA OIS solutions and enables OEMs to accelerate their next-generation imaging flagship launches on select Snapdragon mobile platforms that support the Qualcomm Sensor Execution Environment.”
Sony FE 135mm f1.8 GM review
The Sony FE 135mm f1.8 G Master is a telephoto prime lens for its Alpha series of full-frame mirrorless cameras, delivering a classic telephoto view with minimal distortion and very shallow depth-of-field effects. I tested a final production sample for my review.…
The post Sony FE 135mm f1.8 GM review appeared first on Cameralabs.