Image Sensors World Go to the original article...
Infineon's quarterly Investor Presentation shows the company's forecast of ToF sensor market:
Sohu (via IFNews), Zaotech: Haolei Liu, Director of ToF Marketing in GigaDevice's Sensor Division, presents the company's plans to develop ToF sensors and optical fingerprint sensors for smartphones:
"GigaDevice’s innovative ToF solution adopts a special process, has a higher QE, can effectively reduce power consumption and system cost, and can support both the 1350nm-1550nm long wavelength band and the 940nm wavelength band. Outdoors have excellent performance, which meets the needs of the future screen direction.
Liu Haolei also pointed out: “In our opinion, iToF and dToF will be parallel for a period of time. Although the resolution is a shortcoming of dToF, in the long run, we believe that dToF has a lot of room for growth because of the dToF solution. Not long after its launch, the industrial chain is not mature, which also means that it has a lot of room for improvement.
We believe that the potential of ToF needs to be promoted by industry chain ecological partners. It is certain that this technology will have higher and higher requirements for hardware . For example, resolution. In the dToF solution, if the resolution can be significantly improved, the application of products will become more and more extensive.”
GigaDevice's new ToF sensor is said to have QVGA resolution and a QE of 65% at 940nm or 50% at 1350nm, "which is nearly double that of the ToF chip based on silicon technology."
MEMSensor: The company also develops an under-OLED-display optical fingerprint sensor based on a new α-Si process, named GSL7253. It is said to have a sensitive area of 20x30mm2, a QE of 80%, and a thickness of only 0.3mm:
A number of eBay sellers offer "research" 8-inch image sensor wafers for $25 apiece. Possibly, they could serve as a nice souvenir:
FinanceSecond, NetEase: GalaxyCore's listing on the Science and Technology Innovation Board of the Shanghai Stock Exchange has been approved by the authorities. The IPO offers a 15% share of the company priced at 7,428,830,300 RMB (about $1.124B), valuing the company at about $7.5B.
The proceeds will be invested in CIS R&D and the 12-inch BSI wafer processing facility that GalaxyCore built in the Lingang New Area of the China (Shanghai) Pilot Free Trade Zone.
"Through the construction of some 12-inch BSI wafer back-end production lines, 12-inch wafer manufacturing pilot lines, some OCF manufacturing and back grinding and cutting production lines, the company has realized the transition from the Fabless model to the Fab-Lite model."
BusinessWire: As a public company, Velodyne reports its quarterly results, giving good food for thought about the LiDAR market as a whole:
BusinessWire: Ambarella introduces the CV28M camera SoC, the latest in the CVflow family, combining image processing, high-resolution video encoding, and CVflow computer vision processing in a single, low-power design. The CV28M’s efficient AI architecture provides the flexibility to enable a new class of smart edge devices for applications including smart home security, retail monitoring, consumer robotics, and occupancy monitoring.
“All around us, devices are becoming smarter, and with our newest CV28M SoC, our customers can develop a new generation of intelligent sensing cameras for a variety of new applications,” said Chris Day, VP of marketing and business development at Ambarella. “In privacy-sensitive applications—such as monitoring retail stores, workplaces, rental properties, or the elderly at home—edge-based AI processing can support intelligent monitoring and fast decision-making without the requirement to record or stream video to the cloud.”
ON Semi publishes a promotional video about the use of its SiPMs in dToF laser rangefinders:
Soitec says that its "Imager-SOI [wafer] product line is designed specifically for fabricating front-side imagers for near-infrared (NIR) applications including advanced 3D image sensors."
Waterloo Institute for Nanotechnology is going to design sub-micron pixels for X-Ray ptychography. In regular X-ray sensors, the pixel pitch is 100-200um or more.
IC-Insights forecasts a slight growth of image sensor market this year, followed by a 12% growth in 2021:
Samsung unveils its first iToF product, the ISOCELL Vizion 33D:
"Featuring 4-tap pixels, the Samsung ISOCELL Vizion 33D delivers precise and swift depth sensing capabilities for next-level 3D applications.
Enabling pro-grade shots with bokeh effects or accurate 3D object images, the ToF (Time-of-Flight) sensor is optimized to provide best-in-class photography and AR/VR experiences.
To enable precise depth measurement of fast-moving objects, the ISOCELL Vizion 33D features a 4-tap demodulation system and supports frame rate of up to 120fps. Each pixel in the sensor can receive four phase signals simultaneously (0°, 90°, 180°, and 270°), which means it can generate a depth image with just a single frame. The ISOCELL Vizion 33D can capture moving objects with significant reduction of motion artifacts.
In both indoor and outdoor conditions, the sensor can detect the depth of an object within up to 5m with high accuracy. ISOCELL’s pixel technology, coupled with high resolution, enables the sensor to accurately separate objects from the background with 3D bokeh effect.
Deep Trench Isolation technology (DTI) maximizes isolation between pixels to reduce crosstalk, while Backside Scattering Technology (BST) enhances the sensor’s quantum efficiency. With high-precision depth images, the ISOCELL Vizion 33D delivers next-level 3D applications, such as facial authentication for payment services.
With a total power consumption of under 400mW for both IR illuminator and the ToF sensor, the 33D makes it possible for users to enjoy powerful 3D features, such as AR games and video bokeh, throughout the day."
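The four phase samples Samsung describes map to depth through standard continuous-wave iToF demodulation math. Below is a minimal sketch of that calculation (generic textbook formulas with an assumed tap convention and modulation frequency, not Samsung's actual pipeline):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(a0, a90, a180, a270, f_mod):
    """Depth from the four phase-shifted correlation samples of a CW-iToF pixel.

    a0..a270 are the accumulated charges of the 0/90/180/270-degree
    demodulation taps; f_mod is the illumination modulation frequency in Hz.
    Assumes tap k measures A*cos(phase - theta_k) + offset.
    """
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)
```

With a single modulation frequency the range is unambiguous only up to c/(2·f_mod) (7.5 m at 20 MHz); real sensors often combine several frequencies to extend it.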
Zeiss Visioner 1 digital microscope uses SD Optics' Micro-mirror Array Lens System (MALS) technology to achieve digitally-extended depth of focus up to 69mm:
"ZEISS Visioner 1 revolutionizes the world of optical inspection and documentation. Driven by the unique Micro-mirror Array Lens System (MALSTM technology), enables for the first time, real-time all-in-focus imaging – first time, every time.
Using a micro-mirror array lens system (MALS™) enables us to generate “virtual” lenses with distinctly different curvatures, thus different focus planes. This is achieved by changing the orientation of each individual micro-mirror in an orchestrated way.
Re-shaping the curvature of this “virtual” lens at speed enables ultra-fast focusing and real-time all-in-focus imaging and documentation."
IEDM publishes its 2020 program with many image sensor-related papers.
PRNewswire: 3-year-old FMCW LiDAR startup Aeva announces a reverse merger with InterPrivate Acquisition Corp. to be listed on the NYSE at a $2.1B valuation. The transaction is to provide up to $363M in gross proceeds, comprised of InterPrivate's $243M held in trust and a $120M fully committed common stock PIPE at $10.00 per share, including investments from Adage Capital and Porsche SE.
The combined company is expected to have an estimated post-transaction equity value of approximately $2.1B and to be listed on the NYSE under the ticker symbol AEVA following the anticipated transaction close in Q1 2021.
BusinessKorea reports: "Sony’s image sensor sales are predicted to fall from 240 billion yen in the second quarter of this year to 130 billion yen in the second quarter of next year.
This is leading to an opportunity for Samsung. The latecomer in the industry has focused on Xiaomi, Vivo and others rather than Huawei.
Samsung is aiming to rise to the top of the global image sensor market by 2030. Last year, Samsung's share of the market was 18.1 percent and Sony's was 53.5 percent."
Livox announces two new products, Mid-70 and AVIA:
Voyant Photonics President Peter Stern talks about Apple LiDAR:
"The iPhone time-of-flight LiDAR, probably built with the same amazing SPAD array used in the iPad, coupled with a VCSEL array for illumination, is an engineering marvel. It’s absolute magic.
After working on LiDAR three decades ago that could detect telephone lines kilometers away from a fast-moving, low-flying helicopter, I have been waiting for this kind of LiDAR magic for a long time.
At Voyant, we have a different approach. No VCSELs, no SPADs. Adapting microscopic optical components from datacom chips to active sensing, we have created a coherent pixel array for LiDAR, similar to the ubiquitous CMOS image sensors found everywhere. Each pixel both transmits and receives light at 1550 nm wavelengths.
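The coherent-pixel approach Stern describes recovers range from a beat frequency rather than from photon arrival times. A minimal sketch of the generic FMCW range equation (illustrative chirp parameters, not Voyant's actual design):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz, chirp_bandwidth_hz, chirp_period_s):
    """Range of a static target from the measured beat frequency of an
    FMCW (frequency-modulated continuous-wave) lidar.

    The transmitted optical frequency is swept linearly over
    chirp_bandwidth_hz during chirp_period_s; mixing the return with the
    local oscillator yields a beat whose frequency is proportional to the
    round-trip delay.
    """
    slope = chirp_bandwidth_hz / chirp_period_s  # Hz per second of sweep
    round_trip_s = f_beat_hz / slope             # round-trip delay
    return C * round_trip_s / 2

# E.g. a 1 GHz sweep over 10 us: a target at 150 m delays the return by
# ~1 us and produces a ~100 MHz beat.
```

A moving target additionally Doppler-shifts the beat, which is how FMCW systems measure velocity per pixel as well as range.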
APL Photonics paper "An optical chip for self-testing quantum random number generation" by Nicolò Leone, Davide Rusca, Stefano Azzini, Giorgio Fontana, Fabio Acerbi, Alberto Gola, Alessandro Tontini, Nicola Massari, Hugo Zbinden, and Lorenzo Pavesi from University of Trento, FBK, and University of Geneva describes how photon shot noise-based RNG is built:
"We present an implementation of a semi-device-independent protocol of the generation of quantum random numbers in a fully integrated silicon chip. The system is based on a prepare-and-measure scheme, where we integrate a partially trusted source of photons and an untrusted single photon detector. The source is a silicon photomultiplier, which emits photons during the avalanche impact ionization process, while the detector is a single photon avalanche diode. The proposed protocol requires only a few and reasonable assumptions on the generated states. It is sufficient to measure the statistics of generation and detection in order to evaluate the min-entropy of the output sequence, conditioned on all possible classical side information. We demonstrate that this protocol, previously realized with a bulky laboratory setup, is totally applicable to a compact and fully integrated chip with an estimated throughput of 6 kHz of the certified quantum random bit rate."
MDPI paper "How Good Are RGB Cameras Retrieving Colors of Natural Scenes and Paintings?—A Study Based on Hyperspectral Imaging" by João M. M. Linhares, José A. R. Monteiro, Ana Bailão, Liliana Cardeira, Taisei Kondo, Shigeki Nakauchi, Marcello Picollo, Costanza Cucci, Andrea Casini, Lorenzo Stefani, and Sérgio Miguel Cardoso Nascimento from University of Minho, University of Lisbon, Portuguese Catholic University (Portugal), Toyohashi University of Technology (Japan), and Istituto di Fisica Applicata “Nello Carrara” del Consiglio Nazionale delle Ricerche (Italy), describes an interesting experiment:
"RGB digital cameras (RGB) compress the spectral information into a trichromatic system capable of approximately representing the actual colors of objects. Although RGB digital cameras follow the same compression philosophy as the human eye (OBS), the spectral sensitivity is different. To what extent they provide the same chromatic experiences is still an open question, especially with complex images. We addressed this question by comparing the actual colors derived from spectral imaging with those obtained with RGB cameras. The data from hyperspectral imaging of 50 natural scenes and 89 paintings was used to estimate the chromatic differences between OBS and RGB. The corresponding color errors were estimated and analyzed in the color spaces CIELAB (using the color difference formulas ΔE*ab and CIEDE2000), Jzazbz, and iCAM06. In CIELAB the most frequent error (using ΔE*ab) found was 5 for both paintings and natural scenes, a similarity that held for the other spaces tested. In addition, the distribution of errors across the color space shows that the errors are small in the achromatic region and increase with saturation. Overall, the results indicate that the chromatic errors estimated are close to the acceptance error and therefore RGB digital cameras are able to produce quite realistic colors of complex scenarios."
IEEE JSSC publishes an open-access paper "Large-Area, Fast-Gated Digital SiPM With Integrated TDC for Portable and Wearable Time-Domain NIRS" by Enrico Conca, Vincenzo Sesta, Mauro Buttafava, Federica Villa, Laura Di Sieno, Alberto Dalla Mora, Davide Contini, Paola Taroni, Alessandro Torricelli, Antonio Pifferi, Franco Zappa , and Alberto Tosi from Politecnico di Milano.
"We present the design and characterization of a large-area, fast-gated, all-digital single-photon detector with programmable active area, internal gate generator, and time-to-digital converter (TDC) with a built-in histogram builder circuit, suitable for performing high-sensitivity time-domain near-infrared spectroscopy (TD-NIRS) measurements when coupled with pulsed laser sources. We used a novel low-power differential sensing technique that optimizes area occupation. The photodetector is a time-gated digital silicon photomultiplier (dSiPM) with an 8.6-mm 2 photosensitive area, 37% fill-factor, and ~300 ps (20%–80%) gate rising edge, based on low-noise single-photon avalanche diodes (SPADs) and fabricated in 0.35- μm CMOS technology. The built-in TDC with a histogram builder has a least-significant-bit (LSB) of 78 ps and 128 time-bins, and the integrated circuit can be interfaced directly with a low-cost microcontroller with a serial interface for programming and readout. Experimental characterization demonstrated a temporal response as good as 300-ps full-width at half-maximum (FWHM) and a dynamic range >100 dB (thanks to the programmable active area size). This microelectronic detector paves the way for a miniaturized, stand-alone, multi-wavelength TD-NIRS system with an unprecedented level of integration and responsivity, suitable for portable and wearable systems."
Korea IT News: SK Hynix introduces its ToF image sensor at the “SEDEX 2020” exhibition this week. The sensor has 10µm BSI pixels and QVGA resolution in a 1/4.5-inch format. It is still in development and its release date has not been disclosed.
The ToF sensor is part of SK Hynix's plan to grow its image sensor business. The company opened an R&D center in Japan last year that primarily focuses on image sensor technology. It also reorganized its lineup of image sensors for smartphone cameras this year, supplying sensors with higher pixel counts and smaller pixel sizes to major smartphone manufacturers. It is currently working on high-pixel-count sensors such as 48MP and 64MP devices.
SK Hynix Head of CIS ISP Taehyun (Ted) Kim publishes a post "The Visual Evolution & Innovation of Image Sensors." A few quotes:
"...this trend for high pixels in CIS is expected to face technical difficulties soon, and the innovation for a high level of functions centered on the ISP will be in full swing.
This is due to the limits of CIS pixel miniaturization imposed by the diffraction limit. Current semiconductor technology can reduce the critical dimension of electric circuits to a few nanometers; however, since the amount of received light decreases as the pixel size decreases, the sensitivity and signal level drop, resulting in a decline in SNR and degraded image quality.
Currently, SK hynix’s CIS has built-in image processing functions such as phase detection auto focus (PDAF), Quad pixel processing, and high dynamic range (HDR) processing, and new functions are constantly being added to it.
Currently, SK hynix’s CIS, mainly the Black Pearl product line, is widely used in smartphone cameras and the application field is expected to expand to various fields such as bio, security, and autonomous vehicles.
In the future, CIS is expected to evolve into an information sensor that supports advanced additional functions, without being limited to image quality improvement. SK hynix’s stack sensor is already capable of embedding a simple AI hardware engine inside the ISP on the lower substrate, based on the advanced semiconductor process. Based on this, SK hynix is currently developing new machine learning-based technologies such as super resolution, color restoration, face recognition, and object recognition."
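The pixel-shrink tradeoff Kim describes is easy to quantify in the shot-noise limit, where collected signal scales with pixel area. A minimal sketch (illustrative numbers; read noise and dark current are ignored):

```python
import math

def shot_noise_snr_db(pixel_pitch_um, photons_per_um2):
    """Shot-noise-limited SNR of a pixel: collected signal scales with
    pixel area (pitch squared), photon shot noise grows as the square root
    of the signal, so SNR is sqrt(signal) and improves linearly with pitch.
    Read noise and dark current are deliberately ignored."""
    signal = photons_per_um2 * pixel_pitch_um ** 2
    return 20 * math.log10(math.sqrt(signal))

# Halving the pitch from 1.0 um to 0.5 um quarters the signal and costs
# ~6 dB of SNR at the same illumination, which is why shrinking pixels
# pushes the quality burden onto the ISP.
```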
OSA Optics Express paper "Snapshot multispectral imaging using a pixel-wise polarization color image sensor" by Shuji Ono, Fujifilm, uses polarization to separate multispectral filter bands:
"This study proposes a new imaging technique for snapshot multispectral imaging in which a multispectral image was captured using an imaging lens that combines a set of multiple spectral filters and polarization filters, as well as a pixel-wise color polarization image sensor. The author produced a prototype nine-band multispectral camera system that covered from visible to near-infrared regions and was very compact. The camera’s spectral performance was evaluated using experiments; moreover, the camera was used to detect the freshness of food and the activity of wild plants and was mounted on a vehicle to obtain a multispectral video while driving."
The journal Sensing and Imaging publishes a paper "On Wide Dynamic Range Tone Mapping CMOS Image Sensor" by Waqas Mughal and Bhaskar Choubey from the University of Southampton, UK, and Universität Siegen, Germany.
"The dynamic range of a natural scene often covers over 6 decades of intensity from bright to dark areas. Typical image sensors, however, have limited ability to capture this dynamic range available in nature. Even after designing specific wide dynamic range (WDR) image sensors, displaying them on conventional media with limited ability requires computationally complex tone mapping. This paper proposed a novel CMOS pixel which can capture and perform tone mapping during data acquisition. The pixel requires a reference voltage to generate tone mapped response. A number of different reference signals are proposed and generated which can perform WDR operation. Nevertheless, fixed pattern noise (FPN) effects the performance of these pixel. A pixel model with simple parameter extraction procedure is described for a typical tone mapping operator. This model is then used to obtain a simple procedure for pixel calibration leading to reduced FPN. The new proposed pixel response is able to capture upto 6 decades of light intensity and reported FPN correction procedure produces 1% of FPN contrast error."
NikkeiAsia: "I believe the next megatrend [after mobile phones] will be mobility," said Sony Chairman and President Kenichiro Yoshida as he unveiled the Vision-S concept car at the CES tech show in the U.S. in January.
The Vision-S will have 33 sensors, including image sensors, a Sony specialty. Izumi Kawanishi, Sony's SVP who is shepherding development of the car, said the sensors "give passengers and pedestrians a sense of security thanks to the 360-degree vision it provides."
NikkeiAsia says that Sony controls about 70% of the global market for the image sensors used in smartphone cameras, but its share for automotive image sensors is only 9%. The Vision-S is an exploratory effort by the company as it taps into a market led by ON Semi. According to NikkeiAsia, ON Semi has been producing automotive image sensors for over 50 years (since 1970?) and controls 45% of the market.
Nikkei reports that Sony and Omnivision have been granted licenses by the U.S. government to resume some shipments to China's Huawei.
"What we learned was that some... image sensor related suppliers are receiving some licenses from the U.S. government as those components are viewed as less related to cybersecurity concerns, and Sony is among those who received approval," an unnamed chip industry executive told Nikkei Asia.
Light Co. announces its automotive 3D depth Clarity platform:
"Lidars do a great job, but they don’t do the whole job. Their range is often limited to ~250 meters. Class 8 trucks need at least 400+ meters to come to a complete stop, safely. Lidar as well as monocular camera-based systems can get confused as to whether they’re seeing a person painted on the side of a truck or an actual person.
Clarity is a camera-based perception platform that’s able to see any 3D structures in the road from 10 centimeters to 1000 meters away — three times the distance of the best-in-class lidar with 20 times the detail."
“There is nothing else like the Clarity platform with its combination of depth range, accuracy, and density per second. It enables a new generation of vehicles that can be made safer, without having to compromise on cost, quality, or reliability,” said Prashant Velagaleti, Chief Product Officer of Light. “Rather than only minimizing the severity of a collision, having high fidelity depth allows any vehicle powered by Clarity to make decisions that can avoid accidents, keeping occupants safe as well as comfortable.”
Sony reports its quarterly results and updates on its image sensor business.
AWE publishes a panel discussion "AWE Nite NYC: Will the iPhone LiDAR Change AR Forever? With Snap, Niantic, Occipital."
BusinessWire: Infineon and pmdtechnologies developed a 3D ToF sensor which is claimed to outperform other solutions in the market and aims at a wider spectrum of consumer applications. The 3D sensor market for smartphone rear cameras is expected to grow to more than 500M units per year by 2024.
“The latest 3D image sensor from Infineon and pmdtechnologies enables a new generation of applications”, says Philipp von Schierstaedt, SVP Infineon. “It aims to create most immersive and smarter AR experiences as well as better photography results with a faster autofocus in low-light conditions or more beautiful night mode portraits based on picture segmentation. This latest chip development is truly setting standards when it comes to improvements of the imager, the driver and processing as well as unprecedented ten meters long range capabilities at lowest power.”
The new chip allows integration into miniaturized camera modules, accurately measuring depth at short and long range for AR while meeting low-power requirements with more than 40% power saving on the imager.
Furthermore, seamless augmented-reality sensing is achieved, allowing high-quality 3D depth capture at distances of up to 10m (at reduced resolution) without losing resolution at shorter range. Always-on applications such as mobile AR gaming can greatly benefit from the small power budget of the new sensor. For applications such as 3D scanning for room and object reconstruction, or 3D mapping for furniture planning and other design applications, the sensor doubles the measuring range compared with the current solutions in the market.
Volume delivery of the chip starts in Q2 2021; demo kits are already available. The recorded livestream from the official press event is available here: https://livestream.com/infineontechnologies/real3
GlobeNewswire: STMicroelectronics extends its portfolio of FlightSense ToF sensors with a 64-zone device. This first-of-its-kind product comprises a 940nm VCSEL light source, a SoC sensor integrating a VCSEL driver, the receiving array of SPADs, and a low-power 32-bit MCU core and accelerator running firmware. The VL53L5 retains the Class 1 certification of all ST’s FlightSense sensors and is fully eye-safe for consumer products.
“The multi-zone VL53L5 FlightSense direct Time-of-Flight sensor uses our most advanced 40nm SPAD production process to offer outstanding 4m ranging performance and up to 64 ranging zones that help an imaging system build a detailed spatial understanding of the scene,” said Eric Aussedat, GM of ST’s Imaging Division. “Delivering 64x more ranging zones than previously available, the VL53L5 offers radical performance improvement in laser autofocus, touch-to-focus, presence detection, and gesture interfaces while helping developers create even more innovative imaging applications.”
With a vertically integrated manufacturing model for its FlightSense sensors, ST builds its SPAD wafers on a 40nm proprietary silicon process in the Company’s 12” wafer plant at Crolles, France before assembling all of the module components in ST’s back-end plants in Asia. This approach delivers exceptional quality and reliability to customers.
Packaged in a 6.4 x 3.0 x 1.5 mm module, the VL53L5 integrates both transmit and receive lenses into the module design and expands the FoV of the module to 61 degrees diagonal. This wide FoV is especially suited to detecting off-center objects and ensuring perfect autofocus in the corners of the image. In the ‘Laser Autofocus’ use case, the VL53L5 gathers ranging data from up to 64 zones across the full FoV to support “Touch to Focus” and many other features.
Further flexibility is available via the SPAD array, which can be set to favor spatial resolution, where it outputs all 64 zones at up to 15fps, or to favor maximum ranging distance, where the sensor outputs 4×4/16 zones at a frame rate of 60fps.
ST’s architecture can automatically calibrate each ranging zone and direct Time-of-Flight technology allows each zone to detect multiple targets and reject reflection from the cover-glass.
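The per-zone multi-target capability rests on the basic dToF relation between photon return time and distance; a minimal sketch (generic math — the example timings are rough illustrative figures, not ST specifications):

```python
C = 299_792_458.0  # speed of light, m/s

def dtof_distance(return_time_s):
    """Direct time-of-flight range: the photon travels to the target and
    back, so distance is half the round-trip time times the speed of light."""
    return C * return_time_s / 2

# A target at the sensor's 4 m limit returns photons after roughly 27 ns,
# while a cover-glass reflection a millimeter away returns after a few
# picoseconds; separating such peaks in a zone's timing histogram is what
# multi-target detection and cover-glass rejection amount to.
```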
Customer development with the VL53L5 can build on ST’s strong relationships with key smartphone and PC platform suppliers as ST has pre-integrated the sensor onto these platforms. The VL53L5 is in mass production with millions of units already shipped to leading wireless and computer manufacturers.
Copyright © 2026 F4news