Archives for November 2018

Funding News: All Money Invested in Automotive Startups

AdaSky, an Israeli developer of far-infrared (FIR) cameras for autonomous vehicles, has secured $20M from a lead investor, Sungwoo Hitech, a Korean automotive supplier. The investment is part of a larger funding round and will enable the company to expand globally. AdaSky's solution, Viper, is a complete, all-in-one solution for autonomous vehicles, combining FIR camera technology with fusion-ready, deep-learning computer vision algorithms.

“Viper is the smallest, highest-resolution thermal camera for autonomous vehicles on the market. We strongly believe that AdaSky’s technology will enable 24/7 sight and perception for vehicles and put us all on the path to fully autonomous driving,” said Myung-Keun Lee, Chairman & Co-CEO of Sungwoo Hitech.


EETimes: Solid-state LiDAR company Sense Photonics, founded in 2016 and based in North Carolina, raises $14.4m. The company previously raised $2.8m in 2016. Sense Photonics plans to use the money to address the autonomous vehicle, UAV and industrial automation markets.

The company's patent applications reveal a design based on a VCSEL array and an unspecified ToF sensor.


BusinessWire: Korean SOS Lab (Smart Optical Sensors Lab) has raised $6m for its automotive LiDAR. The lead investor in this series A round is Mando, a top-tier automotive supplier.

BusinessWire: In spite of rumors about technological troubles, Quanergy announces its Series C funding at a valuation exceeding $2 billion, with an unnamed global top-tier fund as the lead investor. The Series C financing is said to take the company well beyond its planning horizons to cash-flow and operating breakeven, and keeps the company’s IPO process on track.

"Demand for Quanergy’s solutions continues to be strong, with revenue increasing rapidly and bookings exceeding forecast. Product and software development continues at a brisk pace. Substantial orders for the company’s S3 solid-state sensor were fulfilled this year. Rapid innovation continues to increase the field of view (FoV) and range for the S3 in outdoor environments.

Since the end of 2017, Quanergy has had an annual production capacity of one million solid-state sensors at its fully automated production line in Silicon Valley. The completion of this round of financing will further enhance the company's capital reserve to accelerate innovation and commercialization of its hardware, software and smart sensing solutions, and the construction of ultra-large-scale production facilities."

"With our advanced technology, we have reduced the price of solid-state LiDAR to a few hundred dollars in volume,” said Louay Eldada, CEO of Quanergy. “Our third-generation solid-state LiDAR is being developed to fully integrate the sensor on a single chip. For Quanergy, the most important focus at the moment is to speed up the production ramping and prove our strength with mass-produced products."


PRNewswire: Israeli Guardian Optical Technologies announces an additional investment of $2.5m. The new investment is part of a pre-B round totaling $5.6M that will be used to expand the R&D team to serve the company's expanding customer base, as well as to support customers' projects.

Guardian Optical's sensor empowers car manufacturers to build safer cars, at a lower cost, by eliminating the need to install multiple sensors throughout the car. The patent-pending sensor technology provides real-time information on occupancy status based on three interconnected layers of information: video image recognition (2D), depth mapping (3D), and micro- to macro-motion detection. The sensor detects the location and physical dimensions of each occupant and can identify the difference between a person and an inanimate object.

Sensation Cooperation Project in Europe

SENSATION, a project within the EUREKA PENTA Cluster managed by the AENEAS Industry Association, is developing innovative image capture, transmission and processing technologies for high-end Machine Vision and Broadcast applications. The project focuses on key requirements common to all professional vision-based applications, namely: higher spatial resolution, higher frame rate, wider colour gamut, higher DR and improved image quality.

Machine vision calls for small-pixel, high-resolution sensors that can perform high-quality inspection at high speed. In the broadcast market, demand is being driven by the migration from HDTV to UHDTV. The UHDTV standard supports 4K and 8K resolutions, 12 bits per pixel (compared to 10 bits in HDTV), a wider colour gamut and an increased DR.

The SENSATION project brings together key European players in the imaging industry including R&D institutes specialized in image sensor technologies, image sensor designs and video processing; fabless design houses; a semiconductor manufacturer; image compression experts and system integrators. Through this collaboration the partners can strengthen Europe’s ability to compete in global markets for image capture, processing and transmission.

The partners will cooperate on the development of the following:
  • Development of (building blocks for) CMOS image sensors: smaller global shutter pixels, increased dynamic range, increased data rates, auto-focus pixels, improved ADCs, ultra-high-speed architectures and high-speed serial interfaces
  • New solutions for camera transmission
  • Demonstration of results in cameras for Machine Vision and Broadcast, and demonstration of separate image sensor evaluation set-ups
  • Standards for a high-speed serial interface for image sensors, image compression and camera interfaces.


Thanks to AT for the info!

Image Sensor Papers at IEDM 2018

Image sensor papers have a strong presence in the IEDM 2018 program:

1.5µm dual conversion gain, backside illuminated image sensor using stacked pixel level connections with 13ke- full-well capacitance and 0.8e- noise
V. C. Venezia, A. C-W Hsiung, K. Ai, X. Zhao, Zhiqiang Lin, Duli Mao, Armin Yazdani, Eric A. G. Webster, L. A. Grant, OmniVision Technologies
A 1.5µm pixel size, 8 megapixel, dual conversion gain (DCG), backside-illuminated CMOS image sensor (CIS) is described, having a linear full-well capacity (FWC) of 13ke- and total noise of 0.8e- RMS at 8x gain. The sensor adopts the world's smallest 1.5µm pitch stacked pixel-level connection (SPLC) technology with greater than 8M connections, maximizing the photodiode fill-factor and the dimensions of the associated transistors to achieve both a large FWC and low noise. In addition, by allocating transistors to two different layers, the DCG function can be realized within the 1.5µm pixel size.
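
As a back-of-the-envelope check (our arithmetic, not a figure from the paper), the quoted full well and read noise already imply a single-exposure dynamic range of about 84dB, before the dual conversion gain extends the usable range further:

    import math

    fwc_e, noise_e = 13_000, 0.8              # figures quoted in the abstract
    dr_db = 20 * math.log10(fwc_e / noise_e)  # DR = 20*log10(FWC / read noise)
    print(f"single-exposure DR = {dr_db:.1f} dB")  # ~84.2 dB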

A 0.68e-rms Random-Noise 121dB Dynamic-Range Sub-pixel architecture CMOS Image Sensor with LED Flicker Mitigation
S. Iida, Y. Sakano, T. Asatsuma, M. Takami, I. Yoshiba, N. Ohba, H. Mizuno, T. Oka, K. Yamaguchi, A. Suzuki, K. Suzuki, M. Yamada, M. Takizawa, Y. Tateshita, and K. Ohno, Sony Semiconductor
This is a report of a CMOS image sensor with a sub-pixel architecture having a pixel pitch of 3um. The sensor achieves both ultra-low random noise of 0.68e-rms and a high dynamic range of 121dB in a single exposure, while also realizing LED flicker mitigation.

A 24.3Me- Full Well Capacity CMOS Image Sensor with Lateral Overflow Integration Trench Capacitor for High Precision Near Infrared Absorption Imaging
M. Murata, R. Kuroda, Y. Fujihara, Y. Aoyagi, H. Shibata*, T. Shibaguchi*, Y. Kamata*, N. Miura*, N. Kuriyama*and S. Sugawa, Tohoku University, *LAPIS Semiconductor Miyagi Co., Ltd.
This paper presents a 16um pixel pitch CMOS image sensor exhibiting 24.3Me- full well capacity, a record spatial efficiency of 95ke-/um2, and high quantum efficiency in the near infrared waveband, achieved by the introduction of a lateral overflow integration trench capacitor on a very low dopant concentration p-type Si substrate. A diffusion of 5mg/dl concentration glucose was clearly visualized by over 71dB SNR absorption imaging at 1050nm.

HDR 98dB 3.2µm Charge Domain Global Shutter CMOS Image Sensor (Invited)
A. Tournier, F. Roy, Y. Cazaux*, F. Lalanne, P. Malinge, M. Mcdonald, G. Monnot**, N. Roux**, STMicroelectronics, *CEA Leti, **STMicroelectronics
We developed a High Dynamic Range (HDR) Global Shutter (GS) pixel for automotive applications working in the charge domain with dual high-density storage node using Capacitive Deep Trench Isolation (CDTI). With a pixel size of 3.2µm, this is the smallest reported GS pixel achieving linear dynamic range of 98dB with a noise floor of 2.8e-. The pinned memory isolated by CDTI can store 2 x 8000e- with dark current lower than 5e-/s at 60°C. A shutter efficiency of 99.97% at 505nm and a Modulation Transfer Function (MTF) at 940nm better than 0.5 at Nyquist frequency is also reported.

High Performance 2.5um Global Shutter Pixel with New Designed Light-Pipe Structure
T. Yokoyama, M. Tsutsui, Y. Nishi, I. Mizuno, V. Dmitry, A. Lahav, TowerJazz
We developed a 2.5um global shutter (GS) CMOS image sensor pixel using an advanced Light-Pipe (LP) structure designed with novel guidelines. To the best of our knowledge, it is the smallest reported GS pixel in the world. The developed pixel shows excellent Quantum Efficiency (QE) and Angular Response (AR) and very low Parasitic Light Sensitivity (PLS). Even under oblique light at 10 degrees, 1/PLS is maintained at about half its value. These key characteristics allow the development of ultra-high resolution sensors, industrial cameras with wide-aperture lenses, and low-form-factor optical modules for GS mobile applications.

Back-Illuminated 2.74 µm-Pixel-Pitch Global Shutter CMOS Image Sensor with Charge-Domain Memory Achieving 10k e- Saturation Signal
Y. Kumagai, R. Yoshita, N. Osawa, H. Ikeda, K. Yamashita, T. Abe, S. Kudo, J. Yamane, T. Idekoba, S. Noudo, Y. Ono, S. Kunitake, M. Sato, N. Sato, T. Enomoto, K. Nakazawa, H. Mori, Y. Tateshita, and K. Ohno, Sony Semiconductor
A 3208×2184 global shutter image sensor with back-illuminated architecture is implemented in a 90 nm/65 nm imaging process. The sensor, having 2.74 µm-pitch pixels, achieves 10000 electrons full-well capacity and -80 dB parasitic light sensitivity. Furthermore, 13.8 e-/s dark current at 60°C and 1.85 e-rms random noise are obtained. In this paper, the structure of a pixel with memory along with saturation enhancement technology is described.

A CMOS Proximity Capacitance Image Sensor with 16µm Pixel Pitch, 0.1aF Detection Accuracy and 60 Frames Per Second
M. Yamamoto, R. Kuroda, M. Suzuki, T. Goto, H. Hamori*, S. Murakami*, T. Yasuda*, and S. Sugawa, Tohoku University, *OHT Inc.
A 16µm pixel pitch, 60 frames per second CMOS proximity capacitance image sensor fabricated in a 0.18µm CMOS process technology is presented. By the introduction of a noise cancelling operation, both fixed pattern noise and kTC noise are significantly reduced, resulting in 0.1aF detection accuracy. Proximity capacitance imaging results using the developed sensor are also demonstrated.

Through-silicon-trench in back-side-illuminated CMOS image sensors for the improvement of gate oxide long term performance
A. Vici, F. Russo*, N. Lovisi*, L. Latessa*, A. Marchioni*, A. Casella*, F. Irrera, Sapienza University of Rome, *LFoundry, a SMIC Company
To improve the gate oxide long-term performance of MOSFETs in back-side-illuminated CMOS image sensors, the wafer back is patterned with suitable through-silicon trenches. We demonstrate that the reliability improvement is due to the annealing of gate oxide border traps by passivating chemical species carried through the trenches.

High-Performance Germanium-on-Silicon Lock-in Pixels for Indirect Time-of-Flight Applications
N. Na, S.-L. Cheng, H.-D. Liu, M.-J. Yang, C.-Y. Chen, H.-W. Chen, Y.-T. Chou, C.-T. Lin, W.-H. Liu, C.-F. Liang, C.-L. Chen, S.-W. Chu, B.-J. Chen, Y.-F. Lyu, and S.-L. Chen, Artilux Inc.
We investigate and demonstrate the first Ge-on-Si lock-in pixels for indirect time-of-flight measurements. Compared to conventional Si lock-in pixels, such novel Ge-on-Si lock-in pixels simultaneously maintain a high quantum efficiency and a high demodulation contrast at a higher operation frequency, which enables consistently superior depth accuracy for both indoor and outdoor scenarios. System performance is evaluated, and pixel quantum efficiencies are measured to be more than 85% and more than 46% at 940nm and 1550nm wavelengths, respectively, along with demodulation contrasts measured to be higher than 0.81 at 300MHz. Our work may open up new routes to high-performance indirect time-of-flight sensors and imagers, as well as potential adoption of eye-safe lasers (e.g. wavelengths longer than 1.4µm) for consumer electronics and photonics.
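
For context, indirect ToF recovers depth from the phase shift of the modulated illumination, so a higher demodulation frequency (here 300MHz) improves depth precision but shortens the unambiguous range. A minimal sketch of the textbook relations (standard iToF theory, not Artilux-specific code):

    import math

    C = 3.0e8        # speed of light, m/s
    F_MOD = 300e6    # demodulation frequency from the paper, Hz

    def depth_from_phase(phase_rad, f_mod=F_MOD):
        """Depth in meters from the measured phase shift of the return."""
        return C * phase_rad / (4 * math.pi * f_mod)

    # The unambiguous range c/(2*f_mod) is only 0.5m at 300MHz, so practical
    # systems typically unwrap phase using several modulation frequencies.
    print(C / (2 * F_MOD), "m unambiguous range")  # 0.5 m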

CMOS-Integrated Single-Photon-Counting X-Ray Detector using an Amorphous-Selenium Photoconductor with 11×11-µm2 Pixels
A. Camlica, A. El-Falou, R. Mohammadi, P. M. Levine, and K. S. Karim, University of Waterloo
We report, for the first time, results from a single-photon-counting X-ray detector monolithically integrated with an amorphous semiconductor. Our prototype detector combines amorphous selenium (a-Se), a well-known X-ray photoconductive material suitable for large-area applications, with a 0.18-µm-CMOS readout integrated circuit containing two 26×196 photon-counting pixel arrays. The detector features 11×11-µm2 pixels to overcome a-Se count-rate limitations by unipolar charge sensing of the faster charge carriers (holes) via a unique pixel geometry that leverages the small pixel effect for the first time in an amorphous semiconductor. Measured results from a mono-energetic radioactive source are presented and demonstrate the untapped potential of using amorphous semiconductors for high-spatial-resolution photon-counting X-ray imaging applications.

High Performance 2D Perovskite/Graphene Optical Synapses as Artificial Eyes
H. Tian, X. Wang, F. Wu, Y. Yang, T.-L. Ren, Tsinghua University
Conventional von Neumann architectures feature large power consumption due to the memory wall. A partially distributed architecture using synapses and neurons can reduce the power. However, there is still a data bus between the image sensor and the synapses/neurons, which leaves plenty of room to further lower power consumption. Here, a novel concept of an all-distributed architecture using optical synapses is proposed. An ultrasensitive artificial optical synapse based on a graphene/2D perovskite heterostructure shows very high photo-responsivity of up to 730 A/W and high stability over 74 days. Moreover, our optical synapses have unique reconfigurable light-evoked excitatory/inhibitory functions, which is the key to enabling image recognition. The demonstration of an optical synapse array for direct pattern recognition shows an accuracy as high as 80%. Our results shed light on new types of neuromorphic vision applications, such as artificial eyes.

Hybrid bonding for 3D stacked image sensors: impact of pitch shrinkage on interconnect robustness
J. Jourdon, S. Lhostis, S. Moreau**, J. Chossat, M. Arnoux***, C. Sart, Y. Henrion, P. Lamontagne, L. Arnaud**, N. Bresson**, V. Balan**, C. Euvrard**, Y. Exbrayat**, D. Scevola, E. Deloffre, S. Mermoz, A. Martin***, H. Bilgen, F. Andre, C. Charles, D. Bouchu**, A. Farcy, S. Guillaumet, A. Jouve**, H. Fremont*, and S. Cheramy**, STMicroelectronics, *University of Bordeaux, **CEA-LETI, ***STMicroelectronics
We present the first 3D-stacked CMOS image sensor with a bonding pitch of 1.44 µm. The influence of the hybrid bonding pitch shrinkage (8.8 to 1.44 µm) on robustness is studied, from the process point of view up to a functional device. Smaller bonding pads do not lead to any specific failure.

A few other papers are not directly related to imaging, but might become more relevant some day:

100-340GHz Systems: Transistors and Applications (Invited)
M.J.W. Rodwell, Y. Fang, J. Rode, J. Wu, B. Markman, S. T. Suran Brunelli, J. Klamkin, M Urteaga*, University of California, Santa Barbara, *Teledyne Scientific Company
We examine potential 100-340 GHz wireless applications in communications and imaging, and examine the prospects of developing the mm-wave transistors needed to support these applications.

High Voltage Generation Using Deep Trench Isolated Photodiodes in a Back Side Illuminated Process
F. Kaklin, J. M. Raynor*, R. K. Henderson, The University of Edinburgh, *STMicroelectronics Imaging Division
We demonstrate passive high voltage generation using photodiodes biased in the photovoltaic region of operation. The photodiodes are integrated in a 90nm back side illuminated (BSI) deep trench isolation (DTI) capable imaging process technology. Four equal area, DTI separated arrays of photodiodes are implemented on a single die and connected using on-chip transmission gates (TG). The TGs control interconnects between the four arrays, connecting them in series or in parallel. A series configuration successfully generates an open-circuit voltage of 1.98V at 1klux. The full array generates 423nW/mm2 at 1klux of white LED illumination in series mode and 425nW/mm2 in parallel mode. Peak conversion efficiency is estimated at 16.1%, at 5.7klux white LED illumination.
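
A quick sanity check on the series-mode figure (our arithmetic, assuming the four arrays contribute equally): 1.98V across four series-connected arrays is roughly 0.5V per array, consistent with the photovoltaic open-circuit voltage of a silicon photodiode:

    v_oc_series, n_arrays = 1.98, 4
    print(v_oc_series / n_arrays, "V per photodiode array")  # ~0.495 V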

Error-Resilient Analog Image Storage and Compression with Analog-Valued RRAM Arrays: An Adaptive Joint Source-Channel Coding Approach
X. Zheng, R. Zarcone*, D. Paiton*, J. Sohn, W. Wan, B. Olshausen* and H. -S. Philip Wong, Stanford University, *University of California, Berkeley
We demonstrate by experiment an image storage and compression task by directly storing analog image data onto an analog-valued RRAM array. A joint source-channel coding algorithm is developed with a neural network to encode and retrieve natural images. The encoder and decoder adapt jointly to the statistics of the images and the statistics of the RRAM array in order to minimize distortion. This adaptive joint source-channel coding method is resilient to RRAM array non-idealities such as cycle-to-cycle and device-to-device variations, time-dependent variability, and non-functional storage cells, while achieving a reasonable reconstruction performance of ~ 20 dB using only 0.1 devices/pixel for the analog image.

Pixart Reports Quarterly Results

Pixart's Q3 2018 report shows that the optical mouse sensor business keeps going strong:

Event-Based Vision to Dominate MV Applications?

InVision.de publishes Prophesee article with bold predictions for the machine vision future:

"Event-based vision is poised to take over from the frame-based approach used by traditional film, digital and mobile phone cameras in many machine-vision applications. The mode of operation of state-of-the-art image sensors is useful for exactly one thing: photography, i.e. for taking an image of a still scene.

An 'ideal' image sensor samples parts of the scene that contain fast motion and changes at high sampling rates and slowly changing parts at low rates, all at the same time, with the sampling rate going to zero if nothing changes. Obviously, this will not work using one common single sampling rate, the frame rate, for all pixels of a sensor. Instead, one wants to have as many sampling rates as there are pixels in the sensor, and let each pixel's sampling rate adapt to the part of the scene it sees."

Sony Adds Square 1MP Sensor to its GS Family

The Sony IMX419CLN sensor has 1MP of 3.45um GS pixels in a square pixel array and is intended for industrial B&W cameras.

MEMSDrive Speeds Up Super-Resolution Imaging

MEMSDrive says its image stabilization approach makes super-resolution image capture faster:

Two Toulouse Workshops Program

The “Radiation Effects on Optoelectronic Detectors” workshop, to be held in Toulouse, France on Nov 27, 2018, publishes its program with many interesting presentations:
  • Radiation Hardness Comparison of CMOS Image Sensor Technologies at High Total Ionizing Dose Levels
    S. Rizzolo, V. Goiffon, F. Corbière, R. Molina, A. Chabane, P. Magnan, ISAE SUPAERO; S. Girard, A. Boukenter, T. Allanche, Univ. Saint-Etienne; P. Paillet, C. Muller, CEA DAM DIF; C. Monsanglant-Louvet, M. Osmond, H. Desjonqueres, IRSN; J-R Macé, New AREVA; P. Burnichon, J-P Baudu, OPTSYS; S. Plumeri, ANDRA
  • CIS113 Ionising Radiation Tolerance
    P. Turner, W. Hubbard, T. Lemon, Teledyne-E2V
  • Total Ionizing Dose Radiation Induced Dark Current Random Telegraph Signal in Pinned Photodiode CMOS Image Sensors
    C. Durnez, Airbus DS (formerly CNES/ISAE SUPAERO/ SOFRADIR); V. Goiffon, S. Rizzolo, P. Magnan, ISAE SUPAERO; C. Virmontois, CNES; P. Paillet, C. Marcandella, CEA DAM DIF; L. Rubaldo, SOFRADIR
  • MTG-FCI Qualification Phase Gamma and Proton Test Results
    R. Simpson, A. Walker, W. Hubbard, Teledyne-E2V
  • Effect of Ionizing and Non-Ionizing Radiation on CMOS SPADs for Charged Particle Tracking
    L. Ratti, M. Musacci, C. Vacchi, Univ. Pavia/INFN Pavia; P. Brogi, P.S. Marrocchesi, Univ. Siena/INFN Pisa; G. Collazuol, Univ. Padova/INFN Padova; G.-F. Dalla Betta, A. Ficorella, L. Pancheri Univ. Trento/TIFPA; L. Lodola, STMicroelectronics; F. Morsani, INFN Pisa
  • Radiation Testing of the CIS115 CMOS Sensor for the JANUS Camera on ESA’s JUICE Mission
    M. Soman, D.-D. Lofthouse-Smith, C. Crews, E. Allanwood, A. Holland, K. Stefanov, M. Leese, The Open University; P. Turner, J. Pratlong, Teledyne-E2V
  • Dose and Single Event Effects on Color CMOS Camera for Space Exploration
    C. Virmontois, J.-M. Belloir, A. Bardoux, CNES; M. Beaumel, A. Vriet, SODERN; N. Perrot, C. Sellier, J. Bezine, D. Gambart, D. Blain, E. Garci-Sanchez, W. Mouallem, 3DPLUS
  • Radiation Effects in Pinned Photodiode CMOS Image Sensors: Variation of Photodiode Implant Dose
    J.-M. Belloir, C. Virmontois, A. Materne, A. Bardoux, CNES; V. Goiffon, M. Estribeau, P. Magnan, ISAE SUPAERO
  • Radiation Induced Leakage Current in CMOS Image Sensor Floating Diffusion
    A. Le Roch, V. Goiffon, S. Rizzolo, F. Pace, C. Durnez, P. Magnan, ISAE SUPAERO; C. Virmontois, J.-M. Belloir, CNES; P. Paillet, CEA DAM DIF
  • Random Telegraph Signal Investigation in Different CMOS SPAD Layouts
    D. Fiore, F. Di Capua, M. Campajola, Univ. Calabria, INFN Cosenza
  • Neutron Irradiation of CCDs and Characterisation using Trap Pumping
    N. Bush, The Open University
  • Experimental Measurements of Damage Factors in Silicon Devices: Comparisons with NIEL
    T. Nuns, C. Inguimbert, S. Soonckindt, ONERA; B. Dryer, T. Buggey, The Open University; C. Poivey, ESA
  • NIEL Scaling Approach Reliability
    C. Inguimbert, T. Nuns, ONERA
  • A Comparison of p-channel and n-channel CCD Technologies Following Cryogenic Proton Irradiation
    A Holland, N Bush, B Dryer, D Hall, The Open University; P. Jerram, Teledyne-E2V
  • Investigating Differences in End-of-life Performance and Defect Properties of PLATO CCDs: Warm versus Cold Proton Irradiation
    T. Prod’homme, P. Verhoeve, F. Lemmel, H. Smit, S. Blommaert, C. Van der Luijt, I. Visser, T. Beaufort, Y. Levillain, B. Shortt, ESA
  • Modelling of Luminescence Induced by Proton Irradiation in HgCdTe Infrared Detector Array in Space Environment
    T. Pichon, S. Mouzali, O. Boulade, O. Limousin, CEA Dap; G. Badano, A. Ferron, O. Gravrand, CEA LETI
  • High-energy electrons impact on Sofradir NGP and Teledyne H1RG IR MCT arrays for JUICE/MAJIS instrument
    P. Guiot, M. Vincendon, Y. Langevin, A. Carapelle, J. Carter, IAS
  • Live Readout of the Device Under Test for Proton Irradiation Dosimetry During the First Space Component Proton Irradiations with the MC40 Accelerator at Birmingham
    M. Soman, N. Bush, R. Adlard, X. Meng, A. Holland, The Open University; T. Price, Univ. Birmingham

The “Ultra-Violet Detectors and Instruments” workshop, to be held in Toulouse on November 28, also features a nice agenda:
  • UV Detector Development at Teledyne-e2v
    P. Jerram, Teledyne-E2V
  • Space-grade 3Kx3K Backside Illuminated CMOS Image Sensor for EUV Observation of the Sun
    S. Gissot, B. Giordanengo, A. BenMoussa, Royal Observatory of Belgium; G. Meynants, M. Koch, AMS CMOSIS; U. Schühle, Max Planck Institut; A. Gottwald, C. Laubis, U. Kroth, F. Scholze, Physikalisch-Technische Bundesanstalt
  • Classical Frontside Illuminated CMOS and CCD Image Sensors are Suitable for Visible Light Imaging
    D. Van Aken and B. Dierickx, Caeleste
  • Spatial Resolution and Noise Characteristics of Intensified Active Pixel Sensor Cameras for Vacuum Ultraviolet Imaging
    L. Teriaca, U. Schuehle, R. Aznar Cuadrado, K. Heerlein, M. Uslenghi, Max Planck Institute for Solar System Research
  • Photonis Ultraviolet Detectors
    E. Kernen, Photonis
  • The FUV Detector for the WSO-UV Field Camera Unit
    L. Diez, SENER; A. I. Gómez de Castro, UCM
  • Compact and Lightweight MCP Detector Development for UV Space Missions
    L. Conti, J. Barnstedt, L. Hanke, C. Kalkuhl, N. Kappelmann, T. Rauch, B. Stelzer, K. Werner, IAAT Universität Tübingen; H.-R. Elsener, Empa, Swiss Federal Laboratories for Materials Science and Technology; K. Meyer, D. M. Schaadt, Institute of Energy Research and Physical Technologies, Clausthal University of Technology
  • The New Oxide Paradigm for Solid State Ultraviolet Photodetectors
    D. J. Rogers, P. Bove, V.E. Sandana, F.H. Teherani, Nanovation; L. Dame, M. Meftah, J.F. Mariscal, CNRS LATMOS; M. Razeghi, R. McClintock, Centre for Quantum Devices ECE department; E. Frisch, S. Harel, Ofil Systems
  • What is New about Nitrides for UV Detection one Decade after the Last Studies in Europe?
    J.-L. Reverchon, III-V Lab; J.-Y. Duboz, CNRS-CRHEA
  • AlGaN Photodetectors for the Ultraviolet Regime
    R. Rehm, R. Driad , L. Hahn, S. Leone, T. Passow, F. Rutz, Fraunhofer Institute for Applied Solid State Physics IAF
  • 4H-SiC-based UV Photodiodes for Space Applications
    L. Ottaviani, O. Palais, IM2P3; M. Lazar, AMPERE; A. Lyoussi, CEA/DEN/CAD/DER/SPEx; E. Kalinina, A. Lebedev, IOFFE Institute
  • The POLLUX UV spectropolarimeter for the LUVOIR mission project
    C. Neiner, LESIA; J.-C. Bouret, E. Muslimov, LAM; H. Ouslimani, TAS
  • Ultra-violet polarimetry for Pollux
    M. Le Gal, C. Neiner, LESIA; A. López Ariste, CNRS IRAP; M. Pertenais, DLR
  • UV Space Instrumentation at CSL: from the IMAGE FUV Spectrographic Imager to POLLUX
    R. Desselle, S. Habraken, J. Loicq, Centre Spatial de Liège
  • The Cosmic Evolution Through Ultraviolet Spectroscopy (CETUS) NASA Probe Mission Concept
    W. Danchi, L. Purves, NASA GSFC; S. Heap, NASA GSFC Emerita; R. Woodruff, Woodruff Consulting; A. Hull, Kendrick Aerospace Consulting LLC and Univ. New Mexico; S. Kendrick, Kendrick Aerospace Consulting LLC
  • SUAVE: a disruptive far UV telescope for long lasting performances in Space
    L. Damé, M. Meftah, N. Rouanet, P. Gilbert, CNRS LATMOS; P. Etcheto, J. Berthon, CNES
  • Space UV Lidars for Earth Observation: from Design to Flight Demonstration
    G. de Villèle, B. Corselle, J. Lochard, O. Saint-Pé, AIRBUS DS
  • Sentinel-4 and -5: Monitoring Earth’s Environment in the UV from Low-Earth and from Geostationary Space Orbits
    H. Candeias, A. Haerter, S. Riedl, C. Keim, S. Weiss; R. Maurer, R. Greinacher, AIRBUS DS
  • UV Instrument Development Activities for Space Weather Monitoring
    I. Biswas, Rhea System GmbH, ESA/ESOC
  • CUTE CubeSat Mission
    S. A. Gopinathan, L. Fossati, Space Research Institute, Austrian Academy of Sciences; K. France, B. Fleming, Arika Egan, Univ. of Colorado; J.-M. Desert, Univ. of Amsterdam; T. Koskinen, Univ. of Arizona; P. Petit, OMP; A. Vidotto, Trinity College Dublin
  • High-Resolution FUV Spectroscopy in a Cubesat package
    M. Beasley, Southwest Research Institute; R. McEntaffer, Pennsylvania State University
  • The Venus Spectrometry in UltraViolet (VeSUV) Instrument on-Board the ESA/M5 EnVision mission
    G. Guignan, N. Rouanet, E. Marcq, CNRS LATMOS
  • ULTRASAT – a wide-angle UV space telescope to capture transients
    J. Topaz, E. Waxman, M. Soumagniac, E. Ofek, O. Lapid, O. Aharonson, A. Gal-Yam, N. Ganot, Weizmann Institute of Science; S. Ben-Ami, Harvard-Smithsonian Center for Astrophysics
  • SOLAR/SOLSPEC UV spectrometer. Lessons learned from the 9-year SOLAR mission
    D. Bolsée, N. Pereira, G. Cessateur, IASB-BIRA; M. Meftah, L. Damé, S. Bekki, A. Irbah, A. Hauchecorne, LATMOS; D. Sluse, ULG
  • Design and properties of the gratings of POLLUX, the UV high-resolution spectropolarimeter for LUVOIR
    E. Muslimov, J.-C. Bouret, LAM; C. Neiner, LESIA; H. Ouslimani, TAS
  • Instrument model for POLLUX
    S. Lombardo, LAM; the POLLUX consortium
  • The computer-based simulator of the far UV detector implemented in the field Camera Unit on board the WSO-UV space telescope
    P. Marcos-Arenal, A. I. Gómez de Castro, UCM

Melexis Announces 2nd Generation ToF Sensor

Melexis announces "a major upgrade to ToF technology for the automotive industry, its next-generation QVGA ToF sensor chipset and a forthcoming VGA ToF sensor."

The new ToF sensors are AEC-Q100 qualified and suitable for automotive applications including gesture recognition, driver monitoring and people/object detection. The new MLX75024 QVGA ToF sensor doubles the sensitivity of the previous generation while maintaining the same resolution and ambient light robustness. This allows it to operate at lower light levels or to reduce the required illumination power by at least 30%. System efficiency is further enhanced by a 50% reduction in current consumption, and the resulting lower heat generation allows the design of more compact cameras. A new selectable gain feature allows a trade-off between illumination power, accuracy and ambient light robustness. As a result, the SNR is two times better in low-light conditions at distances greater than 1m. As an additional improvement, the sensor now integrates an on-chip temperature sensor, reducing system size and cost.

To support the latest MLX75024 QVGA ToF sensor, Melexis has developed the MLX75123BA ToF companion chip, which offers a three-fold improvement in front-end noise over its predecessor. The companion chip is used to configure parameters such as pixel gain, and now supports pixel binning to simplify hardware and software for lower resolution applications. Additionally, the MLX75123BA can support two MLX75024 sensors at the same time.

Melexis also has developed a new BSI VGA sensor. Initial sampling to automotive customers will start early 2019.

Samsung Exynos 9820 Supports 8K 30fps Video, 5 Cameras, and More

Samsung announces Exynos 9 Series 9820 application processor featuring multi-format codec (MFC) capable of encoding and decoding of 4K video at 150 fps or 8K video at 30 fps. The MFC also renders colors in 10-bit HDR mode.
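
Assuming the standard 3840x2160 and 7680x4320 resolutions (our assumption; the announcement does not spell them out), the two codec modes imply a similar raw pixel throughput of roughly 1 Gpixel/s:

    for name, (w, h, fps) in {"4K@150fps": (3840, 2160, 150),
                              "8K@30fps": (7680, 4320, 30)}.items():
        print(name, round(w * h * fps / 1e9, 2), "Gpixel/s")  # ~1.24 and ~1.0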

"The Exynos 9820’s advanced image signal processor (ISP) supports up to five sensors, including an IR sensor, to enable flexible multi-camera solutions. With advanced design for greater photo quality and faster auto-focus, the Exynos 9820 offers best-in-class photography experience, which is further enhanced by the AI-capabilities of the NPU."

The Exynos 9 Series 9820 is expected to be in mass production by the end of this year.

AIT Uses Dynamic Vision Sensor in Panoramic Scanner

The Austrian Institute of Technology presents its version of the DVS - Dynamic Vision Sensor:

"Unlike conventional image sensors the chip has no pixel readout clock but signals the detected changes instantaneously. This information is signalled as so-called “events” that contain the information of the responding pixels x-y addresses (address-event) in the imager array and the associated timestamp via a synchronous timed addressevent-representation (TAER) interface. The sensor can produce two types of events for each pixel: “On”-events for a relative increase in light intensity and “Off”-events for a relative decrease (see diagram)."


AIT also makes a 360deg 3D scanner with its DVS sensor:


Thanks to TL for the links!

White Light Interferometric 3D Imager

The Heliotis HeliInspect H8 3D camera uses the company's next-generation 3D image sensor, the HeliSens S4:


Thanks to TL for the flyer!

Forza on Image Sensor Verification Challenges

Forza CAD Manager Kevin Johnson presents "CMOS Image Sensor Verification Challenges for Safety Critical Applications" at the Mentor Graphics U2U conference:

Andanta SWIR to Green Photon Converter

Andanta presents an InGaAs PD array combined with a green LED array in a single module, effectively converting SWIR photons to green photons with 3% efficiency:


Thanks to AB for the pointer!

Quantum Dot SWIR Cameras

SWIR Vision Systems introduces the Acuros family of low-cost SWIR cameras featuring CQD (colloidal quantum dot) sensing technology:


Thanks to AB for the pointer!

ams Pre-Releases Endoscopic Imagers

BusinessWire: ams pre-releases the NanEyeM (already announced last week) and the NanEyeXS for single-use endoscopes in minimally invasive surgery.

The new 1mm2 NanEyeM offers a 100kpixel readout over an LVDS digital interface at a maximum rate of 49 fps at 62MHz. The NanEyeM, which is supplied as a Micro Camera Module (MCM) including a cable up to 2m long, features a custom multi-element lens which improves the effective resolution of the sensor and reduces distortion. Compared to the earlier NanEye 2D sensor, which has a single-element lens, the new NanEyeM offers improved MTF of more than 50% in the corners, lower distortion of less than 15%, and lower color aberration of less than 1Px.

The new NanEyeXS from ams has a 0.46mm2 footprint, making it one of the world's smallest image sensors. It produces a digital output at 40kpixel resolution at a maximum rate of 55 fps at 28MHz. Like the NanEyeM, the NanEyeXS is supplied as an MCM.

The NanEyeM is also available in surface-mount chip form.

“Medical endoscopy is a rapidly growing market and the demand for single-use devices is expected to increase, creating a clear need for cost-effective imaging solutions that offer a level of performance and image quality equal to that seen in reusable endoscopes. The NanEyeM and NanEyeXS modules were designed to meet this market need by offering a full package approach with exceptional imaging capabilities while retaining a cost-competitive edge in high volumes for single-use endoscopes and catheter-based applications,” said Dina Aguiar, marketing manager for the NanEye products at ams. “These new additions to the NanEye family will complement the award-winning NanEye 2D, which pioneered the technological evolution of medical endoscopy. ams thus reinforces its position in the rapidly growing market for disposable endoscopy with unique products that will help further revolutionize patient care.”

The NanEyeXS and NanEyeM image sensors will be available for sampling in January 2019.

2018 Harvest Imaging Forum Agenda

Albert Theuwissen announces the agenda of the Harvest Imaging Forum, to be held on December 6-7 in Delft, the Netherlands.

Day 1 of the forum is devoted to "Efficient embedded deep learning for vision applications," presented by Marian VERHELST (KU Leuven, Belgium):
  1. Introduction into deep learning
    From neural networks (NN) to deep NN
    Benefits & applications
    Training and inference with deep NN
    Types of deep NN
    Sparse connectivity
    Residual networks
    Separable models
    Key enablers & challenges
  2. Computer architectures for deep NN inference
    Benefits and limitations of CPU and GPUs
    Exploiting NN structure in custom processors
    Architecture level exploitation: spatial reuse in efficient datapaths
    Architecture level exploitation: temporal reuse in efficient memory hierarchies
    Circuit level exploitation: near/in memory compute
    Exploiting NN precision in custom processors
    Architecture level exploitation: reduced and variable precision processors
    Circuit level exploitation: mixed signal neural network processors
    Exploiting NN sparsity:
    Architecture level exploitation: computational and memory gating
    Architecture level exploitation: I/O compression
  3. HW and SW optimization for efficient inference
    Co-optimizing NN topology and precision with hardware architectures
    Hardware modeling
    Hardware-aware network optimization
    Network-aware hardware optimization
  4. Trends and outlook
    Dynamic application-pipelines
    Dynamic SoCs
    Beyond deep learning, explainable AI
    Outlook
Day 2 is devoted to "Image and Data Fusion," presented by Wilfried PHILIPS (imec and Ghent University, Belgium):
  1. Data fusion: principles and theory
    Bayesian estimation
    Priors and likelihood
    Information content, redundancy, correlation
    Application to image processing: recursive maximum likelihood tracking, pixel fusion
  2. Pixel level fusion
    Sampling grids and spatio-temporal aliasing
    Multi-modal sensors, interpolation
    Temporal fusion and superresolution
    Multi-focal fusion
  3. Multi-camera image fusion
    Occlusion and inpainting
    Uni- and multimodal inter-camera pixel fusion
    Fusion of heterogeneous sources: camera, lidar, radar
    Applications: time-of-flight, hyperspectral, HDR, multiview imaging
    Fusion of heterogeneous sources: radar, video, lidar
  4. Geometric fusion
    Multi-view geometry
    Fusion of point clouds
    Image stitching
    Simultaneous localization and mapping
    Applications: remote sensing from drones and vehicles
  5. Inference fusion in camera networks
    Multi-camera calibration
    Occlusion reasoning for multiple cameras with an overlapping viewpoint
    Multi-camera tracking
    Cooperative fusion and distributed processing

Pyxalis Presents GS HDR Sensor

Pyxalis seems to be expanding its activity beyond custom image sensors to standard products. At the Vision show in Stuttgart, Germany, the company presented a flyer for its Robin chips with 3.2um global shutter pixels, said to provide "artifact-free in-pixel HDR." The new sensor outputs ASIL data with each frame, making it suitable for automotive applications:


Thanks to AB for the photo from Pyxalis booth!

Ouster Discusses its LiDAR Principles

PRNewswire: Ouster unveils the details of its LiDAR technology. Several breakthroughs covered by recently granted patents have enabled Ouster's move toward state-of-the-art, high-volume, silicon-based sensors and lasers that operate in the near-infrared spectrum.

Ouster's multi-beam LiDAR is said to carry significant advantages over traditional approaches:

True solid state - Ouster's core technology is a two chip (one monolithic laser array, one monolithic receiver ASIC) solid state lidar core, which is integrated in the mechanically scanning product lines (the OS-1 and OS-2) and will be configured as a standalone device in a future solid state product. Unlike competing solid state technologies, Ouster's two chip lidar core contains no moving parts on the macro or micro scale while retaining the performance advantages of scanning systems through its multi-beam flash lidar technology.

Lower cost at higher resolution - Ouster's OS-1 64 sensor costs nearly 85% less than competing sensors, making it the most economical sensor on the market. In an industry first, Ouster has decoupled cost from increases in resolution by placing all critical functionality on scalable semiconductor dies.

Simplified architecture - Ouster's multi-beam flash lidar sensor contains a vastly simpler architecture than other systems. The OS-1 64 contains just two custom semiconductor chips capable of firing lasers and sensing the light that reflects back to the sensor. This approach replaces the thousands of discrete, delicately positioned components in a traditional lidar with just two.

Smaller size and weight - Because of the sensor's simpler architecture, Ouster's devices are significantly smaller, lighter weight and more power efficient, making them a perfect fit for unmanned aerial vehicles (UAVs), handheld and backpack-based mapping applications, and small robotic platforms. With lower power and more resolution, drone and handheld systems can run longer and scan faster for significant increases in system productivity.

In an article on the company's website, CEO Angus Pacala wrote, "I'm excited to announce that Ouster has been granted foundational patents for our unique multi-beam flash lidar technology which allows me to talk more openly about the incredible technology we've developed over the last three years and why we're going to lead the market with a portfolio of low-cost, compact, semiconductor-based lidar sensors in both scanning and solid state configurations."

Patents US10063849 "Optical system for collecting distance information within a field" and US9989406 "Systems and methods for calibrating an optical distance sensor" disclose a LiDAR with a Tx side consisting of an array of VCSELs and an Rx side consisting of an array of SPADs. The VCSELs project a set of points onto the subject, while each SPAD has a small FOV aligned with its projection point in order to cut the ambient light. Also, the Rx optics has a 2nm-narrow spectral filter, again to cut more of the ambient illumination. All this is placed on a rotating platform:


Angus Pacala also publishes an explanatory article on the company's Medium blog and gives an interview to ArsTechnica. A few quotes:

"While our technology is applicable to a wide range of wavelengths, one of the more unique aspects of our sensors is their 850 nm operating wavelength. The lasers in a lidar sensor must overcome the ambient sunlight in the environment in order to see obstacles. As a result lidar engineers often choose operating wavelengths in regions of low solar flux to ease system design. Our decision to operate at 850 nm runs counter to this trend.

A plot of solar photon flux versus wavelength at ground level (the amount of sunlight hitting the earth versus wavelength) shows that at 850 nm there is almost 2x more sunlight than at 905 nm, up to 10x more sunlight than at 940nm, and up to 3x more sunlight than 1550 nm — all operating wavelengths of legacy lidar systems.

We’ve gotten plenty of strange looks for our choice given that it runs counter to the rest of the industry. However, one of our patented breakthroughs is exceptional ambient light rejection which makes the effective ambient flux that our sensor sees far lower than the effective flux of other lidar sensors at other wavelengths, even accounting for the differences in solar spectrum. Our IP turns what would ordinarily be a disadvantage into a number of critical advantages:

  • Better performance in humidity
  • Improved sensitivity in CMOS: Silicon CMOS detectors are far more sensitive at 850 nm than at longer wavelengths. There is as much as a 2x reduction in sensitivity just between 850 and 905 nm. Designing our system at 850 nm allows us to detect more of the laser light reflected back towards our sensor which equates to longer range and higher resolution.
  • High quality ambient imagery
  • Access to lower power, higher efficiency technologies

...the flood illumination in a conventional flash lidar, while simpler to develop, wastes laser power on locations the detectors aren’t looking. By sending out precision beams only where our detectors are looking, we achieve a major efficiency improvement over a conventional flash lidar.

Our single VCSEL die has the added advantage of massively reducing system complexity and cost. Where other lidar sensors have tens or even hundreds of expensive laser chips and laser driver circuits painstakingly placed on a circuit board, Ouster sensors use a single laser driver and a single laser die. A sliver of glass no bigger than a grain of rice is all that’s needed for an OS-1–64 to see 140 meters in every direction. It’s an incredible achievement of micro-fabrication that our team has gotten this to work at all, let alone so well.

The second chip in our flash lidar is our custom designed CMOS detector ASIC that incorporates an advanced single photon avalanche diode (SPAD) array. Developing our own ASICs is key to our breakthrough performance and cost, but the approach is not without risk. Ouster’s ASIC team has distinguished themselves time and again and they’ve now delivered seven successful ASICs — each more powerful, more reliable, and more refined than the previous."
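
The 850nm argument can be made quantitative with a back-of-the-envelope, background-limited SNR estimate using the article's rough ratios (both numbers below are our reading of the text, not Ouster data):

    import math

    pde_gain = 2.0    # silicon detects ~2x more photons at 850nm than 905nm
    solar_gain = 2.0  # but there is ~2x more ambient sunlight at 850nm

    # Background-limited SNR ~ signal / sqrt(background), where the signal
    # scales with PDE and the background with PDE * solar flux:
    rel_snr = pde_gain / math.sqrt(pde_gain * solar_gain)
    print(f"850nm vs 905nm relative SNR: {rel_snr:.2f}x")  # ~1.0x

So on wavelength alone the two bands roughly break even; in this reading, Ouster's claimed net advantage comes from its ambient rejection (the narrow per-pixel FOV and the 2nm filter), not from 850nm by itself.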

Photoneo 3D Camera Wins Vision Show Award

Optics.org reports: "The winner of this year’s VISION Award, presented by Imaging & Machine Vision magazine, was named at the conference as Photoneo. Its new PhoXi 3D Camera is said to be the highest-resolution and highest-accuracy area-based 3D camera available. It is based on Photoneo’s patented technology called Parallel Structured Light, implemented by a custom CMOS image sensor.

The developer says this “novel approach” makes it the most efficient technology for high-resolution scanning in motion. The key features of Parallel Structured Light include: scanning in rapid motion, with one-frame acquisition and motion of up to 40 m/s possible; 10x higher resolution and accuracy thanks to a more efficient depth coding technique with per-pixel measurement; no motion blur, owing to its 10 µs per-pixel exposure time; and rapid acquisition of 1068x800 point clouds and texture at up to 60 fps."

Photoneo claims that its custom designed image sensor is the key to the high performance of its 3D camera:

"Photoneo has developed a new technique of one frame 3D sensing that can offer high resolution common for multiple frame structured light systems, with fast, one frame acquisition of TOF systems. We call it Parallel Structured Light and it runs thanks to our exotic image sensor."


The company's patent application US20180139398 updates the "exotic image sensor" design over the earlier version circa 2014:


The 3D camera offers a nice trade-off between the resolution and speed:

Update: IMVE also publishes an article on Photoneo technology.

Himax Updates on 3D Imaging, CIS Business

Globenewswire: Himax quarterly earnings release updates on the company's CIS and 3D imaging business:

"Himax has participated in most of the smartphone OEMs’ ongoing 3D sensing projects covering all three types of technologies, namely structured light, active stereo camera (ASC) and time-of-flight, where it provides 3D sensing total solution, or just the projector or optics inside the module, depending on the customers’ needs. By offering either the projector or critical optics, Himax has been collaborating with a small handful of smartphone names that have in-house capability to come up with their own customized 3D sensing solutions. Himax already has one such end customer using its technology for mass production with two more in the pipeline targeting 2019 product launch.

For most Android smartphone makers who don’t have such in-house capability, however, the Company aims to provide a total solution to enable their 3D sensing. At present, 3D sensing adoption in this market remains low. The adoption is hindered primarily by the prevailing high hardware cost of 3D sensing, the long development lead time required to integrate 3D sensing into a smartphone, and the lack of killer applications. Instead of 3D sensing, most of the Android phone makers have chosen lower-cost fingerprint technology, which can achieve similar phone unlock and online payment functions with a somewhat compromised user experience.

Reacting to their lukewarm response, Himax is working on next-generation 3D sensing with an aim to leapfrog the market by providing high-performance, easy-to-adopt and yet cost-friendly total solutions, targeting most of the Android smartphone players. In addition, Himax is providing a 3D sensing developer kit which is being used to develop applications on both smartphone and non-smartphone platforms. Himax believes that 3D sensing will be widely used by more Android smartphone makers when the ecosystem is able to substantially lower the cost of adoption while offering easy-to-use, fully-integrated total solutions, in which Himax is playing a key part.

The Company has mentioned previously that 3D sensing can have a wide range of applications beyond smartphones. While smartphones remain its top priority, the Company has started to explore business opportunities in various industries by leveraging its SLiM 3D sensing total solution. Such industries are typically less sensitive to cost and always require a total solution. Himax's recently announced collaboration with Kneron, an industry leader in edge-based artificial intelligence, to develop an AI-enabled 3D sensing security and surveillance solution is just one example of real-world applications using its 3D sensing technology.

On the CMOS image sensor business, Himax continues to make great progress with its two machine vision sensor product lines, namely the near infrared (“NIR”) sensor and the Always-on-Sensor (“AoS”). The NIR sensor is a critical part of both of the Company's structured light and ASC 3D sensing total solutions. On the AoS product line, the joint offering of Emza and Himax technologies uniquely positions the Company to provide ultra-low power, smart imaging sensing total solutions, leveraging Himax's industry-leading super-low-power CIS and ASIC designs and Emza's unique AI-based computer vision algorithms. The Company is pleased with the status of engagements with leading players in areas such as connected home, smart building and security, all of which are new frontiers for Himax."

Caterpillar Develops LiDAR for Trucks

InternationalMining: Caterpillar's Command for Hauling automation system used to rely on a Velodyne LiDAR sensor for its trucks. Cat has now developed its own in-house LiDAR sensor, Cat LiDAR. While the Velodyne sensor is used in hundreds of haul trucks across Western Australia and elsewhere, it is said to lack the reliability and capability to meet Cat's long-term needs.

For example, the Velodyne sensor was not able to work in cold climates below freezing, and the LiDAR would often detect dust as a hazard, causing unnecessary truck stops.

Cat LiDAR has been in field tests for the past year and one commercial unit has already been shipped to a new Command for Hauling customer. The OEM is expected to make it available as a replacement option for existing operations, Cat said.

The new system's improvements include greater tolerance of extreme temperatures (it has been tested down to -40°C), improved accuracy of operating distances between vehicles and obstructions, an enhanced ability to distinguish between hazards and non-hazards, and the ability to measure the diagnostics and health of the LiDAR sensor.

Cat says the new LiDAR has been proven to last three times longer than the previous sensor before reporting its first failure.

Gait Recognition in China

Techcrunch, AP: Chinese AI startup Watrix has recently raised $14.5m to further develop its gait recognition technology, which is intended to complement face recognition in security and surveillance cameras. The technology is already being used by police in Beijing and Shanghai, where it can identify individuals even when their face is obscured or their back is turned.

Huang Yongzhen, the CEO of Watrix, said that its system can identify people from up to 50 meters away, even with their back turned or face covered. This can fill a gap in facial recognition, which needs close-up, high-resolution images of a person’s face to work.

“You don’t need people’s cooperation for us to be able to recognize their identity,” Huang said in an interview in his Beijing office. “Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.”

High Speed Imaging from Sparse Photon Counts

The Arxiv.org paper "A 'Little Bit' Too Much? High Speed Imaging from Sparse Photon Counts" by Paramanand Chandramouli, Samuel Burri, Claudio Bruschini, Edoardo Charbon, and Andreas Kolb from the University of Siegen, Germany, and the Swiss Federal Institute of Technology, Lausanne, Switzerland, shows the power of machine learning in recovering nice images from a single-photon mess:

"Recent advances in photographic sensing technologies have made it possible to achieve light detection in terms of a single photon. Photon counting sensors are being increasingly used in many diverse applications. We address the problem of jointly recovering spatial and temporal scene radiance from very few photon counts. Our ConvNet-based scheme effectively combines spatial and temporal information present in measurements to reduce noise. We demonstrate that using our method one can acquire videos at a high frame rate and still achieve good quality signal-to-noise ratio. Experiments show that the proposed scheme performs quite well in different challenging scenarios while the existing denoising schemes are unable to handle them."

ams Pre-Releases NanEyeM Module

BusinessWire: ams announces the pre-release of the NanEyeM, a miniature integrated Micro Camera Module (MCM) assembly with a tiny footprint at the image sensor end of just 1mm2. The NanEyeM is aimed at integration into space-constrained industrial and consumer designs, providing new embedded vision capabilities in products such as smart toys and home appliances.

The NanEyeM offers a resolution of 100kpixel, 10-bit digital readout, and features a Single-Ended Interface Mode (SEIM). Like a standard SPI, the SEIM channel is easy to implement in any host processor and provides a cost-optimized solution without the need for LVDS deserialization. The maximum frame rate over the SEIM interface is 58 fps at a clock rate of 75MHz.
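
The quoted SEIM figures are self-consistent: a 10-bit, 100kpixel payload at 58fps needs 58Mbit/s, which fits a 75MHz single-ended link with margin for framing overhead (a rough check assuming roughly one bit per clock; the overhead split is our assumption):

    pixels, bits, fps, clk_hz = 100_000, 10, 58, 75e6
    payload = pixels * bits * fps    # 58e6 bit/s
    print(payload / 1e6, "Mbit/s =", round(100 * payload / clk_hz), "% of clock")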

The NanEyeM features a custom multi-element lens which greatly improves the effective resolution of the sensor and reduces distortion compared to competing sensors that have a single-element lens. The MTF (Modulation Transfer Function) is >50% in the corners, distortion is <15% and color aberration is <1Px. “Designers who wish to add high-resolution video capability in space-constrained enclosures have until now been hampered by the size of the industrial image sensors on the market. The introduction of the NanEyeM module opens up new possibilities to add camera capability in the smallest spaces,” said Tom Walschap, Marketing Director in the CMOS Image Sensor business line at ams. “Provided in an easy-to-use module format with a convenient digital output, designers can quickly add camera capability with little development effort.”

The NanEyeM image sensor will be available for sampling in Q2 2019.

ToF-Based People Counter

ST presents a use case for its ToF proximity sensor:

SiOnyx Night Vision Demo

SiOnyx publishes a demo of its Aurora camera:

"The Sionyx Aurora camera looking at buffalo grazing about 1.5 hours after sunset. The first part of the recording is taken using Aurora's Twilight mode and the second part using Color Night Vision. Notice the pinkish color of the grass and trees. This is "Earth Glow" where IR energy collected in the atmosphere during the day time is reflected by plants at night. Aurora is able to detect and take advantage of that IR light."

Yole on Components for 3D Sensing

Yole Developpement publishes a nice webcast, "Components for 3D Sensing Revolution":

The webcast has an interesting comparison of the cost of various 3D cameras:

ams Announces 3.2um GS Pixel Sensor, the Fastest among 1-inch Sensors

BusinessWire: ams introduces a new global shutter sensor for machine vision and Automated Optical Inspection (AOI) equipment which offers better image quality and higher throughput than any previous device that supports the 1” optical format.

The new CSG14k image sensor features 14MP resolution at a "frame rate considerably higher than any comparable device on the market offers today." The CSG14k’s 12-bit output provides sufficient dynamic range to handle wide variations in lighting conditions and subjects. The sensor’s global shutter with true CDS (Correlated Double Sampling) produces high-quality images of fast-moving objects free of motion artefacts.

The high performance and resolution of the CSG14k are the result of innovations in the design of the sensor’s 3.2µm x 3.2µm pixels. The new pixel design is 66% smaller than the pixel in the previous generation of 10-bit ams image sensors, while offering a 12-bit output and markedly lower noise.
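
If "66% smaller" refers to pixel area (our reading of the claim), the previous-generation pixel works out to roughly a 5.5µm pitch:

    prev_area_um2 = 3.2**2 / (1 - 0.66)  # 10.24 um^2 is 34% of the old area
    print(round(prev_area_um2**0.5, 1), "um previous pixel pitch")  # ~5.5 um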

“Future advances in factory automation technology are going to push today’s machine vision equipment beyond the limits of its capabilities. The breakthrough in image quality and performance offered by the CSG14k gives manufacturers of machine vision systems headroom to support new, higher throughput rates while delivering valuable improvements in image quality and resolution,” said Tom Walschap, Marketing Director in the CMOS Image Sensors business line at ams.

The CSG14k will be available for sampling in the first half of 2019.

TowerJazz Announces Automotive SPAD Parameters, LeddarTech Combines SPADs with CIS

GlobeNewswire: TowerJazz's 0.18um CIS SPAD platform offers an integrated solution with superb figures of merit. Its photon detection efficiency (PDE) is similar to, or better than, that of the leading stand-alone SPADs on the market. The dark count rate (DCR) is less than 100Hz/um^2 at 60°C and less than 1kHz/um^2 at 100°C (especially suited for automotive applications), and the jitter is less than one nanosecond. This sophisticated platform also saves silicon area and, therefore, reduces the cost of mass production.
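
To put the per-area DCR spec in device terms, consider a hypothetical 10µm x 10µm SPAD (our example size, not a TowerJazz figure):

    area_um2 = 10 * 10
    print(area_um2 * 100, "cps max at 60C")    # < 10,000 counts/s per SPAD
    print(area_um2 * 1000, "cps max at 100C")  # < 100,000 counts/s per SPAD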

TowerJazz's 0.18um CIS SPAD process has been chosen by LeddarTech for its next-generation automotive LiDAR solutions, combining CMOS image sensors and SPADs on the same chip. Integrating everything on the same chip is said to save silicon cost.

“With our advanced CIS SPAD technology, we are able to provide groundbreaking manufacturing solutions for the growing LiDAR and automotive markets. We are pleased to work with LeddarTech, a true innovator in solid-state LiDAR technology,” said Avi Strum, TowerJazz SVP and GM, CMOS Image Sensor Business Unit.
