Proton Image Sensor with High Spatial-Temporal Resolution

Image Sensors World        Go to the original article...

Nature publishes a paper "CMOS-based bio-image sensor spatially resolves neural activity-dependent proton dynamics in the living brain" by Hiroshi Horiuchi, Masakazu Agetsuma, Junko Ishida, Yusuke Nakamura, Dennis Lawrence Cheung, Shin Nanasaki, Yasuyuki Kimura, Tatsuya Iwata, Kazuhiro Takahashi, Kazuaki Sawada, and Junichi Nabekura from National Institutes of Natural Sciences, Okazaki, The Graduate University for Advanced Studies, Hayama, and Toyohashi University of Technology, Japan.

"Recent studies have shown that protons can function as neurotransmitters in cultured neurons. To further investigate regional and neural activity-dependent proton dynamics in the brain, the development of a device with both wide-area detectability and high spatial-temporal resolution is necessary. Therefore, we develop an image sensor with a high spatial-temporal resolution specifically designed for measuring protons in vivo. Here, we demonstrate that spatially different neural stimulation by visual stimulation induced distinct patterns of proton changes in the visual cortex. This result indicates that our biosensor can detect micrometer- and millisecond-scale changes of protons across a wide area. Our study demonstrates that a CMOS-based proton image sensor with high spatial and temporal precision can be used to detect pH changes associated with biological events. We believe that our sensor may have broad applicability in future biological studies."

Go to the original article...

ON Semi Reports Q4 2019 Results


SeekingAlpha transcript of ON Semi Q4 earnings call updates on the company's CIS business:

"An emerging area of growth for our industrial business is e-commerce. We have built a strong design win pipeline for our CMOS image sensors for warehouse automation and delivery robots. We are engaged with the leading e-commerce retailers on many programs, and we expect strong contribution from this segment of the industrial market.

During the fourth quarter of 2019, we secured design wins for key platforms for ADAS and in-cabin viewing applications. Our design funnel for ADAS continues to expand at a robust pace. As we noted in our previous earnings call, we have won 16 of the 17 two-megapixel and eight-megapixel platforms awarded in 2019 for level-2 and level-3 vehicles.

Our LiDAR and radar products are gaining strong traction, and our design funnel for these products continues to expand at a rapid pace. We believe that we are enabling democratization of LiDAR with a solid-state solution, which is a fraction of the cost of other existing solutions. Our low-cost advantage is enabled by a CMOS-based architecture as opposed to one based on exotic materials. Based on our design win pipeline, we expect to have leading share with the top five global LiDAR module makers."


1.12um Sony Pixel Variations under Electron Microscope


R. Matthews, N. Falkner, and M. Sorell from The University of Adelaide, Australia publish an updated version of their paper "Reverse engineering the Raspberry Pi Camera V2: A study of pixel non-uniformity using a scanning electron microscope."

"In this paper we reverse engineer the Sony IMX219PQ image sensor, otherwise known as the Raspberry Pi Camera v2.0. We provide a visual reference for pixel non-uniformity by analysing variations in transistor length, microlens optic system and in the photodiode. We use these measurements to demonstrate irregularities at the microscopic level and link this to the signal variation measured as pixel non-uniformity used for unique identification of discrete image sensors."
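The pixel non-uniformity the authors link to unique sensor identification is the basis of the well-known PRNU camera fingerprinting technique. As a hedged illustration of that general idea (not the paper's own method), a minimal numpy sketch: estimate a per-pixel gain fingerprint by averaging noise residuals over many frames, then identify a sensor by correlating a test frame's residual against it. A crude 4-neighbour high-pass stands in for the wavelet denoisers used in practice.

```python
import numpy as np

def residual(img):
    """Crude high-pass noise residual: image minus its 4-neighbour mean."""
    denoised = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4
    return img - denoised

def prnu_fingerprint(frames):
    """Estimate the fixed gain pattern by averaging residuals over frames."""
    return np.mean([residual(f) for f in frames], axis=0)

def match_score(img, fingerprint):
    """Normalized correlation between a frame's residual and a fingerprint."""
    r, k = residual(img).ravel(), fingerprint.ravel()
    r, k = r - r.mean(), k - k.mean()
    return float(r @ k / (np.linalg.norm(r) * np.linalg.norm(k) + 1e-12))

# Toy demo: two simulated sensors with distinct per-pixel gain patterns.
rng = np.random.default_rng(0)
gain_a = 1 + 0.02 * rng.standard_normal((64, 64))
gain_b = 1 + 0.02 * rng.standard_normal((64, 64))
shoot = lambda gain: 100 * gain + rng.standard_normal((64, 64))  # flat field + noise
fp_a = prnu_fingerprint([shoot(gain_a) for _ in range(50)])
fp_b = prnu_fingerprint([shoot(gain_b) for _ in range(50)])
test_frame = shoot(gain_a)
same, other = match_score(test_frame, fp_a), match_score(test_frame, fp_b)
print(f"same sensor: {same:.2f}, different sensor: {other:.2f}")
```

The same-sensor score is close to 1 while the cross-sensor score hovers near 0, which is why per-pixel variations like those imaged in the paper can serve as a forensic fingerprint.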


Broadcom Announces SiPM with High NIR PDE for LiDARs


Broadcom Industrial Fiber Products Division (IFPD) announces new silicon photomultiplier (SiPM) devices for automotive and industrial LiDAR applications. Broadcom’s latest NIR SiPM solutions address challenges such as range limitations and multi-target resolution. The underlying NIR SiPM technology is said to deliver unprecedented performance by combining a high photon detection efficiency (PDE) of 18 percent at 905 nm with a recharge time of 10 ns. HDR is achieved with the smallest cell size, while a low dark count rate (DCR), low crosstalk, and low after-pulsing probability make Broadcom’s NIR SiPM a good fit for LiDAR applications.

Broadcom NIR SiPM highlights:
  • PDE at 905 nm: 18%
  • Recharge time constant: < 10 ns
  • Single photon time resolution: 500 ps
  • Smallest cell size
  • DCR: 600 kHz/mm²
  • Direct crosstalk: < 20%
  • Samples available
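For intuition on what the 18% PDE figure above means for a weak LiDAR return, a back-of-envelope sketch (my own illustration, not a Broadcom figure): assuming Poisson photon arrivals with mean mu per pulse, and ignoring crosstalk, afterpulsing, and fill factor, the probability of at least one triggered microcell is 1 - exp(-PDE * mu).

```python
import math

def p_detect(mu_photons, pde=0.18):
    """Probability of at least one detection for a Poisson-mean return."""
    return 1 - math.exp(-pde * mu_photons)

for mu in (1, 5, 10, 25):
    print(f"{mu:>3} photons/pulse -> P(detect) = {p_detect(mu):.2f}")
```

Even at ~10 return photons per pulse the single-shot detection probability exceeds 80%, which is why PDE is the headline specification for long-range LiDAR receivers.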
“Broadcom IFPD successfully released NUV-HD products to the market two years ago. Now that our key SiPM milestones have been reached, we are excited to announce our new cutting-edge NIR solutions for LiDAR applications. The very high PDE, small cell size and low recharge time are critical to enabling mass market adoption of LiDAR. With a long-established history of developing and supplying automotive-grade and industrial fiber optic solutions, Broadcom is well positioned to address the growing demand for high-performing LiDAR sensors for the automotive and industrial markets,” said Martin Weigert, VP and GM of Broadcom IFPD.


1MP 14,350fps Camera


Vision Research-Ametek announces the Phantom VEO 1310 equipped with a new proprietary 1.2MP CMOS sensor with 2.5X higher light sensitivity than previous VEO models. It achieves recording speeds over 10,800 fps at full 1.2MP resolution and higher rates at reduced resolutions. It has a native ISO of 25,000 (monochrome).

Key Specifications of the Phantom VEO 1310:
  • 10,860 fps at 1280 x 960 resolution
  • Increased frame rates at reduced resolution, for example:
    14,350 fps at 1280 x 720
    30,030 fps at 640 x 480 standard or 40,300 fps with binning enabled
  • Max frame rate is 423,350 fps at 320 x 24
  • Native ISO (12232 STD): 25,000 D (mono) and 6,400 D (color)
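The listed speeds are consistent with a roughly constant pixel readout throughput, which is the usual limit in high-speed CMOS cameras; a quick check (my own arithmetic, not a Vision Research specification):

```python
# fps * pixels stays roughly constant across the full-width modes;
# small ROIs fall short of the peak because of per-frame overheads.
modes = {
    (1280, 960): 10_860,
    (1280, 720): 14_350,
    (640, 480): 30_030,
}
for (w, h), fps in modes.items():
    print(f"{w}x{h}: {w * h * fps / 1e9:.1f} Gpix/s")
```

The two full-width modes both land near 13 Gpix/s, confirming the readout-limited scaling, while the 640 x 480 mode is lower as row and frame overheads start to dominate.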

The Phantom VEO 1310 captures an ignition test via schlieren imaging:


Sony Reports Quarterly Results


Sony reports its quarterly results for the quarter ended on Dec. 31, 2019. The company says:
  • FY19 Q3 sales increased 29% year-on-year to 298.0 billion yen primarily due to an improvement in product mix and an increase in unit sales of image sensors for mobile devices.
  • Operating income increased 28.7 billion yen year-on-year to 75.2 billion yen primarily due to the impact of the increase in sales, partially offset by an increase in research and development costs and depreciation expense.
  • We revised upward our FY19 sales forecast 50 billion yen to 1 trillion 90 billion yen and our operating income forecast 30 billion yen to 230 billion yen.
  • Demand for our image sensors in Q4 continues to be strong.
  • Although production capacity is expanding according to plan and we continue to operate at full production capacity utilization, sales are increasing due to strong near-term demand, and that is preventing us from stockpiling strategic inventory as originally planned.
  • In addition, partly due to the introduction of a highly competitive new product this fiscal year, we have been able to maintain our overall margin, all of which has enabled us to operate this business extremely well.
  • There is no change to our view that demand will continue to increase over the mid- to long-term from next fiscal year, but, in regards to next fiscal year in particular, we cannot be too optimistic due to the impact of the spread of infection from the new coronavirus that I mentioned earlier, the competitive environment and various geopolitical risks.
  • We will continue to closely monitor demand trends and the external environment as we manage this business going forward.
  • ToF sensors, which we expect will be the next growth driver after image sensors, have begun to sell well, although their size within the overall business is still small.
  • We expect their adoption, primarily in mobile devices, to increase further from next fiscal year.
  • Taking a longer-term view, as we made a point of showcasing at CES last month, we are taking steps to expand the adoption of Sony’s imaging and sensing technology in the mobility space and in the diverse industrial and factory automation space.
  • We plan to proactively invest even more in technology development to grow this business in the future, such as the hiring of personnel including algorithm and software engineers, and the building of an office in Osaka to serve as a design and development center for image sensors.


Fairchild Imaging Announces sCMOS 3.0 Sensor with 0.5e- Noise


BAE Fairchild Imaging presents a new ultra-low-light HWK4123 sensor featuring BSI sCMOS 3.0 technology. It is said to have "advanced the state-of-the-art in low light imaging by lowering the read noise to 0.5 electrons while improving the broad-spectrum quantum efficiency." The 4/3” 9MP HWK4123 enables night vision and surveillance cameras to image at less than 0.001 Lux (starlight). The HWK4123 is sampling now.

Thanks to RP for the info!


SK Hynix Black Pearl Technology


SK Hynix publishes an article "Evolution of Pixel Technology in CMOS Image Sensor" by Hoon Sang Oh, Fellow of CIS Business at SK Hynix.

"SK hynix started its CIS business in 2007 and has produced 8-inch wafer-based image sensor products for more than a decade. With the advent of IoT and 5G-based ICT world, the image sensor market is growing rapidly today, and SK hynix is fostering the CIS business as the next growth engine beyond its memory business in order to meet this soaring demand. After a couple of years of preparation for the 12-inch fab CIS process, SK hynix launched a 12-inch wafer-based 1.0um product in 2019, which adopted its proprietary Black Pearl pixel technology and received good reviews from customers due to the highly competitive pixel performance. Black Pearl boasts improved noise characteristics, implementing clear images with little noise under low illumination conditions.

SK hynix is now expanding its CIS product portfolio spectrum from mid-end to high-end products based on both performance and price competitiveness. Through its investment in the CIS space in 2020 and beyond, the company aims to quickly become one of the world’s leading CIS companies."

Update: In another PR, SK Hynix reports that it opened a CIS R&D center (the JRC) in Tokyo, Japan, on Sept. 1, 2019:

"By establishing the JRC in the heart of the world’s image sensor innovation, SK hynix aims to secure the excellent CIS talent and to further strengthen its technological prowess through the creation of a global network.

Japan is home to numerous small and large image sensor manufacturers, with Sony being one of the largest. According to IHS Markit, a leading market research institution, Sony now leads the image sensor market with its 51.1% market share achieved in the first quarter of 2019.

The country is also seeing companies from overseas, such as China, bolstering their research and development by establishing research centers in Japan. “The significance of SK hynix opening up the JRC in Japan cannot be understated. The new facility will give the company ultimate access to the diverse CIS resources concentrated in Japan,” said [Shimura Masayuki, the Head of the Hynix JRC.] “This extends to joint research and development with leading Japanese universities, which is expected to contribute to various aspects of our CIS business, including the successful development of a new CIS technology.”"


Oryx Vision LiDAR Post-mortem


Oryx Vision, a LiDAR startup, raised $67M but closed its operations in summer 2019 and returned the remaining $40M to investors. In a YouTube lecture, company co-founder and former VP R&D David Ben-Bassat presents the company's technology and achievements (in Hebrew):


Ex-Awaiba Team Founds Optasensor and Unveils its First Product


Optasensor GmbH, a recently founded German company with expertise in photonics integration and in the design and manufacturing of tailored small-form-factor visualization and sensing solutions, announces the Osiris Micro Camera Module Series: a camera module with a 1mm x 1mm footprint and customized wafer-level optics, including an NIR cut filter, allowing easy vision integration into constrained spaces.


As an independent optics and sensor module design and manufacturing company, Optasensor offers Osiris Micro camera module adaptations using customized and standard CMOS image sensors from worldwide manufacturers. The camera module integrates tailor-made optical elements enabling endoscope-specific solutions, including customized wafer-level or single replicated optics (allowing miniature lensing down to 500um diameter), filters and prisms for light shaping, as well as the possibility of integrating miniature illumination solutions.


The founders pioneered the development and high-volume production of microcameras for the disposable endoscope market worldwide at their former company, Awaiba. OptaSensor GmbH thus draws on more than a decade of knowledge of miniature, fully enclosed camera system modules, with a focus on the medical market.

The company will exhibit the Osiris MCM Series at the upcoming SPIE Photonics West trade show in San Francisco, as well as at the MD&M trade show in Anaheim.


Gigajot Wins NASA SBIR Phase II Project to Develop a Photon-Counting UV/EUV Sensor


PRNewswire: Gigajot has been awarded a NASA Phase II SBIR project to develop a UV/EUV photon number resolving image sensor based on the QIS technology.

The QIS is a platform technology that can be used in a wide range of imaging and photonics applications, from consumer to high-end (e.g. scientific). It provides excellent photon-counting capability, low dark current, high resolution, and high-speed operation. QIS is also compatible with mainstream CMOS image sensor processes. QIS devices can be designed and implemented in different formats (from a few pixels to hundreds of megapixels), different pixel sizes (from sub-micron to more than ten microns), and different spectral sensitivities (UV-VIS-NIR).

In this project, Gigajot will deliver a novel platform photon-counting image sensor technology, QIS, for future NASA mission concept studies as well as other scientific and consumer applications. The outcome of this project will be a large-format, visible-blind CMOS UV/EUV sensor with accurate photon-counting capability. The novel sensor will provide several features and capabilities that are not available in other high-sensitivity detectors; these include accurate photon-number resolving, zero dead time, low voltage and power requirements, high spatial resolution, and room-temperature operation, among others.
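To see why deep sub-electron read noise is the key enabler of photon number resolving, a small simulation (an illustration of the general QIS principle, not of Gigajot's specific design): with Poisson photoelectron statistics and Gaussian read noise, rounding the noisy readout to the nearest integer recovers the exact electron count, and the miscount rate collapses once read noise drops well below ~0.5 e-.

```python
import numpy as np

def count_error_rate(sigma_r, quanta_exposure=2.0, n=200_000, seed=1):
    """Fraction of pixels whose rounded readout misses the true count."""
    rng = np.random.default_rng(seed)
    electrons = rng.poisson(quanta_exposure, n)      # true photoelectrons
    signal = electrons + rng.normal(0, sigma_r, n)   # add Gaussian read noise
    return float(np.mean(np.round(signal) != electrons))

for s in (0.5, 0.3, 0.15):
    print(f"read noise {s:.2f} e- -> miscount rate {count_error_rate(s):.3f}")
```

At 0.5 e- roughly a third of pixels are miscounted, at 0.3 e- about a tenth, and at 0.15 e- miscounts become rare, which is the regime photon-number-resolving sensors target.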

Jiaju Ma, the project's principal investigator and the CTO of Gigajot, noted, "Enabled by our patented innovations, the novel image sensor will combine a linear multi-bit photon-counting response in each detection cycle, zero dead time, low dark current, low operating voltage, the capability of room temperature operation, and strong radiation hardness. When combined with the existing advanced back-surface passivation techniques and band-pass filters, it can produce accurate visible-blind UV/EUV photon-counting with high quantum efficiency."

"Besides scientific applications, there are also sizeable markets for the proposed technology in automotive, medical, industrial, defense, and security applications," said Gigajot's President & CEO, Saleh Masoodian. "A large-format UV/EUV CMOS photon-counting sensor can potentially have numerous applications in these markets. For example, UV imaging is used in dermatology to identify and visualize epidermal dermatologic conditions. With the high spatial resolution and single-photon sensitivity provided by Gigajot's innovative imaging technologies, the details and features of the subcutaneous skin can be more accurately visualized. Reflected-UV imaging is also widely used to detect scratches and digs on optical surfaces. For instance, the semiconductor industry uses UV imaging to perform automated inspection of photomasks. This inspection needs sensors with ultra-high spatial resolution and large format to quickly scan a large area and detect defects of submicron size. Gigajot is looking for partners interested in specific applications for this new imaging technology."


Imec Announces New Hyperspectral Devices


Imec presents a number of new hyperspectral cameras. The SWIR one is based on a new InGaAs sensor:

"Compared to VNIR-range based systems, SWIR-based hyperspectral imaging solutions capture more of the molecular structure of the screened objects (like moisture, lipid, protein contents) and allow for high added value distinctive material analysis. This widens the application potential in domains such as recycling, food sorting and quality grading, security & surveillance solution, etc."


Gpixel Announces PulSar Technology for VUV/EUV, Soft X-Ray and Electron Direct Imaging


Gpixel announces PulSar (PS) technology, extending the range of all GSENSE BSI sensors to detect vacuum UV (VUV) light, extreme UV (EUV) light and soft x-ray photons with QE approaching 100%. In addition, the technology demonstrates a good resistance to radiation damage in soft x-ray detection applications.

Gpixel’s new PulSar (PS) technology eliminates the AR coating on the BSI sensor surface and incorporates a new passivation technique that reduces the thickness of the non-sensitive layer at the sensor surface, reducing dark current and boosting resistance to radiation damage.

“Gpixel is very proud to offer our new technology to address the needs of our scientific customers,” says Wim Wuyts, CCO of Gpixel. “We are excited to have the opportunity to make this technology available across our broad range of large format BSI image sensors, offering our customers the unique combination of VUV/EUV and soft x-ray imaging options using our standard scientific CMOS GSENSE product family. PulSar (PS) variants are electronically compatible with their corresponding standard BSI sensors, allowing customers to quickly and easily accommodate the new sensors in existing designs.”

Prototype samples of the GSENSE2020BSI-PS are available now for evaluation, and GSENSE400BSI-PS samples will be available in mid 2020. Other variants available upon request.


Photo-Gates are Back, but in a New Form


The MDPI paper "Fully Depleted, Trench-Pinned Photo Gate for CMOS Image Sensor Applications" by Francois Roy, Andrej Suler, Thomas Dalleau, Romain Duru, Daniel Benoit, Jihane Arnaud, Yvon Cazaux, Catherine Chaton, Laurent Montes, Panagiota Morfouli, and Guo-Neng Lu from ST Micro, IMEP-LaHC, LETI-CEA, and University Lyon is part of the Special Issue on the 2019 International Image Sensor Workshop (IISW2019). The paper proposes a solution to an important issue associated with deep high-energy implants:

"Tackling issues of implantation-caused defects and contamination, this paper presents a new complementary metal–oxide–semiconductor (CMOS) image sensor (CIS) pixel design concept based on a native epitaxial layer for photon detection, charge storage, and charge transfer to the sensing node. To prove this concept, a backside illumination (BSI), p-type, 2-µm-pitch pixel was designed. It integrates a vertical pinned photo gate (PPG), a buried vertical transfer gate (TG), sidewall capacitive deep trench isolation (CDTI), and backside oxide–nitride–oxide (ONO) stack. The designed pixel was fabricated with variations of key parameters for optimization. Testing results showed the following achievements: 13,000 h+ full-well capacity with no lag for charge transfer, 80% quantum efficiency (QE) at 550-nm wavelength, 5 h+/s dark current at 60 °C, 2 h+ temporal noise floor, and 75 dB dynamic range. In comparison with conventional pixel design, the proposed concept could improve CIS performance."
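The reported figures are mutually self-consistent: dynamic range in dB follows from the full-well capacity and the temporal noise floor, here measured in holes (h+) since this is a p-type pixel. A quick check (my own arithmetic):

```python
import math

full_well = 13_000   # h+ full-well capacity
noise_floor = 2      # h+ rms temporal noise
dr_db = 20 * math.log10(full_well / noise_floor)
print(f"dynamic range = {dr_db:.1f} dB")
```

The result is about 76 dB, in line with the 75 dB quoted in the abstract (the small gap is typical of conservative rounding in reported specs).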


Sony Opens Automotive Design Center in Oslo, Norway


Johannes Solhusvik becomes the Head of the Automotive Design Centre at Sony Europe B.V. In the past, Johannes was BU-CTO of Aptina’s Automotive and Industrial Business Unit and, later, Omnivision's GM of its Europe Design Center located in Norway. Now one can expect a big boost to Sony's automotive image sensor business in Europe.


e2v Celebrates 50th Anniversary of the CCD


Teledyne e2v releases the first of several publications on the 50th anniversary of the invention of the CCD. The publications aim to highlight e2v's long-term commitment to CCD design and fabrication in the UK for space, science, astronomy, and other demanding applications:

"Teledyne e2v’s CCD fabrication facility is critical to the success and quality of future space science missions and remains committed to being the long-term supplier of high specification and quality devices for the world’s major space agencies and scientific instruments producers."


Event-based Detection Dataset for Automotive Applications


Prophesee publishes "A Large Scale Event-based Detection Dataset for Automotive" by Pierre de Tournemire, Davide Nitti, Etienne Perot, Davide Migliore, and Amos Sironi:

"We introduce the first very large detection dataset for event cameras. The dataset is composed of more than 39 hours of automotive recordings acquired with a 304x240 ATIS sensor. It contains open roads and very diverse driving scenarios, ranging from urban, highway, suburbs and countryside scenes, as well as different weather and illumination conditions. Manual bounding box annotations of cars and pedestrians contained in the recordings are also provided at a frequency between 1 and 4Hz, yielding more than 255,000 labels in total. We believe that the availability of a labeled dataset of this size will contribute to major advances in event-based vision tasks such as object detection and classification. We also expect benefits in other tasks such as optical flow, structure from motion and tracking, where for example, the large amount of data can be leveraged by self-supervised learning methods."
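A quick plausibility check on the quoted numbers (my own arithmetic, not from the paper): annotating 39 hours of recordings at 1 to 4 Hz bounds the number of annotated timestamps, and the reported 255,000+ box labels fits comfortably inside that range.

```python
hours = 39
seconds = hours * 3600
low, high = seconds * 1, seconds * 4  # annotation frequency 1-4 Hz
print(f"{low:,} to {high:,} annotated timestamps")
```

With multiple bounding boxes possible per annotated frame, 255,000+ labels corresponds to roughly one box per annotated timestamp at ~2 Hz, a sensible density for open-road scenes.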


LiDAR News: IHS Markit, Blickfeld, Espros, Xenomatix


Autosens publishes the presentation "The race to a low-cost LIDAR system" given by Dexin Chen, Senior Technology Analyst at IHS Markit, in Brussels in Sept. 2019:

Another presentation at Autosens Brussels 2019, "The New Generation of MEMS LiDAR for Automotive Applications" by Timor Knudsen, Blickfeld Head of Embedded Software, explains the technology behind the Blickfeld LiDAR:

Yet another presentation, "A novel CCD LiDAR imager" by Beat De Coi, Founder and CEO of ESPROS Photonics Corporation, argues that there is no value in the ability to detect single photons in LiDAR the way SPADs do:

XenomatiX CEO Filip Geuens presents "Invisible integration of solid-state LIDAR to make beautiful self-driving cars":



ISP News: ARM, Geo, Qualcomm


Autosens publishes the talk "ISP optimization for ML/CV automotive applications" given by Alexis Lluis Gomez, ARM Director of ISP Algorithms & Image Quality, in Brussels in Sept. 2019:

GEO Semi's Bjorn Grubelich, Product Manager for Automotive Camera Solutions, presents the CV challenges of pedestrian detection with a compact automotive camera:

Qualcomm announces the mid-range and low-end mobile SoCs Snapdragon 720G, 662, and 460. Even the low-end 460 chip supports 48MP imaging:


The mid-range 720G supports a 192MP camera:


FLIR Demos Automotive Thermal Cameras for Pedestrian Detection


FLIR publishes a couple of videos about the advantages of thermal cameras for pedestrian detection:


Autosens Detroit 2020 Agenda


Autosens Detroit, to be held in mid-May 2020, publishes its agenda with many image sensor related presentations:

  • The FIR Revolution: How FIR Technology Will Bring Level 3 and Above Autonomous Vehicles to the Mass Market
    Yakov Shaharabani, CEO, Adasky
  • The Future of Driving: Enhancing Safety on the Road with Thermal Sensors
    Tim LeBeau, CCO, Seek Thermal
  • RGB-IR Sensors for In-Cabin Automotive Applications
    Boyd Fowler, CTO, OmniVision
  • Robust Inexpensive Frequency Domain LiDAR using Hamiltonian Coding
    Andreas Velten, Director, Computational Optics Group, University of Wisconsin-Madison
  • The Next Generation of SPAD Arrays for Automotive LiDAR
    Wade Appelman, VP Lidar Technology, ON Semiconductor
  • Progress with P2020 – developing standards for automotive camera systems
    Robin Jenkin, Principal Image Quality Engineer, NVIDIA
  • What’s in Your Stack? Why Lidar Modulation Should Matter to Every Engineer
    Bill Paulus, VP of Manufacturing, Blackmore Sensors and Analytics, Inc
  • Addressing LED flicker
    Brian Deegan, Senior Expert, Valeo Vision Systems
  • The influence of colour filter pattern and its arrangement on resolution and colour reproducibility
    Tsuyoshi Hara, Solution Architect, Sony
  • Highly Efficient Autonomous Driving with MIPI Camera Interfaces
    Hezi Saar, Sr. Staff Product Marketing Manager, Synopsys
  • Tuning image processing pipes (ISP) for automotive use
    Manjunath Somayaji, Director of Imaging R&D, GEOSemiconductor
  • ISP optimization for ML/CV automotive applications
    Alexis Lluis Gomez, Director ISP Algorithms & Image Quality, ARM
  • Computational imaging through occlusions; seeing through fog
    Guy Satat, Researcher, MIT
  • Taming the Data Tsunami
    Barry Behnken, Co-Founder and SVP of Engineering, AEye


Samsung and Oxford Propose Image Sensors Instead of Accelerometers for Activity Recognition in Smartwatches


The Arxiv.org paper "Are Accelerometers for Activity Recognition a Dead-end?" by Catherine Tong, Shyam A. Tailor, and Nicholas D. Lane from Oxford University and Samsung says that image sensors might be better suited for tracking calories in smartwatches and smartbands:

"Accelerometer-based (and by extension other inertial sensors) research for Human Activity Recognition (HAR) is a dead-end. This sensor does not offer enough information for us to progress in the core domain of HAR---to recognize everyday activities from sensor data. Despite continued and prolonged efforts in improving feature engineering and machine learning models, the activities that we can recognize reliably have only expanded slightly and many of the same flaws of early models are still present today.

Instead of relying on acceleration data, we should instead consider modalities with much richer information---a logical choice are images. With the rapid advance in image sensing hardware and modelling techniques, we believe that a widespread adoption of image sensors will open many opportunities for accurate and robust inference across a wide spectrum of human activities.

In this paper, we make the case for imagers in place of accelerometers as the default sensor for human activity recognition. Our review of past works has led to the observation that progress in HAR had stalled, caused by our reliance on accelerometers. We further argue for the suitability of images for activity recognition by illustrating their richness of information and the marked progress in computer vision. Through a feasibility analysis, we find that deploying imagers and CNNs on device poses no substantial burden on modern mobile hardware. Overall, our work highlights the need to move away from accelerometers and calls for further exploration of using imagers for activity recognition."


CIS Production Short of Demand, Prices Rise


IFNews quotes Chinese-language Soochow Securities report that Sony, Samsung, and OmniVision have recently increased their image sensor prices by 10%-20% due to supply shortages.

A Soochow Securities analyst says: "According to our grassroots research, the size of this [CIS] market may even exceed that of memory." (Google translation)

Digitimes also reports that Omnivision "will raise its quotes for various CMOS image sensors (CIS) by 10-40% in 2020 to counter ever-expanding demand for application to handsets, notebooks, smart home devices, automotive and security surveillance systems."


Trieye Automotive SWIR Presentation


Autosens publishes Trieye CTO Uriel Levy's presentation "ShortWave Infrared Breaking the Status Quo" given in Brussels in Sept. 2019:


LiDAR News: Ibeo, Velodyne, Outsight


ArsTechnica: "Ibeo Operations Director Mario Brumm told Ars that Ibeo's next-generation lidar, due out later this year, would feature a 128-by-80 array of VCSELs coupled with a 128-by-80 array of SPADs. Ibeo is pursuing a modular design that will allow the company to use different optics to deliver a range of models with different capabilities—from a long-range lidar with a narrow field of view to a wide-angle lidar with shorter range. Ibeo is aiming to make these lidars cheap enough that they can be sold to automakers for mass production starting in late 2022 or early 2023."

IEEE Spectrum quotes Velodyne's Anand Gopalan talking about the company's new solid-state Velabit LiDAR:

Gopalan wouldn’t say much about how it works, only that the beam-steering did not use tiny mirrors based on micro-electromechanical systems (MEMS). “It differs from MEMS in that there’s no loss of form factor or loss of light,” he said. “In the most general language, it uses a metamaterial activated by low-cost electronics.”

Autosens publishes Outsight President Raul Bravo's presentation given at the Brussels conference in Sept. 2019:


SmartSens Announces SmartClarity H Series


PRNewswire: SmartSens launches six new sensors as part of the SmartClarity H Series family. The SC8238H, SC5238H, SC4210H, SC4238H, SC2210H, and SC2310H sensors have resolutions in the range of 2MP to 8MP and are said to provide a significant improvement over previous-generation products by lowering dark current and reducing white pixels and temperature noise, making them suitable for home surveillance and other video-based applications.

"Overheating and high temperature has been the culprit that creates bottlenecks to excellent CIS image quality, especially in the field of non-stop running security applications. As the leader in Security CIS sensors, we've always set our sights on tackling this issue. As we see the improvements from our high-performing CIS technologies," noted Chris Yiu, CMO of SmartSens, "we're reaching a level comparable to other top-notch industry leaders."

Samples of these six products are now available.

Go to the original article...

Himax Unveils Low Power VGA Sensor

Image Sensors World        Go to the original article...

GlobeNewswire: Himax announces the commercial availability of the HM0360, said to be an industry-first ultra-low-power, low-latency BSI CMOS sensor with autonomous modes of operation for always-on, intelligent visual sensing applications such as human presence detection and tracking, gaze detection, behavioral analysis, and pose estimation, targeting growing markets like smart home, smart building, healthcare, smartphone, and AR/VR devices. Himax is currently working with leading AI framework providers such as Google and with industry partners to develop reference designs that enable low-power hardware and platform options to reduce time to market for intelligent edge vision solutions.

“One of the key challenges to 2-dimensional image sensing for computer vision is the high power consumption and data bandwidth of the sensor and processing,” said Amit Mittra, CTO of Himax Imaging. “The HM0360 addresses this opportunity by delivering a very low power image sensor that achieves excellent image quality with high signal-to-noise ratio and dynamic range, which allows algorithms to operate under challenging lighting conditions, from bright sunlight to moonlight. The VGA resolution can double the range of detection over Himax’s HM01B0 QVGA sensor, especially to support greater than 90-degree wide field of view lens. Additionally, the HM0360 introduces several new features to reduce camera latency, system overhead and power consumption.”

“Smart sensors that can run on batteries or energy harvesting for years will enable a massive number of new applications over the next decade,” said Pete Warden, Technical Lead of TensorFlow Lite for Microcontrollers at Google. “TensorFlow Lite's microcontroller software can supply the brains behind these products but having low-power sensors is essential. Himax’s camera can operate at less than one milliwatt, which allows us to create a complete sensing system that's able to run continuously on battery power alone.”
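As a rough sanity check on the "runs on battery for years" claim, here is a back-of-envelope estimate. The battery figures (two AA cells, roughly 2000mAh at 3V) are illustrative assumptions, not from the announcement; the power numbers are the sensor's quoted 14.5mW full-VGA and sub-500µW binning-mode figures:

```python
# Back-of-envelope battery-life estimate for an always-on sensor.
# Battery capacity/voltage are assumed values, not from the article.

def battery_life_hours(capacity_mah: float, voltage_v: float, avg_power_mw: float) -> float:
    """Hours of operation: stored energy (mWh) divided by average draw (mW)."""
    energy_mwh = capacity_mah * voltage_v  # mAh * V = mWh
    return energy_mwh / avg_power_mw

# Two AA cells in series: ~2000 mAh at ~3 V (assumed)
hours_full_vga = battery_life_hours(2000, 3.0, 14.5)  # VGA @ 60 fps mode
hours_binning = battery_life_hours(2000, 3.0, 0.5)    # binning mode @ 3 fps

print(f"{hours_full_vga / 24:.0f} days at 14.5 mW")       # ~17 days
print(f"{hours_binning / 24 / 365:.1f} years at 0.5 mW")  # ~1.4 years
```

The two orders of magnitude between the modes is the whole point of the autonomous/binning operation: multi-year lifetimes only fall out of the sub-milliwatt figures.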

Unique features of HM0360 include:

  • Several autonomous modes of operation
  • Pre-metering function to ensure exposure quality for every event frame
  • Short sensor initialization time
  • Extremely short wake-up time of less than 2ms
  • Fast context switching and frame configuration update
  • Multi-camera synchronization
  • 150 parallel addressable regions of interest
  • Event sensing modes with programmable interrupts to allow the host processor to be placed in low power sleep until notified by the sensor
  • Operation at up to VGA resolution at 60 frames per second consuming 14.5mW, and less than 500µW in binning-mode readout at 3fps
  • Support for multiple power supply configurations with minimal passive components, enabling a highly compact camera module design for next-generation smart camera devices
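The event-sensing feature above describes a host that sleeps until the sensor raises an interrupt. A minimal host-side sketch of that pattern, with a `threading.Event` standing in for the GPIO interrupt line (all names here are illustrative, not from the HM0360 interface):

```python
# Sketch of the interrupt-driven host pattern: the host blocks (i.e. sleeps
# in low power) until the sensor signals an event, then reads out a frame.
# threading.Event stands in for a real GPIO interrupt line.
import threading

sensor_irq = threading.Event()  # asserted by the sensor on event detection

def sensor_side() -> None:
    """Simulated sensor: autonomously detects an event, then raises the IRQ."""
    sensor_irq.set()  # wake the host

def host_side(timeout_s: float = 1.0) -> str:
    """Host sleeps until notified or until a watchdog timeout expires."""
    woke = sensor_irq.wait(timeout=timeout_s)  # blocking wait == low-power sleep
    sensor_irq.clear()
    return "process frame" if woke else "timeout, stay asleep"

t = threading.Thread(target=sensor_side)
t.start()
result = host_side()
t.join()
print(result)  # -> "process frame"
```

The power win comes from the host processor spending almost all of its time in the blocking wait rather than polling the sensor over the camera interface.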

HM0360 is currently available for sampling and will be ready for mass production in the second quarter of 2020.

Go to the original article...

ADI Demos ToF Applications

Image Sensors World        Go to the original article...

Analog Devices publishes a video presenting its ToF solution inside the PicoZense camera for face recognition:



Another demo talks about in-cabin driver status monitoring:



Yet another demo shows ToF camera-equipped autonomous robot:

Go to the original article...

ON Semi on Automotive Image Sensor Challenges

Image Sensors World        Go to the original article...

AutoSens publishes a video "Overview of the Challenges and Requirements for Automotive Image Sensors" by Geoff Ballew, Senior Director of Marketing, Automotive Sensing Division, ON Semiconductor presented at Autosens Brussels in Sept. 2019:

Go to the original article...

Reverse Engineered: Sony CIS inside Denso Automotive Camera, Tesla Model 3 Triple Camera, Samsung Galaxy Note 10+ ToF Camera

Image Sensors World        Go to the original article...

SystemPlus has published a few interesting reverse engineering reports. "Denso’s Monocular Forward ADAS Camera in the Toyota Alphard" reveals a 1.27MP Sony automotive sensor with improved night vision:

"The monocular camera, manufactured by Denso in Japan, is 25% smaller and uses 30% fewer components and parts than the previous model. Also, for the first time in an automotive camera, we have found a Sony CMOS image sensor. This 1.27M-pixel sensor offers higher near-infrared sensitivity, and the lens housing, designed for low-light use, improves the camera’s night-time identification of other road users and road signs. Regarding processing, the chipset is composed of a Sony image signal processor, a Toshiba image recognition processor, and a Renesas MCU."


The "Triple Forward Camera from Tesla Model 3" report covers a module that "captures front images over up to 250 meters that are used by the Tesla Model 3 Driver Assist Autopilot Control Module Unit. The system integrates a serializer but no computing.

Tesla chose dated but well-known and reliable components with limited bandwidth, speed and dynamic range. This is probably due to the limited computing performance of the downstream Full Self-Driving chipset. This three-camera setup is therefore very far from the robotic autonomous vehicle (AV) technology used on Waymo and GM Cruise vehicles. The level of autonomy Tesla will achieve with this kind of hardware is the trillion-dollar question you will be able to assess with this report.

We assume this system is manufactured in the USA. It is an acquisition module without onboard processing, using three cameras (narrow, main and wide) with three 1280×960 (1.2MP) CMOS image sensors."


The "Samsung Galaxy Note 10+ 3D Time of Flight Depth Sensing Camera Module" report covers "the latest generation of Sony Depthsensing Solutions’ Time-of-Flight (ToF) camera, which is unveiled in this report.

It includes a backside-illuminated (BSI) Time-of-Flight image sensor array featuring 5µm pixels and 323-kilopixel VGA resolution, developed by Sony Depthsensing Solutions. It has one VCSEL for the flood illuminator, coming from a major supplier.

The complete system features a main red/green/blue camera, a telephoto camera, a wide-angle camera module and a 3D Time-of-Flight camera."
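Sony Depthsensing Solutions' sensors use indirect (continuous-wave) ToF, where depth is recovered from the phase shift between the emitted and reflected modulated light. A short illustration of that relationship (the modulation frequency and phase values below are assumed for the example, not taken from the teardown):

```python
# Indirect (continuous-wave) ToF depth recovery: the sensor measures the
# phase shift between emitted and reflected modulated light.
# Modulation frequency here is an illustrative assumption.
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_rad: float, f_mod_hz: float) -> float:
    """Depth = (c / (2 * f_mod)) * (phase / 2*pi); unambiguous up to c / (2 * f_mod)."""
    return (C / (2.0 * f_mod_hz)) * (phase_rad / (2.0 * math.pi))

f_mod = 100e6  # 100 MHz modulation (assumed)
max_range = C / (2 * f_mod)
print(f"max unambiguous range: {max_range:.2f} m")                       # ~1.50 m
print(f"depth at pi/2 phase:   {depth_from_phase(math.pi / 2, f_mod):.3f} m")  # ~0.375 m
```

The trade-off visible in the formula is why such cameras often combine several modulation frequencies: higher frequency gives finer depth resolution but a shorter unambiguous range.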

Go to the original article...
