Imec and Holst Centre Transparent Fingerprint Sensor


Charbax publishes a video interview with Hylke Akkerman (Holst Centre) and Pawel Malinowski (Imec) on the transparent fingerprint sensor that won the 2019 I-Zone Best Prototype Award at SID Display Week:


Basler Announces ToF Camera with Sony Sensor


Basler unveils the Blaze ToF camera based on the Sony DepthSense IMX556PLR sensor:


Under-Display News


IFNews: A Credit Suisse report on the smartphone display market discusses the under-display selfie camera in Oppo phones:

"Oppo also became the first smartphone brand to unveil an engineering sample with under-display selfie camera last week, by putting the front facing camera under the AMOLED display, although we believe its peers such as Xiaomi, Lenovo, Apple, Huawei, etc., are also working on similar solution. This technology allows a real full screen design as there is no hole or notch on the display, and the screen can act as a screen when the front camera is not in use. Nevertheless, the display image quality in the area surrounding the camera seems to be worse than the rest of the display as it requires special treatment and processing. Moreover, the native image quality (resolution, contrast, brightness, etc.) taken by the under-display selfie camera is also not comparable with current front facing camera. Our checks suggest the brands (not just Oppo) are currently working with software/AI companies for post-processing."


The report also talks about the efforts to reduce under-display fingerprint sensor thickness:

"All of the flagship Android smartphones showcased at the MWC Shanghai are equipped with under-display fingerprint sensing, mostly adopting optical sensor with only Samsung using ultrasonic sensor, and none of them is using Face ID-like biometric sensing. We believe under-display fingerprint is becoming the mainstream for Android's high-end smartphones and could further proliferate into mid-end as the overall cost comes down. We estimate overall under-display fingerprint shipment of ~200 mn units in 2019E (60 mn units for ultrasonic and 140 mn units for optical), up from ~30 mn units in 2018, and could further increase to 300 mn units in 2020E, excluding the potential adoption by iPhone.

For the optical under-display fingerprint, our checks suggest the industry is working on (1) thinner stacking for 5G; (2) half-screen sensing for OLED panel; (3) single-point sensing for LCD panel; and (4) full-screen in-cell solution for LCD panel. As mentioned earlier, 5G smartphone will consume more battery power and it will be necessary to reduce the thickness of the under-display fingerprint module for more room to house a bigger battery.

Currently, optical under-display fingerprint sensor module has a thickness of nearly 4 mm, as its structure requires certain distance between the CMOS sensor and the AMOLED display to have the best optical imaging performance. Given the overall thickness of the handset nowadays is around 7.5-9.0 mm, smartphone makers are required to sacrifice the battery capacity to make extra room for the optical under-display fingerprint sensor. The new structure for 5G smartphone that Goodix and Egis are working on will be adopting MEMS Pinhole structure, replacing the current 2P/3P optical lens structure, given the MEMS Pinhole design could achieve total thickness of 0.5-0.8 mm, versus ~4 mm for 2P/3P optical lens. Our checks suggest the supply chain is preparing sampling/qualification of the new structure in 2H19 for mass production in 2020.
"


TechInsights Overviews Smartphone CIS Advances: Pixel Scaling and Scaling Enablers


TechInsights' image sensor analyst Ray Fontaine continues his excellent series of reviews based on his paper for the International Image Sensors Workshop (IISW) 2019. Part 2 talks about pixel scaling:

"At TechInsights, as technology analysts we are often asked to predict: what’s next? So, what about scaling down below 0.8 µm? Of course, 0.7 µm and smaller pixels are being developed mostly in secrecy, including for non-obvious use cases. For now, we will stick with our trend analysis and suggest that if a 0.7 µm generation is going to happen, it may be ready for the back end of 2020 or in 2021."


"The absence of major callouts in 2016 and onward do not correlate to inactivity. The innovation we have been documenting in leading edge parts of recent years could be described as incremental, although it is a subjective assessment. In summary, it is our belief that development of DTI and associated passivation schemes was the main contributor to delayed pixel introduction of 1.12 µm down to 0.9 µm pixels."


SeeDevice Startup Proposes "Quantum PAT-PD Pixel"


The Buena Park, CA-based startup SeeDevice Inc. proposes:

"Photon Assisted Tunneling Photodetector (PAT-PD) Technology, is new photodetector technology redefining what's possible with standard silicon CMOS image sensor without compromise to performance and efficiency. An innovative pixel array system formed by new structures and design mechanisms of silicon, SeeDevice's proprietary image sensor uses Quantum Tunneling resulting in high sensitivity, quantum efficiency, low SNR, and wide spectral response."


"The PAT-PD sensor is designed incorporating principles of quantum mechanics and nanotechnology to produce groundbreaking improvements in dynamic range, sensitivity, and low light capabilities without compromising size and efficiency. Standard sensors require conceding either the cost efficiency of CMOS and the better specifications of CCD sensors. This compromise is eliminated by the groundbreaking technology used in the SeeDevice image sensors and photodetectors.

PAT-PD completely redefines the physical principles used for sensors by using photon-activated current flow. SeeDevice owns 50 patents worldwide which enable us to produce industry-disrupting specifications by using photons as a trigger mechanism to enable current flow. The technology has a wide spectrum of applications and can be easily integrated since the entire device is built on a CMOS process.

PAT-PD enables device development with no compromise on technical specification. One device can have high resolution, high frame rate, high sensitivity, and a wide dynamic range without modifications.
"


I'm told that the registered agent of SeeDevice Inc., Hoon Kim, has the same name as the CTO of the infamous Planet82 company. Does anybody know if this is the same person?

Thanks to RA for the link!


Goodix Sues Egis over Under-Display Fingerprint Patent Infringement


Digitimes: China-based Goodix sues Taiwan's Egis Technology over infringement of its under-display optical fingerprint sensor patents. Goodix sensors are used in many smartphones manufactured in China, while Egis sensors are used mostly in Samsung smartphones. The lawsuit was filed in the Beijing IP court. Goodix demands CNY50.5M (US$7.35M) in compensation from Egis.

"With five years of arduous, tireless and indigenous innovation, a dedicated R&D team of 400+ overcame great difficulties to bring to the world the innovative optical IN-DISPLAY FINGERRPINT SENSOR, which has been leading a technological trend in the global mobile industry since its debut in early 2017. As of today, the innovative technology has been adopted by 52 smartphone models offered by mainstream brands, benefiting hundreds of millions of worldwide consumers, and is recognized as the most popular biometric solution in the bezel-less era.

The precious achievement is a result of enormous investment and persistence – Goodix invests at least 10% of its revenue into research and development each year. In 2018, the number has reached 22.5%, with a compound growth rate of 80% in the past five years. As of June, 2019, Goodix had submitted over 3,300 patent filings and accumulated over 480 issued patents, among which, over 760 filings and 50 issued patents are parts of the optical IN-DISPLAY FINGERPRINT SENSOR technology.

The success of Goodix’s optical IN-DISPLAY FINGERPRINT SENSOR embodies the hard work of all employees of Goodix; yet the team’s painstaking effort was stolen by a competitor. IP theft is a disrespectful act towards enterprises that are dedicated to innovations. It is also a vandalism of the market order. Out of the responsibilities and accountabilities to the employees, customers, consumers, as well as the entire industry, Goodix will defend its legitimate rights and interests by the justice of law.

Together with industry partners and peers, Goodix Technology is looking forward to establishing a healthy and sustainable industry environment that respects innovations and intellectual property rights.
"

Last year, Goodix was involved in a couple of lawsuits on capacitive fingerprint sensors. Goodix sued its Chinese competitor Silead, while Goodix itself was sued by Sweden-based FPC. This year, optical fingerprint sensors are becoming a field of legal battles.


ADI Presents ToF Development Kit


Analog Devices presents its VGA ToF camera kit developed in cooperation with Arrow. The kit uses a Panasonic CCD as the ToF sensor:




SPAD LiDAR from Chinese Academy of Sciences


Acta Photonica Sinica publishes the paper "A 16×1 Pixels 180nm CMOS SPAD-based TOF Image Sensor for LiDAR Applications" by CAO Jing, ZHANG Zhao, QINan, LIU Li-yuan, and WU Nan-jian.

"The sensor integrates 16 structure-optimized single photon avalanche diode pixels and a dual-counter-based 13-bit time-to-digital converter. Each pixel unit has a novel active quench and recharge circuit. The dark noise of single photon avalanche diode is reduced by optimizing the guard ring of the device. The active quench and recharge circuit with a feedback loop is proposed to reduce the dead time. A dual-counter-based time-to-digital converter is designed to prevent counting errors caused by the metastability of the counter in the time-to-digital converter. The sensor is fabricated in 180 nm CMOS standard technology. The measurement results show the median dark count rate of the single photon avalanche diode is 8 kHz at 1 V excess voltage, the highest photon detection efficiency is 18% at 550 nm light wavelength. The novel active quench circuit effectively reduces the dead time down to 8 ns. The time-to-digital converter with 416 ps resolution makes the system achieve the centimeter-accuracy detection. A 320×160 depth image is captured at a distance of 0.5 m. The maximum depth measurement nonlinear error is 1.9% and the worst-case precision is 3.8%."


TrinamiX Presents 3D FaceID Module


LinkedIn: TrinamiX unveils its compact FaceID solution for smartphones: "Protecting your data is nowadays more important than ever. #trinamiX3Dimaging allows you to protect your confidential infos by unlocking your mobile device only by facial recognition.

The system does not only provide 2D and 3D information, but also a material classification which adds another authentication layer: skin recognition is introduced as a further protective barrier. The #3DImager thus enhances the #safety of mobile devices.

See in the picture below the 3D Imaging system for mobile applications.
"


TrinamiX also demos its fiber-based distance measuring system for industrial applications:


Samsung Event-Driven Sensors


Hyunsurk Eric Ryu from Samsung presents the company's progress with event-driven sensors:


Melexis ToF Sensor Detailed Datasheet


Melexis publishes quite a detailed datasheet of its MLX75024 QVGA ToF sensor based on the Sony-SoftKinetic pixel. Such a detailed spec is quite a rarity in the world of ToF imaging:


Passive Image Recognition


University of Wisconsin-Madison publishes an OSA Photonics Research paper "Nanophotonic media for artificial neural inference" by Erfan Khoram, Ang Chen, Dianjing Liu, Lei Ying, Qiqi Wang, Ming Yuan, and Zongfu Yu that proposes a glass performing NN tasks:

"Now, artificial intelligence gobbles up substantial computational resources (and battery life) every time you glance at your phone to unlock it with face ID. In the future, one piece of glass could recognize your face without using any power at all.

“This is completely different from the typical route to machine vision,” says [Zongfu] Yu.

He envisions pieces of glass that look like translucent squares. Tiny strategically placed bubbles and impurities embedded within the glass would bend light in specific ways to differentiate among different images. That’s the artificial intelligence in action.

For their proof of concept, the engineers devised a method to make glass pieces that identified handwritten numbers. Light emanating from an image of a number enters at one end of the glass, and then focuses to one of nine specific spots on the other side, each corresponding to individual digits.

The glass was dynamic enough to detect, in real-time, when a handwritten 3 was altered to become an 8.

Designing the glass to recognize numbers was similar to a machine-learning training process, except that the engineers “trained” an analog material instead of digital codes. Specifically, the engineers placed air bubbles of different sizes and shapes as well as small pieces of light-absorbing materials like graphene at specific locations inside the glass.

“We could potentially use the glass as a biometric lock, tuned to recognize only one person’s face,” says Yu. “Once built, it would last forever without needing power or internet, meaning it could keep something safe for you even after thousands of years.”
"


My only concern is that the high optical power needed for non-linear operations in "NN glass" might burn the person's face. However, if recognition is achieved in nanosecond time with a short laser pulse, this might not be an issue:



Luminar Raises $100M More


BusinessWire, Venturebeat, Techcrunch, Wired: The 1550nm mechanical galvo-mirror scanning LiDAR startup Luminar announces a $100M funding round that brings its total raised capital to $250M and its valuation to $900M. In addition to existing investors, G2VP, Crescent Cove Advisors, Octave Ventures, Moore Strategic Ventures, the Westly Group, 1517 Fund, Peter Thiel’s investment group, GoPro founder Nick Woodman, and strategic backers Corning Inc., Cornes, and Volvo Cars Tech Fund have joined the round. Cornes will support Luminar’s expansion into Asia, while Corning will co-develop auto-grade and lidar-friendly Gorilla Glass windows and other components.

Luminar announces its new fully automotive-qualified platform called Iris. It will be offered in two versions when it becomes available later this year. The higher-end one costs $1,000 in production quantities and enables hands-free “freeway autonomy,” while the cheaper ADAS version, costing under $500, will drive functions like emergency braking and steering.

The company says that it’s currently quoting and in the process of arranging multi-year contracts worth more than $1.5b combined.

The Iris platform is said to be capable of seeing objects at a 250m distance while consuming 15W of power.



SQUAD 2019 – Advanced School on Quantum Detectors


FBK and the University of Trento are organizing a school for PhD students and young researchers on Quantum detectors. Among the speakers, there will be many experts in single-photon imaging.

SQUAD 2019, the Advanced School on Quantum Detectors, is to be held in Fondazione Bruno Kessler at its Science and Technology Hub in Povo, on the suburban hills of Trento, Italy, on September 18-20, 2019.

The preliminary program is quite impressive:
  • Single photons in quantum mechanics: more than clicks on detectors
    Prof. André Stefanov, University of Bern (Switzerland)
  • Waveguide integrated superconducting single photon detectors
    Prof. Wolfram Pernice, University of Münster (Germany)
  • Fundamentals of single-photon avalanche diodes
    Prof. Angelo Gulinatti, Politecnico di Milano (Italy)
  • Silicon photo-multipliers
    Dr. Fabio Acerbi, Fondazione Bruno Kessler (Italy)
  • CMOS SPADs for single photon imaging [title TBC]
    Dr. Sara Pellegrini, STMicroelectronics (United Kingdom)
  • Vacuum photodetectors [title TBC]
    Dr. Serge Duarte Pinto, Photonis (The Netherlands)
  • Quantum imaging using Timepix3-based optical cameras [title TBC]
    Dr. Andrei Nomerotski, Brookhaven National Laboratory (U.S.A.)
  • Cryo-CMOS for quantum applications
    Prof. Edoardo Charbon, Ecole Polytechnique Fédérale de Lausanne (Switzerland)
  • Imaging technologies for quantum applications
    Dr. Colin Coates, Andor (United Kingdom)
  • Past and future uses of single-photon detectors [title TBC]
    Dr. Gianluca Boso, ID Quantique SA (Switzerland)
  • Can science benefit from advances in consumer electronics? [title TBC]
    Dr. Robert Kappel, ams (Switzerland)
  • Applications in computational and quantum imaging using SPAD/emCCD sensors
    Prof. Daniele Faccio, University of Glasgow (United Kingdom)
  • Validation of échelle-based quantum-classical discriminator with novelty SPAD array sensor
    Dr. Dmitri Boiko, CSEM (Switzerland)


ON Semi HDR with LFM Promotional Video


ON Semi publishes a video promoting its HDR with LFM (LED flicker mitigation) automotive image sensors:


Daguerreotypes as Early Plasmonic Imagers


Proceedings of the National Academy of Sciences publishes a commentary "Plasmonics sheds light on the nanotechnology of daguerreotypes" by Naomi J. Halas on the open-access paper "Nineteenth-century nanotechnology: The plasmonic properties of daguerreotypes" by Andrea E. Schlather, Paul Gieri, Mike Robinson, Silvia A. Centeno, and Alejandro Manjavacas. The commentary discusses a newly found explanation of daguerreotype imaging:

"...before plasmonic nanostructures became a science, they were an art. The invention of the daguerreotype was publicly announced in 1839 and is recognized as the earliest photographic technology that successfully captured an image from a camera, with resolution and clarity that remain impressive even by today’s standards. Here, using a unique combination of daguerreotype artistry and expertise, experimental nanoscale surface analysis, and electromagnetic simulations, we perform a comprehensive analysis of the plasmonic properties of these early photographs, which can be recognized as an example of plasmonic color printing."


UTAC Automotive Sensor Packages


UTAC paper "CMOS Image Sensor Packaging Technology for Automotive Applications" by Teoh Eng Kang, Alastair Attard, and Jonathan Abela says "Whereas high reliability image sensor packages are typically based on ceramic packages, these tend to have considerably higher costs and longer development cycles than laminate-based packages which are normally used in other market segments. In this paper, we present novel methods for packaging image sensors on laminate substrates, enabling a reduction in cost, form factor and time-to-market whilst simultaneously meeting automotive reliability grades typically required for such devices."


TechInsights Overviews Smartphone CIS Advances: Chip-stacking and Chip-to-chip Interconnect


TechInsights starts publishing a series of blog posts "The State-of-the-Art of Smartphone Imagers" based on Ray Fontaine's presentation at IISW 2019 at the end of June. The first part talks about chip-stacking and chip-to-chip interconnect:

"A brief history of stacked smartphone imagers from three leaders is illustrated as follows. Sony launched its first stacked chips with dual TSVs and evolved to a single TSV structure. Its first generation 6 µm pitch Cu-Cu hybrid bonding is still in wide use however we’ve just documented in 2019 an evolution to 3.1 µm pitch Cu-Cu hybrid bonding in its 0.8 µm pixel generation sensor. To our knowledge this is the world record for imager Cu-Cu hybrid bonding pitch. OmniVision and foundry partners have produced butted TSV, single TSV and Cu-Cu direct hybrid bond interconnects. To our knowledge, TSMC holds the world record for imager single TSV pitch at 4.0 µm. W-filled TSVs are the preferred interconnect choice for Samsung stacked imagers and we’ve documented 5.0 µm TSVs in its stacked imagers."


Digitimes: 3D Sensing Market Surge Expected


Digitimes article "Taiwan chipmakers gearing up for 3D sensor market boom" quotes foundries Win Semiconductors and Advanced Wireless Semiconductor (AWSC), epitaxial wafer supplier Visual Photonics Epitaxy (VPEC), Himax, and backend houses ChipMOS Technologies, Xintec, and ShunSin Technology expecting an imminent rise in 3D sensing orders for smartphones and ADAS.

Win Semi sees orders for ToF components ramping up from Apple suppliers and also non-Apple customers. VPEC provides epi-wafers for Samsung ToF VCSELs as well as being a major epi-wafer source for Win Semi. AWSC sees increased orders from ams. ChipMOS gets 3D sensing orders from Himax, and ShunSin gets 3D orders from its parent company Foxconn.


Valeo Reports $564M in Design Wins for its LiDAR


Reuters: Valeo has won 500M euros ($564M) worth of orders for automotive LiDARs, Valeo CEO Marc Vrecko said in an interview, highlighting the growth potential of LiDAR. “Those 500 million euros of orders with four major global auto groups will probably eventually represent between 1 to 1.5 billion worth of recurring business,” said Vrecko. The Valeo SCALA LiDAR is a re-branded version of a German Ibeo product.

More than $1b in corporate and private investment has gone into some 50 LiDAR startups over the past three years, including a record $420M in 2018, according to a Reuters analysis of publicly available investment data in March.


SiOnyx Announces a Cheaper Version of its Black Silicon Sensor Camera


BusinessWire: SiOnyx launches a cheaper version of its Aurora color night vision camera: "the new SiOnyx Aurora Sport HD action video camera for an introductory price of just $399. Unveiled at ICAST, the Sport uses SiOnyx’s proprietary Ultra Low Light imaging to turn night into full-color daylight. This imaging is the same semiconductor technology that earned the company a $20 million contract with the US Army."

“As we continue to advance our black silicon technology for the law enforcement and defense industries, we are thrilled to expand our product offerings and bring that same expertise to the recreational market,” said Stephen Saylor, CEO of SiOnyx.



High Speed Camera vs Event-Based Camera Comparison


Arxiv.org paper "High Speed and High Dynamic Range Video with an Event Camera" by Henri Rebecq, René Ranftl, Vladlen Koltun, and Davide Scaramuzza from ETH Zurich and Intel now gets a companion video:

"Event cameras are novel sensors that report brightness changes in the form of a stream of asynchronous "events" instead of intensity frames. They offer significant advantages with respect to conventional cameras: high temporal resolution, high dynamic range, and no motion blur. While the stream of events encodes in principle the complete visual signal, the reconstruction of an intensity image from a stream of events is an ill-posed problem in practice. Existing reconstruction approaches are based on hand-crafted priors and strong assumptions about the imaging process as well as the statistics of natural images. In this work we propose to learn to reconstruct intensity images from event streams directly from data instead of relying on any hand-crafted priors. We propose a novel recurrent network to reconstruct videos from a stream of events, and train it on a large amount of simulated event data. During training we propose to use a perceptual loss to encourage reconstructions to follow natural image statistics. We further extend our approach to synthesize color images from color event streams. Our network surpasses state-of-the-art reconstruction methods by a large margin in terms of image quality (> 20%), while comfortably running in real-time. We show that the network is able to synthesize high framerate videos (> 5,000 frames per second) of high-speed phenomena (e.g. a bullet hitting an object) and is able to provide high dynamic range reconstructions in challenging lighting conditions. We also demonstrate the effectiveness of our reconstructions as an intermediate representation for event data. We show that off-the-shelf computer vision algorithms can be applied to our reconstructions for tasks such as object classification and visual-inertial odometry and that this strategy consistently outperforms algorithms that were specifically designed for event data."


Sony Proposes Better ID with Multispectral Imaging


Sony paper "Skin-Based Identification From Multispectral Image Data Using CNNs" by Takeshi Uemori, Atsushi Ito, Yusuke Moriuchi, Alexander Gatto, and Jun Murayama, presented at CVPR 2019 in Long Beach, CA, in June 2019, proposes an improvement of user ID with a multispectral camera:

"User identification from hand images only is still a challenging task. In this paper, we propose a new biometric identification system based solely on a skin patch from a multispectral image. The system is utilizing a novel modified 3D CNN architecture which is taking advantage of multispectral data. We demonstrate the application of our system for the example of human identification from multispectral images of hands. To the best of our knowledge, this paper is the first to describe a pose-invariant and robust to overlapping real-time human identification system using hands. Additionally, we provide a framework to optimize the required spectral bands for the given spatial resolution limitations."


Automotive News: ON Semi, BWV, Toshiba, ADI


Mynavi: ON Semi has held a press event in Japan presenting its recent progress in SPAD-based LiDAR and HDR imaging. The LiDAR is said to reach a 3m distance in flash mode or 100m with 1D MEMS scanning (with no word on ambient light or target reflectivity):


The HDR sensor AR0233 with LED flicker mitigation has entered mass production:


ON Semi says that its automotive sensors have the industry's best manufacturing quality. The company's goal is to reach a 1 ppb (part per billion) defect rate. The defect rate of 55 ppb in 2015 has been reduced to around 30 ppb in 2018. The company has shipped 110b sensors since 2010:
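As a side note on what these parts-per-billion numbers mean in absolute terms (simple arithmetic, assuming the quoted rates apply uniformly to shipped volume):

  # Convert a parts-per-billion defect rate into "one defective part per N shipped".
  def units_per_defect(rate_ppb):
      return 1e9 / rate_ppb

  for label, ppb in [("2015 (55 ppb)", 55), ("2018 (~30 ppb)", 30), ("1 ppb goal", 1)]:
      print(f"{label}: one defect per ~{units_per_defect(ppb):,.0f} sensors")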


PRNewswire: Israeli gated imaging startup BrightWay Vision (BWV) raises $25M in round B. Koito has joined this round as a strategic investor.

BrightWay Vision’s CEO & Co-Founder Ofer David, said: "We are extremely pleased with the investment of Koito and Magenta and the value they bring. The cooperation and investment of KOITO, a leading global manufacturer of automotive equipment, demonstrates the solid relationship and the trust in our solution and paves the way for market penetration within a short time. The funding will be used for commercialization of our unique technology and enable us to expand research and development activities."


BusinessWire: Toshiba reports that its Visconti 4 vision processor is a part of Toyota's new ADAS system that recorded industry-leading scores in the 2018 Japan New Car Assessment Program (JNCAP), the government program that assesses the road safety of new vehicles. The Toyota Alphard/Vellfire was declared the winner of the Grand Prix Award for preventive safety performance, and the Toyota Crown and Corolla Sport were both evaluated as ASV +++, the highest level for advanced safety vehicles. Toshiba supplies Visconti 4 to the Toyota vehicles as an integral part of the DENSO Front-Camera-Based Active Safety System.

It's not clear whether Toyota keeps using Mobileye processors or completely switches over to Toshiba.


Analog Devices publishes its view on the future of the car, "Driver Assistance to Driver Replacement: The Cognitive Vehicle Is Built Upon Foundational, High Integrity Sensor Data," showing a large number of different sensors, primarily those manufactured by the company:


Sony Semi Acquires Midokura Startup


Sony Semiconductor Solutions has acquired the Swiss-Spanish network virtualization startup Mido Holdings by signing a share transfer agreement with its shareholders and completing the transfer of shares. Consequently, Mido Holdings has become a wholly-owned subsidiary of Sony Semiconductor Solutions.

Virtualization technologies include virtual network technology, which uses software to reconfigure networks without altering the physical infrastructure of communication devices such as routers and switches, and technologies that virtually integrate and control multiple servers and storage devices. These technologies are used to build cloud computing systems and other such highly flexible systems.

The acquisition of Mido Holdings is said to enable Sony Semiconductor to utilize virtualization technologies together with its image sensor technology, making it possible to configure a virtual environment that integrates multiple edge devices equipped with image sensors. This, in turn, will allow the company to provide a new edge computing environment that can be linked with cloud systems.

This acquisition will have only a minor impact on Sony Corporation's consolidated financial results for the fiscal year ending March 31, 2020.


2nd International Workshop on Event-based Vision and Smart Cameras


The Second International Workshop on Event-based Vision and Smart Cameras, held in Long Beach, CA, on June 17, 2019, publishes its video presentations.

Prophesee's video talks about the potential advantages of event-based cameras in machine learning:



CelePixel presents its approach to event-based camera image processing:



ETH Zurich talks about event-based camera challenges and opportunities:



iniVation talks about SW, HW and applications of event cameras:



Insightness presents event-based cameras for AR applications:



Manchester University presents its vision sensor with pixel-parallel SIMD processor array:


Yole CIS Market Tracking Predicts Slowdown After 2024


Yole Développement starts a quarterly CIS market monitor service, throwing in a lot of interesting data:

"CIS is an analogic variant of the CMOS process commonly used for memory and logic circuits, and has become a key segment in semiconductor, reaching $15.5B in 2018 and exceeding 3% of total semiconductor sales.

In 2019, the overall attachment rate for CIS cameras per phone is moving towards in average of 2.5 units per phone, and the growth rate for CIS attachment will rise from 6.5% to 7.8% from 2019 – 2021. Amidst stagnant smartphone volume, CIS attachment rate is a central, successful strategy for main smartphone OEMs like Apple, Huawei, and Samsung. As camera quantity and die size increase per end-device, a 10.1% year-on-year growth rate is expected for 2019.

Alongside mobile, which is the main application market (representing 70% of all CIS sales), security and automotive are experiencing double-digit growth and have grown into billion-dollar CIS segments. Again, the attachment rate per endsystem is the key metric to monitor.

2019 looks slightly different than 2018. With a low Q1, the CIS market faces a slowly eroding ASP since most players can now match Sony’s proposition. Nevertheless, the market remains constrained in terms of capacity, with capex the main limiting factor since customers always want more CIS cameras. The outlook though remains very positive – in the range of 10% YoY in 2019 and 8% over the long-term – CIS is heading for $24B in 2024.

New increases in resolution (16Mp and beyond 20Mp) are linked to new progress in pixel size, but momentum is slow in the 0.8um pixel size range. The consequence is increased die size and silicon. CIS wafer volumes are approaching 250kWpm and will climb to 350kWpm before 2024, necessitating more CIS manufacturing lines to be created or converted from regular CMOS lines.
"


ON Semi on Image Sensor Cleaning


ON Semi application note "Image Sensor Handling and Best Practices" describes best practices of image sensor cover glass cleaning:

Do not touch the cover glass with fingers or anything other than a cleaning paper as required in this section. Finger grease can etch optical coatings and cause permanent damage. Gloves should be static-dissipative, powder-free nitrile gloves.

Materials:
  • Clean compressed nitrogen
  • Ultra-clean DI water (4-6 MΩ·cm filtered deionized water)
  • High-grade IPA (solvent grade/100% pure lab purity grade)
  • ESD-protective wipe:
    ♦ For CCD sensors: Berkshire DurX 670
    ♦ For CMOS sensors: Puritech S1091PRT or RTMKC002
  • ESD-protective gloves, for example nitrile gloves: Ansell 93-401/402 or NiProTect CC529

Method A: Blow Off
This method is applicable for loose particle contamination. This is the only method that guarantees no residues such as drying spots.
  • Remove particles from the glass by blowing with an ionized-N2 gun.
  • Do not blow towards the other parts. If you work under a flow box, try to blow out of the box.

Method B: High-Grade IPA Clean or Ultra−Clean DI Water
  • Apply the cleaning solvent using a separate lab-ware quality polypropylene squeeze bottle (Nalgene trade name), not the original bottle.
  • Use a lint-free wipe in one direction, with even pressure across the glass surface.
  • Never wipe the cover glass with a dry cloth. The cleaning solvent should be applied directly to the cleaning wipe and never directly on the cover glass.
  • The ESD protective wipe should not be saturated, only dampened with the cleaning agent.
  • After each wipe, either start with a fresh wipe or fold the wipe to provide a fresh surface for glass cleaning

Note: High grade IPA or ultra-clean DI water are acceptable for cleaning both plain glass and AR coated glass.

Note: Methods A and B are acceptable for cleaning the image sensor cover glass, with the following exception: DI water or IPA is not recommended for cleaning the CCD image sensor cover glass; instead, 100% ethanol is required as the cleaning agent for CCD cover glass.

Caution on Cleaning Agents:
  • Use high-grade IPA only to clean the image sensor lid glass. Other solvents can contaminate the glass, attack the resin and sealant, and degrade reliability of the package.
  • Do not use acetone because it attacks the resin that glues the cover glass to the package.
  • Do not use methanol due to its toxicity and low quality cleaning properties.
  • Do not use sodium hydroxide (NaOH) because it degrades the AR coating on the glass.
  • Do not use highly alkaline (pH > 8) cleaning chemistries.
  • Do not use any solvents commonly used in paint strippers: toluene, benzene, methyl-ethyl ketones, ester solvents, acetone or methyl chloride, freons, terpenes, anionic surfactants, and multi-hydroxyl ethers.

If the surface is not clean, repeat these procedures. If the contaminant is not removed in two or three wipes, it is possible that the cover glass is permanently damaged. Inspect the device under an optical microscope for permanent damage.


Harvest Imaging Forum 2019


Albert Theuwissen announces the 2019 Harvest Imaging Forum agenda:

After the Harvest Imaging forums during the last 6 years, a seventh one will be organized in December 2019, in Delft, the Netherlands.

The 2019 Harvest Imaging forum will deal with two subjects (both in the field of smart cameras), presented by two speakers. Both speakers are world-level experts in their own fields.

"On-Chip Feature Extraction for Range-Finding and Recognition Applications"
Makoto IKEDA (University of Tokyo, Japan)



"Direct ToF 3D Imaging : from the Basics to the System"
Matteo PERENZONI (FBK, Trento, Italy)



ams and SmartSens Partner on 3D and NIR Sensors


BusinessWire: ams has signed a formal letter of intent to collaborate with SmartSens in the field of image sensors. This collaboration complements ams’ strategic approach to further broaden its portfolio for all three 3D technologies: Active Stereo Vision (ASV), Time-of-Flight (ToF), and Structured Light (SL). To quickly meet an expected increasing demand for 3D sensing solutions in mobile devices, the partnership’s initial focus will be on 3D NIR sensors for facial recognition and applications requiring a high QE in the NIR (2D and 3D).

To speed up the time to market for customers, the companies will collaborate on the development of a 3D ASV reference design to support the planned future launch of a 1.3MP Stacked BSI Global Shutter Image Sensor with state of the art QE up to 40% at 940nm. This NIR sensor is a perfect addition to ams’ 3D illumination offerings, extending ams’ 3D portfolio and optimizing overall system performance. The reference design will enable high performance depth maps for payment, face recognition and AR/VR applications at a highly competitive total system cost.

“This collaboration with SmartSens in Image Sensors brings customers the benefit of a faster time to market for 3D Active Stereo Vision and Structured Light applications in mobile phones and other devices including IoT applications, based on ams’ industry leading 3D technology and core IP on Voltage-Domain Global Shutter. The collaboration will also help accelerate time to market for exciting new automotive applications such as in-cabin 2D and 3D sensing,” said Stéphane Curral, EVP for the Division Image Sensor Solutions at ams.

“We are pleased to partner with ams to combine our expertise in Image Sensors and NIR technology with ams’ 3D expertise and core Image Sensing IP. We believe this combination of robust technology and channel to market will provide an optimal solution to meet customer demand,” said Chris Yiu, CMO, SmartSens Technology.

