OmniVision Unveils Two Smartphone Sensors

Image Sensors World        Go to the original article...

PRNewswire: OmniVision announces two new sensors for smartphone cameras:
  • OV13B: 1/3-inch 13MP sensor for mainstream and entry-level smartphone cameras
  • OV16A: a cost-effective 16MP upgrade for rear- and front-facing cameras on mainstream smartphones with thin bezels

The OV13B features 1.12um PureCel Plus pixel technology and is designed for the high-volume mainstream and entry-level smartphone markets, providing a compelling solution for rear- or front-facing cameras.

“The market for 1/3-inch optical format 13-MP image sensors has grown significantly during the last couple of years due to their optimized size, performance and cost effectiveness; we anticipate that this strong demand will continue for both rear- and front-facing cameras, in mainstream and entry-level smartphone applications,” said James Liu, product marketing manager for OmniVision. “The OV13B builds on the success of our widely deployed 13-MP sensor family with the industry’s best 1.12-micron pixel performance. This sensor will not only fulfill the tremendous demand in the mainstream market, but can also boost the performance of entry-level smartphones. It is also a perfect fit for both wide-angle and telephoto cameras in multi-camera configurations.”

In comparison with its predecessors, the OV13B has significantly lower power consumption and a smaller chip size, enabling an 8.5 x 8.5-mm autofocus module for main cameras, or a 6.4 x 7.2-mm fixed-focus module for front-facing cameras with a Z height below 4 mm. This image sensor is sampling now.

The OV16A is built on OmniVision’s PureCel Plus 1.0um pixel architecture. With the OV16A, manufacturers can add a third camera for high-quality, ultra-wide-angle photos in high-end smartphones. Additionally, the OV16A extends battery life with the industry’s lowest power consumption—10% lower than the nearest competitor’s 16 MP 1.0um sensor.

Thin-bezel phones are gaining in popularity because of increased demand for full-display selfie screens. However, these space-constrained thin-bezel designs require compact front-facing cameras. The OV16A allows designers to incorporate just such a camera in the bezel, with 2.0-micron-equivalent pixel performance. The compact OV16A enables the industry’s smallest fixed-focus camera modules, with dimensions down to 6.5 x 6 mm.

“Designers want to achieve the best balance among cost, image quality and low power consumption,” said Jason Chiang, product marketing manager at OmniVision. “The cost-effective OV16A allows them to provide excellent performance in a high-resolution camera for various smartphone applications, including tri-camera designs. The 4-cell color filter allows users to consistently capture high-quality photos without motion blur, even in low-light conditions indoors.”

The OV16A will begin sampling in February 2019.

OmniVision Announces Power- and Cost-Effective GS Sensor for Automotive Applications

PRNewswire: OmniVision's OV9284 1MP sensor is aimed at driver and passenger monitoring in mainstream vehicles and is said to have a number of "best in the industry" and "first in the industry" features:

  • the industry’s best NIR QE in a driver-monitoring image sensor, with 12% at 940nm
  • consumes only 90mW of power at 60 fps, which is 30% lower than the nearest competitor
  • the industry’s first image sensor with "the right balance of cost effectiveness, high-quality imaging and advanced features, meeting the needs of the mainstream automotive market"

The new sensor is based on high-speed global shutter OmniPixel3-GS pixel technology and offers 1280 x 800 resolution at video speeds of up to 120 fps. The OV9284 is available now.

ON Semi Sensors in Sunlight

ON Semi publishes a video on automotive image sensor challenges in direct sunlight:

More about Foxconn CIS Fab in China

EETimes compiles more information about Foxconn's plans to build a CIS fab in China:

Foxconn, also known as Hon Hai Precision, "aims to launch a $9 billion fab near southern China’s Zhuhai city... The total amount of the investment in the project could add up to around 60 billion yuan, or $9 billion, with most of the investment coming from the Zhuhai government.

Initially, Foxconn is expected to draw on Sharp, which has experience making CCD/CMOS sensors... [and] has operated an 8-inch fab in Fukuyama at the 0.13μm process node. Foxconn’s 2016 takeover of Sharp provides the ability to design and produce semiconductors for the first time in Foxconn’s history as it shifts from electronics assembly to higher margin chip production.

“One of the Hon Hai group’s goals when acquiring Sharp was to gain semiconductor technology,” Tokyo-based Mizuho Securities analyst Yasuo Nakane told EE Times... chip projects are financed partly by China’s central government, which sees semiconductors as a more important field, Nakane said.

The main hurdle for the China fab project is likely to focus on securing intellectual property and engineers. “If Sharp were to invest in a 12-inch wafer plant in China, we doubt it would begin operations using processes several generations old, and we think it currently lacks enough engineers for the task,” Nakane said... The new fab will initially make chips for ultra high-definition 8K televisions and camera image sensors, as well as various sensors for industrial use and connected devices.
"

TI Compares Automotive Image Sensors

TI publishes a comparison of automotive image sensors from different companies:

Luminar Supplies its LiDARs to Volvo and Volkswagen

Recently, Luminar LiDAR made its way to Volvo and Volkswagen-Audi autonomous car prototypes:

First ToF Imager from China

Wuhan, China-based Silicon Integrated Inc. unveils the SIF2310, which it calls "the first back-illuminated area array ToF sensor in China."

The SIF2310 integrates:
  • HVGA (480x360) ToF pixel array
  • signal generator modulating the IR source
  • 12bit ADC
  • on-chip temperature sensor
  • logic control unit, high-speed clock
  • MIPI interface

The SIF2310 supports modulation frequencies up to 100MHz and output frame rates up to 240fps. With an IR light source, the SIF2310 can be used in face recognition, AR/VR, motion capture, 3D modeling, machine vision, and ADAS applications. The chip is available as a bare die or in a glass-BGA package.
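As a quick sanity check on those numbers (a back-of-the-envelope calculation of ours, not from the announcement), the modulation frequency of a continuous-wave ToF sensor sets its unambiguous range, r = c / (2 · f_mod):

```python
# Unambiguous range of a continuous-wave ToF sensor: r = c / (2 * f_mod).
# The factor of 2 accounts for the round trip of the modulated light.
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range_m(f_mod_hz: float) -> float:
    """Maximum distance before the measured phase wraps around."""
    return C / (2.0 * f_mod_hz)

if __name__ == "__main__":
    for f_mhz in (20, 50, 100):
        print(f"{f_mhz:>3} MHz -> {unambiguous_range_m(f_mhz * 1e6):.2f} m")
```

At the SIF2310's maximum 100MHz, the phase wraps after about 1.5m, so longer-range use would call for lower modulation frequencies or multi-frequency unwrapping.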

Digitimes on Samsung Plans

Digitimes overviews the 2018 Far East semiconductor industry and emphasizes Samsung's impact on the CIS market:

"With use of CMOS image sensors (CIS) extending from smartphone cameras to automotive cameras, Samsung has planned to expand its CIS production capacity to surpass Sony to become the globally largest supplier. Samsung began to modify Line 11 DRAM factory in its Hwaseong production base for production of CIS at the end of 2017, with the modification to be completed by the end of 2018.

Samsung also will modify Line 13 DRAM factory in the same production base for CIS production. Samsung had monthly production capacity of 45,000 12-inch wafers for making CIS at the end of 2017, and the capacity will increase to nearly 120,000 12-inch wafers when the two additional factories begin production.

Samsung's moves are in line with the optimism of carmakers and other semiconductor vendors about the future of autonomous driving.
"

Sony to Increase ToF Sensors Production

Bloomberg interviews Satoshi Yoshihara, the head of Sony’s image sensor division. He says that:
  • Sony is boosting production of its ToF sensors after getting interest from customers including Apple
  • Sony's 3D business is already profitable and will make an impact on earnings from the next fiscal year starting in April 2019.
  • Sony started providing a 3D SDK to outside developers to experiment with its ToF chips and create apps.
  • “Cameras revolutionized phones, and based on what I’ve seen, I have the same expectation for 3D,” said Yoshihara.
  • Huawei is using Sony’s ToF cameras in its next-generation models, according to Bloomberg sources.
  • There will be a need for two 3D cameras on smartphones, one for the front and one for the back.
  • Sony's ToF sensors will appear in models from several smartphone manufacturers in 2019.
  • Sony will start mass production of ToF sensors in late summer 2019 to meet the demand.


Thanks to RP and TG for the pointer!

Hitachi-LG ToF Cameras

Hitachi-LG Data Storage presents a lineup of ToF cameras for industrial and machine vision use:

Foxconn to Manufacture CIS in China

Nikkei, EETimes: Foxconn Technology (AKA Hon Hai Precision Industry) is preparing to build a $9b chip fab in the southern Chinese city of Zhuhai. The new fab will manufacture image sensors, chips for 8K TVs, and various sensors for industrial uses and connected devices, according to Nikkei. The construction is expected to start in 2021 (2020, according to Semimedia).

A majority of the investment is to be subsidized by the Zhuhai city government. The new fab will rank as one of the country's top high-tech projects, according to Nikkei sources. The fab will make chips not just for Foxconn's own use but also for other customers, competing with TSMC, Globalfoundries, Samsung's foundry unit, and SMIC.

According to Nikkei, Foxconn is expected to form a JV for the project with Sharp, which it acquired in 2016, and the Zhuhai government. However, Semimedia reports that Sharp denies its involvement in the project.

Meanwhile, Japan Times says that Sharp is going to spin off its semiconductor business into two entities: one responsible for lasers, the other for sensors and other semiconductors. Sharp Chairman and President Tai Jeng-wu told reporters that he wants to tap overseas and domestic resources, showing eagerness to forge alliances with other firms, including Sharp’s parent, Hon Hai Precision Industry (Foxconn).

Currently, Sharp manufactures semiconductor-related products at its plants in the prefectures of Hiroshima and Nara.

Picture source: EENews

QIS Article

LaserFocusWorld publishes the article "The Quanta Image Sensor (QIS): Making Every Photon Count" by Eric Fossum and Kaitlin Anagnost. The article presents Dartmouth's work on megapixel single-photon-resolving sensors and compares it with SPADs and EMCCDs:

Huawei Honor V20 Smartphone Features Rear ToF Camera

DeviceSpecifications: Huawei Honor V20 smartphone features Sony 48MP IMX586 image sensor combined with a rear 3D ToF camera. Apparently, the ToF camera is used for better AF, AR applications, and games:


Update: PRNewswire: "...rear camera is a TOF 3D camera, which makes the phone capable of creating a new dimension in photography and videography that brings greater usability and fun to users. It can calculate distance based upon the time-of-flight of a light signal, and has functions including depth sensing, skeletal tracking and real-time motion capture.

The TOF 3D camera can turn HONOR View20 into a motion-controlled gaming console, and allow you to play 3D motion games like never before. In addition, it can also let 3D characters dance following your gestures on your phone, and you can share these funny dancing videos with your friends.
"

Korean Companies Allocate More Resources to CIS Business

BusinessKorea: As memory prices go down, CIS business becomes more important in Korean companies:
  • Samsung has selected image sensors as its next-generation growth engine.
  • Samsung establishes a new business team to strengthen its sales of sensors for autonomous vehicles.
  • Samsung signed a contract with Tesla to supply image sensors for vehicles. Although the immediate sales impact is not significant, it will make it easier for Samsung to expand its business with other companies in the automotive space.
  • Samsung CIS activity has been re-organized so that process development is now done by the foundry section of the Device Solutions (DS) Division, while the sensor business team focuses on product planning and sales.
  • Samsung is converting some of its 11-line DRAM production in the Hwaseong plant into image sensor lines.
  • LG and SK Hynix invested in LiDAR startup AEye.
  • SK Hynix aims to achieve 1 trillion won (US$900 million) in image sensor sales.
  • SK Hynix focuses on mid-end products and on increasing its market share in China.
  • SK Hynix leverages its DRAM strength in image sensor marketing by offering a deal on CIS purchases to companies that buy large amounts of DRAM products.

Artilux Ge-on-Si IEDM Paper Reports Mass Production Readiness

Taiwan-based Artilux presented its Ge-on-Si paper "High-Performance Germanium-on-Silicon Lock-in Pixels for Indirect Time-of-Flight Applications" by N. Na, S.-L. Cheng, H.-D. Liu, M.-J. Yang, C.-Y. Chen, H.-W. Chen, Y.-T. Chou, C.-T. Lin, W.-H. Liu, C.-F. Liang, C.-L. Chen, S.-W. Chu, B.-J. Chen, Y.-F. Lyu, and S.-L. Chen.

The paper appears to be a process and device development report with statistics on process variations and various optimizations of the device parameters. The conclusion is: "Novel GOS lock-in pixel is investigated at 940nm and 1550nm wavelengths and shown to be a strong contender against conventional Si lock-in pixel. The measured statistical data further demonstrate the technology yields good within wafer and wafer-to-wafer uniformities that may be ready for mass production in the near future."

Etron and eYs3D Develop AI-on-Edge Natural Light 3D Sensing

Digitimes: Etron and its subsidiary eYs3D Microelectronics are set to launch AI-on-edge 3D sensing solutions based on Etron stereo processors. Their 3D natural light deep vision platform technology "can allow one smartphone to unlock 5-6 smartphones via recognizing faces of smartphone users under their consent, and can also connect smart devices to monitor air conditioners, TVs and other smart household electrical appliances."

Jamming Smartphone Cameras

Journal of Physics publishes an open-access paper "Tracking and Disabling Smartphone Camera for Privacy" by Qurban Memon, Khawla Al Shanqiti, AlYazia Al Falasi, Amna Al Jaberi, and Yasmeen Amer from UAE University.

"With people's easy access to various forms of recent technologies, privacy has decreased immensely. One of the major privacy breaches nowadays is taking pictures and videos using smart phones without seeking permission of those whom it concerns. This work aims to target privacy in the current mobile environment. The main contribution of this work is to block the smart phone camera without damaging the smart phone or harming people around it. The approach is divided into stages: body area detection and then camera detection in the frame. The detection stage follows pointing of laser(s) controlled by a microcontroller. Tests are conducted on built system and results show performance error less than 1%. For Safety, the beams are devised to be harmless to the people, environment and the targeted smart phones."

3D AI Camera Company Shuts Down

Not everything 3D and AI is automatically successful. TechCrunch, Engadget, and The Information report that Lighthouse, a home security camera company, is shutting down. The startup was founded in 2015 and raised $20m. The final product, while working, shipped late and cost too much to be successful.

The company CEO Alex Teichman writes: "I am incredibly proud of the groundbreaking work the Lighthouse team accomplished - delivering useful and accessible intelligence for our homes via advanced AI and 3D sensing. Unfortunately, we did not achieve the commercial success we were looking for and will be shutting down operations in the near future."

Nikon Invests $25m in Velodyne, Discusses Collaboration

Velodyne Lidar announces Nikon as a new strategic investor with an investment of $25M. The parties further announced they have begun discussions for a multifaceted business alliance.

Aiming to combine Nikon’s optical and precision technologies with Velodyne’s sensor technology, both companies have begun investigating a wide-ranging business relationship, including collaboration in technology development and manufacturing. The companies share a futuristic vision of advanced perception technology for a wide range of applications including robotics, mapping, security, shuttles, drones, and safety on roadways.


Thanks to TG for the link!

Samsung Exhibits Automotive Image Sensors

Nikkei publishes an overview of Samsung's booth at the Electronica 2018 exhibition in Germany, where Samsung demos 120dB HDR and 7.4MP high-resolution automotive-qualified sensors:

Aeye Raises $40m, Unveils iDAR Product

BusinessWire: AEye, the developer of iDAR, announces the second close of its Series B financing, bringing the company’s total funding to over $60 million. AEye's Series B round includes Hella Ventures, SUBARU-SBI Innovation Fund, LG Electronics, and SK Hynix. AEye previously announced that the round was led by Taiwania Capital along with existing investors Kleiner Perkins, Intel Capital, Airbus Ventures, R7 Partners, and an undisclosed OEM.

AEye's iDAR physically fuses a 1550nm solid-state LiDAR with a high-resolution camera to create a new data type called Dynamic Vixels. This real-time integration occurs in the iDAR sensor, rather than fusing separate camera and LiDAR data after the scan. By capturing both geometric and true color (x,y,z and r,g,b) data, Dynamic Vixels uniquely mimic the data structure of the human visual cortex, capturing better data for vastly superior performance and accuracy.
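AEye does not disclose the Dynamic Vixel format, but the general idea of attaching (r,g,b) from the camera to each (x,y,z) lidar return can be sketched as below; the pinhole projection and all names here are illustrative assumptions on our part, not AEye's implementation:

```python
import numpy as np

def fuse_lidar_camera(points_xyz, image_rgb, fx, fy, cx, cy):
    """Attach an (r, g, b) sample to each lidar return by projecting the
    point into the camera with a simple pinhole model (points assumed to
    already be in the camera coordinate frame)."""
    h, w, _ = image_rgb.shape
    fused = []
    for x, y, z in points_xyz:
        if z <= 0:
            continue  # point behind the camera plane
        u = int(fx * x / z + cx)  # pixel column
        v = int(fy * y / z + cy)  # pixel row
        if 0 <= u < w and 0 <= v < h:
            r, g, b = image_rgb[v, u]
            fused.append((x, y, z, r, g, b))  # geometry + true color
    return np.array(fused, dtype=float)

# Tiny demo: one return at 1 m straight ahead picks up the center pixel.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[2, 2] = (10, 20, 30)
print(fuse_lidar_camera([(0.0, 0.0, 1.0)], img, 1.0, 1.0, 2.0, 2.0))
```

The point of doing this in the sensor rather than downstream is that the color sample is taken while the lidar and camera are still time- and pose-aligned.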

“This funding marks an inflection point for AEye, as we scale our staff, partnerships and investments to align with our customers’ roadmap to commercialization,” said Luis Dussan, AEye founder and CEO. “The support we have received from major players in the automotive industry validates that we are taking the right approach to addressing the challenges of artificial perception. Their confidence in AEye and iDAR will be borne out by the automotive specific products we'll be bringing to market at scale in Q2 of 2019. These products will help OEMs and Tier 1s accelerate their products and services by delivering market leading performance at the lowest cost.”

The AEye AE110 iDAR fuses a 1550nm solid-state agile MOEMS LiDAR, a low-light HD camera, and embedded AI to intelligently capture data at the sensor level. The AE110’s pseudo-random beam distribution search option makes the system eight times more efficient than fixed pattern LiDARs. The AE110 is said to achieve 16 times greater coverage of the entire FOV at 10 times the frame rate (up to 100 Hz) due to its ability to support multiple regions of interest for both LiDAR and camera.

Optical AI Processor Company Raises $3.3m

VentureBeat: LightOn, a Paris, France-based AI startup, has closed a $3.3M (€2.9M) seed funding round. LightOn is developing a new optics-based data processing technology for AI. Leveraging compressive sensing, LightOn’s hardware and software can make Artificial Intelligence computations both simpler and orders of magnitude more efficient. The technology, licensed by PSL Research University, was originally developed at several of Paris’ leading research institutions.

For the past few months, LightOn has allowed access to their Optical Processing Units (OPU) to a select group of beta customers through the LightOn Cloud, thanks to a partnership with OVH, Europe’s leading cloud provider. First users from both Academia and Industry have already successfully demonstrated impressive results on this hybrid CPU/GPU/OPU server, outperforming silicon-only computing technology in a variety of large scale Machine Learning tasks. Typical use cases currently include transfer learning, change point detection, or time series prediction.

LightOn’s CEO, Igor Carron said, “It’s an exciting time as Artificial Intelligence develops rapidly. The requirements as usage scales necessitate improved power efficiency and performance. LightOn’s technology addresses these monumental challenges.

LightOn's OPU technology was presented in the 2015 arxiv.org paper "Random Projections through multiple optical scattering: Approximating kernels at the speed of light" by Alaa Saade, Francesco Caltagirone, Igor Carron, Laurent Daudet, Angélique Drémeau, Sylvain Gigan, and Florent Krzakala:

"Random projections have proven extremely useful in many signal processing and machine learning applications. However, they often require either to store a very large random matrix, or to use a different, structured matrix to reduce the computational and memory costs. Here, we overcome this difficulty by proposing an analog, optical device, that performs the random projections literally at the speed of light without having to store any matrix in memory. This is achieved using the physical properties of multiple coherent scattering of coherent light in random media. We use this device on a simple task of classification with a kernel machine, and we show that, on the MNIST database, the experimental results closely match the theoretical performance of the corresponding kernel. This framework can help make kernel methods practical for applications that have large training sets and/or require real-time prediction. We discuss possible extensions of the method in terms of a class of kernels, speed, memory consumption and different problems."
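The optical hardware aside, the distance-preserving property of random projections that the paper relies on is easy to reproduce numerically. The sketch below is our own illustration with a dense Gaussian matrix, which the OPU replaces with light scattering through a random medium:

```python
import numpy as np

rng = np.random.default_rng(0)

# High-dimensional data: 50 points in 10,000 dimensions.
d, k, n = 10_000, 500, 50
X = rng.standard_normal((n, d))

# Dense Gaussian random projection, scaled so distances are preserved
# in expectation (Johnson-Lindenstrauss). The optical device performs
# this matrix-vector product with light instead of arithmetic.
R = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ R.T

# Pairwise distances survive the 20x dimensionality reduction.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(f"distance ratio after projection: {proj / orig:.3f}")  # close to 1.0
```

Storing and multiplying the k x d matrix is exactly the cost the optical approach avoids: the random medium "is" the matrix.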

Himax Puts its Focus on 3D Imaging

Himax's December 2018 factsheet names 3D imaging as one of the company's main growth opportunities:

Power Efficient Neural ToF Camera

SenseTime Research and Tsinghua University publish arxiv.org paper "Very Power Efficient Neural Time-of-Flight" by Yan Chen, Jimmy S. Ren, Xuanye Cheng, Keyuan Qian, and Jinwei Gu.

"Time-of-Flight (ToF) cameras require active illumination to obtain depth information thus the power of illumination directly affects the performance of ToF cameras. Traditional ToF imaging algorithms is very sensitive to illumination and the depth accuracy degenerates rapidly with the power of it. Therefore, the design of a power efficient ToF camera always creates a painful dilemma for the illumination and the performance trade-off. In this paper, we show that despite the weak signals in many areas under extreme short exposure setting, these signals as a whole can be well utilized through a learning process which directly translates the weak and noisy ToF camera raw to depth map. This creates an opportunity to tackle the aforementioned dilemma and make a very power efficient ToF camera possible. To enable the learning, we collect a comprehensive dataset under a variety of scenes and photographic conditions by a specialized ToF camera. Experiments show that our method is able to robustly process ToF camera raw with the exposure time of one order of magnitude shorter than that used in conventional ToF cameras. In addition to evaluating our approach both quantitatively and qualitatively, we also discuss its implication to designing the next generation power efficient ToF cameras. We will make our dataset and code publicly available."
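For context, the classical pipeline that the learned raw-to-depth mapping replaces recovers depth from four phase-shifted correlation samples; this is the textbook formulation, not the authors' code:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def four_phase_depth(q0, q90, q180, q270, f_mod):
    """Classical CW-ToF depth from four correlation samples taken at
    0/90/180/270-degree phase offsets of the illumination signal."""
    phase = np.arctan2(q90 - q270, q0 - q180)  # wrapped phase of the return
    phase = np.mod(phase, 2 * np.pi)           # map to [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)     # phase -> depth in meters

# Ideal noise-free samples for a target at 1.0 m with 20 MHz modulation:
f = 20e6
true_phase = 4 * np.pi * f * 1.0 / C
q0, q90, q180, q270 = (np.cos(true_phase - k * np.pi / 2) for k in range(4))
print(four_phase_depth(q0, q90, q180, q270, f))  # ~1.0 m
```

Under short exposures the q samples become weak and noisy, and this arctangent estimate degrades rapidly, which is exactly the regime the paper's learned approach targets.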

FLIR Periodic Table of Image Sensors

FLIR (formerly Point Grey) publishes a Periodic Table of Image Sensors for industrial and machine vision applications - a nice attempt to bring order to the somewhat messy product lines from different companies. The high-resolution PDF files can be downloaded from the December 2018 issue of the FLIR Insights newsletter:

"With so many sensors coming from different manufacturers, it's hard to remember them all. To help with this, we are giving away a handy guide that organizes 120 sensors, from classic CCDs to the latest CMOS technology, based on resolution, readout method, speed, and FPS. We suggest downloading and laminating it, then pinning it on your wall for easy reference. You will never have to memorize sensor names again. Enjoy, and have a happy holiday!"

Xiaomi Mi 8EE 3D FaceID Module

SystemPlus Consulting publishes a teardown report on the Xiaomi Mi 8 Explorer Edition smartphone, which features a Mantis Vision structured-light 3D module on the front side:

"Xiaomi has chosen Mantis Vision’s solution and its coded structured light to provide the 3D sensing capability. The 3D systems comprise a dot projector and a camera module assembly configuration. On the receiver side, the near infrared (NIR) image is captured by a global shutter (GS) NIR camera module.

The front optical hub is packaged in one metal enclosure, featuring several cameras and sensors. The complete system features a red-green-blue (RGB) camera module, a proximity sensor and an ambient light sensor. The 3D depth sensing system comprises the NIR camera module, the flood illuminator and the dot projector.

All components are standard products that can be readily found in the market. That includes a GS image sensor featuring 3µm x 3µm pixels and standard resolution of one megapixel and two vertical cavity surface emitting lasers (VCSELs), one for the dot projector and one for the flood illuminator. Both are from the same supplier. Both the camera and dot projector use standard camera module assemblies with wire bonding and optical modules featuring lenses. In order to provide coded structured light features, a mask is integrated into the dot projector structure.
"

ToF Developers Conference

Espros Photonics announces ToF Developers Conference to be held in San Francisco on January 29–31, 2019:

"A successful design of a 3D TOF camera for example needs a deep understanding of the underlying optical physics - theoretical and practical. In addition, the behavioral model of the imaging system and an excellent understanding of the sensing artifacts in real applications is key knowhow. And furthermore, thermal management is an issue because these cameras have an active illumination, typically quite powerful. And, as a consequence, eye-safety becomes an issue as well. A TOF camera consists of 9 functional building blocks which have to be understood and fine-tuned carefully to create a powerful but cost effective design.

So, many more disciplines than just electronics and software are in the game. It's not rocket science, but the relevant understanding of these 9 blocks is a must to know if someone gets the duty to design a 3D TOF camera

There is, at least to our knowledge, no engineering school which addresses TOF and LiDAR as an own discipline. We at ESPROS decided to fill the gap with a training program called TOF Developer Conference. The objective is to provide a solid theoretical background, a guideline to working implementations based on examples and practical work with TOF systems. Thus, the TOF Developer Conference shall become the enabler for electronics engineers (BS and MS in EE engineering) to design working TOF systems. It is ideally for engineers who are or will be, involved in the design of TOF system. We hope that our initiative helps to close the gap between the desire of TOF sensors to massively deployed TOF applications.

Course topics:

TOF history; TOF principles; main parts of a TOF camera; relevant optical physics; light detection; receiver physics; noise considerations; SNR; light emission and light sources; eye safety; light power budget calculation; optics basics; optical systems and key requirements; bringing it all together; electronics, PCB layout guidelines; power considerations; calibration and compensation; filtering; computing power requirements; interference detection and suppression; artifacts and how to deal with them; practical lab experiments; Q&A; much more…
"


The next two ToF conferences are planned to be held in China: Shanghai on April 2, 2019 and Shenzhen on April 9, 2019.

Sony Announces 5.4MP Automotive Sensor with HDR and LED Flicker Mitigation

Sony announces the 1/1.55-inch, 5.4MP (effective) IMX490 CMOS sensor for automotive cameras. Sony will begin shipping samples in March 2019.

The new sensor simultaneously achieves HDR and LED flicker mitigation at what Sony calls the industry’s highest 5.4MP resolution in automotive cameras. Sony has also improved the saturation illuminance through a proprietary pixel structure and exposure method. When the HDR imaging and LED flicker mitigation functions are used at the same time, the sensor offers a wide 120dB DR (measured in accordance with the EMVA 1288 standard; 140dB when set to prioritize DR), said to be three times higher than that of the previous product. This means highlight oversaturation can be mitigated even when 100,000 lux sunlight reflects directly off a light-colored car in front, capturing the subject more accurately under road conditions with dramatic lighting contrast, such as entering or exiting a tunnel.
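For readers translating the dB figures into intensity ratios: dynamic range in decibels is 20·log10 of the ratio between the brightest and darkest resolvable signals, so 120dB corresponds to 10^6:1 and the 140dB mode to 10^7:1 (this is the standard conversion, not Sony's exact EMVA 1288 measurement procedure):

```python
import math

def db_to_ratio(db: float) -> float:
    """Convert an image-sensor dynamic range in dB to a linear ratio."""
    return 10 ** (db / 20)

def ratio_to_db(ratio: float) -> float:
    """Convert a linear signal ratio back to dB."""
    return 20 * math.log10(ratio)

print(f"{db_to_ratio(120):,.0f} : 1")  # 120 dB -> 1,000,000 : 1
print(f"{db_to_ratio(140):,.0f} : 1")  # 140 dB -> 10,000,000 : 1
```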

Moreover, this unique method is said to prevent motion artifacts that occur when capturing moving subjects compared with other HDR technologies. The new sensor also improves the sensitivity by about 15% compared to that of the previous generation product, improving the capability to recognize pedestrians and obstacles in low illuminance conditions of 0.1 lux, the equivalent of moonlight.

This product is scheduled to meet the AEC-Q100 Grade 2 reliability standards for automobile electronic components for mass production. Sony has also introduced a development process compliant with ISO 26262 functional safety standards for automobiles to ensure that design quality meets the functional safety requirements for automotive applications, thereby supporting functional safety level ASIL D for fault detection, notification and control. Moreover, the new sensor has security functions to protect the output image from tampering.

Ambarella Announces 8MP ADAS Processor

BusinessWire: Ambarella introduces the CV22AQ automotive camera SoC, featuring the Ambarella CVflow computer vision architecture for powerful Deep Neural Network (DNN) processing. Target applications include front ADAS cameras, electronic mirrors with Blind Spot Detection (BSD), interior driver and cabin monitoring cameras, and Around View Monitors (AVM) with parking assist. Fabricated in advanced 10nm process technology, its low power consumption supports the small form factor and thermal requirements of windshield-mounted forward ADAS cameras.

The CV22AQ’s CVflow architecture provides computer vision processing in 8MP resolution at 30 fps, to enable object recognition over long distances and with high accuracy. CV22AQ supports multiple image sensor inputs for multi-FOV cameras and can also create multiple digital FOVs using a single high-resolution image sensor to reduce system cost.
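The multiple-digital-FOV idea is essentially carving one high-resolution frame into several virtual camera views; a minimal sketch of the concept follows (the function and the crop choices are our own illustration, not Ambarella's API):

```python
import numpy as np

def digital_fovs(frame, crops):
    """Derive several 'virtual cameras' from one high-resolution frame by
    cropping regions of interest: the full frame acts as the wide FOV,
    a center crop as a narrower, tele-like FOV."""
    return {name: frame[y:y + h, x:x + w]
            for name, (y, x, h, w) in crops.items()}

# Example with an ~8MP mono frame (3840 x 2160):
frame = np.zeros((2160, 3840), dtype=np.uint8)
views = digital_fovs(frame, {
    "wide":   (0, 0, 2160, 3840),          # full sensor readout
    "narrow": (540, 960, 1080, 1920),      # center crop = ~2x narrower FOV
})
print({name: v.shape for name, v in views.items()})
```

Each crop can then feed a separate detection network, which is how one sensor can stand in for a multi-FOV camera cluster and reduce system cost.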

“To date, front ADAS cameras have been performance-constrained due to power consumption limits inherent in the form factor,” said Fermi Wang, CEO of Ambarella. “CV22AQ provides an industry-leading combination of outstanding neural network performance and very low typical power consumption of below 2.5 watts. This breakthrough in power and performance, coupled with best-in-class image processing, allows tier-1 and OEM customers to greatly increase the performance and accuracy of ADAS algorithms.”

SEMI Forecasts Fab Investment Drop

SEMI: Total fab equipment spending in 2019 is projected to drop 8%, a sharp reversal from the previously forecast increase of 7% as fab investment growth has been revised downward for 2018 to 10% from the 14% predicted in August, according to the latest edition of the World Fab Forecast Report.

However, image sensor fab spending remains a bright spot: "Opto – especially CMOS image sensors – shows strong growth, surging 33 percent to US$3.8 billion in 2019."
