Sony vs Canon 135mm – can a 23 year-old lens really compete

Cameralabs        Go to the original article...

Owners of the Canon EF 135mm f2L USM consider it a legend, but can a 23-year-old lens really compete with a modern design? Ben Harvey pitches the blisteringly sharp Sony FE 135mm f1.8 GM against his beloved Canon to find out.…

The post Sony vs Canon 135mm – can a 23 year-old lens really compete appeared first on Cameralabs.

Go to the original article...

Framos on Industrial and Machine Vision Market Trends

Image Sensors World        Go to the original article...

Framos presents the results of its study of Industrial Cameras and Vision Systems Market and Trends:



Go to the original article...

Polarization Imaging Use Cases

Image Sensors World        Go to the original article...

Lucid Vision Labs presents its Sony sensor-based polarization camera use cases:



Fraunhofer IIS talks about polarization imaging applications too:

Go to the original article...

Image Processing News

Image Sensors World        Go to the original article...

Synopsys demos vision functions of its ARC EV6x Embedded Vision Processor IP:



Photron explains operation of its 6D marker and the accompanying software:



Light.co gives some details on its Light ASIC, which does almost everything but image stabilization:

"The Light ASIC is a dedicated chip that can control and transfer image data for up to six cameras simultaneously. On popular SoC platforms up to four Light ASICs can be used to coordinate as many as 24 cameras. When necessary, multiple Light ASICs can be interfaced to one another to allow even larger camera arrays. This chip was built specifically for computational imaging applications in everything from mobile phones, to security systems, to automotive systems.

The Light ASIC is small and incredibly efficient. The Package-on-Package chip is only 14 millimeters square, and designed for its memory to be stacked thereby saving valuable board space. The ASIC is also built for efficiency. It actively manages power consumption for active, preview, and standby modes, optimizing your device’s battery usage and thermal profile.

The Light ASIC and camera array work with the latest chipsets including the Qualcomm Snapdragon series, and multiple peripherals such as LED flashes, Time-of-Flight sensors, and Inertial Measurement Units. The Light ASIC independently coordinates control of all camera modules, simultaneously. It can achieve focus for all modules at a given region-of-interest, adjust exposure levels per aperture, while calculating white balance, all using Light proprietary calibration.
"


TechCrunch: Elon Musk explains the functions of the camera looking into Tesla's Model 3 cabin from its rear-view mirror: Dog mode and Sentry mode. In Dog mode, the camera recognizes a dog left unattended in the car and adjusts the air conditioning to keep the cabin at a comfortable temperature. In Sentry mode, the car uses its cameras to guard itself, monitoring for any suspicious activity.

Go to the original article...

MIPI Test Board for Legacy ATE

Image Sensors World        Go to the original article...

PRWeb: Introspect Technology releases its SV4D Direct Attach MIPI Test Module that enables at-speed production testing for MIPI C-PHY or D-PHY transmitter or receiver interfaces.

“Whereas we could use conventional ATE for DC parametric testing and a loop-back methodology for high-speed testing on our standard SerDes interfaces, we could not find a solution that could provide the necessary fault coverage for the MIPI ports on our devices,” said Ibrahim Aljabiri, Sr. Manager, Product & Test Engineering, Synaptics. “The SV4D’s strong MIPI features, high operating speed, and compact size allowed us to deploy a high-parallelism multi-site solution on our existing ATE.”

Mohamed Hafed, CEO of Introspect Technology, explains, “We found that product engineers all over the world were looking for mimicking system-level functionality as much as possible during wafer sort and final test. So, we set out to create a production test module that leveraged our unique monolithic MIPI physical layers to deliver exactly that. Not only is the SV4D able to perform structural testing using abbreviated device test modes, but it is also able to completely exercise the link and software layers of devices under test.”


Go to the original article...

Doppler LiDAR with Regular CMOS Sensor

Image Sensors World        Go to the original article...

Arxiv.org paper "A Time-of-Flight Imaging System Based on Resonant Photoelastic Modulation" by Okan Atalar, Raphaël Van Laer, Christopher J. Sarabalis, Amir H. Safavi-Naeini, and Amin Arbabian from Stanford University proposes a regular CMOS sensor-based Doppler LiDAR:

"To realize this system, a new device, a free-space optical mixer, is designed and fabricated. A scene is illuminated (flashed) with a megahertz level amplitude modulated light source and the reflected light from the scene is collected by a receiver. The receiver consists of the free-space optical mixer, comprising a photoelastic modulator sandwiched between polarizers, placed in front of a standard CMOS image sensor. This free-space optical mixer downconverts the megahertz level amplitude modulation frequencies into the temporal bandwidth of the image sensor. A full scale extension of the demonstrated system will be able to measure phases and Doppler shifts for the beat tones and use signal processing techniques to estimate the distance and velocity of each point in the illuminated scene with high accuracy."

Go to the original article...

Panasonic Lumix G90 G95 review

Cameralabs        Go to the original article...

The Panasonic Lumix G90 / G95 is a mid-range mirrorless camera based on the Micro Four Thirds standard, with a 20 Megapixel sensor, built-in stabilisation, viewfinder, fully-articulated touchscreen, and unlimited 4k recording. Find out how it compares to its rivals in my in-depth review!…

The post Panasonic Lumix G90 G95 review appeared first on Cameralabs.

Go to the original article...

1mW Always-On Imaging

Image Sensors World        Go to the original article...

1mW always-on imaging is becoming quite a popular topic. The TinyML Summit, held in Sunnyvale, CA on March 20-21, had a number of presentations on it.

Pixart presents its approach to low-power CIS:


Qualcomm presents its view on "Ultra-low Power Always-On Computer Vision:"

"The CVM is built on a custom ASIC, which is a 28nm ultra-low-power ARM-based SoC featuring a control processor, a DSP-like hardware accelerator, a dedicated vision processor, and embedded PMU. It also incorporates a lower-power QVGA CMOS grayscale image sensor and a custom-designed wide field-of-view lens. The image sensor is sensitive to near-IR wavelengths, and can be used for low-light scenarios with IR illumination. The entire CVM, including the image sensor and the ASIC, consumes less than 1 mW power while actively performing computer vision tasks such as object detection."

Go to the original article...

Automotive LiDARs in China

Image Sensors World        Go to the original article...

ResearchInChina publishes a report on "ADAS and Autonomous Driving Industry Chain Report, 2018-2019 – Automotive Lidar." A few interesting quotes, including Velodyne LiDAR wholesale prices going down to $150:

"In the markets where Chinese companies master core technologies, price of products is bound to plummet. Take IPG for example, its 20W fiber lasers were priced at over RMB150,000 per unit in 2010, compared with current quote at RMB8,800 from the peer -- Shenzhen REEKO Information Technology Co., Ltd.. Maxphotonics Co., Ltd. and Shenzhen JPT Opto-electronics Co., Ltd. are another two rivals in the fiber laser price war.

The similar stories echo in the LiDAR market where price competition pricks up in 2019 as Hu Xiaobo, a founder of Maxphotonics Co., Ltd., ventures into the LiDAR field for a new undertaking.

Velodyne’s new factory in San Jose which already becomes operational, can produce as many as 1 million units a year. If acquiring orders for 100,000 units, Velodyne will cut down the price of its VLS 128-channel products to less than $1,000, and that of VLS 32 to roughly $650, let alone $500 for mass-produced 32-channel Velarray solid-state LiDAR and $150 for 8-channel ones.

It is clear that LiDAR price may be 10 times lower than what it is now, and the reduction hinges on how many are demanded.

Comparing with the previous year, Chinese LiDAR vendors have come a long way in factory construction, mass production, shipment, financing and other aspects.

In 2018, Hesai Tech announced to close Series B funding rounds of RMB250 million, with its automotive LiDAR sales only second to Velodyne’s.

RoboSense raised RMB300 million from investors like Cainiao, SAIC and BAIC. Its shipments of 16/32-channel mechanical LiDARs boomed in 2018. The vendor also acquired a MEMS micromirror firm in the year.

Although the automotive market is “wintering”, the financing story in LiDAR industry still goes on.
"

Go to the original article...

LFoundry Changes Hands Again

Image Sensors World        Go to the original article...

LFoundry and SMIC announce that they have entered into a binding agreement to sell LFoundry to Jiangsu CAS-IGBT Technology Co., Ltd. The transaction also includes LFoundry and SMIC groups in Bulgaria.

Jiangsu CAS-IGBT Technology Co., Ltd. is a group focusing on the research, design and development of new power and electronic chips such as IGBT (Insulated Gate Bipolar Transistor) and FRD (Fast Recovery Diode).

"We are setting the stage for a new era and we are satisfied with it," said Sergio Galbiati and Guenther Ernst, respectively Vice-Chairman and CEO of LFoundry. “The technological and production capacity of the Avezzano plant (specially focused on the automotive sector, but also on security and industrial field with applications such as CMOS image sensors, smart power, embedded memory and others) will provide Jiangsu CAS-IGBT a unique platform from which to grow existing and new Lines of Businesses that will allow for the potential of a brighter future in Avezzano by serving a more diverse set of applications."

The HK stock exchange document filed by SMIC says: "The Consideration is USD112,816,089, which was determined after arm’s length negotiation between the Vendor and the Purchaser by reference to fair value of LFoundry per the Company internal analysis and research, including the investment costs of a newly set up 200mm wafer fabrication facility, valuation of the property, plant and equipment and the market value of other 200mm wafer fabrication facility. The Directors consider that the Consideration is fair and reasonable and in the interest of the Company and its shareholders as a whole.

In accordance with the International Financial Reporting Standards, the net loss before or after taxation (unaudited) of the Target Group for the financial year ended 31 December 2018 and the financial year ended 31 December 2017 were USD8.1 million and USD14.9 million, respectively.

The unaudited total asset value of the Target Group as at 31 December 2018 was USD256.2 million.
"

The acquisition is scheduled to be finalized at the end of June.

Go to the original article...

Image Sensors Europe 2019 Notes

Image Sensors World        Go to the original article...

The Image Sensors Europe conference, held in London, UK on March 13-14, 2019, delivered a couple of interesting messages:

Mantis Vision reports that smartphone 3D cameras based on the structured-light approach have been largely rejected by the market due to the large display "notch" needed for the stereo base:


Amazon asks whether image sensors can be as power-efficient as audio sensors. For example, modern always-on audio solutions consume just 19uA while waiting for the wake-up phrase "OK Google":


Martin Wany shows that several key CMOSIS designers have left the company since AMS acquired it and have started new companies:


NHK presented its Selenium-based image sensor:


ON Semi shows the capabilities of its AR0430 sensor with SuperDepth technology:


Sony quotes a few papers on the potential of DNNs to improve image quality:

Go to the original article...

Himax Presents 1mW Always-On Intelligent Camera

Image Sensors World        Go to the original article...

Globenewswire: Himax and its wholly-owned subsidiary Emza Visual Sense release their second generation “WiseEye IoT” intelligent vision solution. Compared to first-generation solutions, WiseEye 2.0 is “IoT Ready,” adding a proprietary processor to Emza’s AI-based machine learning computer vision algorithms and Himax’s low-power CMOS sensor. The new camera provides higher resolution and better efficiency with less power consumption. These new developments enable cost-effective addition of human presence detection and identification to next generation consumer IoT devices in security systems, smart homes and buildings.

The key features of the WiseEye 2.0 IoT solution include:

  • Battery-powered human detection sensor: Designed with the combination of an ultra-low-power image sensor and energy efficient CV image processing algorithm, the battery-powered IoT visual sensor enables the always-on camera to wake up devices based on specific patterns or movements.
  • AI-based machine learning at the edge: Unique combination of ultra-low power consumption combined with AI-based machine learning, enables battery operated devices with advanced intelligence that were never previously available for smart home, security and consumer IoT applications.
  • No passive infrared (PIR) sensors required: Current PIR-based sensors used for low power motion detection have no intelligence and as a result deliver a costly level of false-positives. WiseEye 2.0 provides low power with high intelligence to significantly increase accuracy and decrease false alarms.
  • Pre-roll feature: The always-on camera stores all frames related to an alarm including footage from before the event occurred.
  • High accuracy human classification: With human recognition from up to 10 meters away, WiseEye 2.0 is significantly more accurate than first generation solutions.
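The battery-powered wake-up behavior in the first bullet can be approximated with a trivial frame-difference gate; the thresholds below are invented, and WiseEye itself uses learned on-chip classification rather than anything this simple.

```python
import numpy as np

def motion_wake(prev: np.ndarray, cur: np.ndarray,
                pixel_thresh: int = 15, area_thresh: float = 0.02) -> bool:
    """Wake the host when enough pixels changed between two frames.

    A hypothetical stand-in for an always-on trigger: WiseEye runs
    AI-based classification on-chip; this gate only illustrates the
    wake-on-movement concept.
    """
    diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
    return bool((diff > pixel_thresh).mean() > area_thresh)

# A person-sized bright patch entering a QVGA frame trips the gate.
prev = np.zeros((240, 320), dtype=np.uint8)
cur = prev.copy()
cur[100:180, 140:200] = 200
print(motion_wake(prev, cur))  # True
```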

“WiseEye 2.0 brings an enhanced user experience and better-informed decision-making based on minimal power and cost requirements. We plan to release the reference design in Q3 2019 including all components and functions for OEMs and ODMs to simplify integration of advanced vision functionality into their current and next generation IoT devices,” said Yoram Zylberberg, CEO of Emza Visual Sense.

"We are excited about WiseEye 2.0 and the level of integration we have achieved between the new HM0360 camera, algorithm and processor," said Amit Mittra, CTO of Himax Imaging. "The result is sub 1 mW always-on functionality, faster response times and power requirements 1-2 orders of magnitude lower than previous solutions. This is what our customers are specifying for their smart home/building, security, automotive, and consumer IoT applications."

Go to the original article...

Sony Rumored to Prepare 102MP Full-Frame Sensor Capable of 6K Video

Image Sensors World        Go to the original article...

SonyAlphaRumors publishes a rumor of new Sony 102MP full-frame sensor capable of 6K video at 30fps frame rate:


"2.91um pixel architecture, 100MP @ 10fps, 6K video using 12bit ADC with on-chip binning/line-skipping. 4K RGB 4:4:4 video with on-chip colour-aware binning.

This 12288 x 8192 100MP sensor employs a unique, CFA-based column-parallel ADC design:
"

Go to the original article...

Event-based Face Detection

Image Sensors World        Go to the original article...

A neuromorphic vision paper claims that event-based sensors can detect faces with much lower power: "High Speed Event-based Face Detection and Tracking in the Blink of an Eye" by Gregor Lenz, Sio-Hoi Ieng, and Ryad Benosman.

"We present the first purely event-based method for face detection using the high temporal resolution of an event-based camera. We will rely on a new feature that has never been used for such a task that relies on detecting eye blinks. Eye blinks are a unique natural dynamic signature of human faces that is captured well by event-based sensors that rely on relative changes of luminance. Although an eye blink can be captured with conventional cameras, we will show that the dynamics of eye blinks combined with the fact that two eyes act simultaneously allows to derive a robust methodology for face detection at a low computational cost. We show that eye blinks have a unique temporal signature over time that can be easily detected by correlating the acquired local activity with a generic temporal model of eye blinks that has been generated from a wide population of users. We show that once the face is reliably detected it is possible to apply a probabilistic framework to track the spatial position of a face for each incoming event while updating the position of trackers. Results are shown for several indoor and outdoor experiments. We will also release an annotated data set that can be used for future work on the topic."

Go to the original article...

SystemPlus on Mobile CIS Comparison

Image Sensors World        Go to the original article...

SystemPlus publishes "Mobile CMOS Image Sensor Comparison 2019:"

"Discover the comparative study to provide insights into the structure and technology of 28 CIS die in seven flagship smartphones from several major brands: the Apple iPhone X; Samsung Galaxy S9 Plus; Huawei P20 Pro; Huawei Mate 20 Pro; Xiaomi Mi8 Explorer Version; Oppo Find X; and Vivo X21UD.

The report has shown that the four manufacturers of CIS presented in the flagships, Sony, Samsung, Omnivision and STMicroelectronics, have totally different approaches. For example, Sony is the only manufacturer using hybrid bonding in the analyzed devices, having completely dropped fusion bonding with Through-Silicon Vias (TSVs). We have extracted further technical choices from the four players from the analysis and comparisons.
"

Comparison Omnivision-Samsung-Sony

Go to the original article...

Korean Companies Working to Expand CIS Business

Image Sensors World        Go to the original article...

KoreaHerald reports that Samsung and Hynix are increasing their efforts to expand their image sensor market share:

"President Moon Jae-in most recently ordered immediate measures to raise the domestic semiconductor industry’s competitiveness in the non-memory field at a state affairs meeting, saying, “Measures are needed to reduce the country’s overreliance on the memory chip market.”

Japan’s Sony leads the image sensor market. [Samsung] comes second in the image sensor market after Sony with about 30 percent share as of last year.

SK hynix, the second-largest player in the global memory market after Samsung, has so far remained silent about its non-memory business, shy of revealing the reality of its small business in the image sensor market.

According to market researcher TSR, the company claimed a 9.9 percent market share in the first quarter of 2018. But its image sensor sales -- at 800 billion won ($706 million) -- accounted for a mere 1 percent of the company’s total sales last year.
"

The SK Hynix company blog reviews the latest developments in smartphone imaging, mostly quoting other Korean companies:

Go to the original article...

Sub-Threshold 200GHz Detector

Image Sensors World        Go to the original article...

MDPI paper "Quasi-static Analysis Based on an Equivalent Circuit Model for a CMOS Terahertz Plasmon Detector in the Subthreshold Region" by Ju-Hee Son and Jong-Ryul Yang from Yeungnam University, Gyeongsan, Korea, claims that a sub-threshold-biased NMOS transistor in a 0.25um process is capable of detecting 200GHz radiation:

"An analytic method for a complementary metal-oxide-semiconductor (CMOS) terahertz plasmon detector operating in the subthreshold region is presented using the equivalent circuit model. With respect to design optimization of the detector, the signal transmission from the antenna port to the output of the detector is described by using the proposed circuit model, which does not include a complicated physical operating principle and mathematical expressions. Characteristics from the antenna port to the input gate node of the detector are analyzed through the superposition method by using the characteristic impedance of transmission lines. The superposition method shows that the effect of interconnection lines at the input is simplified with the optimum bias point. The characteristics of the plasmon detection are expressed by using small-signal analysis of the single transistor at the sub-threshold operation. The results of the small-signal analysis show that the unity gain preamplifier located between the detector core and the main amplifier can improve the detection performances such as the voltage responsivity and the noise equivalent power. The measurement results using the fabricated CMOS plasmon detector at 200 GHz suggest that the unity gain preamplifier improves the detector performances, which are the same results as we received from the proposed analytic method."

Go to the original article...

Fast Polarization Imaging

Image Sensors World        Go to the original article...

Photron publishes nice videos shot by its Crysta polarization camera:



Go to the original article...

ESA Investments in CIS Technology

Image Sensors World        Go to the original article...

Caeleste publishes a link to slides on European Space Agency programs and budgets for image sensor technology development. Some of the projects have been successful; others ended in failure:

Go to the original article...

Front Side Microlens for BSI Pixel

Image Sensors World        Go to the original article...

MDPI paper "Front-Inner Lens for High Sensitivity of CMOS Image Sensors" by Godeun Seok and Yunkyung Kim from Dong-A University, Busan, Korea, proposes a dual-side microlens for small pixels:

"Due to the continuing improvements in camera technology, a high-resolution CMOS image sensor is required. However, a high-resolution camera requires that the pixel pitch is smaller than 1.0 μm in the limited sensor area. Accordingly, the optical performance of the pixel deteriorates with the aspect ratio. If the pixel depth is shallow, the aspect ratio is enhanced. Also, optical performance can improve if the sensitivity in the long wavelengths is guaranteed. In this current work, we propose a front-inner lens structure that enhances the sensitivity to the small pixel size and the shallow pixel depth. The front-inner lens was located on the front side of the backside illuminated pixel for enhancement of the absorption. The proposed structures in the 1.0 μm pixel pitch were investigated with 3D optical simulation. The pixel depths were 3.0, 2.0, and 1.0 μm. The materials of the front-inner lens were varied, including air and magnesium fluoride (MgF2). For analysis of the sensitivity enhancement, we compared the typical pixel with the suggested pixel and confirmed that the absorption rate of the suggested pixel was improved by a maximum of 7.27%, 10.47%, and 29.28% for 3.0, 2.0, and 1.0 μm pixel depths, respectively."

Go to the original article...

Collabo Innovations vs Sony Appeal Oral Argument Recording

Image Sensors World        Go to the original article...

Those curious about how US court arguments sound can listen to an mp3 recording of the Collabo Innovations vs Sony appeal, held on March 5, 2019:



The patents in this lawsuit are US5,952,714 "Solid-state image sensing apparatus and manufacturing method thereof" and US8,030,724 "Solid-state imaging device and method for fabricating the same." Collabo acquired both of them from Panasonic.

Go to the original article...

Prophesee Invests in Software

Image Sensors World        Go to the original article...

Prophesee releases a driver for Robot Operating System (ROS).

The company also publishes an arxiv.org paper "Speed Invariant Time Surface for Learning to Detect Corner Points with Event-Based Cameras" by Jacques Manderscheid, Amos Sironi, Nicolas Bourdis, Davide Migliore, and Vincent Lepetit.

"We propose a learning approach to corner detection for event-based cameras that is stable even under fast and abrupt motions. Event-based cameras offer high temporal resolution, power efficiency, and high dynamic range. However, the properties of event-based data are very different compared to standard intensity images, and simple extensions of corner detection methods designed for these images do not perform well on event-based data. We first introduce an efficient way to compute a time surface that is invariant to the speed of the objects. We then show that we can train a Random Forest to recognize events generated by a moving corner from our time surface. Random Forests are also extremely efficient, and therefore a good choice to deal with the high capture frequency of event-based cameras ---our implementation processes up to 1.6Mev/s on a single CPU. Thanks to our time surface formulation and this learning approach, our method is significantly more robust to abrupt changes of direction of the corners compared to previous ones. Our method also naturally assigns a confidence score for the corners, which can be useful for postprocessing. Moreover, we introduce a high-resolution dataset suitable for quantitative evaluation and comparison of corner detection methods for event-based cameras. We call our approach SILC, for Speed Invariant Learned Corners, and compare it to the state-of-the-art with extensive experiments, showing better performance."


Thanks to TL for the pointer!

Go to the original article...

SPAD-based LiDAR in Bright Sunlight

Image Sensors World        Go to the original article...

A group of researchers from the University of Wisconsin-Madison publishes a nice arxiv.org paper analyzing SPAD LiDAR performance in bright sunlight: "Photon-Flooded Single-Photon 3D Cameras" by Anant Gupta, Atul Ingle, Andreas Velten, and Mohit Gupta:

"Single photon avalanche diodes (SPADs) are starting to play a pivotal role in the development of photon-efficient, long-range LiDAR systems. However, due to non-linearities in their image formation model, a high photon flux (e.g., due to strong sunlight) leads to distortion of the incident temporal waveform, and potentially, large depth errors. Operating SPADs in low flux regimes can mitigate these distortions, but, often requires attenuating the signal and thus, results in low signal-to-noise ratio. In this paper, we address the following basic question: what is the optimal photon flux that a SPAD-based LiDAR should be operated in? We derive a closed form expression for the optimal flux, which is quasi-depth-invariant, and depends on the ambient light strength. The optimal flux is lower than what a SPAD typically measures in real world scenarios, but surprisingly, considerably higher than what is conventionally suggested for avoiding distortions. We propose a simple, adaptive approach for achieving the optimal flux by attenuating incident flux based on an estimate of ambient light strength. Using extensive simulations and a hardware prototype, we show that the optimal flux criterion holds for several depth estimators, under a wide range of illumination conditions."

Go to the original article...

e2v Announces Another 5MP GS Sensor

Image Sensors World        Go to the original article...

Globenewswire: Following the 5MP sensor in its Emerald family, Teledyne e2v adds a new 5MP GS sensor in its Snappy family for barcode reading, 2D scanning and other applications. Available in both monochrome and color, the Snappy 5M has a 1/1.8-inch optical format, containing a 2.8 μm global shutter pixel, and is able to output video at ~50fps at 10 bits over a 4-wire MIPI CSI-2 interface.

Snappy 5M is designed to enable fast, extended range scanning and includes powerful unique patented features and region of interest modes:
  • A Fast Self Exposure (FSE) mode automatically calculates the optimum integration time that is applied to the first image from the device. The mode is user programmable and provides continuous fast decoding, tolerating any kind of lighting or dynamic lighting environment. This is advantageous compared with conventional auto exposure methods, improving convergence speed and robustness.
  • A Smart ROI feature searches for barcodes in the image frame, and reports their locations as metadata in the image footer. The regions of the image containing barcodes are discerned from the background image to considerably reduce downstream image processing (FPGA/CPU/DSP) power, time and cost. Up to 16 different regions can be detected simultaneously. Other forms of repetitive signatures such as printed character strings can also be detected for document scanning and OCR applications.
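The FSE idea above, deriving the optimum integration time from a single test frame, can be sketched under a linear-response assumption; the target level, limits, and formula here are invented, not e2v's implementation.

```python
def fast_self_exposure(test_exposure_us: float, mean_level: float,
                       target: float = 128.0, t_min: float = 10.0,
                       t_max: float = 20_000.0) -> float:
    """One-shot exposure estimate in the spirit of an FSE-style mode.

    Assumes a roughly linear sensor response: if a test frame taken at
    `test_exposure_us` averages `mean_level` (8-bit scale), scale the
    exposure so the next frame lands on `target`. All values here are
    hypothetical, clamped to illustrative sensor limits.
    """
    if mean_level <= 0:
        return t_max                      # dark scene: use max exposure
    return max(t_min, min(t_max, test_exposure_us * target / mean_level))

print(fast_self_exposure(1000, 64))   # 2000.0: underexposed, double it
print(fast_self_exposure(1000, 255))  # ~502: overexposed, cut back
```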

Go to the original article...

Yole on LiDAR Market

Image Sensors World        Go to the original article...

Yole Developpement publishes a report "LiDAR for Automotive and Industrial Applications 2019." A few quotes:

"The total LiDAR market was worth $1.3B in 2018 and is expected to reach $6B by 2024. Automotive applications should represent 70% of the total market.

Huge investments have been made since 2016, surpassing $1B and showing the great interest in LiDAR technology and more generally autonomous driving features. MEMS is the technology attracting most investments, followed by optical phased arrays, although investments in the latter have considerably declined since 2016.

Therefore, MEMS and flash technologies seem to be favored by LiDAR manufacturers. These two technologies are promising and should be introduced rapidly into the market: MEMS will be introduced by BMW in 2021; flash is being pushed by Continental, expecting an introduction by a carmaker in 2020.

A majority of LiDAR manufacturers are using optical components at a wavelength of 905nm due to their large availability at a reasonable cost compared with 1550nm components. Edge emitting lasers and avalanche photodiodes at 905nm are typical components of LiDARs developed today. Other components like vertical cavity surface emitting lasers (VCSELs), single-photon avalanche diodes (SPADs), and silicon photomultipliers (SiPMs) can also be used. However, they are expected in the next generation of LiDAR, as time is needed to increase their performance and reduce their cost.
"

Go to the original article...

Huawei P30 Pro Gets Highest DxOMark, Uses RYYB CFA in Main Sensor and Dedicated ToF Sensor

Image Sensors World        Go to the original article...

Huawei's new flagship smartphone, the P30 Pro, wins the highest DxOMark for its camera:


The most interesting fact is that such a high score has been achieved with an RYYB color filter pattern in the main sensor. Aptina patented that pattern several years ago. It's not immediately clear whether Huawei or its CIS supplier has licensed this patent or found a way around it.
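The appeal of RYYB is that a yellow filter passes both red and green light, so each pixel collects more photons while green remains recoverable arithmetically. The idealized subtraction below ignores the calibrated color matrices and noise handling a real pipeline needs.

```python
import numpy as np

def ryyb_to_rgb(r, y, b):
    """Recover green from RYYB samples: a yellow filter passes red plus
    green, so G ~= Y - R. Idealized sketch; real pipelines apply
    calibrated color-correction matrices, not a bare subtraction."""
    g = np.clip(y.astype(np.int16) - r.astype(np.int16), 0, 255)
    return np.stack([r, g.astype(np.uint8), b], axis=-1)

r = np.full((2, 2), 100, dtype=np.uint8)
y = np.full((2, 2), 180, dtype=np.uint8)
b = np.full((2, 2), 50, dtype=np.uint8)
print(ryyb_to_rgb(r, y, b)[0, 0])  # [100  80  50]
```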






Thanks to TS for the pointer!

Go to the original article...

Emmy Award for Color Filter Technology

Image Sensors World        Go to the original article...

Globenewswire: Peter Dillon and Albert Brault will receive Technology Emmy Awards from the National Academy of Television Arts & Sciences for their “Pioneering Development of the Single-Chip Color Camera” on April 7th in Las Vegas, NV. Their inventions include coating a mosaic of color filters over the light sensitive pixels on an image sensor and developing demosaicing algorithms to generate color video images. This technology is widely used to produce television programs and movies. It’s also used to create color photos and video clips in a broad range of products, including smart phone cameras, drones, and medical imaging devices.

Peter Dillon said “We’re delighted to receive this recognition for our research. We’d like to thank all our team members, who helped us develop and demonstrate the world’s first integral color image sensors and cameras. It’s amazing what a revolution this has created in how people around the world use color images to communicate.”

Albert Brault said “By combining my knowledge of chemistry with Peter’s understanding of solid-state electronics, we created a new way of sensing color images. We were fortunate to have all the support and infrastructure needed to turn our ideas into working devices. Decades later, we’re thrilled that Rochester remains the world’s center for photonics and imaging.”

In early 1974, while at Kodak Research Labs (KRL), Dillon led a team developing an early prototype color video camcorder. Instead of the conventional design using a large color prism and three CCD sensors, he conceived the idea of fabricating a color filter mosaic over the individual pixels of a single CCD. Brault, his KRL colleague, then perfected a process for coating organic color dyes through photoresist windows during wafer fabrication. To determine the optimum color pattern, Peter consulted KRL mathematician Bryce Bayer, who invented the checkerboard arrangement now known as the “Bayer Pattern”. These visionary ideas made capturing color digital images inexpensive and ubiquitous. Today, nearly everyone carries the technology they developed in their purse or pocket, since billions of integral color sensors are used each year in smart phones.
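The single-chip idea boils down to sampling just one color per pixel through a filter checkerboard, with the missing colors filled in later by demosaicing. A minimal sketch of the sampling step, assuming the common RGGB arrangement of the Bayer pattern:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image through a Bayer (RGGB) filter mosaic.

    Each output pixel keeps only the one color channel its filter
    passes, mirroring the single-chip color sensor concept.
    Illustrative only; real sensors differ in filter response.
    """
    h, w, _ = rgb.shape
    raw = np.empty((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites (even rows)
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites (odd rows)
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return raw
```

Bayer's insight was to give green, which dominates human luminance perception, twice as many sites as red or blue; the demosaicing algorithms Dillon and Brault's team developed reconstruct the two missing channels at every pixel.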

Dillon presented a paper, co-authored by Brault, describing the world's first single-chip color sensor in December 1976 at IEDM in Washington D.C. The auditorium was packed with scientists from the leading semiconductor and video camera manufacturers. Afterwards, many visited KRL to learn about this important imaging breakthrough.

The research building where Dillon and Brault worked was expanded in the early 1980s to manufacture the world’s first color megapixel imagers, which were used in many pioneering digital cameras. The facility is used today by ON Semiconductor to fabricate color CCDs with up to 50MP.

First Color CCD image sensor
First single-chip color camera

Go to the original article...

Ouster Reports 400 Customers, Raises $60M

Image Sensors World        Go to the original article...

PRNewswire: LiDAR startup Ouster announces it has a roster of over 400 customers and the addition of over $60M in funding. The company has also opened a new manufacturing facility in San Francisco, currently producing hundreds of LiDARs per month and capable of producing thousands per month by the end of 2019. Transparent pricing and short 2-3 week lead times for sensor delivery are claimed to help the company stand out in a crowded market.

The additional $60M in equity and debt funding includes investments from Runway Growth Capital and Silicon Valley Bank, as well as additional funding from Series A participants Cox Enterprises, Constellation Tech Ventures, Fontinalis Partners, Carthona, and others.

Since Ouster launched in late 2017, the company has announced four LiDARs with resolutions ranging from 16 to 128 channels, as well as two product lines: the OS-1 and OS-2. What started as a 4-person team working in a tiny warehouse three years ago has grown to over 100 full-time employees across engineering, operations, business development, and marketing. The company expects to nearly double its headcount in the coming year to support further product line development and meet the global demand for its high-resolution LiDARs.



Update: In comparison, Velodyne claims "only" 250 customers so far.

Go to the original article...

Teledyne e2v Releases 4MP BSI Rad-Hard Sensor

Image Sensors World        Go to the original article...

Teledyne e2v has released a new image sensor, the CIS120, for harsh environments such as space applications. Samples have been available since February 2019 along with a full test and demonstration system.

Key specifications of the CIS120 include a resolution of 2048 x 2048 and BSI 10µm square pixels with a QE of 90% at 550 nm (typical). The sensor offers both a rolling shutter mode with a frame rate of 30 fps (8 bit) and a global shutter mode with a frame rate of 20 fps (12 bit).

Key features include:

  • Good latch-up immunity and a high SEU threshold by design, with resistance to ionising radiation through process choice
  • Pixel read timing is set by an on-chip sequencer to simplify use and reduce pin count
  • A column-parallel ADC is controlled by its own sequencer
  • ADC resolution can be set anywhere from 8 to 14 bits
  • Four LVDS channels output the image data and are controlled by the readout sequencer to scan along each row in turn
  • All configuration settings, including shutter mode, ADC resolution, and bias current values, are programmed over an SPI interface
  • Package options include ceramic PGA, and metal and ceramic three-side-buttable designs for use in mosaic focal planes
  • The CIS120 is stitched, so other sizes from 2048 × 1024 up to 2048 × 8192 pixels are possible without the cost of new masks, as are other customer-specific options such as anti-reflective coatings
  • An increased charge capacity of 100 ke– is possible via a metal pattern change
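SPI-programmed sensors like this are typically configured by writing address/value pairs into an on-chip register map. The sketch below packs settings into 16-bit write frames (8-bit address, 8-bit value); the register names and addresses are invented for illustration and do not come from the CIS120 datasheet.

```python
def spi_config_frames(settings):
    """Build SPI write frames for a hypothetical sensor register map.

    Each frame is one address byte followed by one value byte.
    The register map below is an illustrative assumption, not the
    actual CIS120 map; consult the datasheet for real addresses.
    """
    REG = {
        "shutter_mode": 0x01,   # hypothetical: 0 = rolling, 1 = global
        "adc_bits": 0x02,       # hypothetical: 8..14
        "bias_current": 0x03,   # hypothetical bias DAC code
    }
    frames = []
    for name, value in settings.items():
        frames.append(bytes([REG[name], value & 0xFF]))
    return frames
```

On a Linux host these frames would then be shifted out with a standard SPI driver; keeping configuration behind a simple register map is what lets one serial interface cover shutter mode, ADC resolution, and biasing without extra pins.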

Go to the original article...

Innoviz Raises $132M More

Image Sensors World        Go to the original article...

PRNewswire: LiDAR startup Innoviz has raised $132M in Series C funding. The round is marked by the entrance of new major investors China Merchants Capital, Shenzhen Capital Group, and New Alliance Capital, along with Harel Insurance Investments and Financial Services and Phoenix Insurance Company. Given demand from additional investors, the Series C round will remain open for a second closing to be announced in the coming months.

The new round will support Innoviz's commercialization of InnovizPro and InnovizOne LiDARs. A partnership with Magna International, which also participated in the round, resulted in Innoviz's automotive-grade LiDAR, InnovizOne, and its computer vision software being selected by BMW for series production of vehicles starting in 2021.

"We've experienced significant growth over the past year to meet increased demand for solid-state LiDAR. This fundraising enables many of the substantial commitments it takes to bring this technology to market at a massive scale — the scale required by Tier 1 suppliers and automakers leveraging LiDAR to deliver autonomous vehicles to the masses by 2021. We're excited to transition our production, manufacturing and research and development efforts into the next phase and continue to furnish the full stack of LiDAR hardware and software solutions to the industry," said Omer Keilaf, CEO and co-founder of Innoviz. "This round is a strong testament to the excellent progress we've made in cementing our technology as a true market leader capable of meeting the rigorous automotive standards at a cost that makes mass production realistic."


Go to the original article...
