e2v Announces 37.7MP 86fps GS Sensor in 4/3-inch Format

Image Sensors World        Go to the original article...

Teledyne e2v announces its new Emerald 36M, a 37.7MP sensor for industrial and outdoor applications requiring both high resolution and high speed.

Emerald 36M combines 6k square resolution, high frame rate, low noise, high QE, and wide angular response. The sensor is available in ultra-high-speed and high-speed versions, providing 86fps and 43fps at full resolution, respectively.

The sensor fits standard Four Thirds optics, and is said to be the world’s highest resolution global shutter sensor to fit these lenses. Emerald 36M is pin-to-pin and optically compatible with Emerald 67M, so that multiple resolutions and speed grades are possible from a single camera design.

Marie-Charlotte Leclerc, Marketing Manager at Teledyne e2v said, “We are delighted to release the Emerald 36M. The sensor is already gathering interest from vision system designers looking forward to improved accuracy and throughput with optimized inspection system paths. And beyond the factory floor, Emerald 36M enables surveillance over wider fields of view and provides higher autonomy for aerial mapping and security solutions.”

Evaluation kits and samples of Emerald 36M are available now.

Samsung Surface Plasmon Enhanced Organic Image Sensor

Nature paper "Surface plasmon enhanced Organic color image sensor with Ag nanoparticles coated with silicon oxynitride" by Sung Heo, Jooho Lee, Gae Hwang Lee, Chul-Joon Heo, Seong Heon Kim, Dong-Jin Yun, Jong-Bong Park, Kihong Kim, Yongsung Kim, Dongwook Lee, Gyeong-Su Park, Hoon Young Cho, Taeho Shin, Sung Young Yun, Sunghan Kim, Yong Wan Jin, and Kyung-Bae Park from Samsung Advanced Institute of Technology, Dongguk University, Yonsei University, Seoul National University, and Chonbuk National University shows Samsung's efforts to improve OPD pixels:

"As organic photodetectors with less than 1 μm pixel size are in demand, a new way of enhancing the sensitivity of the photodetectors is required to compensate for its degradation due to the reduction in pixel size. Here, we used Ag nanoparticles coated with SiOxNy as a light-absorbing layer to realize the scale-down of the pixel size without the loss of sensitivity. The surface plasmon resonance appeared at the interface between Ag nanoparticles and SiOxNy. The plasmon resonance endowed the organic photodetector with boosted photon absorption and external quantum efficiency. As the Ag nanoparticles with SiOxNy are easily deposited on ITO/SiO2, it can be adapted into various organic color image sensors. The plasmon-supported organic photodetector is a promising solution for realizing color image sensors with high resolution below 1 μm."


"In summary,... Although the effective area for receiving the incident photon is expected to decrease with the scaling-down of the pixels, the introduction of the SPR in OCIS counters the problem without losing the spatial resolution. With further systematic research conducted on the pattern and size of Ag NPs, the SPR is likely to be the sole solution for realizing OCISs with high resolution below 1 μm."
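For background on the mechanism the paper exploits, the localized surface plasmon resonance of a small metal nanoparticle follows from the standard quasi-static (Fröhlich) condition; this is textbook electromagnetics, not a formula taken from the paper:

```latex
% Quasi-static polarizability of a metal sphere of radius a and
% permittivity \varepsilon(\omega), embedded in a dielectric of
% permittivity \varepsilon_m (here, the SiOxNy coating).
% Resonance occurs at the Frohlich condition:
\alpha(\omega) = 4\pi a^{3}\,
  \frac{\varepsilon(\omega)-\varepsilon_m}{\varepsilon(\omega)+2\,\varepsilon_m},
\qquad
\operatorname{Re}\,\varepsilon(\omega_{\mathrm{SPR}}) = -2\,\varepsilon_m .
```

Since the resonance depends on the surrounding permittivity, the SiOxNy shell is not incidental: tuning its composition shifts the resonance wavelength of the Ag nanoparticles.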

LIGO Interferometer Reduces Quantum Fluctuations of Light

The LIGO gravitational-wave interferometer announces that it uses squeezed light to improve its measurement accuracy:

"The Heisenberg uncertainty principle states that we can't know both the position and the velocity of a quantum particle perfectly--the better we know the position, the worse we know the velocity, and vice versa. For light waves, the Heisenberg principle tells us that there are unavoidable uncertainties in amplitude and phase that are connected in a similar way. One of the stranger consequences of quantum theory is that there must be fluctuating electric and magnetic fields, even in a total vacuum. In a normal vacuum state, these "zero-point" fluctuations are completely random and the total uncertainty is distributed equally between the amplitude and the phase. However, by using a crystal with non-linear optical properties, it is possible to prepare a special state of light where most of the uncertainty is concentrated in only one of the two variables. Such a crystal can convert normal vacuum to "squeezed vacuum", which has phase fluctuations SMALLER than normal vacuum! At the same time, the amplitude fluctuations are larger, but phase noise is what really matters for LIGO."
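The trade-off described above can be written compactly in the standard quadrature notation of quantum optics (our background note, not part of the LIGO text; the convention with commutator i/2 is assumed):

```latex
% Amplitude (X_1) and phase (X_2) quadrature uncertainties,
% with [\hat X_1, \hat X_2] = i/2:
\Delta X_1\,\Delta X_2 \;\ge\; \tfrac{1}{4},
\qquad
\text{squeezed vacuum:}\quad
\Delta X_2 = \tfrac{1}{2}\,e^{-r},\;\;
\Delta X_1 = \tfrac{1}{2}\,e^{+r}.
```

Here r > 0 is the squeezing parameter: the product of uncertainties stays at its minimum while the phase fluctuations drop below the ordinary vacuum value of 1/2, exactly the quantity that limits LIGO.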

SPAD PDP Simulation

The National Chiao Tung University, Taiwan, paper "Photon-Detection-Probability Simulation Method for CMOS Single-Photon Avalanche Diodes" by Chin-An Hsieh, Chia-Ming Tsai, Bing-Yue Tsui, Bo-Jen Hsiao, and Sheng-Di Lin is a part of the MDPI Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"Single-photon avalanche diodes (SPADs) in complementary metal-oxide-semiconductor (CMOS) technology have excellent timing resolution and are capable to detect single photons. The most important indicator for its sensitivity, photon-detection probability (PDP), defines the probability of a successful detection for a single incident photon. To optimize PDP is a cost- and time-consuming task due to the complicated and expensive CMOS process. In this work, we have developed a simulation procedure to predict the PDP without any fitting parameter. With the given process parameters, our method combines the process, the electrical, and the optical simulations in commercially available software and the calculation of breakdown trigger probability. The simulation results have been compared with the experimental data conducted in an 800-nm CMOS technology and obtained a good consistence at the wavelength longer than 600 nm. The possible reasons for the disagreement at the short wavelength have been discussed. Our work provides an effective way to optimize the PDP of a SPAD prior to its fabrication."
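As a rough illustration of the quantity such a simulation chain computes (a toy model of ours, not the paper's procedure), the PDP can be seen as photon absorption depth, distributed per Beer-Lambert, weighted by a depth-dependent avalanche-trigger probability. The absorption coefficient and trigger profile below are invented for illustration:

```python
import numpy as np

# Toy PDP model: probability a photon is absorbed at depth z
# (Beer-Lambert density) times the probability an absorption at
# that depth triggers a detectable avalanche, integrated over depth.
def pdp(alpha_per_um, depth_um, p_trig):
    z = np.linspace(0.0, depth_um, 1000)
    dz = z[1] - z[0]
    absorb = alpha_per_um * np.exp(-alpha_per_um * z)  # absorption pdf, 1/um
    return float(np.sum(absorb * p_trig(z)) * dz)

# Hypothetical trigger profile: high inside a 1-3 um multiplication
# region, low elsewhere (real profiles come from TCAD field solutions).
p_trig = lambda z: np.where((z > 1.0) & (z < 3.0), 0.9, 0.1)

val = pdp(0.5, 10.0, p_trig)   # alpha = 0.5/um, hypothetical red-light value
print(f"PDP ~ {val:.2f}")
```

Longer wavelengths absorb deeper (smaller alpha), which is one intuition for why agreement in such models is wavelength dependent, as the paper reports.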

SmartSens Announces Starlight Upgrade Technology, IoT Award

The Starlight H-Series sensor announcement has been removed at SmartSens request.

PRNewswire: SmartSens SC132GS sensor has been selected as the winner of the "IoT Semiconductor Solution of the Year" award in the 4th annual IoT Breakthrough Awards program from IoT Breakthrough, a market intelligence organization.

"We expect to see the SmartSens SC132GS perform outstandingly in fields such as IoT that demand high efficiency and efficacy," said Chris Yiu, CMO, SmartSens. "IoT solutions, ITS, machine vision for manufacturing automation, and intelligent security and surveillance are just a few examples of these fields. SmartSens is confident that as AI and 5G technologies mature, more applications will arise which the SC132GS and subsequent products from SmartSens will be able to handle in stride. We are proud to receive this significant industry recognition from IoT Breakthrough in recognition of our innovation and success."

4th International Workshop on Image Sensor and Systems (IWISS2018)

A full collection of papers from the 4th International Workshop on Image Sensor and Systems (IWISS2018), held in November 2018 at Tokyo Institute of Technology, Japan, is published online at the International Image Sensor Society site.

Espros ToF Performance in Sunlight

Espros publishes a demo video of its ToF camera performance with sunlight in the frame:

SPAD ToF Imager Thesis

University of Oulu, Finland, publishes the PhD thesis "Time-gating technique for a single-photon detection-based solid-state time-of-flight 3D range imager" by Henna Ruokamo.

"This thesis is concerned with the development of a solid-state 3D range imager based on use of the sliding time-gate technique in a SPAD array and short (~200 ps), intensive laser pulses. The area of the in-pixel electronics needed in time-gated imagers is small, which leads to a high fill factor and a possibility for implementing large arrays. The use of short laser pulses increases the precision and frame rate, since depth measurement can be limited to the range of interest, e.g. around the surface of the target. To increase the frame rate further, the array can be divided into subarrays with independently defined ranges. Tolerance of high background light is achieved by using sub-ns time-gate widths.

A SPAD array of 80 x 25 pixels is developed and realized here. The array is divided into 40 subarrays, the narrow (less than 0.8 ns) time-gating positions for which can be set independently. The time-gating for each of the subarrays is selected separately with an on-chip DLL block that has 240 outputs and a delay grid of ~100 ps. The fill factor of the sensor area is 32%. A 3D range image measurement at ~10 frames per second with centimetre-level precision is demonstrated for the case of passive targets within a range of ~4 metres and a field of view of 18 × 28 degrees, requiring an average active illumination power of only 0.1 mW. A frame rate of 70 range images per second was achieved with a higher laser average illumination power (~5 mW) and pulsing rate (700 kHz) when limiting the scanning range for each subarray to 30 cm around the surfaces of the targets.

An FPGA-based algorithm which controls the time-gating of the SPAD array and produces the range images in real time was also developed and realized."
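The sliding time-gate idea can be sketched numerically (our toy simulation, not the thesis code): photons returning from a target at 4 m are counted in a ~0.8 ns gate that slides in ~100 ps steps, and the gate delay with the most counts yields the range via d = c·t/2. The photon counts and background level are invented:

```python
import numpy as np

C = 3e8  # speed of light, m/s
rng = np.random.default_rng(0)

# Target at 4 m: signal photons arrive around the round-trip time,
# smeared by the ~200 ps laser pulse; background photons are uniform.
target_m = 4.0
t_return = 2 * target_m / C                          # ~26.7 ns round trip
arrivals = rng.normal(t_return, 0.2e-9, 2000)        # signal photons
arrivals = np.append(arrivals, rng.uniform(0, 40e-9, 500))  # background

gate_w, step = 0.8e-9, 100e-12                       # gate width, delay grid
delays = np.arange(0, 40e-9, step)
counts = [np.sum((arrivals >= d) & (arrivals < d + gate_w)) for d in delays]

best = delays[int(np.argmax(counts))]
est = C * (best + gate_w / 2) / 2                    # center of best gate -> range
print(f"estimated range ~ {est:.2f} m")
```

The sub-ns gate is also what rejects background light: only photons inside the narrow window around the expected return are counted.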


Opnous ToF Presentation

China-based startup Opnous publishes a presentation on its ToF products, reportedly licensed from Brookman and Shizuoka University.

LiDAR News: SK Telecom, Pioneer, Canon, Outsight, SOS Lab, ON Semi, Valeo, Kyocera, Livox

SK Telecom announces its ‘Next-Generation Single Photon LiDAR’ developed with Pioneer Smart Sensing Innovations Corporation (PSSI). SK Telecom and PSSI entered into a joint development agreement in September 2019 to develop a next-generation single photon LiDAR and have been actively working together since to commercialize the new LiDAR by 2021. The LiDAR combines SK Telecom's 1550nm SPAD with Pioneer's 2D MEMS scanning mirror.

SK Telecom’s single photon LiDAR transceiver consists of a 1550nm laser, a SPAD, and Time-Correlated Single Photon Counting (TCSPC). The 1550nm laser, much stronger than a 905nm laser, is said to enable detection of objects at distances of up to 500m.

Rather than a linear-mode APD, SK Telecom uses a SPAD for higher sensitivity to light. With the SPAD, the Next-Generation Single Photon LiDAR can accurately detect low-reflectivity objects such as tires or pedestrians dressed in black.
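Some back-of-envelope numbers for the quoted 500 m range (our arithmetic, not SK Telecom's): TCSPC times each returned photon, distance follows from d = c·Δt/2, and the round-trip time caps the pulse repetition rate that avoids range ambiguity.

```python
C = 3e8                    # speed of light, m/s
d = 500.0                  # quoted maximum range, m
t_rt = 2 * d / C           # round-trip time of flight, s
max_rate = 1.0 / t_rt      # max unambiguous pulse repetition rate, Hz
print(f"round trip {t_rt * 1e6:.2f} us -> max ~{max_rate / 1e3:.0f} kHz pulse rate")
```

A ~3.3 µs round trip limits an unambiguous single-pulse scheme to roughly 300 k pulses per second, one of the engineering trade-offs of long-range LiDAR.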


Pioneer adds that Canon too is a part of this joint 500m-range LiDAR project with SK Telecom:

"Together with Canon Inc. (“Canon,” hereafter), PSSI is engaged in co-development of 3D-LiDAR sensors, which are regarded as an indispensable key device for the realization of autonomous driving in level-three and above autonomous vehicles (conditional automation)..., which utilizes a Micro-Electro-Mechanical Systems (MEMS) mirror-based scanning method and Canon’s optical technologies... The newly developed next-generation 3D-LiDAR sensor is a 1550nm wavelength sensor model which—although based on the same core technologies developed by PSSI and Canon—offers a greatly extended measurement distance made possible by the addition of transceiver (transmitter / receiver) technologies developed by SK Telecom Co., Ltd. (“SK Telecom,” hereafter) of South Korea. The new sensor is capable of high resolution and measurement at long distance of 500m."


EIN Newsdesk: Hyperspectral LiDAR startup Outsight has landed its first customer. After an international competition, the Paris airport group (ADP) chose Outsight's "3D Smart Monitoring" system for two areas of Paris-Charles de Gaulle airport's international Terminal 2E, including the baggage claim zone. One of the key elements of this technology is the Edge Privacy feature: the video stream never leaves the sensors. Because the image processing runs on the device itself, the system is completely autonomous in analyzing the captured data, avoiding the transit of sensitive data over networks.

Raul Bravo, President and Co-Founder of Outsight, says: "This first deployment at Paris Charles-de-Gaulle Airport, just a few months after the company's creation, demonstrates the relevance of our 3D perception approach in the context of improved operations and security. This first implementation follows a succession of announcements for Outsight, and can then be extended to other areas such as shopping malls and train stations."


BusinessWire: SOS Lab is to use ON Semi SPADs in its LiDAR. “We expect to develop (solid-state) type lidars more quickly and plan to mass-produce lidars for vehicles with built-in headlamps and bumpers within 2-3 years,” says JiSeong Jeong, CEO of SOS Lab.


Valeo's Investor Presentation releases some data on its LiDAR sales, in response to skeptics saying there is no market for automotive LiDARs now:


Kyocera presents LiDAR and camera in a single box:

“There is a big problem with many of these LIDAR systems,” says Hiroyuki Minagawa, Senior Manager of Research. “They use mechanical motors to rotate their scanning mirrors, and they can’t withstand the shaking and vibration that happens normally in a car. I don’t think they would last more than a couple of years.” To overcome this limitation, Kyocera developed an exclusive solution: a MEMS mirror housed inside Kyocera’s ceramic packaging technology. Ceramics are in Kyocera’s DNA, right down to its name, and breakthroughs in ceramic technology are an integral part of the company’s history. This advanced ceramic technology makes Kyocera’s Camera-LIDAR Fusion Sensor far more durable in real-world driving conditions, making it the superior choice for autonomous driving systems.


Livox explains its non-repetitive scanning pattern advantages:

"The environment scanned by a Livox sensor increases with longer integration time as the laser explores new spaces within its Field of View (or FOV). As seen in the image below, a Livox Mid-40 or Mid-100 sensor generates a unique flower-like scanning pattern to create a 3D image of the surrounding environment. Image fidelity increases rapidly over time. In comparison, conventional lidar sensors use horizontal linear scanning methods that run the risk of blind spots, causing some objects in their FOV to remain undetected regardless of how long the scan lasts. The unique non-repetitive scanning method of the Livox lidar sensors enables nearly 100% FOV coverage with longer integration time which does not exist in any market alternatives today at this cost."
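As a visual analogy only (the real Livox pattern comes from rotating Risley prisms; the frequencies and scaling below are invented), a curve driven by two incommensurate frequencies never retraces itself, so its coverage of the field of view keeps growing with integration time:

```python
import numpy as np

# Fraction of FOV grid cells visited by a non-repetitive, rose-like
# trajectory after n_samples time steps. Incommensurate frequencies
# guarantee the path does not repeat, mimicking the Livox behavior.
def coverage(n_samples, grid=50):
    t = np.linspace(0, n_samples * 1e-4, n_samples)
    f1, f2 = 1994.0, 1000.0 * np.sqrt(2)      # irrational ratio -> non-repeating
    x = 0.5 * (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t))
    y = 0.5 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))
    ix = ((x + 1) / 2 * (grid - 1)).astype(int)
    iy = ((y + 1) / 2 * (grid - 1)).astype(int)
    hit = np.zeros((grid, grid), bool)
    hit[ix, iy] = True
    return hit.mean()                          # fraction of cells visited

print(f"{coverage(2_000):.0%} -> {coverage(50_000):.0%} of cells with longer integration")
```

A fixed horizontal line scan, by contrast, revisits the same rows forever, which is the blind-spot risk the Livox text describes.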

Livox Horizon LiDAR sells for $999; the Tele-15 costs $1,499.

TPSCo 2.5um GS Pixel

TPSCo paper "A High-Performance 2.5 μm Charge Domain Global Shutter Pixel and Near Infrared Enhancement with Light Pipe Technology" by Ikuo Mizuno, Masafumi Tsutsui, Toshifumi Yokoyama, Tatsuya Hirata, Yoshiaki Nishi, Dmitry Veinger, Adi Birman, and Assaf Lahav is a part of MDPI Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"We developed a new 2.5 μm global shutter (GS) pixel using a 65 nm process with an advanced light pipe (LP) structure. This is the world’s smallest charge domain GS pixel reported so far. This new developed pixel platform is a key enabler for ultra-high resolution sensors, industrial cameras with wide aperture lenses, and low form factors optical modules for mobile applications. The 2.5 μm GS pixel showed excellent optical performances: 68% quantum efficiency (QE) at 530 nm, ±12.5 degrees angular response (AR), and quite low parasitic light sensitivity (PLS)—10,400 1/PLS with the F#2.8 lens. In addition, we achieved an extremely low memory node (MN) dark current 13 e−/s at 60 °C by fully pinned MN. Furthermore, we studied how the LP technology contributes to the improvement of the modulation transfer function (MTF) in near infrared (NIR) enhanced GS pixel. The 2.8 μm GS pixel using a p-substrate showed 109 lp/mm MTF@50% at 940 nm, which is 1.6 times better than that without an LP. The MTF can be more enhanced by the combination of the LP and the deep photodiode (PD) electrically isolated from the substrate. We demonstrated the advantage of using LP technology and our advanced stacked deep photodiode (SDP) technology together. This unique combination showed an improvement of more than 100% in NIR QE while maintaining an MTF that is close to the theoretical Nyquist limit (MTF @50% = 156 lp/mm)."
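The MTF figures above can be put in context with the sampling (Nyquist) limit of a pixel array, f_N = 1/(2·pitch); the helper below is our arithmetic for the two pixel pitches mentioned, not a calculation from the paper:

```python
# Nyquist limit of a pixel-sampled image sensor, in line pairs per mm:
# the finest spatial frequency the sampling grid can represent.
def nyquist_lp_per_mm(pitch_um: float) -> float:
    return 1.0 / (2.0 * pitch_um * 1e-3)   # pitch converted from um to mm

for pitch in (2.5, 2.8):
    print(f"{pitch} um pixel -> Nyquist {nyquist_lp_per_mm(pitch):.0f} lp/mm")
```

Keeping the optical MTF high all the way up to this sampling limit, as the light pipe does at 940 nm, is what preserves NIR sharpness at such small pitches.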

ISSCC Forum ‘Sensors for Health’

ISSCC 2020 Forum "Sensors for Health" organized by Matteo Perenzoni, FBK, Italy, has a number of image sensing presentations:
  • Flexible Electronics for Medical Imaging: from Patches to Large-Area X-Ray Imaging
    Kris Myny, IMEC, Leuven, Belgium
  • SPADs, ISFETs and Photodiodes: Mixed-Mode Sensing for Healthcare
    David Cumming, University of Glasgow, Glasgow, United Kingdom
  • CMOS Sensor Architectures and Circuital Solutions for Nuclear Medicine: from Scintillator-Based dSiPM to Monolithic Detectors
    Nicola Massari, FBK, Trento, Italy
  • CMOS/BiCMOS THz System-on-Chip for Life-Science Applications
    Ullrich Pfeiffer, University of Wuppertal, Wuppertal, Germany

Sony Announces 2.74um GS Products, 5.8um Quad-Bayer Sensor

Sony adds 2.74um GS pixel products to its lineup, shown in pink below:


For security and surveillance applications, Sony introduces 5.8um Starvis quad-Bayer pixel in its new 1080p90 IMX482LQR sensor:

Ams Presents First Fruit of its Cooperation with SmartSens

BusinessWire: ams introduces the NIR CMOS global shutter sensor CGSS130, aimed at 3D optical sensing applications such as face recognition and payment authentication, and designed to operate at much lower power than alternative implementations.

The CGSS130 sensor is said to be 4 times more sensitive to NIR wavelengths than most other image sensors on the market today. Since the IR emitter consumes most of the power in face recognition and other 3D sensing applications, the use of the CGSS130 sensor will enable manufacturers to extend battery run-time in mobile devices. The sensor also creates the opportunity to implement face recognition in wearable devices and in other products powered by a very small battery, or to enable a new range of applications beyond face recognition, as the increased sensitivity extends the measurement range for the same power budget.

Stephane Curral, EVP and GM at ams’ ISS division, says: “Following the announcement of ams’ partnership with SmartSens Technology earlier this year, we are delighted to announce the first 3D Active Stereo Vision (ASV) reference design based on the CGSS130 voltage-based NIR enhanced global shutter image sensor. The 1.3MP stacked BSI sensor offers the highest Quantum Efficiency at 940nm, ideally suited for battery-powered devices. By supplying all main parts of the 3D system (illumination, receiver, SW) ams enables superior system performance with lower costs and a faster time to market for its customers.”

Development of the CGSS130 has been accelerated by ams’ partnership with SmartSens Technology. The new sensor is available for sampling.

The CGSS130 has a QE of up to 40% at 940nm and up to 58% at 850nm. The stacked BSI process allows a small footprint of just 3.8mm x 4.2mm with a GS pixel size of 2.7um.

The sensor produces monochrome images with an effective pixel array of 1080H × 1280V at a maximum frame rate of 120 fps. This high frame rate and global shutter operation produce clean images free of blur or other motion artifacts.
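A quick sanity check on the raw data rate implied by the quoted mode (our back-of-envelope arithmetic; the 10-bit depth is an assumption, not stated in the announcement):

```python
# Raw data rate for the CGSS130's quoted 1080 x 1280 @ 120 fps mode.
pixels = 1080 * 1280          # effective pixel array
fps = 120                     # maximum frame rate
bits = 10                     # assumed ADC bit depth (not from the source)
rate_gbps = pixels * fps * bits / 1e9
print(f"{pixels * fps / 1e6:.0f} Mpix/s, ~{rate_gbps:.2f} Gbit/s raw")
```

Roughly 166 Mpix/s is a modest readout load by modern stacked-BSI standards, consistent with the low-power positioning of the sensor.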

Are There Too Many LiDAR Companies?

Wired article "There Are Too Many Lidar Companies. They Can't All Survive" quotes Shahin Farshchi, a partner at VC fund Lux who invests in LiDAR company Aeva, saying that for every 10 LiDAR startups "three will fold, four will be acquired for modest sums, and the remainder will produce impressive returns."

Recently, the LiDAR industry has shown signs of consolidation, and some see a shakeout coming. Luminar CEO Austin Russell says he has been approached by a half-dozen competitors asking if Luminar would be interested in acquiring them.

Meanwhile, France’s Valeo has logged $564 million worth of orders for its LiDARs, which work best for shorter distances.

Reuters: Alternative uses and customers are needed to keep revenue flowing at LiDAR startups waiting for the expected boom in self-driving cars, which still looks to be years away. For smaller LiDAR companies backed by venture capital, developing new markets is key.


Henry Patent Law Firm publishes its LiDAR patent landscape analysis:


According to Woodside Capital Partners presentation, investment in LiDAR companies is far greater than in other AV vision solutions:


PhotonicsSpectra publishes an article on the VC market by Greg Smolka, VP of business development at Insight LiDAR:

"According to IDTechEx, as of August, $1.9 billion was invested in the 2019 lidar market. PitchBook’s third quarter 2019 Mobility Tech report shows that the lidar industry was on track for a record year, with approximately $1.2 billion in venture capital (VC) investment in the first three quarters of the year. Since 2009, investors have deployed over $2.5 billion in VC dollars into the industry.

With 100 or so players in the market, many offering similar technology, consolidation is bound to happen. [Dexin Chen, senior analyst at IHS Markit] said we’ve begun to see an uptick in mergers and acquisitions for lidar makers. Ultimately, those that prevail will have the ability to meet all of the specifications necessary for L4/L5 AVs, including cost targets. The vast majority of companies working on automotive lidar are also pursuing other applications such as security, mapping, and industrial automation. While the automotive lidar market may consolidate, technological advancements driven by this market will have far-reaching effects.
"

More LiDAR News: Velodyne $100 LiDAR, XenomatiX Partners with Marelli

BusinessWire: Velodyne introduces the $199 Velabit, Velodyne’s smallest LiDAR, which "delivers the same technology and performance found on Velodyne’s full suite of state-of-the-art sensors and will be the catalyst for creating endless possibilities for new applications in a variety of industries."

“The Velabit democratizes lidar with its ultra-small form factor and its sensor pricing targeted at $100 in high-volume production, making 3D lidar available for all safety-critical applications,” said Anand Gopalan, CEO, Velodyne Lidar. “Its combination of performance, size and price position the Velabit to drive a quantum leap in the number of lidar-powered applications. The sensor delivers what the industry has been seeking: a breakthrough innovation that can jump-start a new era of autonomous solutions on a global scale.”

Velabit’s features:
  • Integrated processing in a compact size of 2.4” x 2.4” x 1.38” – smaller than a deck of playing cards – to be easily embedded in a wide range of solutions.
  • Range up to 100 meters.
  • 60-deg horizontal FoV x 10-deg vertical FOV.
  • Highly configurable to support a range of applications.
  • Class 1 eye-safe 903nm laser.
  • Bottom connector with cable length options.
  • Multiple manufacturing sources scheduled to be available for qualified production projects.


PRNewswire: Marelli and XenomatiX enter into a technical and commercial development agreement in the autonomous driving field.

"Marelli is a leading automotive supplier with the right competencies to develop modular LiDAR solutions fulfilling different Automotive OEM needs, integrating them into larger systems, based on the True Solid State LiDAR technology we designed for the automotive market," states Filip Geuens, CEO of XenomatiX. "Marelli's long-standing experience in the automotive field and with the 3D sensors is key to this partnership."

Marelli introduces the Smart Corner, a solution integrating sensors for autonomous driving within vehicle headlamps and tail lamps, while maintaining attractive styling and world-class lighting performance:



PRNewswire: Carnavicom, a South Korean automotive supplier, presents its new LiDAR, which achieves an average 26% cost reduction thanks to local procurement of key components such as the brushless DC motor (BLDC), laser diode (LD), and avalanche photodiode (APD). Such efforts make LiDAR sensors more affordable to integrate into other LiDAR products and applications.

Pig Facial Recognition

CounterpointResearch: Chinese Alibaba, JD.com, and Tencent are developing AI-based Smart Agriculture platforms which they believe will lead to improved agricultural efficiency, particularly with respect to pig rearing. Pig facial recognition works in a similar way to human facial recognition, recording details of the pig’s eyes, ears, snout and bristles.

Start-up Yingzi Technology, one of the first Chinese companies to come up with a pig facial recognition system, is trialing its system on a farm with 3,000 pigs. Yingzi claims that identification accuracy is more than 98% and that a pig can be identified even if it is moving in a herd.

Low Cost SWIR Options

IMVE publishes the article "Saving on SWIR" about cheaper-than-InGaAs options for SWIR imaging. One of them is Imec's quantum dot technology:

"Quantum dots are nanocrystals that, depending on their size, offer different light absorption properties. For example, particles approximately 3nm in size absorb at 940nm, while particles around 5.5nm in size absorb at 1,450nm. The pixel stacks of the new sensor can be tuned to target a spectrum from visible light all the way up to 2µm wavelength.

‘Right now there isn’t much of a SWIR imaging market, because there is such a high [price] threshold for acquiring a SWIR camera,’ said [Pawel Malinowski, Imec’s thin-film imagers programme manager]. ‘In a lot of machine vision applications people are not using SWIR because they cannot get a camera, so what we are hoping for is that because we can offer SWIR imaging at orders of magnitude lower price, then new applications will pop up.’

The first generation of Imec’s quantum dot sensor has a resolution of 758 x 512 pixels and a pixel pitch of 5µm. According to Malinowski, however, the second-generation chips, currently being tested, will have a pixel pitch as low as 1.8μm. He noted that the typical pixel pitch of an InGaAs sensor is between 15μm and 20μm.

Despite the lower fabrication cost and higher resolutions achievable with the new sensor technology, Malinowski said quantum efficiency – the performance achieved for the amount of light – will only be around 30 to 40 per cent; InGaAs sensors are able to offer 80 to 90 per cent quantum efficiency. He added: ‘I think that InGaAs will remain unbeatable in terms of high-end performance for the time being.’
"
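The size-to-wavelength relation quoted above follows the usual quantum-confinement trend, E(d) = E0 + A/d² (Brus-like scaling). Below is an illustrative two-point fit of that model to the article's numbers (3 nm → 940 nm, 5.5 nm → 1450 nm); the model and the interpolated value are ours, not Imec's:

```python
# Fit E(d) = E0 + A/d^2 through two (dot size, absorption wavelength)
# points, then interpolate a third size. Energies via E[eV] = 1240 / lambda[nm].
def fit(d1, lam1, d2, lam2):
    e1, e2 = 1240.0 / lam1, 1240.0 / lam2
    A = (e1 - e2) / (1 / d1**2 - 1 / d2**2)   # confinement coefficient, eV*nm^2
    return e1 - A / d1**2, A                  # (E0 in eV, A)

E0, A = fit(3.0, 940.0, 5.5, 1450.0)
lam_4nm = 1240.0 / (E0 + A / 4.0**2)          # predicted 4 nm dot absorption
print(f"E0 ~ {E0:.2f} eV, predicted 4 nm absorption ~ {lam_4nm:.0f} nm")
```

Smaller dots confine carriers more tightly, raising the transition energy and shortening the absorption wavelength, which is exactly the tunability the article describes.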

SWIR Vision Systems is also pursuing quantum dots:

"Quantum dot-based SWIR imaging technologies are also available from US-based SWIR Vision Systems, which has been selling its Acuros colloidal quantum dot (CQD) VIS-SWIR cameras from Q3 2018. The cameras are available in VGA (640 x 512 pixels), one-megapixel (1,280 x 1024 pixels), and full HD (1,920 x 1,080 pixels) formats.

‘Demand for these cameras has been increasing throughout 2019,’ said George Wildeman, CEO of SWIR Vision Systems, who remarked that the 1,920 x 1,080-pixel model, which has a resolution six times higher than the current standard 640 x 512-pixel InGaAs cameras, is the first of its kind to be commercially available. ‘There are a few high-resolution InGaAs cameras with 1,280 x 1,024-pixel sensor arrays, but these are very high cost,’ he said. ‘It is a big challenge to scale InGaAs cameras to larger array sizes without a large increase in their price point.’"


Yet another option is Emberion graphene imagers:

"The first product samples of the sensor, which offers VGA resolution, 20µm pixel pitch, 100fps frame rate, and a spectral range from 400nm to 2,000nm, will be available in June 2020.

‘This wide spectral range is the key advantage that our sensor provides over standard InGaAs sensors, which tend to go between 900nm to 1,700nm,’ said Jyri Hämäläinen, director of sales and marketing at Emberion. ‘Beyond 1,700nm is usually called “extended InGaAs”, and it is here that InGaAs technology becomes very expensive. In comparison our sensor is much more affordable while being able to detect these wavelengths.’"


Thanks to TL for the link!

Nikon Z 70-200mm f2.8 VR S review

Cameralabs        Go to the original article...

The Nikon Z 70-200mm f2.8 VR S is a pro telephoto zoom designed for Nikon’s full-frame Z-series mirrorless cameras. Alongside the earlier Z 24-70mm f2.8 and newer Z 14-24mm f2.8, it completes the holy trinity for the Z system, providing 14-200mm at f2.8. Find out whether it meets or surpasses its already excellent predecessor in our in-depth review!…

The post Nikon Z 70-200mm f2.8 VR S review appeared first on Cameralabs.

Vision Processing Limitations in Stacked Image Sensors

Arizona State University, Tempe, publishes the arXiv paper "Stagioni: Temperature management to enable near-sensor processing for energy-efficient high-fidelity imaging" by Venkatesh Kodukula, Saad Katrawala, Britton Jones, Carole-Jean Wu, and Robert LiKamWa.

"Many researchers advocate pushing processing close to the sensor to substantially reduce data movement. However, continuous near-sensor processing raises the sensor temperature, impairing the fidelity of imaging/vision tasks. We characterize the thermal implications of using 3D stacked image sensors with near-sensor vision processing units. Our characterization reveals that near-sensor processing reduces system power but degrades image quality. For reasonable image fidelity, the sensor temperature needs to stay below a threshold, situationally determined by application needs. Fortunately, our characterization also identifies opportunities -- unique to the needs of near-sensor processing -- to regulate temperature based on dynamic visual task requirements and rapidly increase capture quality on demand. Based on our characterization, we propose and investigate two thermal management strategies -- stop-capture-go and seasonal migration -- for imaging-aware thermal management. We present parameters that govern the policy decisions and explore the trade-offs between system power and policy overhead. Our evaluation shows that our novel dynamic thermal management strategies can unlock the energy-efficiency potential of near-sensor processing. For our evaluated tasks, our strategies save up to 53% of system power with negligible performance impact and sustained image fidelity."
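To make the stop-capture-go idea concrete, here is a toy duty-cycling simulation with invented thermal constants (a lumped RC heating model; the paper's actual policies and parameters differ): processing heats the die, and when temperature crosses a fidelity threshold, processing pauses until the sensor cools.

```python
# Toy stop-capture-go policy: pause near-sensor processing above t_max,
# resume below t_resume; track final temperature and processing duty cycle.
def simulate(steps=2000, dt=0.01, t_amb=25.0, t_max=55.0, t_resume=45.0):
    temp, active, busy = t_amb, True, 0
    for _ in range(steps):
        heat = 8.0 if active else 0.0                 # invented heating power
        temp += dt * (heat - 0.15 * (temp - t_amb))   # lumped RC thermal model
        if active and temp >= t_max:
            active = False                            # stop: let the die cool
        elif not active and temp <= t_resume:
            active = True                             # go: resume processing
        busy += active
    return temp, busy / steps

final_temp, duty = simulate()
print(f"final {final_temp:.1f} C, processing duty cycle {duty:.0%}")
```

The hysteresis band (resume well below the stop threshold) avoids rapid toggling; the seasonal-migration policy in the paper instead moves work off-sensor during the cool-down.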

Go to the original article...

LiDAR News: Velodyne, Luminar, Innovusion, Insight, Outsight, Baraja, Ouster, Sony, Mobileye

Image Sensors World        Go to the original article...

BusinessWire: Velodyne announces Anand Gopalan as its new CEO. Gopalan, who previously was Velodyne’s CTO, assumes the position from Velodyne’s legendary founder David Hall. Hall will continue as full-time Chairman of the Board and remain actively involved in directing the company’s technology, product vision and business strategy.

“David Hall is more than the founder of Velodyne, he is also the founder of our industry. I am grateful for the trust he has placed in me and excited to lead a company with such a deep history in innovation as Velodyne. We are at the forefront of our market, ready to drive the age of autonomy. Velodyne is bringing improved mobility and safety through versatility, responsiveness and agility,” said Gopalan.


BusinessWire: Luminar attempts to switch to a recurring revenue model. The company introduces Hydra Perception Compute Unit (PCU) reference design powered by the NVIDIA Xavier SoC. This solution is said to substantially shorten the industry timelines, enabling autonomy to be commercialized in production in 2022.

Hydra begins shipping this quarter and is available through a new subscription model -- the first of its kind for LiDAR. With the release of Hydra, Luminar has transitioned its core business from selling sensors to a subscription-based service for its autonomous vehicle development partners that enables a deeper integration throughout development cycles, increasing development speed as well as enabling more focused feature development.

“Luminar LiDAR is now the established industry gold standard for performance and safety, and the perfect platform to enable the dramatic software and perception improvements required for automakers to transition from test vehicles to commercial autonomy,” said Austin Russell, Founder and CEO, Luminar. “We’ve been quietly developing Hydra, the most advanced 3D perception system in the industry, for over three years now and it’s time for our 40 partners and the rest of the world to see it.”

Hydra is an integrated product of three key self-driving technologies:
  • Luminar’s LiDAR, built from the chip-level up;
  • Luminar’s new software suite, built and optimized specifically for Luminar LiDAR;
  • Luminar’s new perception computer, a reference design built on the NVIDIA Xavier SoC


BusinessWire: Innovusion announces Falcon long-range LiDAR. With a vertical and horizontal resolution of 0.07 degrees at 10 fps and a FoV of 110 degrees x 30 degrees, Falcon reaches a range of 120 meters on pedestrians for the entire 110-degree FoV.
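As a rough sanity check of the quoted numbers, dividing the stated field of view by the stated angular resolution gives an implied point rate. The press release does not quote one, so this is only illustrative arithmetic:

```python
# Rough, illustrative arithmetic only -- the press release does not state a
# point rate; this simply divides the quoted FoV by the quoted resolution.
h_fov_deg, v_fov_deg, res_deg, fps = 110.0, 30.0, 0.07, 10
points_per_frame = (h_fov_deg / res_deg) * (v_fov_deg / res_deg)
points_per_second = points_per_frame * fps
print(f"{points_per_frame:,.0f} points/frame, {points_per_second:,.0f} points/s")
```

That works out to roughly 0.67 million points per frame, which is dense for a long-range automotive LiDAR.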

“In the last year, there has been an industry-wide delay in roadmaps for the deployment of Level 4 autonomous driving while driver assistance technology has become more prevalent. There are still gaps in autonomous system performance and the disengagement is still too high,” said Ian Zhu, Managing Partner at NIO Capital. “We are confident that with the release of Falcon, Innovusion is enabling the automotive industry to take the necessary steps towards LiDAR adoption for the greater market, whether that is in self-driving cars or otherwise.”


BusinessWire: Insight LiDAR announced its Digital Coherent LiDAR, an ultra-high resolution, long-range LiDAR sensor targeted at the emerging autonomous vehicle (AV) market. Among the breakthroughs built into Digital Coherent LiDAR are:

  • Long Range – 200 meters to 10 percent reflectivity targets
  • Ultra-High resolution – up to 0.025 x 0.025 degrees
  • Large Field of View – 120 x 340 degrees
  • Direct Doppler velocity in every pixel
  • True solid-state, flexible fast-axis scanning
  • Complete immunity from sunlight and other LiDARs
  • Low-cost chip scale, all-semiconductor approach

Insight LiDAR’s patent portfolio covers not only the design and control of the laser source, critical for the FMCW detection technique, but also includes key system IP enabling Insight’s high-resolution, foveation, large field of view and long-range performance.


LaTribune: As reported earlier, Outsight raises its seed $20M investment for development of hyperspectral LiDAR that analyses the object material simultaneously with distance. The production model is expected to be completed by 2021.


BusinessWire: Baraja, developer of Spectrum-Scan LiDAR, unveils its sensing platform with inherent interference immunity. Baraja LiDAR is said to be the only system available today using randomly modulated continuous wave technology that completely blocks interference from other LiDARs and environmental light sources.

“Sensor interference is one of the leading causes of disengagements for autonomous vehicles today and the issue will only continue to grow as more LiDAR-equipped vehicles hit the road,” said Baraja Co-Founder and CEO, Federico Collarte. “Interference risks leaving the vehicle with blind-spots, and driving blind is obviously unacceptable. Our experience developing technology in the telecom industry uniquely positions Baraja to address the problem of interference by encoding the light transmitted by our laser, using the same mature, volume-produced components that encode information for interference-free communications.”

Interference occurs when a LiDAR transmits laser light and picks up another source of light, from a different laser or environmental source, like bright sunlight, creating errors and uncertainty that manifest as vehicle blind spots. Today, this situation triggers the autonomous technology to disengage and hand over to the safety driver.

Baraja is addressing interference at the sensor level with its Spectrum-Scan technology, which forms the basis of its sensing platform. Spectrum-Scan works by rapidly switching the laser’s wavelength and transmitting light through a prism, which diffracts each color of light in a different direction. When the light returns to the sensor, it is only processed if wavelength, angle, timing and encoding match on all signals, ensuring immunity to interference. Baraja’s LiDAR operates at 1550 nm and exceeds the industry long-range sensing requirement of detecting 10% reflectivity objects at more than 200m.
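The four-way match test described above might be sketched like this. Field names and tolerances are hypothetical, not Baraja's implementation:

```python
# Illustrative sketch of the acceptance test: an echo counts only if
# wavelength, angle, timing and modulation code all match the transmitted
# pulse. Field names and tolerances are made up, not Baraja's design.

def accept_return(tx, rx, wl_tol_nm=0.1, angle_tol_deg=0.05, max_delay_us=2.0):
    """Accept an echo only when every attribute matches the transmitted pulse."""
    return (abs(rx["wavelength_nm"] - tx["wavelength_nm"]) <= wl_tol_nm and
            abs(rx["angle_deg"] - tx["angle_deg"]) <= angle_tol_deg and
            0.0 <= rx["delay_us"] <= max_delay_us and
            rx["code"] == tx["code"])   # the random modulation code must match exactly
```

A pulse from another LiDAR, or ambient light, would fail at least one of the four checks and be discarded rather than producing a false point.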



BusinessWire: Ouster introduces the ultra-wide 90-deg FoV OS0-128 LiDAR. “High-resolution perception has always been reserved for expensive, long-range applications. That’s finally beginning to change,” said Angus Pacala, CEO and co-founder of Ouster. “With Ouster’s full range of 128-channel sensors, we have a complete high-resolution sensor suite for every application, and for short-range applications, the OS0-128 is in a class of its own.”

The OS0 and OS2 series offer a full range of resolution options, with the OS0 available with 32 or 128 channels, while the OS2 is available in 32, 64, and 128 configurations. The OS0-32 is priced at $6,000 and the OS0-128 at $18,000. The OS2-32 is priced at $16,000, the OS2-64 at $20,000 and the OS2-128 at $24,000.


PRNewswire: Sony is to demo its "Solid State LiDAR which uses highly accurate distance measurement to gain a precise 3D grasp of real-life spaces" this week at CES.

Meanwhile, Intel Mobileye presents VIDAR - LiDAR functionality with cameras only. The name VIDAR has been coined in academic circles.

Go to the original article...

PoLight Announces First Design Win

Image Sensors World        Go to the original article...

poLight announces that its AF TLens is being used in a smartwatch for children launched to market on 7th January 2020. The OEM is undisclosed. The watch has two cameras: a main camera used to take pictures, which includes an advanced autofocus (AF) function delivered by poLight, and a camera integrated in the screen used as a front-facing camera without AF.

“This is an important milestone for poLight and we are very proud to be included in this innovative smartwatch flagship,” said Øyvind Isaksen, CEO of poLight.

Go to the original article...

Infineon and PMD Present 5th Generation REAL3 Sensor

Image Sensors World        Go to the original article...

Webwire: Infineon has collaborated with software and 3D ToF system company pmdtechnologies to develop the world’s smallest 3D image sensor measuring just 4.4 x 5.1 mm. It can be incorporated into even the smallest devices with just a few elements.

“With the fifth generation of our REAL3 chip we are once again demonstrating our leading position in the field of 3D sensors,” says Andreas Urschitz, President of the Power Management and Multimarket Division at Infineon, which also includes sensor business. “It’s robust, reliable, powerful, energy efficient and at the same time decisively small. We see great growth potential for 3D sensors, since the range of applications in the areas of security, image use and context-based interaction with the devices will steadily increase.” The 3D sensor also allows the device to be controlled via gestures, so that human-machine interaction is context-based and without touch.

The new 3D image sensor chip (IRS2887C) was developed in Graz, Dresden and Siegen and combines the expertise of Infineon’s and pmdtechnologies’ German and Austrian locations. Series production will begin in the middle of 2020. In addition, Infineon Technologies offers an optimized illumination driver (IRS9100C) that further improves performance, size and cost as a complete solution.


BusinessWire: pmdtechnologies is presenting its latest 3D ToF camera module based on the 5th generation REAL3 ToF image sensor from pmd and Infineon. The IRS2877C offers VGA-resolution depth data output and a newly designed 5µm pmd pixel core.

The new VGA 3D imager is the highest resolution, most flexible and robust depth sensor that has ever been developed by pmd and Infineon. "We’re passionate about setting new standards – and with the new VGA 3D camera module, which uses our IRS2877C imager, we did it again. Not only do we feature best in class performance, but we also provide the most dedicated depth sensing platform to our customers to develop their 3D application, which they can get on the market,” says Jochen Penne, Executive Board Member and Head of Business Development at pmd.

Go to the original article...

Nikon D780 review – preview

Cameralabs        Go to the original article...

The Nikon D780 is a DSLR with a 24 Megapixel full-frame sensor, uncropped 4k video with autofocus, 7fps bursts and a tilting touchscreen. Essentially upgrading the best-selling D750 with the live view and movie benefits of the Z6 mirrorless, it looks set to be a tempting option for those who still prefer DSLR viewfinders and bodies. Check out my preview. …

The post Nikon D780 review – preview appeared first on Cameralabs.

Go to the original article...

Automotive News: Bosch, Sense Photonics, Trieye

Image Sensors World        Go to the original article...

Bosch presents camera-based Virtual Visor:

"Bosch is offering a solution with the revolutionary Virtual Visor, a transparent LCD and intuitive camera, which replaces the traditional vehicle sun visor completely. As the first reimagined visor in nearly a century, Bosch’s technology utilizes intelligent algorithms to intuitively block the sun’s glare and not the view of the road ahead.

Virtual Visor links an LCD panel with a driver or occupant-monitoring camera to track the sun’s cast shadow on the driver’s face. The system uses artificial intelligence to locate the driver within the image from the driver-facing camera. It also utilizes AI to determine the landmarks on the face ‒ including where the eyes, nose and mouth are located ‒ so that it can identify shadows on the face. The algorithm analyzes the driver’s view, darkening only the section of the display through which light hits the driver’s eyes. The rest of the display remains transparent, no longer obscuring a large section of the driver’s field of vision."
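The selective-darkening idea can be caricatured as picking a few cells of the LCD grid around the projected eye-shadow position. This is a toy sketch; the grid size, geometry, and all numbers are made up, not Bosch's algorithm:

```python
# Toy illustration of selective darkening: darken only the LCD cells
# around where the sun's shadow falls on the eyes; everything else stays
# transparent. Grid, units, and radius are hypothetical, not Bosch's.

def cells_to_darken(shadow_x, shadow_y, grid=(12, 6), cell_m=0.05, radius=1):
    """Given the eye-shadow position on the visor plane (metres), return
    the set of (col, row) LCD cells to switch opaque."""
    col, row = round(shadow_x / cell_m), round(shadow_y / cell_m)
    return {(c, r)
            for c in range(max(0, col - radius), min(grid[0], col + radius + 1))
            for r in range(max(0, row - radius), min(grid[1], row + radius + 1))}
```

With a 12 x 6 grid and radius 1, at most a 3 x 3 patch of the visor ever goes dark, which is the point: the driver keeps most of the view.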




PRNewswire: Sense Photonics announces Osprey, a short range automotive flash LiDAR based on Infineon-PMD ToF sensor:

"Infineon is very excited to be working closely with Sense Photonics as it continues to push the limits in terms of near-field LiDAR solutions based on our automotive qualified REAL3 Time-of-Flight imager," said Christian Herzum, Head of 3D-Sensing product line at Infineon.

"Our simple, camera-like architecture is a significant benefit to customers looking for a scalable LiDAR product," said Sense Photonics CEO Scott Burroughs. "By eliminating mechanical-scanning mechanisms, we've made Osprey much more manufacturable than other approaches. We believe this is critical to bringing the vision of autonomous driving to life."

Sense Photonics is now accepting pre-orders, with initial product availability beginning in Q2 2020. The cost per unit is $3,200 (plus shipping).


Trieye shows the advantages of its SWIR camera.

Go to the original article...

OmniVision Unveils 48MP Smartphone Sensor with 1.2um Pixels

Image Sensors World        Go to the original article...

PRNewswire: OmniVision announces the OV48C, a 48MP image sensor with a large 1.2um pixel size for flagship smartphone cameras. The OV48C is the industry's first image sensor for high resolution mobile cameras with on-chip dual conversion gain HDR, which reduces motion artifacts and produces better SNR. This sensor also offers a staggered HDR option with on-chip combination for the maximum flexibility to select the best HDR method for a given scene.

"The combination of high resolution, large pixel size and high dynamic range is essential to providing the image quality required by flagship mobile phone designers for features such as night mode," said Arun Jayaseelan, staff marketing manager at OmniVision. "The OV48C is the only flagship mobile image sensor in the industry to offer the combination of high 48MP resolution, a large 1.2 micron pixel, high speed, and on-chip high dynamic range, which provides superior SNR, unparalleled low light performance and high quality 4K video."

Built on OmniVision's PureCel Plus stacked die technology, this 1/1.3" format sensor also integrates an on-chip, 4-cell color filter array and hardware remosaic, which provides 48MP Bayer output, or 8K video, in real time. In low light conditions, this sensor can use near-pixel binning to output a 12MP image for 4K2K video with four times the sensitivity, yielding a 2.4 micron-equivalent performance. The OV48C also uses 4C Half Shield phase detection for fast autofocus support.
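Near-pixel binning of this kind amounts to summing 2x2 neighborhoods: the pixel count drops by four while each output pixel collects roughly four times the signal, which is where the 2.4 micron-equivalent figure comes from. A minimal illustrative sketch, not OmniVision's on-chip pipeline:

```python
# Minimal 2x2 binning sketch: sum each 2x2 block of same-colour pixels,
# quartering the pixel count while collecting ~4x the signal per output
# pixel. Purely illustrative -- not OmniVision's on-chip implementation.

def bin_2x2(raw):
    """raw: list of rows (even dimensions) of same-colour pixel values."""
    return [[raw[y][x] + raw[y][x + 1] + raw[y + 1][x] + raw[y + 1][x + 1]
             for x in range(0, len(raw[0]), 2)]
            for y in range(0, len(raw), 2)]
```

Applied to a 48MP 4-cell array this yields the 12MP low-light output mentioned above; in bright light the hardware remosaic goes the other way, recovering a full 48MP Bayer pattern.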

Output formats include 48MP at 15 fps, 12MP with 4-cell binning at 60 fps, and 4K2K video at 60 fps with the extra pixels needed for EIS. This sensor also offers 1080p video with slow motion support at 240 fps, as well as 720p at 360 fps. OV48C samples are available now.

Go to the original article...

Isorg Optical Fingerprint Promo Video

Image Sensors World        Go to the original article...

Isorg publishes a video promoting its optical fingerprint sensor for smartphones:



Some more info from the company's page:

Go to the original article...

SmartSens Releases "Full HD Pro" Sensors

Image Sensors World        Go to the original article...

PRNewswire: SmartSens releases two 3MP CMOS sensors, the SC3235 and SC3320, for webcams and security cameras. Built on the new SmartPixel-2 DSI technology, the SC3320 comes with a larger 1/2.5" optical format and a Full HD Pro pixel count, and delivers a DR of up to 100dB at 60fps, supporting 2-exposure HDR and NIR (850nm-940nm) imaging. The SC3235 has a 1/2.7" 2304(H) x 1296(V) array and leverages SmartSens' mature SmartPixel architecture.
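A 2-exposure HDR merge of the kind mentioned above is often approximated per pixel as: keep the long exposure where it has not clipped, otherwise substitute the ratio-scaled short exposure. The threshold and exposure ratio below are hypothetical, not SmartSens' on-chip method:

```python
# Illustrative 2-exposure HDR merge. The exposure ratio and full-scale
# value are hypothetical; this is not SmartSens' actual pipeline.

def merge_2exp_hdr(long_px, short_px, exposure_ratio=16, full_scale=1023):
    if long_px < full_scale:          # long exposure still in range: best SNR
        return long_px
    return short_px * exposure_ratio  # clipped: recover highlights from short
```

The merged output spans roughly the long exposure's range times the exposure ratio, which is how a two-capture scheme reaches a dynamic range on the order of 100dB.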

Go to the original article...

Ambarella, Lumentum, and ON Semi Collaborate on 3D Sensing

Image Sensors World        Go to the original article...

BusinessWire: Ambarella, Lumentum, and ON Semiconductor announce a joint 3D sensing platform for access control systems and smart video security products such as video doorbells and door locks. The platform is based on Ambarella’s CV25 CVflow AI vision SoC, structured-light powered by Lumentum’s VCSEL technology, and ON Semiconductor’s AR0237IR image sensor.

“ON Semiconductor’s RGB-IR sensor technology enables single sensor solutions to provide both visible and IR images in security and vision IoT applications,” said Gianluca Colli, VP and GM of the Commercial Sensing Division at ON Semiconductor. “Ambarella’s CV25 computer vision SoC, with its next-generation image signal processor (ISP), brings out the best image quality of our RGB-IR sensor, while providing powerful AI processing capability for innovative use cases in security applications.” Ambarella’s CV25 chip includes native support for RGB-IR CFA and HDR processing.

BusinessWire: Ambarella also announces CV22FS and CV2FS automotive camera ADAS SoCs with native support for RGGB, RCCB, RCCC, RGB-IR, and monochrome sensor formats.

Go to the original article...

Panasonic Presents Smart Fridge with Image Sensor

Image Sensors World        Go to the original article...

Panasonic publishes an arxiv.org paper "Smart Home Appliances: Chat with Your Fridge" by Denis Gudovskiy, Gyuri Han, Takuya Yamaguchi, and Sotaro Tsukizawa proposing an AI-equipped camera in a fridge:

"Current home appliances are capable to execute a limited number of voice commands such as turning devices on or off, adjusting music volume or light conditions. Recent progress in machine reasoning gives an opportunity to develop new types of conversational user interfaces for home appliances. In this paper, we apply state-of-the-art visual reasoning model and demonstrate that it is feasible to ask a smart fridge about its contents and various properties of the food with close-to-natural conversation experience. Our visual reasoning model answers user questions about existence, count, category and freshness of each product by analyzing photos made by the image sensor inside the smart fridge. Users may chat with their fridge using off-the-shelf phone messenger while being away from home, for example, when shopping in the supermarket. We generate a visually realistic synthetic dataset to train machine learning reasoning model that achieves 95% answer accuracy on test data. We present the results of initial user tests and discuss how we modify distribution of generated questions for model training based on human-in-the-loop guidance. We open source code for the whole system including dataset generation, reasoning model and demonstration scripts."
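The question types the model answers (existence, count, category) can be mimicked over a flat list of detections. This is a toy sketch with a hypothetical detection format, far simpler than the paper's visual reasoning model:

```python
# Toy illustration of answering existence/count questions from a list of
# detected items. The detection format and the crude string parsing are
# hypothetical, nothing like the paper's learned reasoning model.

def answer(question, detections):
    """detections: list of dicts such as {"category": "apple", "fresh": True}."""
    counts = {}
    for d in detections:
        counts[d["category"]] = counts.get(d["category"], 0) + 1
    item = question.rstrip("?").split()[-1].rstrip("s")  # crude singularisation
    if question.startswith("how many"):
        return counts.get(item, 0)
    if question.startswith("is there"):
        return counts.get(item, 0) > 0
    return None
```

The real system replaces both the detector and the keyword parsing with a learned visual reasoning model, which is what lets it handle close-to-natural phrasing and properties like freshness.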

Go to the original article...
