Archives for February 2018

A Clever Idea on Paper Falls Short in Tests

Image Sensors World        Go to the original article...

Imaging Resource publishes a review of the Light.co L16 computational camera, and the conclusion is quite negative:

"After years of hype and teasers, we finally got our hands on one, and suffice it to say, the image quality and performance leave a lot to desired.

...shooting out in the real world, the L16 is pretty much underwhelming on all fronts."

Fine "detail" crop
Light L16 camera

Go to the original article...

LiDAR Videos

Image Sensors World        Go to the original article...

Three new LiDAR videos have been published on YouTube today. AutoSens publishes Yole Développement analyst Pierre Cambou's presentation on the LiDAR market:

Update: the original video was temporarily taken offline; a shortened version has now been reinstated:



Waymo publishes self-driving experience from its imaging systems point of view:



SOSLab shows its "Hybrid Scanning" LiDAR demo:

Go to the original article...

AutoSens Detroit 2018

Image Sensors World        Go to the original article...

The AutoSens Detroit conference, to be held on May 14-17, 2018, announces its agenda with rich image sensing content:

Near-Infrared QE Enhancing Technology for Automotive Applications
Boyd Fowler
CTO, OmniVision Technologies, Inc.
• Why is near-infrared sensitivity important in automotive machine vision applications?
• Combining thicker EPI, deep trench isolation and surface scattering to improve quantum efficiency in CMOS image sensors while still retaining excellent spatial resolution.
• Improving the performance of CMOS image sensors for in-cabin monitoring and external nighttime imaging.

Challenges, opportunities and deep learning for thermal cameras in ADAS and autonomous vehicle applications
Mike Walters, VP of Product Management for Uncooled Thermal Cameras, FLIR Systems
• Deep learning analytic techniques including full scene segmentation, an AI technique that enables ADAS developers to create full scene classification of every pixel in the thermal image.

The emerging field of free-form optics in cameras, and its use in automotive
Li Han Chan, CEO, DynaOptics

Panel discussion: how many cameras are enough?
Tom Toma, Global Product Manager, Magna Electronics
Sven Fleck, Managing Director, SmartSurv Vision Systems GmbH
Patrick Denny, Senior Expert, Valeo
• OEM design engineer – can we make sensors a cool feature not an ugly bolt-on?
• Retail side – how to make ADAS features sexy?
• Tier 1 – minimal technical requirements
• Outside perspective – learning from an industry where safety sells (B2C market)

A review of relevant existing IQ challenges
Uwe Artmann
CTO/Partner, Image Engineering

Addressing LED flicker
Brian Deegan, Senior Expert - Vision Research Engineer, Valeo Vision Systems
• Definition, root cause and manifestations of LED flicker
• Impact of LED flicker for viewing and machine vision applications
• Initial proposals for test setup and KPIs, as defined by P2020 working group
• Preliminary benchmarking results from a number of cameras

CDP – contrast detection probability
Marc Geese, System Architect for Optical Capturing Systems, Robert Bosch

Moving from legacy LiDAR to Next Generation iDAR
Barry Behnken, VP of Engineering, AEye
• How can OEMs and Tier 1s leverage iDAR to not just capture a scene, but to dynamically perceive it?
• Learn how iDAR optimizes data collection, allowing for situational configurability at the hardware level that enables the system to emulate legacy systems, define regions of interest, focus on threat detection and/or be programmed for variable environments.
• Learn how this type of configurability will optimize data collection, reduce bandwidth, improve vision perception and intelligence, and speed up motion planning for autonomous vehicles.

Enhanced Time-Of-Flight – a CMOS full solution for automotive LIDAR
Nadav Haas, Product Manager, Newsight Imaging
• The need for a real 3D solid state lidar solution to overcome challenges associated with lidar.
• Enabling very wide dynamic range by means of standard processing tools, to amplify very weak signals to achieve high SNR and accurately detect objects with high resolution at long range.
• Eliminating blinding by mitigating or blocking background sunlight, random light from sources in other cars, and secondary reflections.
• Enabling very precise timing of the transmitted and received pulses, essential to obtain the desired overall performance.
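As a back-of-the-envelope companion to the last bullet above, here is a minimal sketch of the generic pulsed time-of-flight arithmetic that links pulse timing to range; it is illustrative only and not Newsight's implementation.

```python
# Generic pulsed time-of-flight arithmetic (illustrative, not Newsight's design):
# range = c * round_trip_time / 2, so timing uncertainty maps directly to range error.

C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s: float) -> float:
    """Target distance for a measured round-trip pulse delay."""
    return C * round_trip_s / 2.0

def range_error_m(timing_jitter_s: float) -> float:
    """Range uncertainty caused by a given timing uncertainty."""
    return C * timing_jitter_s / 2.0

print(tof_range_m(1.0e-6))      # 1 us round trip   -> ~150 m
print(range_error_m(100e-12))   # 100 ps of jitter  -> ~1.5 cm
```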

Panel discussion: do we have a lidar bubble?
Abhay Rai, Director Product Marketing: Automotive Imaging, Sony Electronics
• Do we even need lidar in AV?
• Which is the right combo; lidar + cornering radar or no lidar just radar + camera?
• How many sensors are the minimum for autonomous driving?
• Are image sensors and cameras fit for autonomous driving?

All-weather vision for automotive safety: which spectral band?
Emmanuel Bercier, Project Manager, AWARE Project
• The AWARE (All Weather All Roads Enhanced vision) French publicly funded project aims at the development of a low-cost sensor fitting automotive requirements and enabling vision in all poor-visibility conditions.
• Evaluation of the relevance of four different spectral bands: Visible RGB, Visible RGB Near-Infrared (NIR) extended, Short-Wave Infrared (SWIR) and Long-Wave Infrared (LWIR).
• Outcome of two test campaigns in outdoor natural conditions and in artificial fog tunnel, with four cameras recording simultaneously.
• Presentation of the detailed results of this comparative study, focusing on pedestrians, vehicles, traffic signs and lanes detection.

Automotive Sensor Design Enablement; a discussion of multiple design enablement tools/IP to achieve smart Lidar
Ian Dennison, Senior Group Director R&D, Cadence Design Systems
• Demands of advanced automotive sensors, driving design of silicon photonics, MEMS, uW/RF, advanced node SoC, and advanced SiP.
• Examining design enablement requirements for automotive sensors that utilize advanced design fabrics, and their integration.

Role of Specialty Analog Foundry in Enabling Advanced Driver Assistance Systems (ADAS) and Autonomous Driving
Amol Kalburge, Head of the Automotive Program, TowerJazz
• Driving improvements in device level figures of merit to meet the technical requirements of key ADAS sensors such as automotive radar, LiDAR and camera systems.
• Optimizing the Rdson vs. breakdown voltage trade-off to enable the higher bus voltages of future hybrid/EV systems.
• Presenting an overview of advanced design enablement and design services capabilities required for designers to build robust products: design it once, design it right.

Go to the original article...

PMD/Infineon Smallest ToF Camera

Image Sensors World        Go to the original article...

PMD and Infineon present what they call the smallest ToF camera:

Go to the original article...

Himax Presents its Smartphone 3D Sensing Solution

Image Sensors World        Go to the original article...

GlobeNewswire: Himax presents Android smartphone samples equipped with its 3D sensing total solution with face recognition capability. The solution is now ready for mass production.

SLiM, Himax’s structured-light-based 3D sensing total solution, which the Company jointly announced with Qualcomm last August, brings together Qualcomm’s 3D algorithm with Himax’s design and manufacturing capabilities in optics and NIR sensors as well as know-how in 3D sensing system integration. The Qualcomm/Himax solution is claimed to be by far the best performing 3D sensing and face recognition total solution available for the Android smartphone market right now.

The key features of the Himax SLiM™ 3D sensing total solution include:
  • Dot projector: More than 33,000 invisible dots, the highest in the industry, projected onto the object to build the most sophisticated 3D depth map among all structured light solutions
  • Depth map accuracy: Error rate of < 1% within the entire operation range of 20cm-100cm (converted into absolute terms in the sketch after this list)
  • Face recognition: Enabled by the most sophisticated 3D depth data to build unique facial map that can be used for instant unlock and secure online payment
  • Indoor/outdoor sensitivity: Superior sensing capability even under total darkness or bright sunlight
  • Eye safety: Certified for IEC 60825 Class 1, the international laser product standard which governs laser product safety under all conditions of normal use with naked eyes
  • Broken-glass detection: Patented detection mechanism in the dot projector whereby the laser is shut down instantaneously in the event of broken glass in the projector
  • Power consumption: Less than 400mW for projector, sensor and depth decoding combined, making it the lowest power consuming 3D sensing device by far among all structured light solutions
  • Module size: the smallest structured light solution in the market, ideal for embedded and mobile device integration
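To put the accuracy bullet above into absolute terms, the short sketch below converts the quoted <1% relative error into millimetres across the stated 20 cm to 100 cm operating range. This is plain arithmetic on the published numbers, not Himax code.

```python
# Convert the quoted <1% relative depth error into absolute worst-case error
# across the stated 20 cm - 100 cm range (plain arithmetic, not Himax's algorithm).

ERROR_RATE = 0.01   # < 1% of the measured distance

for depth_cm in range(20, 101, 20):
    max_error_mm = depth_cm * 10 * ERROR_RATE
    print(f"{depth_cm:3d} cm -> worst-case error ~{max_error_mm:.0f} mm")
# e.g. 20 cm -> ~2 mm, 100 cm -> ~10 mm
```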

“3D sensing is among the most significant new features for smartphone. We are pleased to announce that our SLiM total solution is now ready for mass production. It outperforms all the peers targeting Android market in each and all aspects of engineering. We are working with multiple tier-1 Android smartphone makers, on target to launch 3D sensing on their premium smartphones starting the first half of 2018,” said Jordan Wu, President and CEO of Himax.

Go to the original article...

Mediatek P60 Features Triple ISP

Image Sensors World        Go to the original article...

PRNewswire: MediaTek's flagship P60 application processor features a triple ISP and an AI processor:

"Compared to the previous Helio P series, MediaTek Helio P60's three image signal processors (ISPs) increase power efficiency by using 18 percent less power for dual-cameras set-ups. By combining the Helio P60's incredible camera technology with its powerful Mobile APU, users can enjoy AI-infused experiences in apps with real-time beautification, novel, real-time overlays, AR/MR acceleration, enhancements to photography, real-time video previews and more."

Go to the original article...

Sony A7 III review

Cameralabs        Go to the original article...

The Sony Alpha A7 III is the latest 'entry-level' model in the full-frame Alpha mirrorless series. It features a new 24 MP back-illuminated sensor, built-in stabilisation, a 693-point AF system, 10fps shooting, 4k video, and the longest battery life of any mirrorless to date. Check out my in-depth review!…

The post Sony A7 III review appeared first on Cameralabs.

Go to the original article...

Leica Enters 3D ToF Imaging with PMD

Image Sensors World        Go to the original article...

BusinessWire: Leica Camera AG and pmdtechnologies announce a strategic alliance to jointly develop and market 3D ToF sensing camera solutions for mobile devices. The geographical proximity of the two companies allows a particularly fast and efficient coordination during development, testing and optimization of the lenses for the 3D sensor systems.

Over the last few months, Leica designed a dedicated state-of-the-art optical lens for pmd’s recently announced new 3D depth sensing imager for mobile devices. By decreasing the f-number by 25% and simultaneously decreasing the height of the pmd module by 30% to 11.5x7x4.2mm, the dedicated lens for pmd’s latest 3D ToF pixel and imager generation delivers a significant improvement compared to past lenses. As the Leica lens is optimized for a wavelength of 940nm, it enables ambient light robustness. With a depth data accuracy of 1%, the system is expected to reach best-in-class performance despite the miniaturization of pixel, imager and module size. First samples of the new lens will be available in May 2018.
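As a rough illustration of what the quoted 25% f-number reduction means for light gathering: irradiance on the sensor scales with 1/N², so the new lens collects roughly 1.8x more light. The sketch below is generic optics arithmetic, not pmd/Leica measurement data.

```python
# Rough light-gathering gain from the quoted 25% f-number reduction
# (generic 1/N^2 scaling, not pmd/Leica measurement data).

def throughput_gain(fnumber_reduction: float) -> float:
    """Relative irradiance gain when the f-number drops by the given fraction."""
    return 1.0 / (1.0 - fnumber_reduction) ** 2

print(f"{throughput_gain(0.25):.2f}x")   # ~1.78x more light on the sensor
```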

“The co-work between Leica and pmd has resulted in the most sophisticated and smallest optic design that pmd has used so far. The co-work with Leica aligned perfectly with our mission to miniaturize 3D depth sensing without sacrificing data quality so that 3D depth sensing can be put into any device and make 3D depth sensing ubiquitous. We are looking forward to the mobile device opportunities, which the super-small 3D depth sensing modules, which use Leica’s optic, will enable. And we are more than happy that with Leica we found a top-class partner, who will join us on this exciting journey,” stated Jochen Penne, Executive Board Member of pmdtechnologies ag.

Markus Limberger, COO of Leica Camera AG said: “The cooperation between pmd and Leica is an excellent example of how two globally leading companies combine their core competencies to drive market oriented innovation efficiently. The foremost position of pmdtechnologies in Time-of-flight sensor technology and Leica’s expertise in cutting edge optical design were used to develop a very compact and powerful lens, which fits perfect to the specific requirements and the uncompromising quality of the new 3D sensor generation of pmd.”

Go to the original article...

16um Time-Gated SPAD Pixels Achieve 61% FF

Image Sensors World        Go to the original article...

OSA Optics Express publishes Heriot-Watt University's paper "High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor" by Ximing Ren, Peter W. R. Connolly, Abderrahim Halimi, Yoann Altmann, Stephen McLaughlin, Istvan Gyongy, Robert K. Henderson, and Gerald S. Buller.

"A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array."

Go to the original article...

SmartSens Unveils SmartClarity

Image Sensors World        Go to the original article...

PRNewswire: SmartSens launches the 5MP 1/2.7-inch SC5235 BSI sensor. The new sensor is capable of running 5MP (2608H x 1960V) at 25 fps and supports an interline HDR image synthesis algorithm that expands the DR up to 100dB. It can be used in security surveillance systems, IP cameras, car digital video recorders, sports cameras and video conferencing systems.

SmartSens is also launching a NIR-enhanced edition, the SC5238. It extends the performance advantage of the SC5235 with process optimizations that improve QE in the 850nm-940nm band. Moreover, the SC5238 can run at 30 fps and supports a 4MP, 50 fps mode for 16:9 video. Both chips are expected to go into mass production in March 2018.
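For context on the 100dB figure above, the sketch below shows the generic multi-exposure HDR arithmetic by which combining exposures extends dynamic range. The full well, read noise and exposure ratio are illustrative assumptions, not SmartSens specifications.

```python
import math

# Generic multi-exposure HDR dynamic-range arithmetic. All numbers below are
# illustrative assumptions, not SmartSens SC5235 specifications.

FULL_WELL_E = 8000.0     # assumed full-well capacity, electrons
READ_NOISE_E = 2.0       # assumed read noise, electrons (rms)
EXPOSURE_RATIO = 25.0    # assumed long/short exposure ratio

single_dr_db = 20 * math.log10(FULL_WELL_E / READ_NOISE_E)
hdr_dr_db = 20 * math.log10(FULL_WELL_E * EXPOSURE_RATIO / READ_NOISE_E)

print(f"single exposure DR ~ {single_dr_db:.0f} dB")                        # ~72 dB
print(f"with a {EXPOSURE_RATIO:.0f}:1 exposure ratio ~ {hdr_dr_db:.0f} dB")  # ~100 dB
```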

Go to the original article...

Samsung Announces 3-Layer ISOCELL Fast Sensor

Image Sensors World        Go to the original article...

BusinessWire: Samsung introduces the 3-stack ISOCELL Fast 2L3. The 1.4-μm 12MP image sensor with 2Gb of integrated LPDDR4 DRAM delivers fast data readout speeds for super-slow motion and sharper still photographs with less noise and distortion.

“Samsung’s ISOCELL image sensors have made great leaps over the generations, with technologies such as ISOCELL for high color fidelity and Dual Pixel for ultra-fast autofocusing, bringing the smartphone camera ever closer to DSLR-grade photography,” said Ben K. Hur, VP of System LSI marketing at Samsung Electronics. “With an added DRAM layer, Samsung’s new 3-stack ISOCELL Fast 2L3 will enable users to create more unique and mesmerizing content.”

Conventional image sensors are constructed with two silicon layers: a pixel array layer that converts light information into an electric signal, and an analog logic layer that processes the electric signal into digital code. The digital code is then sent via the MIPI interface to the device’s mobile processor for further image tuning before being saved to the device’s DRAM. While all these steps are done instantaneously to implement features like zero-shutter lag, capturing smooth super-slow-motion video requires image readouts at a much higher rate.

The 2Gb LPDDR4 DRAM layer is attached below the analog logic layer. With this integration, the image sensor can quickly store a larger number of frames captured at high speed in the sensor’s DRAM layer before sending them out to the mobile processor and then to the device’s DRAM. This not only allows the sensor to capture a full-frame snapshot at 1/120 of a second but also to record super-slow-motion video at up to 960fps.
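To make the buffering argument concrete, here is a back-of-the-envelope sketch of how many frames a 2Gb buffer could hold. The bit depth and the reduced slow-motion resolution are assumptions for illustration, not Samsung's published figures.

```python
# Back-of-the-envelope capacity of the sensor's 2 Gb DRAM buffer.
# Bit depth and the reduced slow-motion resolution are assumptions
# for illustration, not Samsung specifications.

DRAM_BITS = 2 * 1024**3          # 2 Gb buffer

def frames_that_fit(width: int, height: int, bits_per_pixel: int) -> int:
    return DRAM_BITS // (width * height * bits_per_pixel)

# Full 12 MP stills (assumed 10-bit raw)
print(frames_that_fit(4032, 3024, 10))    # ~17 frames

# Assumed 1080p readout for 960 fps super-slow motion (10-bit raw)
n = frames_that_fit(1920, 1080, 10)
print(n, f"frames -> ~{n / 960:.2f} s of 960 fps capture")
```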

By storing multiple frames in a split second, the sensor can support 3-Dimensional Noise Reduction (3DNR) when shooting in low light, as well as real-time HDR imaging, and can detect even the slightest hint of movement for automatic instant slow-motion recording.

The image sensor is also equipped with Dual Pixel technology, which allows each of the 12M pixels of the image sensor to employ two photodiodes that work as PDAF (phase-detection autofocus) agents.

The ISOCELL Fast 2L3 is currently in mass production.

Go to the original article...

Canon EOS M50 review

Cameralabs        Go to the original article...

The Canon EOS M50 is a mid-range mirrorless camera with a 24 Megapixel APSC sensor, viewfinder, Wifi and Bluetooth, and becomes Canon's first mirrorless with 4k video, a fully-articulated touch-screen, eye detection and silent shooting options. Find out more in my review!…

The post Canon EOS M50 review appeared first on Cameralabs.

Go to the original article...

Canon EOS 4000D review

Cameralabs        Go to the original article...

Canon's EOS 4000D is a low-priced DSLR that aims to make creative photography more affordable. It offers an 18 Megapixel APSC sensor, 1080p video, 9-point AF, 3fps and Wifi. There are few frills, but it's a solid spec for the money. Check out my review!…

The post Canon EOS 4000D review appeared first on Cameralabs.

Go to the original article...

Canon EOS 2000D / Rebel T7 review

Cameralabs        Go to the original article...

Canon's EOS 2000D / Rebel T7 is an entry-level DSLR aimed at beginners, sporting a 24 Megapixel APSC sensor, 1080p video, 9-point AF, 3fps shooting, Wifi and NFC. It may lack the frills of higher-end bodies, but provides the basics at a low price. Check out my review!…

The post Canon EOS 2000D / Rebel T7 review appeared first on Cameralabs.

Go to the original article...

Samsung Galaxy S9 Imaging and Vision Features

Image Sensors World        Go to the original article...

Samsung Galaxy S9 presentation seems to be built mostly around its cameras, imaging and vision features:








Go to the original article...

Magic Leap to Raise Another $400M

Image Sensors World        Go to the original article...

TradeArabia quotes an FT report that Saudi Arabia’s sovereign wealth fund is in discussions to invest $400M in Magic Leap at a valuation of $6B. This is supposed to be an extension of the October 2017 funding round in which the company raised $502M. The Saudi investment would bring the total raised capital to $2.3B.

Magic Leap is said to be developing its own silicon, optics, operating system, and applications which explains the unprecedented scale of the fundraising.

Go to the original article...

Omnivision Paper on 2nd Generation Stacking Technology

Image Sensors World        Go to the original article...

MDPI Special Issue on the 2017 International Image Sensor Workshop publishes Omnivision paper "Second Generation Small Pixel Technology Using Hybrid Bond Stacking" by Vincent C. Venezia, Alan Chih-Wei Hsiung, Wu-Zang Yang, Yuying Zhang, Cheng Zhao, Zhiqiang Lin, and Lindsay A. Grant.

"In this work, OmniVision’s second generation (Gen2) of small-pixel BSI stacking technologies is reviewed. The key features of this technology are hybrid-bond stacking, deeper back-side, deep-trench isolation, new back-side composite metal-oxide grid, and improved gate oxide quality. This Gen2 technology achieves state-of-the-art low-light image-sensor performance for 1.1, 1.0, and 0.9 µm pixel products. Additional improvements on this technology include less than 100 ppm white-pixel process and a high near-infrared (NIR) QE technology."

Go to the original article...

Yole on Automotive Sensing

Image Sensors World        Go to the original article...

Yole Développement releases its "Sensors for Robotic Vehicles 2018" report:

"As far as we know, each robotic vehicle will be equipped with a suite of sensors encompassing Lidars, radars, cameras, Inertial Measurement Units (IMUs) and Global Navigation Satellite Systems (GNSS). The technology is ready and the business models associated with autonomous driving (AD) seem to match the average selling prices for those sensors. We therefore expect exponential growth of AD technology within the next 15 years, leading to a total paradigm shift in the transportation ecosystem by 2032. This will have huge consequences for high-end sensor and computing semiconductor players and the associated system-level ecosystems as well.

...in 2022 we expect sensor revenues to reach $1.6B for Lidar, $44M for radar, $0.6B for cameras, $0.9B for IMUs and $0.1B for GNSS. The split between the different sensor modalities may not stay the same for the 15 years to come. Nevertheless the total envelope for sensing hardware should reach $77B in 2032, while, for comparative purposes, computing should be in the range of $52B.
"

Go to the original article...

TowerJazz Update on its CIS Business

Image Sensors World        Go to the original article...

SeekingAlpha: TowerJazz Q4 2017 earnings report has an update on the foundry's image sensor business:

"For CMOS image sensor we use the 300 millimeter 65 nanometer capability to develop unique high dynamic range and extremely high sensitivity pixels with very low dark current for the high-end digital SLR and cinematography and broadcasting markets.

In these developments, we've included our Fab 2 stitching technology to enable large full-frame sensors. In addition, we developed a unique family of state-of-the-art global shutter pixels ranging from 3.6 micron down to 2.5 micron, to date the smallest in the world, with extremely high shutter efficiency, using the unique dual light pipe technology already developed at TPSCo for high quantum efficiency and high image uniformity.

And lastly within the CIS regime, we've pushed the limits of our x-ray die size, developing a one-die-per-wafer x-ray stitched sensor to produce on a 300 millimeter wafer a 21 cm x 21 cm imager. All of the above technologies have been or are being implemented in our CIS customers' next generation products and are ramping or are planned to begin ramping this year, with some additional next year.

Our image sensor end markets, including medical, machine vision, digital SLR camera, cinematography and security among others, represented about 15% of our corporate revenues, or $210 million, and provided the highest margins in the company. We are offering the most advanced global shutter pixel for the industrial sensor market with a 2.8 micron global shutter pixel on a 110 nanometer platform, the smallest global shutter pixel in the world already in manufacturing. Additionally, as mentioned, we have a 2.5 micron state-of-the-art global shutter pixel in development on the 65 nanometer, 300 millimeter platform with several leading customers, allowing high sensor resolution for any given sensor size and enabling TowerJazz to further grow its market leadership.

We also offer single-photon avalanche diodes, which is state-of-the-art technology, and an ultra-fast global shutter pixel for automotive radars based on the time-of-flight principle, answering automotive market needs. We have engaged with several customers in the development of their automotive radar and expect to be a major player in this market in the coming years.

During 2017, we announced a partnership with Yuanchen Microelectronics for backside illumination manufacturing in Changchun, China, that provides us the BSI process segment for CIS 8-inch wafers manufactured by TowerJazz, to increase our service to our worldwide customer base in mass production. We will be ready for this mass production early in the second half of this year, with multiple customers already having started their product designs.

In addition, we developed backside illumination and stacked wafer technology on 12-inch wafers in the Uozu factory, serving as a next generation platform for the high-end photography and high-end security markets. We now offer both BSI and column-level stacked wafer PDKs to our customers.

We are investing today in three main directions: next generation global shutter technology for the industrial sensor market, backside illumination stacked wafers for the high-end photography market, and special pixel technology for the automotive market.
"

An earlier presentation shows the company's CIS business in a graphical format:

Go to the original article...

Automotive Videos

Image Sensors World        Go to the original article...

ULIS publishes a YouTube demo of its thermal sensors' usefulness in ADAS applications. One can see how hot the car tires become on the highway, while remaining cool in city driving:



Sensata prizes Quanergy LiDAR performance:

Go to the original article...

Denso Vision Sensor for Improved Night Driving Safety

Image Sensors World        Go to the original article...

DENSO has developed a new vision sensor that detects pedestrians, cyclists, road signs, driving lanes and other road users at night. Working in conjunction with a millimeter-wave radar sensor, the new vision sensor allows automobiles to automatically activate emergency braking when obstacles are identified, helping reduce accidents and improve overall vehicle safety. It is featured in the 2018 Toyota Alphard and Vellfire, which were released in January this year.

It improves night vision by using a unique lens specifically designed for low-light use, and a solid-state imaging device with higher sensitivity. An improved white-line detection algorithm and road-edge detection algorithm also broaden the operating range of lane-keeping assistance and lane departure alert functions, while a 40% size reduction from previous models reduces costs and makes installation easier.

Recognition by human eyes
Recognition by the vision sensor

Go to the original article...

Chronocam Changes Name to Prophesee, Raises More Money

Image Sensors World        Go to the original article...

GlobeNewswire: Chronocam, said to be the inventor of the world’s most advanced neuromorphic vision system, is now Prophesee, a branding and identity transformation that reflects the company's expanded vision for revolutionizing how machines see.

Prophesee SA (formerly Chronocam) receives the initial tranche of its Series B financing round, which will total $19M. Led by a new unnamed strategic investor from the electronics industry, the round also includes staged investments from Prophesee’s existing investors: 360 Capital Partners, Supernova Invest, iBionext, Intel Capital, Renault Group, and Robert Bosch Venture Capital. The latest round builds on the $20m Prophesee has raised over the past three years, and will allow it to accelerate the development and industrialization of the company’s image sensor technology.

The roots of Prophesee’s technology run deep into areas of significant achievements in vision, including the breakthrough research carried out by the Vision Institute (CNRS, UPMC, INSERM) on the human brain and eye during the past 20 years, as well as by CERN, where it was instrumental in the discovery of the invisible Higgs Boson, or “The God Particle” in 2012 after more than 30 years of research. Early incarnations of the Prophesee technology helped in the development of the first industry-grade silicon retina which is currently deployed to restore sight to the blind.

Thanks to its fast vision processing equivalent to up to 100,000 fps, Prophesee’s bio-inspired technology enables machines to capture scene changes not previously possible in machine vision systems for robotics, industrial automation and automotive.

Its HDR of more than 120dB lets systems operate and adapt effectively in a wide range of lighting conditions. It sets a new standard for power efficiency with operating characteristics of less than 10mW, opening new types of applications and use models for mobile, wearable and remote vision-enabled products.
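Event-based sensors of this kind output a sparse stream of per-pixel brightness-change events rather than full frames. The sketch below shows a generic address-event representation and a trivial consumer loop; it illustrates the concept only and is not Prophesee's SDK or data format.

```python
from dataclasses import dataclass
from typing import Iterable

# Generic address-event representation (AER) sketch. This illustrates the
# event-based output concept only; it is not Prophesee's SDK or data format.

@dataclass
class Event:
    x: int            # pixel column
    y: int            # pixel row
    timestamp_us: int
    polarity: int     # +1 brightness increase, -1 decrease

def count_active_pixels(events: Iterable[Event], window_us: int) -> int:
    """Count distinct pixels that fired within the first `window_us` microseconds."""
    active = set()
    for ev in events:
        if ev.timestamp_us < window_us:
            active.add((ev.x, ev.y))
    return len(active)

stream = [Event(10, 20, 5, +1), Event(10, 20, 9, -1), Event(11, 20, 400, +1)]
print(count_active_pixels(stream, window_us=100))   # -> 1
```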

“Our event-based approach to vision sensing and processing has resonated well with our customers in the automotive, industrial and IoT sectors, and the technology continues to achieve impressive results in benchmarking and prototyping exercises. This latest round of financing will help us move rapidly from technology development to market deployment,” said Luca Verre, co-founder and CEO of Prophesee. “Having the backing of our original investors, plus a world leader in electronics and consumer devices, further strengthens our strategy and will help Prophesee win the many market opportunities we are seeing.

Prophesee AI-based neuromorphic vision sensor

Go to the original article...

Interview with Nobukazu Teranishi

Image Sensors World        Go to the original article...

Nikkei publishes an interview with Nobukazu Teranishi, inventor of the pinned photodiode, who was recently awarded the Queen Elizabeth Prize for Engineering.

"Now... except for Sony, which leads the world in the image sensor sector, Japanese companies have fallen behind, particularly in the semiconductor industry.

Teranishi said that changes are necessary for Japan to continue to compete globally.

He also suggested that engineers and technical experts should be held in higher esteem in Japan.

"Excellent engineers are a significant asset. Companies overseas shouldn't be able to lure them out of Japan just with better salaries. If they are that valuable, their value should to be recognized in Japan as well," he said.

Determining salaries by how long people have been at the company seems like "quite a rigid structure," he said.

He added that engineers get little recognition for the work they do, with individual names rarely mentioned within the company or in the media.

Looking ahead to the future of image sensors, Teranishi feels one peak has been reached, with around 400 million phones produced annually that incorporate his technology. Next, he says, is the era of "images that you don't see."

For facial recognition and gesture input for games, he said, "No one sees the image but the computer is processing information. So there are many cases where a human doesn't see the image.
"

Go to the original article...

CIS Wafer Testing Presentation

Image Sensors World        Go to the original article...

Taiwan Jetek Technology publishes a presentation on CIS wafer-level testing.

Go to the original article...

IR-Enhancing Surface Structures Compared

Image Sensors World        Go to the original article...

IEEE Spectrum: IEEE TED publishes a UCD and W&WSens Devices invited paper on light-bending microstructures that enhance PD QE and IR sensitivity, "A New Paradigm in High-Speed and High-Efficiency Silicon Photodiodes for Communication—Part I: Enhancing Photon–Material Interactions via Low-Dimensional Structures" by Hilal Cansizoglu, Ekaterina Ponizovskaya Devine, Yang Gao, Soroush Ghandiparsi, Toshishige Yamada, Aly F. Elrefaie, Shih-Yuan Wang, and M. Saif Islam.

"[Saif] Islam and his colleagues came up with a silicon structure that makes photodiodes both fast and efficient by being both thin and good at capturing light. The structure is an array of tapered holes in the silicon that have the effect of steering the light into the plane of the silicon. “So basically, we’re bending light 90 degrees,” he says."


The paper compares the proposed approach with other surface structures for IR sensitivity enhancement:

Go to the original article...

Sony Imaging Pro Support review

Cameralabs        Go to the original article...

Canon and Nikon may be the established names in the pro sports market, but Sony's hungry for a piece of the action. In 2017 it launched its fastest pro body yet, the Alpha A9, alongside an enhanced pro support program. In this article I'll take a look at the program so far.…

The post Sony Imaging Pro Support review appeared first on Cameralabs.

Go to the original article...

Corephotonics and Sunny Ship Millions of Dual Camera Modules to Oppo, Xiaomi and Others

Image Sensors World        Go to the original article...

Optics.org: Corephotonics has partnered with Sunny Optical to bring to market a variety of solutions based on the company’s dual camera technologies. Under this agreement, Sunny has already shipped millions of dual cameras powered by Corephotonics IP to various smartphone OEMs, including Xiaomi, OPPO and others.

The new partnership combines Sunny’s automatic manufacturing capacity, quality control and optical development capabilities with Corephotonics’ innovation in optics, camera mechanics and computational imaging. This strategic license agreement covers various dual camera products, including typical wide + tele cameras, as well as various folded dual camera offerings, allowing an increased zoom factor, optical stabilization and a reduced module height.

The partnership allows Sunny to act as a one-stop-shop dual camera vendor, providing customized dual camera designs in combination with well-optimized software features. The collaboration leverages Sunny's manufacturing lead and strong presence in the Chinese dual-camera market.

“Sunny Optical has the powerful optical development capability and automatic lean manufacturing capacity. We have experimented with virtually all dual camera innovations introduced in recent years, and have found Corephotonics dual camera technologies to have the greatest contribution in camera performance and user experience. Just as important is the compliance of their dual camera architecture with high volume production and harsh environmental requirements,” said Cerberus Wu, Senior Marketing Director of Sunny Optical.

"We are deeply impressed by Sunny's dual camera manufacturing technologies, clearly setting a new benchmark in the thin camera industry," added Eran Briman, VP of Marketing & Business Development at Corephotonics. “The dual camera modules produced under this collaboration present smartphone manufacturers with the means to distinguish their handsets from those of their rivals through greatly improved imaging capabilities, as well as maximum flexibility and customizability."

Go to the original article...

Fujifilm XP130 review

Cameralabs        Go to the original article...

The Fujifilm XP130 is a rugged compact with 16 Megapixels, a 5x zoom, 3in screen and Wifi with Bluetooth. It's waterproof down to 20m, freezeproof to -10C and can withstand a drop from a height of 1.75 metres. Find out how it compares in Ken's review!…

The post Fujifilm XP130 review appeared first on Cameralabs.

Go to the original article...

EETimes Reviews ISSCC 2018

Image Sensors World        Go to the original article...

EETimes' Junko Yoshida publishes a review of the ISSCC 2018 image sensor session, covering Sony's motion-detecting event-driven sensor:


Microsoft 1MP ToF sensor:


Toshiba 200m-range LiDAR:


and much more...

Go to the original article...
