Archives for July 2018

Imaging Resource on Nikon Image Sensor Design

Image Sensors World        Go to the original article...

Imaging Resource publishes a very nice article "Pixels for Geeks: A peek inside Nikon’s super-secret sensor design lab" by Dave Etchells. There is quite a lot of Nikon-internal material that has been publicly released for the first time. Just a few interesting quotes out of many:

"Nikon actually designs their own sensors, to a fairly minute level of detail. I think this is almost unknown in the photo community; most people just assume that “design” in Nikon’s case simply consists of ordering-up different combinations of specs from sensor manufacturers.

In actuality, they have a full staff of sensor engineers who design cutting-edge sensors like those in the D5 and D850 from the ground up, optimizing their designs to work optimally with NIKKOR lenses and Nikon's EXPEED image-processor architecture.

As part of matching their sensors to NIKKOR optics, Nikon’s sensor designers pay a lot of attention to the microlenses and the structures between them and the silicon surface. Like essentially all camera sensors today, Nikon’s microlenses are offset a variable amount relative to the pixels beneath, to compensate for light rays arriving at oblique angles near the edges of the array.
"

Apparently, Nikon uses Silvaco tools for pixel device and process simulations:

Go to the original article...

Taiwanese CIS Makers Look for New Opportunities

Image Sensors World        Go to the original article...

Digitimes reports that CIS makers Silicon Optronics and Pixart, and backend houses Xintec, VisEra, King Yuan Electronics (KYEC) and Tong Hsing Electronic Industries, are moving their focus to niche markets as they fail to find growth in mainstream smartphone cameras.

Kingpak manages to survive in the smartphone market thanks to its contract with Sony to provide backend services for CIS. Other Taiwan-based CIS suppliers are unable to grow in smartphone applications, despite promising demand for multi-camera phones.

Meanwhile, Silicon Optronics (SOI) held its IPO on the Taiwan Stock Exchange (TWSE) on July 16. Its share price rose by 44.55% on its first day of trading.

Go to the original article...

Mantis Vision Raises $55m in Round D

Image Sensors World        Go to the original article...

PRNewswire: Mantis Vision announces the closing of its Series D round of $55m with a total investment of $83m to date. New funds will serve to extend the company's technological edge, accelerate Mantis Vision's go-to-market strategy, expand its international workforce and support external growth opportunities. The Series D investment was led by Luenmei Quantum Co. Ltd., a new investor in Mantis Vision, and Samsung Catalyst Fund, an existing shareholder of the company.

Mantis Vision and Luenmei Quantum also announced the formation of a new joint venture, "MantisVision Technologies", to further strengthen Mantis Vision's position and growth in the Greater China Market.

Mantis Vision is planning to double its global workforce with an additional 140 employees in Israel, the U.S., China and the Slovak Republic by the end of 2020. As part of the latest funding round, Mantis Vision will expand its pool of talented engineers for advanced R&D algorithmic research in computer vision and deep learning, advanced optics experts, mobile camera engineers, 3D apps developers and 3D volumetric studio experts, among other open positions in program management and business development.

According to So Chong Keung, Luenmei Quantum Co. Ltd. President and GM: "Luenmei Quantum is closely following the Israeli high-tech industry, which creates outstanding technology. Mantis Vision's versatile and advanced 3D technologies is well positioned and suited for mobile, secure face ID applications and entertainment industries in China. We found that Mantis Vision is the right match for Luenmei Quantum, combining hi-tech, innovation and passion."

According to Gur Arie Bitan, Founder and CEO of Mantis Vision: "This latest announcement is another proof of Mantis Vision's meteoric advancements in this recent period, technologically and business-wise. We regard our continued partnership with Samsung Catalyst Fund and Luenmei Quantum Co. as a strategic partnership and thanks to our new joint venture, we will be able to further strengthen our grip in the Greater Chinese market."

Go to the original article...

NIST Publishes LiDAR Characterization Standard

Image Sensors World        Go to the original article...

Spar3D: NIST's Physical Measurement Laboratory has developed an international performance evaluation standard for LiDARs.

An invitation was sent to all leading manufacturers of 3D laser scanners to visit NIST and run the approximately 100 tests specified in the draft standard. Four of the manufacturers traveled to NIST (one each from Germany and France) to participate in the runoff. Another sent an instrument during the week for testing. Two other manufacturers who could not attend have expressed interest in visiting NIST soon to try out the tests. These seven manufacturers represent about four-fifths of the entire market for large volume laser scanners.

The new standard ASTM E3125 is available here.

Go to the original article...

PMD Presentation at AWE 2018

Image Sensors World        Go to the original article...

PMD VP of Business Development Mitchel Reifel presents the company's ToF solutions for AR and VR applications at Augmented World Expo:

Go to the original article...

Parrot Anafi review

Cameralabs        Go to the original article...

The Parrot Anafi is a mid-range drone with 4k video and a powered gimbal. Parrot's most sophisticated drone to date, it's pitched squarely against the DJI Mavic Air, undercutting it on price and boasting some unique features. Adam takes it for a spin in his full review!…

The post Parrot Anafi review appeared first on Cameralabs.

Go to the original article...

Panasonic PIR and Thermopile Sensor Presentation

Image Sensors World        Go to the original article...

Panasonic publishes a video presenting its PIR and Thermopile sensor lineup:

Go to the original article...

TI Unveils AFE for ToF Proximity Sensor

Image Sensors World        Go to the original article...

TI's ToF proximity sensor AFE OPT3101 integrates most of the ToF system on a single chip:

Go to the original article...

Rode NT USB review

Cameralabs        Go to the original article...

The Rode NT USB is a broadcast-quality USB microphone designed to capture a wide range of sound from vocals to musical instruments. The condenser design captures a broader range of frequencies than dynamic mics like the Podcaster, allowing it to deliver more transparent audio. Check out my review!…

The post Rode NT USB review appeared first on Cameralabs.

Go to the original article...

Magic Leap Gets Investment from AT&T

Image Sensors World        Go to the original article...

Techcrunch reports that AT&T has made a strategic investment in Magic Leap, a developer of AR glasses. Magic Leap's latest Series D round valued the startup at $6.3b, and the companies have confirmed that the AT&T investment completes the $963m Series D round.

So far, Magic Leap has raised $2.35b from a number of strategic backers including Google, Alibaba and Axel Springer.

Go to the original article...

AutoSens Announces its Awards Finalists

Image Sensors World        Go to the original article...

AutoSens Awards reveals the shortlisted finalists for 2018, some of which are related to imaging:

Most Engaging Content:

  • Mentor Graphics, Andrew Macleod
  • videantis, Marco Jacobs
  • Toyota Motor North America, CSRC, Rini Sherony
  • 2025AD, Stephan Giesler
  • EE Times, Junko Yoshida

Hardware Innovation:

  • NXP Semiconductors
  • Cepton
  • Renesas
  • OmniVision
  • Velodyne Lidar
  • Robert Bosch

Software Innovation:

  • Dibotics
  • Algolux
  • Brodmann17
  • Civil Maps
  • Dataspeed
  • Immervision
  • Prophesee

Most Exciting Start-Up:

  • Hailo
  • Metamoto
  • May Mobility
  • AEye
  • Ouster
  • Arbe Robotics

Game Changer:

  • Siddartha Khastgir, WMG, University of Warwick, UK
  • Marc Geese, Robert Bosch
  • Kalray
  • Prof. Nabeel Riza, University College Cork
  • Intel
  • NVIDIA and Continental partnership

Greatest Exploration:

  • Ding Zhao, University of Michigan
  • Prof Philip Koopman, Carnegie Mellon University
  • Prof Alexander Braun, University of Applied Sciences Düsseldorf
  • Cranfield University Multi-User Environment for Autonomous Vehicle Innovation (MUEAVI)
  • Professor Natasha Merat, Institute for Transport Studies
  • Dr Valentina Donzella, WMG University of Warwick

Best Outreach Project:

  • NWAPW
  • Detroit Autonomous Vehicle Group
  • DIY Robocars
  • RobotLAB
  • Udacity

Go to the original article...

Image Sensors America Agenda

Image Sensors World        Go to the original article...

Image Sensors America, to be held on October 11-12, 2018 in San Francisco, announces its agenda with many interesting papers:

State of the Art Uncooled InGaAs Short Wave Infrared Sensors
Dr. Martin H. Ettenberg | President of Princeton Infrared Technologies

Super-Wide-Angle Cameras- The Next Smartphone Frontier Enabled by Miniature Lens Design and the Latest Sensors
Patrice Roulet Fontani | Vice President, Technology and Co-Founder of ImmerVision

SPAD vs. CMOS Image Sensor Design Challenges – Jitter vs. Noise
Dr. Daniel Van Blerkom | CTO & Co-Founder of Forza Silicon

sCMOS Technology: The Most Versatile Imaging Tool in Science
Dr. Scott Metzler | PCO Tech

Image Sensor Architecture
Presentation By Sub2R

Using Depth Sensing Cameras for 3D Eye Tracking
Kenneth Funes Mora | CEO and Co-founder of Eyeware

Autonomous Driving: The Development of Image Sensors?
Ronald Mueller | CEO of Vision Markets and Associate Consultant of Smithers Apex

SPAD Arrays for LiDAR Applications
Carl Jackson | CTO and Founder of SensL Division, OnSemi

Future Image Sensors for SLAM and Indoor 3D Mapping
Vitaliy Goncharuk | CEO & Founder of Augmented Pixels

Future Trends in Imaging Beyond the Mobile Market
Amos Fenigstein | Senior Director of R&D for Image Sensors of TowerJazz

Presentation by Gigajot

Go to the original article...

ST FlightSense Presentation

Image Sensors World        Go to the original article...

ST publishes its presentation on ToF proximity sensor products:

Go to the original article...

Four Challenges for Automotive LiDARs

Image Sensors World        Go to the original article...

DesignNews publishes a list of four challenges that LiDARs have to overcome on the way to wide acceptance in vehicles:

Price reduction:

“Every technology gets commoditized at some point. It will happen with LiDAR,” said Angus Pacala, co-founder and CEO of LiDAR startup Ouster. “Automotive radars used to be $15,000. Now, they are $50. And it did take 15 years. We’re five years into a 15-year lifecycle for LiDAR. So, cost isn’t going to be a problem.”

Increase detection range:

“Range isn’t always range,” said John Eggert, director of automotive sales and marketing at Velodyne. “[It’s] dynamic range. What do you see and when can you see it? We see a lot of ‘specs’ around 200 meters. What do you see at 200 meters if you have a very reflective surface? Most any LiDAR can see at 100, 200, 300 meters. Can you see that dark object? Can you get some detections off a dark object? It’s not just a matter of reputed range, but range at what reflectivity? While you’re able to see something very dark and very far away, how about something very bright and very close simultaneously?”
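Eggert's point about "range at what reflectivity" follows directly from the LiDAR link budget: for a diffuse target the returned power scales linearly with reflectivity and falls off with the square of distance, so at a fixed detection threshold the maximum range scales roughly with the square root of reflectivity. A back-of-the-envelope sketch (the 200 m / 80% reference point is an assumed example, not a vendor spec):

```python
import math

def max_range_m(ref_range_m, ref_reflectivity, target_reflectivity):
    """Diffuse-target approximation: received power ~ reflectivity / range^2,
    so at the same detection threshold range_max scales with sqrt(reflectivity)."""
    return ref_range_m * math.sqrt(target_reflectivity / ref_reflectivity)

# If a LiDAR just reaches 200 m on an 80%-reflective target (assumed example),
# the same detection threshold on a 10%-reflective dark object gives only ~71 m.
print(f"{max_range_m(200.0, 0.80, 0.10):.0f} m")
```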

Improve robustness:

“It comes down to vibration and shock, wear and tear, cleaning—all the aspects that we see on our cars,” said Jada Smith, VP of engineering and external affairs at Aptiv, a Delphi spin-off. “LiDAR systems have to be able to withstand that. We need perfection in the algorithms. We have to be confident that the use cases are going to be supported time and time again.”

Withstand the environment and different weather conditions:

Jim Schwyn, CTO of Valeo North America, said: “What if the LiDAR is dirty? Are we in a situation where we are going to take the gasoline tank from a car and replace it with a windshield washer reservoir to be able to keep these things clean?”

The potentially fatal LiDAR flaws that need to be corrected:

  • Bright sun against a white background
  • A blizzard that causes whiteout conditions
  • Early morning fog

Another article on a somewhat similar matter has been published by Lidarradar.com:

Go to the original article...

SmartSens Unveils GS BSI VGA Sensor

Image Sensors World        Go to the original article...

PRNewswire: SmartSens launches the SC031GS, calling it "the world's first commercial-grade 300,000-pixel Global Shutter CMOS image sensor based on BSI pixel technology." While other companies have announced GS BSI sensors, those have higher-than-VGA resolution.

The SC031GS is aimed at a wide range of commercial products, including smart barcode readers, drones, smart modules (Gesture Recognition/vSLAM/Depth Information/Optical Flow) and other image recognition-based AI applications, such as facial recognition and gesture control.

The SC031GS uses large 3.75um pixels (1/6-inch optical format) and SmartSens' single-frame HDR technology, combined with a global shutter. The maximum frame rate is 240fps.

Leo Bai, GM of SmartSens' AI Image Sensors Division, stated: "SmartSens is not only a new force in the global CMOS image sensor market, but also a company that commits to designing and developing products that meet the market needs and reflect industry trends. We partnered with key players in the AI field to integrate AI functions into the product design. SC031GS is such a revolutionary product that is powered by our leading Global Shutter CMOS image sensing technology and designed for trending AI applications."

SC031GS is now in mass production.

Go to the original article...

SenseTime to Expand into Automotive Applications

Image Sensors World        Go to the original article...

South China Morning Post: Face recognition startup SenseTime announces its plans to expand in automotive applications.

“Our leading algorithms for facial recognition have already proven a big success,” said SenseTime co-founder Xu Bing, “and now comes [new technologies for] autonomous driving, which enable machines to recognise images both inside and outside cars, and an augmented reality engine, integrating know-how in reading facial expressions and body movement.”

SenseTime raised $620m in May, calling itself the world's most valuable AI start-up, with a valuation of $4.5b. Known for providing AI-powered surveillance software for China's police, SenseTime said it achieved profitability last year, selling AI-powered applications for smart cities, surveillance, smartphones, internet entertainment, finance, retail and other industries.

Last year, Honda announced a partnership with SenseTime for automated driving technologies.

Go to the original article...

Nvidia AI-Enhanced Noise Removal

Image Sensors World        Go to the original article...

DPReview quotes an Nvidia blog post presenting joint research with MIT and Aalto University on AI-enhanced noise removal, with pretty impressive results:


Go to the original article...

Omnivision Releases Sensor Optimized for Structured Light FaceID Applications

Image Sensors World        Go to the original article...

OmniVision announces a global shutter sensor targeting facial authentication in mobile devices, along with other machine vision applications such as AR/VR, drones and robotics. The high-resolution OV9286 sensor, with 20% more pixels than the previous-generation sensor, is said to enable a new level of accuracy in facial authentication for smartphone applications requiring high levels of security. The OV9286 is optimized for payment-level facial authentication using a structured light solution for high-quality 3D images.

The market for facial recognition components is expected to grow rapidly to $9.2b by 2022, according to a report from Allied Market Research.

“A higher level of image-sensing accuracy is required to safely authenticate smartphones for payment applications, compared to using facial authentication for unlocking a device,” said Arun Jayaseelan, senior marketing manager at OmniVision. “The increased resolution of the OV9286 image sensor meets these requirements, while using the global shutter technology to optimize system power consumption as well as to eliminate motion artifacts and blurring.”

The sensor is available in two versions: the OV9286 for smartphone applications, and the OV9285 for other machine vision applications that also need high-resolution sensors to enable a broad range of image-sensing functions. The OV9286 has a high CRA of 26.7 degrees for low z-height and slim-profile smartphone designs. The OV9285 has a lower CRA of 9 degrees for applications where that tight z-height restriction does not apply, supporting wide field-of-view lens designs.
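The CRA split between the two versions is, to first order, a statement about exit-pupil distance and therefore module height: for the corner of the image, tan(CRA) equals the sensor half-diagonal divided by the exit-pupil distance, so a larger corner CRA lets the optics sit closer to the sensor. A rough illustration using this sensor's geometry (the 3-micrometer pixel pitch is taken from the spec below; the rest is simple trigonometry, not an OmniVision figure):

```python
import math

def exit_pupil_distance_mm(half_diagonal_mm, corner_cra_deg):
    """Corner chief ray: tan(CRA) = half_diagonal / exit_pupil_distance,
    so a larger corner CRA permits a shorter (lower z-height) optical stack."""
    return half_diagonal_mm / math.tan(math.radians(corner_cra_deg))

# 1328 x 1120 pixels at a 3 um pitch -> roughly a 2.6 mm half-diagonal.
half_diag_mm = 0.5 * math.hypot(1328 * 0.003, 1120 * 0.003)
for cra_deg in (26.7, 9.0):
    d = exit_pupil_distance_mm(half_diag_mm, cra_deg)
    print(f"corner CRA {cra_deg:4.1f} deg -> exit pupil ~{d:.1f} mm from the sensor")
```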

Both the OV9285 and the OV9286 offer 1328 x 1120 resolution at 90 fps, a 1/3.5-inch optical format and 3x3-micrometer OmniPixel3-GS technology. These global shutter sensors, combined with excellent NIR sensitivity at 850 nm and 940 nm, reduce device power consumption to extend battery life.

The OV9285 and OV9286 sensors are available now.

Go to the original article...

Nikon COOLPIX P1000 review

Cameralabs        Go to the original article...

The Nikon COOLPIX P1000 is a DSLR-styled super-zoom camera with a massive 125x range, taking it from an equivalent of 24-3000mm. The P1000 has a 16 Megapixel sensor, can film 4k video and has a built-in OLED viewfinder and fully-articulated screen. Check out Ken's review!…

The post Nikon COOLPIX P1000 review appeared first on Cameralabs.

Go to the original article...

3D Imaging Fundamentals (Open Access)

Image Sensors World        Go to the original article...

OSA Advances in Optics and Photonics publishes "Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems" by Manuel Martínez-Corral (University of Valencia, Spain) and Bahram Javidi (University of Connecticut, Storrs).

"This tutorial is addressed to the students and researchers in different disciplines who are interested to learn about integral imaging and light-field systems and who may or may not have a strong background in optics. Our aim is to provide the readers with a tutorial that teaches fundamental principles as well as more advanced concepts to understand, analyze, and implement integral imaging and light-field-type capture and display systems. The tutorial is organized to begin with reviewing the fundamentals of imaging, and then it progresses to more advanced topics in 3D imaging and displays. More specifically, this tutorial begins by covering the fundamentals of geometrical optics and wave optics tools for understanding and analyzing optical imaging systems. Then, we proceed to use these tools to describe integral imaging, light-field, or plenoptics systems, the methods for implementing the 3D capture procedures and monitors, their properties, resolution, field of view, performance, and metrics to assess them. We have illustrated with simple laboratory setups and experiments the principles of integral imaging capture and display systems. Also, we have discussed 3D biomedical applications, such as integral microscopy."


The OSA Advances in Optics and Photonics site also has a 2011 open-access paper "Structured-light 3D surface imaging: a tutorial" by Jason Geng. Since the structured light approach has progressed a lot in recent years, the information in this tutorial is largely obsolete. Still, it could be a good start for learning the basics or for history-inclined readers.
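For readers starting from the basics, the geometric core of structured-light depth sensing has not changed since that 2011 tutorial: the projector acts as an inverse camera, and once a projected feature is identified in the camera image, depth follows from triangulation, exactly as in stereo. A minimal sketch for a rectified camera-projector pair (the focal length, baseline and pixel coordinates are illustrative assumptions):

```python
def depth_from_disparity(x_cam_px, x_proj_px, focal_px, baseline_m):
    """Rectified camera-projector pair: model the projector as an inverse
    camera, so depth = focal * baseline / disparity, as in passive stereo."""
    disparity = x_cam_px - x_proj_px
    if disparity <= 0:
        raise ValueError("expected the feature to shift toward the projector side")
    return focal_px * baseline_m / disparity

# Illustrative numbers: 600 px focal length, 75 mm baseline, a pattern feature
# projected at column 320 and observed by the camera at column 350 -> 1.50 m.
print(f"{depth_from_disparity(350, 320, 600.0, 0.075):.2f} m")
```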

Go to the original article...

Espros on ToF FaceID Calibration Challenges

Image Sensors World        Go to the original article...

The presentation "3D Facial Scanning" by Espros' Dieter Kaegi at the Swiss Photonics Workshop, held in Chur on June 21, 2018, covers the many challenges in developing a ToF-based FaceID module:

Go to the original article...

AMS Presentation on 3D Sensing for Consumer Applications

Image Sensors World        Go to the original article...

The presentation "3D Cameras for Consumer Application" by AMS' Markus Rossi at the Swiss Photonics Workshop, held in Chur on June 21, 2018, has interesting comparisons between different depth sensing approaches:

Go to the original article...

Leti-CNRS Full-Frame Curved Sensor Paper

Image Sensors World        Go to the original article...

The Leti-CNRS curved sensor paper "Curved detectors developments and characterization: application to astronomical instruments" by Simona Lombardo, Thibault Behaghel, Bertrand Chambion, Wilfried Jahn, Emmanuel Hugot, Eduard Muslimov, Melanie Roulet, Marc Ferrari, Christophe Gaschet, Stephane Caplet, and David Henry is available online. This work was first announced a year ago.

"We describe here the first concave curved CMOS detector developed within a collaboration between CNRS-LAM and CEA-LETI. This fully-functional detector 20 Mpix (CMOSIS CMV20000) has been curved down to a radius of Rc =150 mm over a size of 24x32 mm2. We present here the methodology adopted for its characterization and describe in detail all the results obtained. We also discuss the main components of noise, such as the readout noise, the fixed pattern noise and the dark current. Finally we provide a comparison with the flat version of the same sensor in order to establish the impact of the curving process on the main characteristics of the sensor.

The curving process of these sensors consists of two steps: firstly the sensors are thinned with a grinding equipment to increase their mechanical flexibility, then they are glued onto a curved substrate. The required shape of the CMOS is, hence, due to the shape of the substrate. The sensors are then wire bonded keeping the packaging identical to the original one before curving. The final product is, therefore, a plug-and-play commercial component ready to be used or tested (figure 1B).
"


"The PRNU factor of the concave sensor shows an increase of 0.8% with respect to the flat sensor one. The difference between the two is not significant. However more investigations are required as it might be due to the curving process and it could explain the appearance of a strong 2D pattern for higher illumination levels."

Go to the original article...

Yole Webcast on Autonomous Driving

Image Sensors World        Go to the original article...

Yole Developpement publishes a recording of its April 2018 webcast "Core Technologies for Robotic Vehicle" that talks about cameras and LiDARs among other key technologies:


Go to the original article...

Harvard University Proposes Flat Lens

Image Sensors World        Go to the original article...

Photonics.com: Harvard University Prof. Federico Capasso and his group present "a single flat lens that can focus the entire visible spectrum of light in the same spot and in high resolution. Professor Federico Capasso and members of the Capasso Group explain why this breakthrough in metalenses could have major implications in the field of optics, and could replace bulky, curved lenses currently used in optical devices."

Go to the original article...

CEA-Leti with Partners to Develop LiDAR Benchmarks

Image Sensors World        Go to the original article...

LiDAR performance claims are a bit of a Wild West today, as there are no standardized performance tests. Every company can claim basically anything, measuring performance in its own unique way. Not anymore: Leti is aiming to change that.

CEA-Leti and its partner companies Transdev and IRT Nanoelec are to develop a list of criteria and objective parameters by which various commercial LiDAR systems could be evaluated and compared. Leti teams will focus on perception requirements and challenges from a LiDAR system perspective and evaluate the sensors in real-world conditions. Vehicles will be exposed to objects with varying reflectivity, such as tires and street signs, as well as environmental conditions, such as weather, available light, and fog.

Go to the original article...

e2v Unveils 67MP APS-C Sensor with 2.5um Global Shutter Pixels

Image Sensors World        Go to the original article...

Teledyne e2v announces its Emerald 67MP CMOS image sensor. The new sensor features the smallest global shutter pixel (2.5µm) on the market, making it ideal for high-end automated optical inspection, microscopy and surveillance.

Emerald 67M has 2.8e- of readout noise, 70% QE, and high speed, which significantly enhances production line throughput.

Vincent Richard, Marketing Manager at Teledyne e2v, said, “We are very pleased to widen our sensor portfolio with the addition of Emerald 67M, the first 8192 x 8192 global shutter sensor, running at high frame rates and offering a comprehensive set of features. Developed through close discussions with leading OEM’s in the automated optical inspection market, this new sensor offers application features such as our unique Region of Interest mode, which helps to improve customer yield. Combined with its 67M resolution, our newest Emerald sensor tackles the challenge of image instability as a result of inspection system vibration.

Go to the original article...

EVG Wafer Bonding Machine Alignment Accuracy Improved to 50nm

Image Sensors World        Go to the original article...

PRNewswire: EV Group (EVG) unveiled the SmartView NT3 aligner, which is available on the company's GEMINI FB XT integrated fusion bonding system for high-volume manufacturing (HVM) applications. The SmartView NT3 aligner provides sub-50-nm wafer-to-wafer alignment accuracy—a 2-3X improvement—as well as significantly higher throughput (up to 20 wafers per hour) compared to the previous-generation platform.

Eric Beyne, imec fellow and program director of 3D system integration, says: "[An] area of particular focus is wafer-to-wafer bonding, where we are achieving excellent results in part through our work with industry partners such as EV Group. Last year, we succeeded in reducing the distance between the chip connections, or pitch, in hybrid wafer-to-wafer bonding to 1.4 microns, which is four times smaller than the current standard pitch in the industry. This year we are working to reduce the pitch by at least half again."

"EVG's GEMINI FB XT fusion bonding system has consistently led the industry in not only meeting but exceeding performance requirements for advanced packaging applications, with key overlay accuracy milestones achieved with several industry partners within the last year alone," stated Paul Lindner, executive technology director, EV Group. "With the new SmartView NT3 aligner specifically engineered for the direct bonding market and added to our widely adopted GEMINI FB XT fusion bonder, EVG once again redefines what is possible in wafer bonding—helping the industry to continue to push the envelope in enabling stacked devices with increasing density and performance, lower power consumption and smaller footprint."

Go to the original article...

Digitimes Image Sensor Market Forecast

Image Sensors World        Go to the original article...

Digitimes Research forecasts global CMOS sensor and CCD sales to reach $15b in 2020. Sales increased by over 15% YoY to $12.2b in 2017. Sony's market share in CMOS sensors is estimated at 45% in both 2016 and 2017.

As the smartphone market slows down, Sony is moving its resources to the automotive CIS market, where its share was a relatively low 9% in 2017. Sony sells its image sensors to Toyota and looks to expand its customer base to include Bosch, Nissan and Hyundai this year.

Go to the original article...

Apple to Integrate Rear 3D Camera in Next Year's iPhone

Image Sensors World        Go to the original article...

DeviceSpecifications quotes Korean site ETNews as saying that Hynix-group assembly house JSCK is working with Apple on the next-generation 3D sensing camera:

"Apple has revealed the iPhone of 2019 will have a triple rear camera setup with 3D sensing capability that will be a step ahead of the technology that was used for the front-facing camera of the iPhone X released in 2017. The front camera will be used for unlocking purposes, and the rear ones will be used to provide augmented reality (AR) experience. According to industry sources, Jesset Taunch Chippak Korea (JSCK), a Korean company in China, has been developing the 3D sensing module since the beginning of this year. It will be placed in the middle of the rear triple camera module... Apple used infrared (IR) as a light source for the iPhone's front-facing camera 3D sensing, but the rear camera plans to use a different light source than the IR because it needs to sense a wider range."

Go to the original article...
