Archives for March 2018

Omnivision Nyxel Technology Wins Smart Products Leadership Award

Image Sensors World        Go to the original article...

Frost & Sullivan’s Manufacturing Leadership Council honors OmniVision with its Smart Products and Services Leadership Award for the Nyxel NIR imaging technology.

SF Current and RTN

Japanese Journal of Applied Physics publishes Tohoku University paper "Effect of drain current on appearance probability and amplitude of random telegraph noise in low-noise CMOS image sensors" by Shinya Ichino, Takezo Mawaki, Akinobu Teramoto, Rihito Kuroda, Hyeonwoo Park, Shunichi Wakashima, Tetsuya Goto, Tomoyuki Suwa, and Shigetoshi Sugawa. It turns out that a lower SF current can reduce RTN, at least for the 0.18µm process used in the test chip:

ST Announces 4m Range ToF Sensor

The VL53L1X ToF sensor extends the detection range of ST's FlightSense technology to four meters, bringing high-accuracy, low-power distance measurement and proximity detection to an even wider variety of applications. The fully integrated VL53L1X measures only 4.9mm x 2.5mm x 1.56mm, allowing use even where space is very limited. It is also pin-compatible with its predecessor, the VL53L0X, allowing easy upgrades of existing products. The compact package contains the laser driver and emitter, as well as the SPAD array light receiver that gives ST’s FlightSense sensors their ranging speed and reliability. Furthermore, the 940nm emitter, operating in the non-visible spectrum, eliminates distracting light emission and can be hidden behind a protective window without impairing measurement performance.
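The underlying ranging arithmetic is simple (a generic direct-ToF sketch, not ST's firmware): distance is half the photon round-trip time multiplied by the speed of light.

```python
# Generic direct time-of-flight ranging calculation (illustrative only,
# not ST's actual implementation).

C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to target from a measured photon round-trip time."""
    return C * round_trip_s / 2.0

# A 4 m target implies a round trip of roughly 26.7 ns:
print(tof_distance_m(26.69e-9))  # ~4.0 m
```

At these time scales the SPAD array's timing resolution, not the arithmetic, is what limits accuracy.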

ST publishes quite a detailed datasheet with the performance data:

GM 4th Gen Self-Driving Car Roof Module

GM has started production of a roof module for its fourth-generation Cruise AV featuring 5 Velodyne LiDARs and at least 7 cameras:

MEMSDrive OIS Technology Presentation

MEMSDrive kindly sent me a presentation on its OIS technology:




Pictures from Image Sensors Europe 2018

A few assorted pictures from the Image Sensors Europe conference being held these days in London, UK.

From Ron's (Vision Markets) Twitter:


From the Image Sensors Twitter:


From X-Fab presentation:

Rumor: Mantis Vision 3D Camera to Appear in Samsung Galaxy S10 Phone

Korean newspaper The Investor quotes local media reports that Mantis Vision and camera module maker Namuga are developing a 3D sensing camera for Samsung's next-generation Galaxy S smartphone, tentatively called the Galaxy S10. Namuga also provides 3D sensing modules for Intel’s RealSense AR cameras.

TechInsights: Samsung Galaxy S9+ Cameras Cost 12.7% of BOM

The TechInsights Samsung Galaxy S9+ cost table estimates the cameras' cost at $48 out of a $379 total. The previous-generation S8 camera was estimated at $25.50, or 7.8% of the total BOM.
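The percentages follow directly from the cost figures (plain arithmetic on the numbers above, not TechInsights' model):

```python
# Sanity check of the quoted BOM shares.

def camera_bom_share(camera_cost, total_bom):
    """Camera cost as a percentage of the total bill of materials."""
    return 100.0 * camera_cost / total_bom

print(round(camera_bom_share(48.0, 379.0), 1))  # Galaxy S9+ -> 12.7
print(round(25.50 / 0.078))                     # implied S8 total BOM, ~$327
```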


Update:
TechInsights publishes a cost comparison of this year's and last year's flagship phones. The Galaxy S9+ appears to have the largest investment in camera and imaging hardware:

ICFO Graphene Image Sensors

ICFO food analyzer demo at MWC in Barcelona in February 2018:



UV graphene sensors:


Samsung CIS Production Capacity to Beat Sony

ETNews reports that Samsung is to convert its 300mm DRAM 13 line in Hwasung to CMOS sensor production. Since last year, the company has also been working to convert its DRAM 11 line in Hwasung into an image sensor line (named the S4 line). The S4 line conversion will be done by the end of this year, and right after that, Samsung is going to convert the 300mm 13 line. The 13 line can produce about 100,000 DRAM wafers per month. Because an image sensor has more manufacturing steps than DRAM, the production capacity is said to be reduced by about 50% after conversion.

“At the end of last year, production capacity of image sensors from the 300mm plant based on wafer input was about 45,000 units,” said an ETNews source. “Because production capacities of image sensors that will be added from the 11 line and 13 line will exceed 70,000 units per month, Samsung Electronics will have a production capacity of 120,000 units of image sensors after these conversion processes are over.”

Sony CIS capacity is about 100,000 wafers per month. Even when Sony's capacity extension plans are accounted for, Samsung should be able to match or exceed Sony's production capacity.

While increasing production capacity of 300mm CIS lines for 13MP and larger sensors, Samsung is planning to slowly decrease output of 200mm line located in Giheung.

Samsung's capacity expansion demonstrates its market confidence. Samsung believes that its image sensor capabilities approach those of Sony. The company has more than 10 outside CIS customers.

ULIS Video

ULIS publishes a promotional video about its capabilities and products:

Vivo Announces SuperHDR

One of the largest smartphone makers in China, Vivo, announces its AI-powered Super HDR that follows the same principles as regular multi-frame HDR but merges more frames.

The Super HDR’s DR is said to reach up to 14 EV. With a single press of the shutter, Super HDR captures up to 12 frames, significantly more than former HDR schemes. AI algorithms are used to adapt to different scenarios. The moment the shutter is pressed, the AI will detect the scene to determine the ideal exposure strategy and accordingly select the frames for merging.
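The multi-frame merge principle can be sketched per pixel (a toy exposure-fusion example; Vivo's AI pipeline is not public, and the Gaussian weighting here is an assumption for illustration):

```python
from math import exp

# Toy multi-frame HDR merge: each merged pixel is a weighted average of
# the frames, weighting mid-range values most, since they are neither
# clipped nor buried in noise. Illustrative only.

def well_exposedness(v, mid=128.0, sigma=50.0):
    """Gaussian weight that peaks for mid-range (well exposed) values."""
    return exp(-((v - mid) ** 2) / (2 * sigma ** 2))

def merge_pixel(values):
    """Merge one pixel across exposures using normalized quality weights."""
    weights = [well_exposedness(v) for v in values]
    total = sum(weights) or 1.0
    return sum(w * v for w, v in zip(weights, values)) / total

# Underexposed, well exposed, and clipped samples of the same pixel:
# the merge lands near the well-exposed value.
print(round(merge_pixel([10, 120, 255])))  # → 119
```

A real pipeline would also align the frames before merging; the scene-adaptive part is what Vivo attributes to its AI.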

Alex Feng, SVP at Vivo, says “Vivo continues to push the boundaries and provide the ultimate camera experience for consumers. This goes beyond just adding powerful functions, but to developing innovations that our users can immediately enjoy. Today’s showcase of Super HDR is an example of our continued commitment to mobile photography, to enable our consumers to shoot professional quality photos at the touch of a button. Using intelligent AI, Super HDR can capture more detail under any conditions, without additional demands on the user.”

Prophesee Expands Event Driven Concept to LiDARs

EETimes publishes an article on event-driven image sensors such as Prophesee's (formerly Chronocam) Asynchronous Time-Based Image Sensor (ATIS) chip.

The company's CEO Luca Verre "disclosed to us that Prophesee is exploring the possibility that its event-driven approach can apply to other sensors such as lidars and radars. Verre asked: “What if we can steer lidars to capture data focused on only what’s relevant and just the region of interest?” If it can be done, it will not only speed up data acquisition but also reduce the data volume that needs processing."

Prophesee is currently “evaluating” the idea, said Luca, cautioning that it will take “some months” before the company can reach that conclusion. But he added, “We’re quite confident that we can pull it off.”

Asked about Prophesee’s new idea — to extend the event-driven approach to other sensors — Yole Développement’s analyst Cambou told us, “Merging the advantages of an event-based camera with a lidar (which offers the “Z” information) is extremely interesting.”

Noting that problems with traditional lidars are tied to limited resolution — “relatively less than typical high-end industrial cameras” — and the speed of analysis, Cambou said that the event-driven approach can help improve lidars, “especially for fast and close-by events, such as a pedestrian appearing in front of an autonomous car.”
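The event-driven capture idea the article builds on can be sketched in a few lines (an illustration of the general principle only, not Prophesee's ATIS circuit):

```python
import math

# Event-driven principle: a pixel reports an event only when its log
# intensity changes by more than a contrast threshold, so static scene
# regions generate no data at all. Illustrative sketch only.

def events(prev_frame, new_frame, threshold=0.15):
    """Return (row, col, polarity) for pixels whose log-intensity change
    exceeds the contrast threshold."""
    out = []
    for r, (prev_row, new_row) in enumerate(zip(prev_frame, new_frame)):
        for c, (p, n) in enumerate(zip(prev_row, new_row)):
            d = math.log1p(n) - math.log1p(p)
            if abs(d) > threshold:
                out.append((r, c, 1 if d > 0 else -1))
    return out

prev = [[0, 0, 0], [0, 0, 0]]
cur  = [[0, 0, 0], [0, 10, 0]]  # one pixel brightens
print(events(prev, cur))  # → [(1, 1, 1)]
```

Applying the same "report only what changed" logic to lidar scan scheduling is exactly the extension Prophesee is said to be evaluating.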


Samsung Galaxy S9+ Cameras

TechInsights publishes an article on Galaxy S9+ reverse engineering including its 4 cameras - a dual rear camera, a front camera and an iris recognition sensor:

"We are excited to analyze Samsung's new 3-stack ISOCELL Fast 2L3 and we'll be publishing updates as our labs capture more camera details.

Samsung is not first to market with variable mechanical apertures or 3-layer stacked image sensors, however the integration of both elements in the S-series is a bold move to differentiate from other flagship phones.

The S9 wide-angle camera system, which integrates a 2 Gbit LPDDR4 DRAM, offers similar slo-mo video functionality with 0.2 s of video expanded to 6 s of slo-mo captured at 960 fps. Samsung promotes the memory buffer as beneficial to still photography mode where higher speed readout can reduce motion artifacts and facilitate multi-frame noise reduction.
"


The iFixit reverse engineering report includes nice pictures showing the changing aperture on the wide-angle rear camera:

3DInCites Awards

Phil Garrou's IFTLE 374 reviews 3DInCites Award winners. Two of them are related to image sensors:

Device of the Year: OS05A20 Image Sensor with Nyxel Technology, OmniVision:

"OmniVision’s OS05A20 Image Sensor was nominated for being the first of its image sensors to be built with Nyxel ™ Technology. This approach to near-infrared (NIR) imaging combines thick-silicon pixel architectures with careful management of wafer surface texture to improve quantum efficiency (QE), and extended deep trench isolation to help retain modulation transfer function without affecting the sensor’s dark current. As a result, this image sensor sees better and farther under low- and no-light conditions than previous generations."

Engineer of the Year: Gill Fountain, Xperi:

"Known as Xperi’s guru on Ziptronix’ technologies, Gill was nominated for his most recent contribution, expanding the chemical mechanical polishing process window for Cu damascene from relatively fine features. His team developed a process that delivers uniform, smooth Cu/Ta/Oxide surfaces with a controlled Cu recess with very small variance across wafer sizes. He has been an integral part of Xperi’s technical team and his work allows the electronics industry to apply direct bond interconnect (DBI) for high-volume wafer-to-wafer applications."

Interview with Steven Sasson

IEEE publishes an interview with Steven J. Sasson who invented the first digital camera in 1975 while working at Eastman Kodak, in Rochester, N.Y. A notable Q&A:

Q: What tech advance in recent years has surprised you the most?

A: Cameras are everywhere! I would have never anticipated how ubiquitous the imaging of everything would become. Photos have become the universal form of casual conversation. And cameras are present in almost every type of environment, including in our own homes. I grossly underestimated how quickly it would take for us to get here.

Beer Identification with Hamamatsu Micro-spectrometer

Hamamatsu publishes a beer identification article showing it as an application for its micro-spectrometers:


Forza Silicon Applies Machine Learning to Production Yield Improvement

BusinessWire: Forza Silicon CTO, Daniel Van Blerkom, is to present a paper titled “Accelerated Image Sensor Production Using Machine Learning and Data Analytics” at Image Sensors Europe 2018 in London on March 15, 2018.

Machine learning has been applied to sensor data sets to identify and measure critical yield-limiting defects. “Image sensors offer the unique opportunity to image the yield-limiting defect mechanisms in silicon,” said Daniel Van Blerkom. “By applying machine learning to image sensor test procedures, we’re able to quickly and easily classify sensor defects, identify root causes, and feed the results back to improve the process, manufacturing flow and sensor design for our clients.”

ON Semi Announces X-Class CMOS Image Sensor Platform

BusinessWire: ON Semiconductor announces the X-Class image sensor platform, which allows a single camera design to support multiple sensors across the platform. The first devices in the new platform are the 12MP XGS 12000 and the 4K/UHD-resolution XGS 8000 sensors for machine vision, intelligent transportation systems, and broadcast imaging applications.

The X-Class image sensor platform supports multiple CMOS pixel architectures within the same image sensor frame. This allows a single camera design to support multiple product resolutions and different pixel functionality, such as larger pixels that trade resolution at a given optical format for higher imaging sensitivity, designs optimized for low noise operation to increase DR, and more. By supporting these different pixel architectures through a common high bandwidth, low power interface, camera manufacturers can leverage existing parts inventory and accelerate time to market for new camera designs.

The initial devices in the X-Class family, the XGS 12000 and XGS 8000, are based on the first pixel architecture to be deployed in this platform – a 3.2 µm global shutter CMOS pixel. The XGS 12000 12 MP device is planned to be available in two speed grades – one that fully utilizes 10GigE interfaces by providing full resolution speeds up to 90 fps, and a lower price version providing 27 fps at full resolution that aligns with the bandwidth available from USB 3.0 computer interfaces. The XGS 8000 is also planned to be available in two speed grades (130 and 75 fps) for broadcast applications.

“As the needs of industrial imaging applications such as machine vision inspection and industrial automation continue to advance, the design and performance of the image sensors targeting this growing market must continue to evolve,” said Herb Erhardt, VP and GM, Industrial Solutions Division, Image Sensor Group at ON Semiconductor. “With the X-Class platform and devices based on the new XGS pixel, end users have access to the performance and imaging capabilities they need for these applications, while camera manufacturers have the flexibility they require to develop next-generation camera designs for their customers both today and in the future.”

The XGS 12000 and XGS 8000 will begin sampling in 2Q2018, with production availability scheduled for 3Q2018. Additional devices based on the 3.2 µm XGS pixel, as well as products based on other pixel architectures, are planned for the X-Class family in the future.


BusinessWire: ON Semiconductor also announces a fully AEC-Q100 qualified version of its circa-2016 2.1 MP CMOS sensor, the AR0237, for the OEM-fitted dash cam and before-market in-car DVR markets.

The AR0237AT is a cost-optimized, automotive qualified version of the same sensor that can operate across the full automotive operating temperature range of -40°C to +105°C and deliver the right performance at the right price point. The low-light performance of the AR0237AT is improved when it is coupled to a Clarity+ enabled DVR processor. ON Semiconductor’s Clarity+ technology employs filtering to optimize the SNR of automotive imaging solutions, which can deliver an additional 2X increase in light capture.

Adafruit Publishes ST FlightSense Performance Data

Adafruit publishes a datasheet of its distance sensor using ST SPAD-based ToF chip VL53L0X.

Update: Upon a closer look, the official ST VL53L0X datasheet has all these tables with the performance data.

ToF Sensor Used for 3D Photometric Imaging

MDPI Sensors publishes a paper from a group of Japanese universities, "The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor" by Takuya Yoda, Hajime Nagahara, Rin-ichiro Taniguchi, Keiichiro Kagawa, Keita Yasutomi, and Shoji Kawahito. The paper proposes using a 4-tap ToF sensor developed at Shizuoka University for 3D imaging in a different way:

"The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes."

TechInsights Publishes Samsung 0.9um Tetracell Pixel Analysis

TechInsights publishes a reverse engineering report on the Samsung 0.9µm Tetracell pixel sensor:

There are many reasons we are excited about the Samsung S5K2X7SP 0.9µm Image Sensor, including Samsung’s claims about it:
  • “Slim 2X7 with Tetracell technology” (0.9µm, 24MP)
  • The first 0.9 µm generation pixels in mass production
  • Targeting both front and rear cameras
As well as its noted technology features:

Improved ISOCELL technology with deeper deep trench isolation (DTI)
  • Reduced color crosstalk
  • Expands the full-well capacity to hold more light information
  • At 0.9µm, allows a 24MP image sensor to fit in a thinner camera module
Tetracell Technology
  • Merges four neighboring pixels to work as one for better light sensitivity in low-light situations
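The four-pixel merge can be sketched as simple 2x2 binning (illustrative only; Samsung's on-chip Tetracell readout and remosaicing are more involved):

```python
# 2x2 binning: four neighboring pixels are summed into one output pixel,
# trading resolution for sensitivity. Toy sketch of the Tetracell idea.

def bin2x2(frame):
    """Sum each 2x2 block of a 2D list into one pixel (halves each dimension)."""
    return [[frame[r][c] + frame[r][c + 1] + frame[r + 1][c] + frame[r + 1][c + 1]
             for c in range(0, len(frame[0]), 2)]
            for r in range(0, len(frame), 2)]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
print(bin2x2(frame))  # → [[14, 22], [46, 54]]
```

In good light the sensor instead remosaics the quad Bayer pattern to deliver the full 24MP resolution.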

Broadcom Enters ToF Sensing Business

Broadcom-Avago unveils its first ToF sensor product. The AFBR-S50MV85G has quite a nice spec:

"The AFBR-S50 is Broadcom's multipixel distance and motion measurement sensor based on the optical time-of-flight principle. It supports up to 3000 frames per second with up to 16 illuminated pixels.

This sensor has been developed with a special focus on industrial sensing applications and gesture sensing with the need for high speed, small size and very low power consumption. Thanks to its best-in-class ambient light suppression of up to 200k Lux, it can be used in outdoor environments without problems.

The technology has been optimized to measure distances up to 10m (black target) with an accuracy of < 1 percent on a wide variety of surfaces. It works equally well on white, black, colored and metallic reflective surfaces.

The module has an integrated 850nm laser light source and uses a single voltage supply of 5V; the data is transferred via a digital SPI interface.
"


Broadcom presented the new ToF sensor in February 2018 at the Embedded World trade fair in Nuremberg, Germany:



NIT Presents Affordable HDR Sensor for Machine Vision Applications

New Imaging Technologies presents an affordable 12-bit CMOS sensor, the HV2061, with native HDR capability (140dB intra-scene and inter-scene) offering three operating modes: rolling, global, and differential (subtraction of two frames in pixel). Its performance is supposed to allow users to get local illumination in real time for many computer vision applications such as biometrics, gesture detection, sense & avoid, etc.:

Chronocam Presentation at AutoSens 2017

AutoSens publishes Chronocam-Prophesee CTO and Co-Founder Christoph Posch's presentation "Event-based vs conventional cameras for ADAS and autonomous driving applications:"

Image Sensor Performance Improvements over Time

Multianalytics Blog publishes nice videos of image sensor performance progress over the years:


Aeye Adaptive Scanning LiDAR Patents Granted

BusinessWire: AEye announces it has been awarded foundational patents for its solid state MEMS-based LiDAR. These include 71 claims covering AEye inventions ranging from an approach to dynamic scan and shot pattern for LiDAR transmissions to the ability to control and shape each laser pulse and methods for interrogating each voxel within a point cloud. These inventions are said to contribute significant performance improvements for the iDAR perception system: improving range by 400%; increasing speed by 20x; and boosting object classification accuracy while reducing laser interference.

"AEye's groundbreaking iDAR system is the first to use intelligent data capture to enable rapid perception and path planning,” said Elliot Garbus, former VP of Transportation Solutions at Intel. “Most LiDAR systems function at only 10Hz, while the human visual cortex processes at 27Hz. Autonomous vehicles need perception systems that work at least as fast as humans. iDAR is the first and only perception system to consistently deliver performance of at least 30-50Hz. Better quality information, faster. This is a game changer for the autonomous vehicle market.

“Leveraging the inventions covered by our patents, we created the world's first intelligent agile LiDAR – enabling us to interrogate a point cloud as individual voxels and control each one using multiple levers,” said Allan Steinhardt, Chief Scientist at AEye. “Traditional systems only adapt on frame size or placement. In addition to frame size and placement, Agile LiDAR – a core feature of the iDAR perception system – allows us to dynamically control frame pattern, pulse tuning, pulse shaping, pulse energy and other critical dimensions that enable embedded AI.”

AEye’s first iDAR-based product, the AE100 artificial perception system, will be available this summer to OEMs and Tier 1s launching autonomous vehicle initiatives.

The granted patents are probably US9885778 and US9897689, proposing adaptive scanning so that the laser energy is, for the most part, spent in a more economical way only on "interesting spots":

Trendforce Predicts Adoption Rate of 3D Sensing in Smartphones at 13.1% in 2018

TrendForce publishes its analysis of 3D sensing in smartphones:

"According to Peter Huang, analyst at TrendForce, there are three major technical barriers in producing 3D sensing modules at present. First, it is not easy to manufacture high-efficiency VCSELs, and the current electrical-to-optical power conversion efficiency is only about 30% on average. Second, the production of diffractive optical elements (DOE), a necessary component of Structured Light technology, and CMOS image sensor (CIS) in infrared cameras, require sophisticated technology. Third, the issue of thermal expansion also needs to be taken into consideration, making 3D sensing module assembly even more challenging. In sum, all these factors contribute to low yield of 3D sensing modules.

Therefore, it is estimated that only up to two Android phone vendors, most likely Huawei and Xiaomi, would adopt 3D sensing modules in 2018 with very limited shipments. Thus, Apple will remain the major smartphone company that adopts 3D sensing this year. It is estimated that the production volume of smartphones equipped with 3D sensing modules will reach 197 million units by the end of 2018, of which 165 million units will be iPhones. In addition, the market value of 3D sensing module in 2018 is estimated to be about US$5.12 billion, with iPhones alone accounting for 84.5% of the entire value. By 2020, the market value is estimated to reach US$10.85 billion, and the CAGR will be 45.6% from 2018 to 2020.
"

Avianization vs Dinosaurization in Image Sensor Industry

Wiley Strategic Entrepreneurship Journal publishes a paper "When dinosaurs fly: The role of firm capabilities in the ‘avianization’ of incumbents during disruptive technological change" by Raja Roy, Curba Morris Lampert, and Irina Stoyneva.

"Research Summary: We investigate the image sensor industry in which the emergence of CMOS sensors challenged the manufacturers of CCD sensors. Although this disruptive technological change led to the demise of CCD technology, it also led to avianization — or strategic renewal — for some incumbents, similar to how some dinosaurs survived the mass Cretaceous-Tertiary extinction by evolving into birds. We find that CCD manufacturers that did avianize were preadapted to the disruptive CMOS technology in that they possessed relevant complementary technologies and access to in-house users that allowed them to strategically renew themselves.

Managerial Summary: We investigate the transition from CCD to CMOS image sensors in the digital image sensor industry. Although the emergence of CMOS sensors was disruptive to CCD sensors, we find that CCD sensor manufacturers such as Sony and Sharp successfully transitioned to manufacturing CMOS sensors. Contrary to popular press and prior academic research characterizing disruptive change as being a source of failure for large firms, our research reveals that firms that possess relevant complementary technologies and have access to in-house users are able to strategically renew themselves in the face of a disruptive threat."

While the main paper is behind a paywall, the supplementary material is openly available.

The complementary technologies (CT) are said to enable the CCD companies to win a place on CMOS sensor market:
  • Global or electronic shuttering
  • Microlenses
  • CDS
  • Lightpipe or light shield
  • Hole Accumulation Diode (HAD)

Another key condition for a successful transition to CMOS technology is access to in-house users. It is used to explain Kodak's demise:

"The lack of access to in-house users at Kodak was consistent with its corporate strategy. According to George Fisher, ex-CEO, Eastman Kodak was a ‘horizontal firm because in a digital world, it is much more important to pick out horizontal layers where you have distinctive capabilities. In the computer world, one company specializes in microprocessors, one in monitors, and another in disk drives’ (Galaza and Fisher, 1999: 46). Chinon was eventually acquired by Kodak in 2004 (Eastman Kodak Company, 2004a) and continued to design and manufacture the point-and-shoot cameras."

Reticon/EG&G, Tektronix, and Ford Aeronutronic used to have access to in-house users but lacked relevant CTs. "We find that Reticon/EG&G, Tektronix, and Aeronutronic Ford failed to avianize themselves during the disruptive change to CMOS sensors from CCD sensors."

Active Sensing in Automotive Applications

AutoSens publishes the presentation "Active sensing technologies for automotive applications" by Panasonic's Soeren Molander:
