Archives for September 2016

Galaxy Note 7 Features 3 Cameras

Image Sensors World        Go to the original article...

Chipworks teardown reveals that Samsung Galaxy Note 7 uses a dedicated camera for iris scanning feature, in addition to the usual front and rear cameras:

Note 7 Iris Scanner

Go to the original article...

Basler ToF Camera Nears Production

Image Sensors World        Go to the original article...

Basler is about to start production of its first ToF camera, based on the Panasonic MN34902BL CCD:

Go to the original article...

InVisage Launches Spark4K "Micro-LiDAR" for Drones

Image Sensors World        Go to the original article...

BusinessWire: InVisage says it achieves LiDAR-like performance in its Spark Micro-LiDAR (SML20) structured light module. The previously announced Spark4K 13MP, 1.1um NIR sensor enables the SML20 module to sense structured light patterns at a range of 20m, even in direct sunlight. With a sensor module measuring 8.5 x 8.5 x 4.5 mm, the SML20 fits drones and other mobile autonomous devices that require a lighter, more power-efficient alternative to conventional LiDAR without the limitations of ultrasonic and stereo-camera depth sensing systems.

“In order to perform autonomously at a high flight speed of 20 meters per second, drones and other unmanned vehicles require at least half a second to recognize an upcoming obstacle and another half a second to change trajectory or decelerate in order to avoid it. This means accurate ranging at 20 meters is crucial,” said Jess Lee, InVisage President and CEO. “SML20 is the only solution enabling obstacle avoidance at that distance without being weighed down by a traditional bulky LiDAR.”
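
A quick back-of-the-envelope check of the figures in the quote above, as a minimal Python sketch; the flight speed and the two half-second intervals are the ones InVisage cites, nothing else is added:

```python
# Rough check of the ranging requirement quoted above: a drone flying at 20 m/s
# that needs ~0.5 s to recognize an obstacle and ~0.5 s to maneuver covers the
# full 20 m sensing range during that time.

speed_mps = 20.0          # flight speed quoted by InVisage
t_recognize_s = 0.5       # time to recognize an obstacle
t_react_s = 0.5           # time to change trajectory or decelerate

distance_needed_m = speed_mps * (t_recognize_s + t_react_s)
print(f"Minimum ranging distance: {distance_needed_m:.0f} m")  # -> 20 m
```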

Many obstacle avoidance systems have turned to mechanical and solid state LiDAR for depth sensing, but conventional LiDARs place high demands on weight, size and power budgets, making them unsuitable for drones. The cost of LiDAR can range from hundreds to tens of thousands of dollars per unit. Ultrasonic sensors and stereo cameras do offer more compact form factors than LiDAR, but ultrasonic systems offer only a sub-five-meter range and stereo cameras have high CPU demands and ranging capabilities limited by camera disparity. The SML20 eliminates the need to compromise, delivering effective collision avoidance with small size, minimal weight, and all-inclusive power consumption between 200 and 500 mW on average, depending on the range requirements of the application.

Single-camera obstacle avoidance systems use structured light to map their environments in 3D. By pairing Spark NIR sensors with lasers emitting a specific pattern of light, the module captures depth maps by detecting modifications to that pattern. SML20 delivers QuantumFilm’s increased sensitivity to 940nm NIR light (said to be five times that of silicon) at a 1.1um pixel size. This allows autonomous devices to perceive their surroundings with an accurate depth map fused with the sharpness of 4K 30fps video previously reserved for cinema cameras, in contrast to the limited information in the series of dotted outlines offered by LiDAR.
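
To make the depth-mapping idea above concrete, here is a minimal sketch of how a structured-light module can turn pattern shifts into depth by triangulation. The focal length and baseline below are hypothetical illustration values, not SML20 specifications, and InVisage has not published its actual algorithm:

```python
# Illustrative triangulation for a structured-light module: a projector and a
# camera separated by a baseline observe a dot pattern; the shift (disparity)
# of each dot relative to a reference pattern encodes distance as Z = f*B/d.

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        return float("inf")  # no measurable shift -> beyond usable range
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: 1.1 um pixels behind a ~3.6 mm lens give a focal length
# of roughly 3270 px; assume a 5 cm projector-camera baseline.
f_px = 3.6e-3 / 1.1e-6
baseline = 0.05
for d in (100, 10, 1):  # dot shifts in pixels
    print(f"disparity {d:>3} px -> depth {depth_from_disparity(d, f_px, baseline):6.1f} m")
```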

Conventional structured light cameras have struggled to perform accurately outdoors or in bright sunlight because more than half of sunlight is in the infrared spectrum. In the resulting wash of infrared, silicon-based camera sensors easily saturate and fail to detect the structured light patterns their devices emit. Optimized for the invisible NIR 940-nanometer wavelength, SML20 takes advantage of the fact that water in the atmosphere absorbs most of the 940nm IR light in sunlight, minimizing solar interference with structured light systems.

In combination with this wavelength optimization, SML20’s 1.1um pixels have a global electronic shutter — said to be the only one of its kind at this pixel size. With global shutter, a structured light source can be pulsed in sync with a fast exposure, allowing for 20m ranging with high solar irradiance rejection while remaining eye-safe and low power.
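
A back-of-the-envelope illustration of why the pulsed, globally-shuttered scheme described above helps in sunlight: ambient light is integrated only during the short exposure, while the same average (eye-safe) laser power can be concentrated into that window. The frame time and exposure below are assumed numbers, not InVisage specifications:

```python
# Signal-to-ambient gain from pulsing the laser in sync with a short
# global-shutter exposure, compared with continuous illumination at the same
# average power: the frame's laser energy is packed into the exposure window,
# while sunlight is still integrated only over the exposure.

frame_time_ms = 33.3   # 30 fps frame period
exposure_ms = 1.0      # assumed short, laser-synced global-shutter exposure

pulsed_peak_gain = frame_time_ms / exposure_ms  # laser energy concentration factor
print(f"Signal-to-ambient improvement from pulsing: ~{pulsed_peak_gain:.0f}x")
```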

The SML20 is said to be only the beginning — extended range options at 100 meters and beyond are promised in the coming quarters.

Go to the original article...

e2v High Sensitivity 1.3MP Sensors

Image Sensors World        Go to the original article...

e2v publishes a promotional video of its Ruby 1.3MP CMOS sensors:

Go to the original article...

VR and AR Industry Report, 2016-2020

Image Sensors World        Go to the original article...

ResearchInChina releases "Global and China VR and AR Industry Report, 2016-2020." Some predictions:

"VR, which provides immersive closed-loop experience, can only be used to a limited extent due to the size and weight of head-mounted devices, and will still be the main force of the market before consumer-grade AR products are achieved in the aspects of technology and prices of hardware. However, it is expected AR hardware and technology will be fully mature and quickly capture VR market in 2019 because of its wide applicability in business market. Hence, global VR/AR market will reach USD970 million and USD500 million in 2016 and USD30 billion and USD90.8 billion in 2020, respectively."

AR Trends:
  • Fusion of AR technology with 3D visualization and projection technology brings disruptive changes to navigation;
  • Seamless integration of data visualization with users’ wearable devices;
  • Gesture interaction becomes more mature; the relationship between humans and technology will be redefined, with body language interacting naturally with technology products;
  • Smart showrooms, smart tourism, and AR theme parks will develop further on the existing basis and be presented to target audiences;
  • Smart glasses will be the mainstream of AR hardware, ushering in the coming era of “de-cellphone-ization”;
  • AR software will be more oriented toward consumers, with growing everyday use in shopping, entertainment, and education.

Go to the original article...

Qualcomm Unveils Its Dual Camera Technology

Image Sensors World        Go to the original article...

Qualcomm Clear Sight dual camera technology is powered by the Qualcomm Spectra ISP and is designed to give photos improved DR, better sharpness, and less noise in low light. The two camera lenses have identical focal lengths (so they cover the same field of view), but each camera has a different image sensor: one color image sensor and a separate black-and-white image sensor, which can absorb more light.

The B&W sensor cannot capture color, but its light-gathering ability is about 3x higher, which helps in low-light environments. Without the color filter, black-and-white photos have much better contrast, and in low light they show less noise and improved sharpness.

Clear Sight technology is supported by the Snapdragon 820 and 821 processors featuring Spectra ISPs. Clear Sight consists of a single, fully integrated hardware module containing the two cameras, plus computational low-light imaging algorithms running on the Qualcomm Spectra ISPs: the two cameras take photos at exactly the same time, and the photos are merged instantaneously, with good image quality even in low light.
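
One common way such mono+color fusion can work is to keep the chrominance of the color camera and swap in the cleaner luminance of the B&W camera. The sketch below illustrates that generic idea only; Qualcomm has not published the actual Clear Sight algorithm, and the function name and blend parameter are hypothetical:

```python
import numpy as np

def fuse_mono_color(color_rgb, mono, blend=1.0):
    """Illustrative mono+color fusion (not Qualcomm's actual method).
    color_rgb: HxWx3 float in [0,1]; mono: HxW float in [0,1]; frames assumed pre-registered."""
    # Split the color image into luma (BT.601 weights) and chroma residuals
    luma = color_rgb @ np.array([0.299, 0.587, 0.114])
    chroma = color_rgb - luma[..., None]
    # Replace (or blend toward) the mono camera's cleaner luminance
    fused_luma = (1 - blend) * luma + blend * mono
    return np.clip(chroma + fused_luma[..., None], 0.0, 1.0)

# Toy usage with random data standing in for two registered frames
rgb = np.random.rand(4, 4, 3)
bw = np.random.rand(4, 4)
print(fuse_mono_color(rgb, bw).shape)  # (4, 4, 3)
```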

Go to the original article...

Nikon 105mm f1.4E – the brightest 105mm telephoto reviewed!

Camera Labs and DSLR Tips latest news and reviews        Go to the original article...

The Nikon 105mm f1.4E is a unique telephoto lens boasting the brightest aperture in its class. Previously the longest commonly available lenses with an f1.4 focal ratio stopped at 85mm, and while a 135mm f1.4 has been announced by Mitakon, it's limited to 100 units and not yet available. So right now, Nikon can boast the longest f1.4 lens and it also sports an electromagnetic diaphragm (hence the E label) and fluorine coatings for easier cleaning. As you might expect, the lens isn't exactly cheap, so the question is whether the quality - and in particular the bokeh rendering - is worth it. Find out in our Nikon 105mm f1.4E review!

Go to the original article...

SK Hynix to Start 12-inch Wafer CIS Production in 2017

Image Sensors World        Go to the original article...

ETNews: SK Hynix plans to start manufacturing its 13MP CIS in 2017 at its 300mm fab, called M10, located in Icheon, Korea. The company is currently installing the production equipment. So far, Hynix image sensors have been produced at the 200mm M8 fab. Since the 13MP dies are larger, SK Hynix believes that production would be more profitable on the 300mm fab.
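
A rough illustration of the economics behind that move, using a common dies-per-wafer approximation; the die size below is an assumed figure, not an SK Hynix number:

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Common approximation: DPW = pi*d^2/(4*A) - pi*d/sqrt(2*A).
    The second term is the edge loss, which hurts more as dies get larger."""
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * d**2 / (4 * a) - math.pi * d / math.sqrt(2 * a))

die_area = 25.0  # hypothetical 13MP sensor die area in mm^2
for dia in (200, 300):
    print(f"{dia} mm wafer: ~{gross_dies_per_wafer(dia, die_area)} gross dies")
```

With a larger die, the 300mm wafer yields slightly more than the 2.25x area ratio would suggest, because relatively less area is lost at the wafer edge.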

Samsung, Huawei, LG, and other smartphone manufacturers are using SK Hynix’s 8MP CIS for their low and medium-priced smartphones.

SK Hynix is also said to have recently increased the number of employees working on image sensors. ETNews says: "A 'Master'-level engineer, who had been recognized as the top technical specialist in the CIS field within Samsung Electronics, has recently started working for SK Hynix."

According to TSR, global CIS sales in 2015 were about $9.162 billion. Sony (44.8%) leads the market, followed by Samsung (16.5%) and OmniVision (13.2%). SK Hynix (3.7%) stands in 6th place, behind ON Semi (which acquired Aptina Imaging; 6.1%) and Canon (5.3%).

Go to the original article...

Image Sensor Stacking and Packaging Review

Image Sensors World        Go to the original article...

IFTLE 303 reviews the progress in stacked image sensors over the past few years, covering Sony, Samsung, Omnivision, Aptina, Olympus, and other companies.

Go to the original article...

Basler Launches Imaging Hub

Image Sensors World        Go to the original article...

Basler partners with Advantech, NVIDIA, and Xilinx to launch imaginghub.com, an online community portal for engineers, software developers, and hobbyists interested in embedded vision and potential applications, ranging from beginner to professional level.

The portal is supposed to pick up on the embedded trend emerging in vision applications and aims to make embedded vision less expensive by bringing developers together to work on joint projects and share their knowledge. There are different areas, such as Projects, Partners, and Forums. The Technologies forum is envisioned as a place where people can discuss sensor features and sensor implementation, among other topics.

Go to the original article...

Tesla Abandons Camera as the Primary Sensor for Self-Driving

Image Sensors World        Go to the original article...

Tesla announces a change of direction of its self-driving car development:

"The radar was added to all Tesla vehicles in October 2014 as part of the Autopilot hardware suite, but was only meant to be a supplementary sensor to the primary camera and image processing system.

After careful consideration, we now believe it can be used as a primary control sensor without requiring the camera to confirm visual image recognition. This is a non-trivial and counter-intuitive problem, because of how strange the world looks in radar. Photons of that wavelength travel easily through fog, dust, rain and snow, but anything metallic looks like a mirror. The radar can see people, but they appear partially translucent. Something made of wood or painted plastic, though opaque to a person, is almost as transparent as glass to radar.

On the other hand, any metal surface with a dish shape is not only reflective, but also amplifies the reflected signal to many times its actual size. A discarded soda can on the road, with its concave bottom facing towards you can appear to be a large and dangerous obstacle, but you would definitely not want to slam on the brakes to avoid it.

Therefore, the big problem in using radar to stop the car is avoiding false alarms. ...The first part of solving that problem is having a more detailed point cloud.

...The second part consists of assembling those radar snapshots, which take place every tenth of a second, into a 3D "picture" of the world.

...The third part is a lot more difficult. When the car is approaching an overhead highway road sign positioned on a rise in the road or a bridge where the road dips underneath, this often looks like a collision course. The navigation data and height accuracy of the GPS are not enough to know whether the car will pass under the object or not. By the time the car is close and the road pitch changes, it is too late to brake.

This is where fleet learning comes in handy. Initially, the vehicle fleet will take no action except to note the position of road signs, bridges and other stationary objects, mapping the world according to radar. The car computer will then silently compare when it would have braked to the driver action and upload that to the Tesla database. If several cars drive safely past a given radar object, whether Autopilot is turned on or off, then that object is added to the geocoded whitelist.
"

Go to the original article...

KGI Reveals iPhone 7 Plus Dual Camera OIS Details

Image Sensors World        Go to the original article...

9to5Mac quotes KGI Securities analyst Ming-Chi Kuo as saying that in the iPhone 7 Plus dual camera, OIS is installed only on the wide-angle camera:

"Note that for the dual-camera of iPhone 7 Plus, wide-angle CCM is equipped with optical image stabilization (OIS) VCM, while telephoto CCM only comes with general VCM. We believe the focus of the dual-camera upgrade will be equipping telephoto CCM with OIS CCM, so as to significantly enhance optical and digital zoom quality."

MacRumors also quotes the Ming-Chi Kuo research note:

"While an attractive addition for avid picture takers and professional photographers, the iPhone 7 Plus dual-camera is not a mass-market killer application yet. Along with its high cost (estimated at over US$30-40) and the necessity for Apple to enhance the added value of high-end iPhone models, we expect only high-end new iPhone models (30-40% of them) to have a dual-camera next year."

Go to the original article...

Framos and Pyxalis Partner on Customized Sensor Design

Image Sensors World        Go to the original article...

Pyxalis and Framos partner to offer a customizable image sensor design based on the Pyxalis HDPYX sensor. Designed as a modular platform rather than a fully custom sensor, the device is made to adapt to various types of scientific and surveillance applications. It is scalable in the number of spectral bands, and dedicated grading filters can be deposited directly on the sensor if required. In terms of packaging, the basic package is a BGA type, but custom ceramic is also offered. The sensor can also be tuned in terms of wavelength: thick EPI for NIR, ultra-thick EPI for direct X-ray, and backside-illuminated versions are all currently envisaged. Because the platform is designed to be modified, each derivative is much cheaper than a full-custom version while maintaining unique performance parameters.

The sensor platform was manufactured in TowerJazz 180nm technology and characterized in the Pyxalis lab.

The HDPYX platform integrates multiple HDR techniques to reach up to 120dB of linear DR. Combining dual gain with a charge-conservation pixel allows the HDPYX sensor to capture images linearly up to 90dB. To reach 120dB, the HDPYX sensor uses two different integration times, one for odd lines and one for even lines. Both integration times end at the same moment, with the transfer of the charge from the photodiode to the floating diffusions.
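
To illustrate how such a line-interleaved capture can be merged, here is a minimal sketch of one possible reconstruction. It is an assumed scheme, not Pyxalis' actual pipeline: which parity carries the long exposure, the saturation threshold, and the exposure ratio are all illustrative choices:

```python
import numpy as np

def reconstruct_hdr(raw, ratio):
    """Illustrative line-interleaved HDR merge (assumed mapping: even rows = long
    exposure, odd rows = short exposure). ratio = long/short integration time.
    Returns a linear HDR estimate at half vertical resolution."""
    long_rows = raw[0::2].astype(float)
    short_rows = raw[1::2].astype(float) * ratio   # rescale to long-exposure units
    sat = 0.9 * raw.max()                          # crude saturation threshold
    # Where the long exposure clips, fall back on the rescaled short exposure
    return np.where(raw[0::2] < sat, long_rows, short_rows)

frame = np.random.randint(0, 4096, size=(8, 8))    # toy 12-bit raw frame
print(reconstruct_hdr(frame, ratio=16).shape)      # (4, 8)
```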

Second, a unique feature is the true dual-core architecture: two 32-bit microprocessors operate the device. The frame management core takes care of every aspect linked to the operating mode of the sensor, such as shutter type, windowing, subsampling, and integration time calculation. The line management core takes care of black level correction, ADC driving, data formatting, and HDR reconstruction.

Pyxalis presentation at EI2016 gives more details on the new platform:

Go to the original article...

How Are Cameras Assembled and Tested

Image Sensors World        Go to the original article...

TriOptics publishes a promotional video on its ProCam automated camera module assembly and testing line:



ARMDevices publishes a tour of the Shuoying camera factory in Shenzhen, China. Shuoying is a large assembly house for action cameras, security IP cameras, and single-lens and dual-lens 360-degree cameras.

Go to the original article...

2017 IISW Call for Papers

Image Sensors World        Go to the original article...

The 2017 International Image Sensor Workshop (IISW), to be held in Hiroshima, Japan, on May 30 - June 2, 2017, calls for papers presenting innovative work in the area of solid-state image sensors and sharing new results with the imaging community. The workshop is intended for image sensor technologists and has limited attendance. As in previous years, the workshop will emphasize active interaction and encourage the exchange of information among participants in an informal and open atmosphere at a great venue.

The scope of the workshop includes all aspects of electronic image sensor design and development. In addition to regular oral and poster papers, the workshop will include invited talks and announcement of International Image Sensors Society (IISS) Award winners.

Papers on the following topics are solicited:

Image Sensor Design and Performance:
  • CMOS imagers, CCD imagers, APD arrays.
  • New and disruptive architectures
  • Global shutter image sensors
  • Low noise readout circuitry, ADC designs
  • Single photon sensitivity sensors
  • High frame rate image sensors
  • High dynamic range sensors
  • Low voltage and low power imagers
  • High image quality. Low noise. High sensitivity
  • Improved color reproduction
  • Non-standard color patterns with special digital processing
  • Imaging system-on-a-chip, On-chip image processing
Pixels and Image Sensor Device Physics:
  • New devices and pixel structures
  • Advanced materials
  • Ultra-miniaturized pixel development, testing, and characterization
  • New device physics and phenomena
  • Electron multiplication pixels
  • Techniques for increasing QE, well capacity, reducing crosstalk, and improving angular response
  • Front side illuminated and back side illuminated pixels and pixel arrays
  • Pixel simulation: Optical and electrical simulation, 2D and 3D, CAD for design and simulation
  • Improved models
Application Specific Imagers:
  • Image sensors and pixels for range sensing: TOF, RGBZ, Structured light, Stereo imaging, etc.
  • Image sensors with enhanced spectral sensitivity (NIR, UV, IR)
  • Sensors for DSC, DSLR, mobile, digital video cameras and mirror-less cameras
  • Array imagers and sensors for multi-aperture imaging and computational Imaging
  • Sensors for medical applications, microbiology, genome sequencing
  • High energy photon and particle sensors (X-ray, radiation).
  • Line arrays, TDI, Very large format imagers
  • Multi and hyperspectral imagers
  • Polarization sensitive imagers
Image sensor manufacturing and testing:
  • New manufacturing techniques
  • Backside thinning
  • Stacked imagers, 3D integration
On-chip optics:
  • Advanced optical path, Color filters, Microlens, Light guide
  • Nanotechnologies for Imaging
  • Wafer level cameras
Packaging and testing:
  • Reliability, Yield, Cost
  • Defects. Leakage current.
  • Radiation damage and radiation-hard imagers

Abstracts should be submitted electronically to the Technical Program Chair, Shoji Kawahito by January 19, 2017 (JST).

Grand Prince Hotel Hiroshima

Go to the original article...

Corephotonics Offers Dual Lens Camera Technology for Licensing

Image Sensors World        Go to the original article...

EETimes interviews Eran Kali, VP of licensing at Corephotonics:

Kali told EE Times that Corephotonics is “the inventor of the computational dual camera for smartphones” derived from its own IPs.

Apple, however, is not Corephotonics’ licensee, Kali said.

Refraining from discussing any potential IP issues, Kali explained that for Corephotonics, essentially an IP supplier, Apple’s iPhone 7 Plus launch is cause for “celebration, not a confrontation” [with Apple]. “We are glad that Apple… is not just confirming but validating our idea, which was once considered so radical,” he added.

In Kali’s opinion, Corephotonics’ dual camera solutions — the company has three models — have already progressed beyond iPhone 7 Plus’ current status. He explained that Corephotonics offers 2X optical zoom, continuously seamless digital zoom, an optical stabilizer, better depth of field, and enhanced imaging in low light — all squeezed into the smartphone’s most constrained form factor — the height — at 6.0mm or lower.


Go to the original article...

Apple Presents iPhone 7 Plus Dual Camera

Image Sensors World        Go to the original article...

Techcrunch: As expected, Apple officially presents the iPhone 7 Plus with a dual rear camera offering optical zoom: one lens handles 1x, the other 2x:


Apple uses the two cameras to create a shallow DOF, DSLR-quality bokeh, and a depth map of the image:


The depth-enabled features will come later this year as part of a free update, so it sounds like they won't be available at launch.

Meanwhile, the smaller iPhone 7 comes with a single OIS 12MP camera featuring a new image sensor that is 40% faster, and an Apple-designed ISP:


For each 12MP photo, the iPhone ISP performs 100 billion operations in 25ms:
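
Quick arithmetic on that figure, as a tiny sketch: 100 billion operations in 25 ms corresponds to a sustained throughput of roughly 4 trillion operations per second.

```python
# 100e9 operations / 25 ms -> sustained ISP throughput in TOPS
ops = 100e9
time_s = 25e-3
print(f"{ops / time_s / 1e12:.0f} TOPS")  # -> 4 TOPS
```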


Update: KTVU publishes the camera part of Apple presentation:

Go to the original article...

Autoliv and Volvo Establish Autonomous Driving JV

Image Sensors World        Go to the original article...

Autoliv and Volvo are to form a new jointly-owned company to develop next generation autonomous driving software. The planned new company will have its headquarters in Gothenburg, Sweden, and an initial workforce taken from both companies of around 200, increasing to over 600 in the medium term. The company is expected to start operations in early 2017.

Once finalized, the joint venture will be a new entrant in the growing global market for autonomous driving software systems. It will mark the first time a leading premium car maker has joined forces with a tier one supplier to develop new ADAS and autonomous driving (“AD”) technologies. The new company, which has yet to be named, will develop ADAS and AD systems for use in Volvo Cars and for sale exclusively by Autoliv to all car makers globally, with revenues shared by both companies.

Both Autoliv and Volvo will provide IP for their ADAS systems to the JV. The new company is expected to have its first ADAS products available for sale by 2019 with AD technologies available by 2021.

Go to the original article...

Intel to Acquire Movidius

Image Sensors World        Go to the original article...

Intel announces its intention to acquire Movidius. Intel SVP of New Technologies Group Josh Walden says: "We see massive potential for Movidius to accelerate our initiatives in new and emerging technologies. The ability to track, navigate, map and recognize both scenes and objects using Movidius’ low-power and high-performance SoCs opens opportunities in areas where heat, battery life and form factors are key. Specifically, we will look to deploy the technology across our efforts in augmented, virtual and merged reality (AR/VR/MR), drones, robotics, digital security cameras and beyond. Movidius’ market-leading family of computer vision SoCs complements Intel’s RealSense™ offerings in addition to our broader IP and product roadmap."

Remi El-Ouazzane, Movidius CEO, says "Movidius’ mission is to give the power of sight to machines. As part of Intel, we’ll remain focused on this mission, but with the technology and resources to innovate faster and execute at scale. We will continue to operate with the same eagerness to invent and the same customer-focus attitude that we’re known for, and we will retain Movidius talent and the start-up mentality that we have demonstrated over the years."


While the acquisition price has not been officially disclosed, Irish startup tracking site Fora reports that "the conditional offer is set to value Movidius at more than €300 million." The 10-year-old company has raised more than $85M in several funding rounds so far.

The Irish Times reports that the fundraising round last year valued Movidius at about €250 million. "The company, which was co-founded by Sean Mitchell and David Moloney, now employs 140 staff across its global operations.

It recently announced plans to create 100 additional jobs in Dublin after raising an additional $40 million from new investors.

The last available accounts for the company show it made a $15 million loss in 2014, bringing accumulated losses to €63 million from €47.9 million a year earlier.
"

Go to the original article...

ULIS Launches Mass Market 80×80 Pixel Thermal Sensor

Image Sensors World        Go to the original article...

ALA News: ULIS launches the 80x80 pixel Micro80 Gen2 thermal sensor aimed at large-volume applications. It features novel packaging: it is the first Ball Grid Array (BGA) infrared sensor to be packaged in a JEDEC tray. It is designed using Unique Wafer Level Packaging (UWLP) with vacuum technology, which allows it to support optical fields of up to 120°. It is also the first infrared sensor with a plastic lens holder, eliminating the need for users to develop their own, thus saving time and lowering costs.

The new Micro80 Gen2 consumes less than 55mW. This further extends the battery life and the operating temperature range (-40°C to +85°C), while being more compact and lighter than earlier models. It supports a broad spectrum of frame rates (from 1Hz to 50 Hz) and allows vision up to 150 meters.

“These new and improved features of the Micro80 Gen2 address the needs of large-volume production processes. This means that it is not only ideal for the small-resolution thermography and short-distance surveillance markets, but can also open up new industries for ULIS,” said Cyrille Trouilleau, product manager at ULIS. “The introduction of these novel characteristics is the first step towards the widespread use of thermal sensors in smart building management systems.”


Go to the original article...

Sony Kumamoto Fab Impacted by Another Earthquake

Image Sensors World        Go to the original article...

Corrected: Sony Kumamoto fab has been impacted by another earthquake on Aug. 31, 2016:

"The impact of the earthquakes that occurred at 7:46 PM (local time) on August 31 and at 6:33 AM (local time) on September 1, 2016 in the Kumamoto region of Kumamoto Prefecture is as follows:

Operations at Sony Semiconductor Manufacturing Corporation's Kumamoto Technology Center (located in Kikuchi Gun, Kumamoto Prefecture), which primarily manufactures image sensors for digital cameras and security cameras, were halted in order to inspect the site's building and manufacturing equipment. The site's building and manufacturing equipment did not sustain any damage. Work is currently underway to sequentially restart the manufacturing equipment, and production is expected to resume during the morning of September 3, 2016.
"

Thanks to KP for the correction to the original post!

Go to the original article...

Sharp Introduces Faster CCDs

Image Sensors World        Go to the original article...

Sharp has updated its lineup of CCDs for security applications with faster devices:

Go to the original article...

EETimes: Why Dual Cameras Are Better?

Image Sensors World        Go to the original article...

EETimes publishes Corephotonics project manager Roy Fridman's opinion on why dual cameras in smartphones might have an advantage:

  • By using such dual cameras, smartphone manufacturers are able to support extremely advanced imaging features while keeping the solution slim (below 5mm in height), lightweight, and robust.
  • When fusing the B/W image with the color image, the result is significantly enhanced, even in extreme low-light conditions.
  • Dual-camera zoom uses a combination of wide and tele lenses to achieve real optical zoom, similar to the electromechanical zoom used in professional DSLR cameras (see the sketch after this list).
  • Another major advantage of having dual cameras is the ability to sense depth.
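
As a rough illustration of the wide+tele zoom point above, here is a minimal Python sketch of a generic camera-selection scheme. It is an assumed, simplified scheme, not Corephotonics' actual algorithm; the 2x tele factor is merely typical of such dual-camera designs:

```python
# Generic wide+tele zoom handover: below the tele lens' native magnification,
# digitally crop the wide camera; at and beyond it, switch to the tele camera,
# so the 2x step is optical rather than interpolated.

TELE_FACTOR = 2.0   # assumed tele/wide focal-length ratio

def select_camera(zoom):
    """Return which camera to use and the residual digital crop factor."""
    if zoom < TELE_FACTOR:
        return "wide", zoom                 # digital zoom on the wide camera
    return "tele", zoom / TELE_FACTOR       # digital zoom on top of 2x optical

for z in (1.0, 1.5, 2.0, 4.0):
    cam, crop = select_camera(z)
    print(f"{z:.1f}x -> {cam} camera, {crop:.2f}x digital crop")
```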

Go to the original article...

Leica 12mm f1.4 – a high-end wide prime for MFT bodies!

Camera Labs and DSLR Tips latest news and reviews        Go to the original article...

The Leica Summilux 12mm f1.4 is a wide-angle prime lens for the Micro Four Thirds system. Mounted on an Olympus or Panasonic body, it delivers 24mm equivalent coverage, while the f1.4 focal ratio is the brightest for this focal length in the MFT catalogue. It isn't however the first 12mm prime for the system and there are also plenty of zooms which include the 12mm focal length, so the big question is how does it measure up? To find out I made side-by-side comparisons with the Olympus 12mm f2 and 7-14mm f2.8 PRO zoom. Find out how the Summilux performs in my Leica 12mm f1.4 review!

Go to the original article...

KGI: iPhone 7 Plus Dual Camera Features

Image Sensors World        Go to the original article...

9to5Mac quotes KGI Securities analyst Ming-Chi Kuo as saying that the upcoming iPhone 7 Plus will have a dual camera. The camera is said to enable optical zoom and ‘light field camera applications’. The dual camera is made up of a wide-angle camera and a telephoto camera, both with 12MP resolution. The larger camera is said to feature OIS and a 6P lens.

Go to the original article...

Omnivision Embedded SPAD and RGB-C Patents

Image Sensors World        Go to the original article...

Omnivision patent application US20160234467 "RGBC color filter array patterns to minimize color aliasing" by Raymond Wu, Jizhang Shan, Chin Poh Pang says

"Some RGBC patterns increase sensitivity but can suffer from color aliasing. Color aliasing results in the wrong color appearing in an area of the image. For example, a color such as red or blue can appear in a part of the image that should be green. In another example of color aliasing, a small white line on a black or otherwise dark background that registers on individual pixels will be interpreted as a line containing single pixels of each of the primary colors registered. Color aliasing occurs at least partly due to the alignment of clear filters within an RGBC pattern. Image sensors with clear pixels are more prone to color aliasing because clear pixels do not produce any color information of their own other than the intensity of light."

So, the authors propose to reduce the number of clear color filters, going from the prior-art pattern in Fig. 9A to the pattern in Fig. 9B:


The company patent application US20160240579 "Stacked embedded SPAD image sensor for attached 3D information" by Tianjia Sun, Rui Wang, Tiejun Dai proposes to combine SPADs and regular RGB pixels in the same array:

Go to the original article...

Ambarella Reports Lower Revenue

Image Sensors World        Go to the original article...

SeekingAlpha: ISP manufacturer Ambarella reports lower revenue in its fiscal Q2 2017 results. A few quotes from the earnings call:

"In Q2 2017, our revenue was $65.1 million, down as forecasted of our last call about $84.2 million of revenue in the same period over the prior year. The decline in revenue was primarily due to a decline in the wearable sports camera market as we have discussed. We are seeing a near-term recovery in our China IP security camera business and are experiencing positive design win momentum in all of our current market categories.

The impact of the Sony sensor shortage was in line with our expectations, with the most significant impact in the quarter on new product launches. As most new product launches have limited on-hand sensor inventory, purchases of other components were often delayed. We estimate the loss or delay in revenue was between $2 million and $4 million for the quarter.

We now believe our customers have a better understanding of the Sony sensor recovery plan. For more established customers, we expect they will be receiving the majority of sensor deliveries by the end of our Q3. ...We didn't say they would be at full production by the end of August in the sense of clearing the entire backlog. They said they would have the lines up and running at full capacity by the end of August, and we assume it will take numerous months after that to catch up on the backlog.

...Outside China, HEVC is not popular at all due to the royalty issues. But in China we are starting to see the HEVC standard being widely adopted, and a lot of people are using it for Chinese internal consumption. So I think that HEVC is definitely a China-only market for now.
"

Go to the original article...

Qualcomm VR Reference Design Relies on 4 Cameras

Image Sensors World        Go to the original article...

Qualcomm introduces its first VR reference platform, the Snapdragon™ VR820. Based on Snapdragon 820 processor, the Snapdragon VR820 enables OEMs to quickly develop standalone head mounted displays (HMDs) optimized for VR content and applications while meeting the processing and performance demands of an all-in-one, dedicated VR headset.

The Snapdragon VR820 includes integrated eye tracking with two cameras, plus dual front-facing cameras for six degrees of freedom (6DOF) and see-through applications. The VR820 has dual Qualcomm Spectra camera ISPs combined with the Qualcomm Hexagon DSP for advanced vision features such as see-through imaging, 3D reconstruction, eye tracking, and hand gestures.

Go to the original article...

Omron OKAO Vision System in Restaurant

Image Sensors World        Go to the original article...

Omron presents a use case of its OKAO vision system in restaurants:

Go to the original article...

Smartphone Camera Resolution Trends

Image Sensors World        Go to the original article...

Counterpoint Research publishes an article on resolution trends showing that 13MP+ cameras are gaining wide adoption across all price categories:

Analyzing this trend is very important as the ‘camera megapixel race’ is kicking off in sub-premium segments, whereas premium segments have moved on. The premium segments are now competing by integrating multiple camera sensors.
  • One interesting trend is that 16MP+ sensors are being adopted faster in the $300-$500 price band (55% of total sales volumes). This is mainly driven by Chinese brands’ flagships and Samsung’s high-tier portfolio. In the premium $500+ segment, key brands such as Huawei, Samsung, and Apple are actually incorporating 12MP/13MP sensors and focusing more on other aspects such as wide-angle improvements, laser-assisted autofocus, dual cameras, OIS, as well as simply thinning the overall camera stack. Almost 90% of smartphone sales volumes in the premium segment sport a 12MP or higher camera.
  • In the high-volume, lower price bands ($100-$300), a basic but higher-resolution 13MP+ camera is becoming the go-to resolution to invigorate sales and position the models on par with flagships. Three in four smartphones sold in this high-growth $100-$300 segment sport a 13MP or higher camera.
  • In the lower-tier sub-$100 segment, 8MP+ sensors are beginning to proliferate, especially in the $50-$100 band, driven by local brands in Asia and lower-cost SKUs from Chinese brands.

Go to the original article...
