Archives for October 2018

Correction to Omnivision’s Interview

Image Sensors World        Go to the original article...

Omnivision asked me to correct the part of the Yole Developpement interview that discusses its relationship with Will Semiconductor:

"PC: Since the change in shareholders two years ago, can you explain to our readers what has changed for the company?

MW: Our IPO on the Nasdaq Exchange was in 2000. In 2016, we went private. Since September 2017, Mr. Renrong Yu has been OmniVision CEO and he also serves as Chairman of the Board of Will Semiconductor. While the companies have had a close working relationship, the Will Semiconductor company presently owns a small minority share in OmniVision. A proposed transaction is currently in the due diligence phase, and would still require regulatory approvals. Upon consummation of the proposed transaction, the combined company would then be publicly traded on the Shanghai Stock Exchange.

That said, our focus and mission at OmniVision have not changed. That mission is to enable the sensing possibilities with intelligent and reliable imaging solutions. We will continue to develop image sensors for the markets mentioned earlier with more resources and a larger sensor portfolio to capture more worldwide market share.
"

Go to the original article...

Omnivision Combines ULL with Nyxel in 2MP Sensor, Introduces NIR SNR1 Metric

Image Sensors World        Go to the original article...

PRNewswire: OmniVision announces the OS02C10, a 2MP sensor with 2.9um pixels and ultra-low-light (ULL) technology. Combining ULL with Nyxel NIR technology, the OS02C10 can detect incident light in both visible and NIR wavelengths and produce color and monochrome images for security applications.

Nyxel NIR technology in the OS02C10 gives it a QE of 60% at 850nm and 40% at 940nm, which is 2x to 4x better than competing devices. Such high QE enables the use of lower-power IR illumination in total darkness.

The amount of NIR light that a sensor requires to capture high-quality images can be quantified with a new metric called NIR SNR1, which takes into account the QE, pixel size and read noise. The OS02C10 achieves an SNR1 of 23 nW/cm2 at 850nm and 31 nW/cm2 at 940nm, which is 2x to 4x lower than the leading competitors' sensors. For the designers of security cameras that operate in total darkness, this means the IR illumination can consume 2x to 4x less power than with competitors' sensors, for the same environment and over the same image-detection range.
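The press release does not spell out how NIR SNR1 is defined, but a plausible reading is the irradiance at which a pixel reaches SNR = 1 in a single exposure. The sketch below formalizes that under a simple shot-plus-read-noise model; the read noise, exposure time and the SNR = 1 criterion itself are my assumptions, so the result will not match OmniVision's quoted 23 nW/cm2, which was measured under the company's own (unpublished) conditions.

```python
# Hypothetical illustration of an "SNR1"-style figure of merit: the NIR
# irradiance at which a single pixel reaches SNR = 1 in one exposure.
# This is NOT OmniVision's published definition (not given in the article);
# the read noise and integration time below are assumed placeholders.

import math

H = 6.626e-34   # Planck constant, J*s
C = 3.0e8       # speed of light, m/s

def snr1_irradiance(qe, pixel_um, read_noise_e, t_exp_s, wavelength_nm):
    """Irradiance (W/cm^2) giving SNR = 1 under a shot + read noise model."""
    e_photon = H * C / (wavelength_nm * 1e-9)            # photon energy, J
    area_cm2 = (pixel_um * 1e-4) ** 2                    # pixel area, cm^2
    # Solve S / sqrt(S + sigma_r^2) = 1  ->  S^2 - S - sigma_r^2 = 0
    s_electrons = 0.5 * (1 + math.sqrt(1 + 4 * read_noise_e ** 2))
    photons = s_electrons / qe                           # photons needed on the pixel
    return photons * e_photon / (area_cm2 * t_exp_s)     # W/cm^2

# The 2.9 um pixel and 60% QE at 850 nm are from the article; the 1 e- read
# noise and 33 ms exposure are assumptions, so the number differs from the
# quoted 23 nW/cm^2 measured under OmniVision's own conditions.
e_w_cm2 = snr1_irradiance(qe=0.60, pixel_um=2.9, read_noise_e=1.0,
                          t_exp_s=0.033, wavelength_nm=850)
print(f"SNR1 irradiance ~ {e_w_cm2 * 1e9:.2f} nW/cm^2")
```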

"Surveillance cameras with AI functionality need the highest possible resolution in all lighting conditions for accurate facial recognition," said Brian Fang, business development director at OmniVision. "These next-generation, intelligent-sensing surveillance systems are being enabled by our new sensor's top imaging performance, with industry-best detection capabilities under both extremely low visible light and infrared light in complete darkness."

The OS02C10 achieves an SNR1 of 0.16 lux. OmniVision's dual-conversion-gain technology allows this sensor to achieve the industry's best ULL performance, while the 3-frame staggered shutter minimizes motion artifacts and enables an HDR of 120 dB. The OS02C10 image sensor is available now.

Go to the original article...

TowerJazz Adds 65nm to its ITAR Compliant Process Family, 45nm to Follow

Image Sensors World        Go to the original article...

GlobeNewswire: TowerJazz announces that its ITAR (International Traffic in Arms Regulations) compliant processes now include 65nm technology access for next-generation ROICs, enabling essential military and space applications critical to national defense. ROICs are used for reading out IR and UV detectors in military surveillance and other applications ranging from x-ray astronomy to security and industrial inspection.

Initial products serving the US aerospace and defense community have taped out in this new 65nm offering, and additional products are in design. “We are excited to announce this new advanced capability to our customers. Adding 65nm, and soon, 45nm technology for ITAR products enables our aerospace and defense customers additional avenues to continue to create advanced products to serve this very critical market,” said Mike Scott, Director, TowerJazz USA Aerospace & Defense.

Go to the original article...

Kingpak Compares Automotive CIS Packages

Image Sensors World        Go to the original article...

A Kingpak presentation on automotive image sensor packaging compares the company's BGA offerings with the competition:

Go to the original article...

TI DLP LiDAR

Image Sensors World        Go to the original article...

A TI whitepaper, "DLP DMD Technology: LIDAR ambient light reduction," proposes using the company's DLP chip in LiDAR:

Go to the original article...

Omnivision on Merging with Will Semi, Manufacturing Partners, Pixel Shrink

Image Sensors World        Go to the original article...

Yole Developpement publishes an interview with Michael Wu, SVP Global Marketing and Sales at OmniVision. A few interesting quotes:

Update: The quote below has been edited according to Omnivision's request on Oct. 15, 2018:

"Our IPO on the Nasdaq Exchange was in 2000. In 2016, we went private. Since September 2017, Mr. Renrong Yu has been OmniVision CEO and he also serves as Chairman of the Board of Will Semiconductor. While the companies have had a close working relationship, the Will Semiconductor company presently owns a small minority share in OmniVision. A proposed transaction is currently in the due diligence phase, and would still require regulatory approvals. Upon consummation of the proposed transaction, the combined company would then be publicly traded on the Shanghai Stock Exchange."

While we are on the subject of M&A activity, the same Will Semi is also buying a majority stake in SuperPix. Together with the Omnivision acquisition, it is one of the largest deals registered in the Asia-Pacific region in Aug. 2018:


Now, back to the Omnivision interview:

"With design and R&D centers worldwide, including the U.S., Norway, Japan, China and Singapore, OmniVision has 17 office locations, roughly 1,600 employees and more than 3,800 patents globally. Over the past 23 years, our total shipment number has reached over 9 billion units. In terms of shipment volume, we are leading in all CIS application segments with great customer acceptance and a proven global track record. In terms of unit shipments, we are No. 3 in mobile, No. 2 in automotive and security, and No. 1 in notebooks, medical and emerging markets.

Our first image sensor for the automotive market went to volume production in 2004, and we launched the first automotive high dynamic range (HDR) SoC image sensor/signal processor in 2008. To date, we have more than 110M sensors on the road worldwide. We partnered with TSMC for all of our automotive products to ensure both quality and reliable supply.

HLMC is one of our key supply chain partners. We continue working very closely and developing new products with our partners. Our strategy is to leverage the technical strengths, process capabilities, capacity and long-term commitments of multiple CIS manufacturing partners to optimize our product portfolio, supply, quality and cost, no matter where they are located. We will continue with this strategy moving forward.

We are focused on and committed to the mobile segment—it represents more than 60% of our revenue. Pixel-shrink evolution has accelerated (especially going from 1.0 micron to 0.9 micron and 0.8 micron, and even 0.7 micron) for the mainstream, high-end smartphone segment that is using our 4C CF pattern. This pattern is where four neighboring pixels have the same color filter, effectively increasing sensitivity while providing the option to recover full-size resolution at a specific frame rate.
"

Go to the original article...

e2v Announces 11MP APS-C Sensor Capable of 4K 710fps Video

Image Sensors World        Go to the original article...

Teledyne e2v announces a new 11MP sensor in its Lince family. The Lince11M is designed for applications that require 4K resolution at very high shutter speeds. It uniquely combines 4K resolution at 710fps in APS-C format, or 1400fps when windowed to full HD resolution.

Lince11M targets in-line inspection to increase manufacturing throughput, works with strobed lighting for multispectral or multi-field (bright field, dark field, backlight) imaging, and serves as an alternative to line scan sensors to improve defect classification where uniform image sharpness in all directions is critical.

Lince11M takes advantage of the APS-C format and is compatible with standard optics. The samples are expected to be available in March 2019.

Key features:
  • Global shutter CMOS pixel (6μm x 6μm)
  • APS-C optical format in 4K resolution
  • 710fps in 4K resolution, 1400fps in full HD resolution
  • Large FWC to maximize SNR in shot-noise-limited applications
  • SNR of 46dB
  • Peak QE of 60%
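For a rough sense of the data rates behind these modes, the sketch below estimates pixel and bit rates. The window dimensions (the announcement only says "4K" and "full HD") and the 10-bit output are assumptions, not Teledyne e2v specifications.

```python
# Back-of-the-envelope pixel and data rates for the two quoted modes.
# Window sizes (4096x2160 for "4K", 1920x1080 for "full HD") and a 10-bit
# output are assumptions for illustration; they are not stated in the release.

modes = {
    "4K @ 710 fps":       (4096, 2160, 710),
    "Full HD @ 1400 fps": (1920, 1080, 1400),
}

BITS_PER_PIXEL = 10  # assumed

for name, (w, h, fps) in modes.items():
    pix_rate = w * h * fps                  # pixels per second
    data_rate = pix_rate * BITS_PER_PIXEL   # bits per second
    print(f"{name}: {pix_rate/1e9:.2f} Gpix/s, {data_rate/1e9:.1f} Gbit/s")
```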

Go to the original article...

Light-Controlled Polymers

Image Sensors World        Go to the original article...

Finland's Tampere University of Technology (TUT) is developing light-controlled materials that change their shape, length or thickness in response to light. The new method enables researchers to employ UV light to program the shape the material adopts, and then elicit the different types of movement by shining red light on the material.

"Our concept for developing this material is actually quite simple. It is based on a combination of two well-known light-induced control mechanisms. No one has ever tried this before, despite, or maybe because of, the simplicity of the underlying idea. Our results are a good example of how novel results can be achieved by combining something known in a new way," says Academy Research Fellow Arri Priimägi who heads the Smart Photonic Materials group at TUT.

Nature paper "Reconfigurable photoactuator through synergistic use of photochemical and photothermal effects" by Markus Lahikainen, Hao Zeng, and Arri Priimagi describes the work of the researchers.

A Youtube video shows the light sensitive polymer in action:



The team has designed "an optical flytrap that was inspired by the way the Venus flytrap ensnares its prey. The optical flytrap is a small elastomer strip – less than one centimetre in length – that is glued onto an optical fibre into which blue light is coupled. When an object in the flytrap's field of view reflects light onto the elastomer surface, the strip bends itself around the object, capturing it like a Venus flytrap. The optical flytrap is able to lift hundreds of times its own weight."

Go to the original article...

Apple Sued for Dual Camera Patent Infringement

Image Sensors World        Go to the original article...

AppleInsider: California residents Yanbin Yu and Zhongxuan Zhang filed a lawsuit alleging Apple infringed on their US6611289 patent in the company's dual-camera mobile phones, such as the iPhone 7 Plus, iPhone 8 Plus, iPhone X, iPhone XS, and iPhone XS Max. The patent, titled "Digital cameras using multiple sensors with multiple lenses," was filed in 1999 and granted in 2003.

Apple filed its own patent application for a multi-sensor camera system in 2008. A patent examiner issued an office action rejecting 11 claims of Apple's application as anticipated, or not novel, in view of Yu and Zhang's patent. In this way, Apple became aware of Yu and Zhang's prior art but did not attempt to license it.

From US6611289 patent

Go to the original article...

Samsung Mid-Range Phone Features 4 Rear Cameras

Image Sensors World        Go to the original article...

Samsung Galaxy A9 mobile phone features 4 rear cameras:

Go to the original article...

RoboSense Announces China’s Largest-ever Round of Financing for a LiDAR Company

Image Sensors World        Go to the original article...

PRNewswire, Thomas-PR: RoboSense announces the completion of China’s largest-ever single round of financing for a LiDAR company – a combined investment of over $45 million (RMB 300 million). The investors include Cainiao Smart Logistics Network Ltd. (“Cainiao”), the logistics arm of the Alibaba Group; SAIC Motor Group (Shanghai Automotive Industry Corporation), the largest publicly traded auto manufacturer on China’s A-share market; and the electric vehicle company of BAIC Group (Beijing Automotive Industry Holding Co.).

RoboSense claims an over-50% share of all LiDARs sold in Asia, making it the market leader in the region. The new funding will be used to increase RoboSense’s market share and to fund R&D of autonomous vehicle technologies, including its solid-state LiDAR, AI sensing algorithms, and other advanced technologies, as well as to accelerate product development, long-term manufacturing and market penetration.

“The rapid development of autonomous driving has ignited a huge demand for LiDAR,” said Mark Qiu, co-founder of RoboSense. “RoboSense is embracing this market demand through partnerships with multiple industry leaders. It is our great pleasure to be endorsed and funded by industry giants from many different fields. This round of funding is not only for capital assistance, but also for strategic resources. We are looking forward to continuously working with our partners to lead the large-scale commercialization era of the autonomous driving industry.”

IHS Markit predicts that by 2035, global sales of self-driving cars will reach 21 million vehicles, up from nearly 600,000 vehicles in 2025. IHS believes that by 2035, nearly 76 million vehicles with some level of autonomy will be sold globally.

Autonomous logistics vehicles are expected to become one of the first markets for autonomous vehicle technology. Based on data from Deloitte’s China Smart Logistics Development Report, the intelligent logistics market will reach $145 billion (RMB 1 trillion) by 2025.

In the past two years, RoboSense has had explosive growth:

  • In April 2017, the company started the mass production of its 16-beam automotive LiDAR.
  • In September 2017, the company mass-produced its 32-beam LiDAR, released a LiDAR-based autonomous driving environmental sensing AI system, and provided a software and hardware combined LiDAR environment sensing solution.
  • In October 2017, RoboSense launched its MEMS solid-state LiDAR, which was publicly exhibited for the first time at CES 2018 in January 2018.
  • In April 2018, RoboSense partnered with the Cainiao Network to launch the world’s first MEMS LiDAR autonomous logistics vehicle – the G Plus.



Go to the original article...

3D Face ID for Fish

Image Sensors World        Go to the original article...

Bloomberg BusinessWeek: Cermaq Group AS in Norway has developed 3D face recognition for salmon, based on the distinct pattern of spots around their eyes, mouth and gills.

The Norwegian fish-farming giant wants to roll out the 3D salmon face recognition at pens along Norway’s coastline to track and prevent the spread of epidemics like sea lice that infect hundreds of millions of farmed fish and cost the global industry upwards of $1 billion each year.

“We can build a medical record for each individual fish,” said Harald Takle, the head researcher at Cermaq, one of the partners testing the system behind what it’s calling an iFarm. “This will be like a revolution.”

Go to the original article...

Nikon Z 24-70mm f4S review so far

Cameralabs        Go to the original article...

The Nikon Z 24-70mm f4S is a general-purpose zoom for Nikon’s Z-format full-frame mirrorless cameras. The lens features a constant f4 focal ratio, fast and quiet focusing, and a compact design that’s a perfect match for Z bodies. It’s the standard ‘kit-zoom’ for Z bodies, but is it any good? Ahead of our full review, check out a variety of sample images!…

The post Nikon Z 24-70mm f4S review so far appeared first on Cameralabs.

Go to the original article...

Yole Presentations at Photonics Executive Forums

Image Sensors World        Go to the original article...

Yole Developpement publishes its presentations at CIOE Photonics Executive Forums (requires free registration to download):


Some interesting slides from the presentations:

Go to the original article...

poLight Completes a Successful IPO

Image Sensors World        Go to the original article...

poLight has completed a successful IPO on the Oslo Stock Exchange on Oct. 1, 2018. The company has raised 130m NOK (about $15.5m).

Go to the original article...

Photonics Spectra on Emerging Image Sensor Applications

Image Sensors World        Go to the original article...

Photonics Spectra publishes the article "Emerging Applications Drive Image Sensor Innovations" by Hank Hogan. A few quotes:

"Vendors are responding by increasing sensor spectral range, integrating new capabilities into devices, and adding features such as 3D imaging. The result can be rapid growth in sensors, even in areas that are relatively stable. For instance, the worldwide market for cars is expanding at a relatively modest pace, according to Geoff Ballew, senior director of marketing in the automotive sensing division of chipmaker ON Semiconductor Corp. of Phoenix.

However, tepid growth is not the case for the automotive imaging solutions. “The number of sensors consumed and attached to those cars is growing wildly,” he said. “The image sensor chip business is growing in excess of 15 to 20 percent a year. The reason for that is cameras increasingly are adding new functionality to cars.”

Automotive sensors are expected to work from -40 to 125 °C. That interacts with the dynamic range requirement because as the operating temperature rises, so too does the dark current in the sensor. Vendors such as OmniVision must take special care within the manufacturing process to drive that dark current down, thereby expanding the operating temperature and preserving the high dynamic range.

Besides automotive, another area pushing imaging capability is the IoT. Refrigerators, washing machines, and home security systems are adding image sensors for cataloging food, recognizing people, and other tasks. But the IoT brings its own requirements, and they affect sensors, according to Nick Nam, head of emerging markets at OmniVision Technologies.

For instance, power consumption often may need to be minimized, particularly for IoT applications running on batteries.

Depth or 3D sensing is a capability being added to automotive, the IoT, and other applications. There are competing 3D imaging methods, and which is best will be different for different situations

Imaging in the shortwave IR region out to about 2 µm offers improved performance in poor visibility or at night. When combined with capabilities in the visible and UV, the resulting multispectral or hyperspectral imaging can provide important information not obtainable by visible imaging alone. While not new, the hybrid approach offers the advantage that as CMOS technology improves, so can the performance of the sensors. What’s more, the hybrid technique can be extended to other materials, allowing sensors to capture information in the mid- and thermal-IR at 5 or 10 µm, or more.
"

On a similar matter, most of the technologies on PwC's list of 8 emerging technologies rely in one way or another on image sensing: artificial intelligence, augmented reality, blockchain, drones, IoT, robotics, virtual reality, and 3-D printing:

Go to the original article...

Color Night Vision Image Sensor Supplier Revealed

Image Sensors World        Go to the original article...

San Diego-based image sensor distributor AlliedSens reveals that the Brookman 1.3MP BT130C and 2MP BT200C sensors are used in Japanese color night vision cameras, such as this one.


Here is a Flovel camera with a Brookman sensor:

Go to the original article...

Polarization Sensing in LWIR Band

Image Sensors World        Go to the original article...

KB ViTA kindly sent me info about their latest LWIR camera that senses polarization:

"It all was started with the fact KB ViTA has developed a very sensitive thermal imaging module VLM640, which had a sensitivity of at least 20 mK in 8 — 12 µm band. The sensor manufacturer turned to KB ViTA and offered an engineering sample from an experimental wafer of bolometric detectors with integrated polarization filters. For KB ViTA it was honorable but, at the same time, there was no understanding of what it is expected to ultimately obtain. The technology and the very idea of seeing the own polarization of the thermal photons of objects that surround us is absolutely new and hardly anyone has experience of processing such information.

Below we will show you how the polarization in the IR spectrum looks.

There was a polarizing sensor plus the electronics from the VLM640 camera with 20 mK sensitivity. The interesting thing about the sensor is that each pixel in a group of four is covered with its own polarizer, and the polarization of each filter differs by 45 deg. As a result, the polarization angles are 0—180 deg, 45—225 deg, 90—270 deg, 135—315 deg.

In the resulting videos, there are three images (from left to right): video from a conventional thermal imager, the reconstructed polarization angles, and a composite image where the brightness is the thermal radiation and the color is the polarization angle.
"

The example videos below show a light bulb and a painted box:





"So far, KB ViTA can say polarization shows us the object surface quality.
There are assumptions (based on the results of communication with the detector manufacturer, colleagues at exhibitions and very scarce information on the Web) that the effect of evaluating the polarization of radiating and reflecting objects can be used in the following areas:

  1. Distinguishing an object's own radiation from reflections (for example, a warm car vs. the sun's glare in a puddle or on sand).
  2. Search for camouflaged objects.
  3. Detection of oil stains on the water surface.
  4. Defect search.
  5. 3D scanning.
  6. Detection of warm objects on a water surface (a drowning person, for instance), distinguishing their own radiation from light reflected off the water."

Go to the original article...

Gigabit Random Number Generator Based on 24MP 30fps Image Sensor

Image Sensors World        Go to the original article...

James Hughes and Yash Gupta from UCSC present their idea of a Gigabit random number generator for cryptography:

Go to the original article...

Assorted News

Image Sensors World        Go to the original article...

Science Magazine publishes a paper on wideband light sensing device "Ultrabroadband photosensitivity from visible to terahertz at room temperature" by Dong Wu, Yongchang Ma, Yingying Niu, Qiaomei Liu, Tao Dong, Sijie Zhang, Jiasen Niu, Huibin Zhou, Jian Wei, Yingxin Wang, Ziran Zhao, and Nanlin Wang from Peking University, Tianjin University of Technology, and Tsinghua University, China.

"Charge density wave (CDW) is one of the most fundamental quantum phenomena in solids. Different from ordinary metals in which only single-particle excitations exist, CDW also has collective excitations and can carry electric current in a collective fashion. Manipulating this collective condensation for applications has long been a goal in the condensed matter and materials community. We show that the CDW system of 1T-TaS2 is highly sensitive to light directly from visible down to terahertz, with current responsivities on the order of ~1 AW−1 at room temperature. Our findings open a new avenue for realizing uncooled, ultrabroadband, and sensitive photoelectronics continuously down to the terahertz spectral range."

Everything is great about the new photodetection mechanism except the dark current, which is about 15 orders of magnitude higher than in a Si photodiode:
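To see why that matters, here is a rough, shot-noise-limited comparison. The ~1 A/W responsivity comes from the abstract; the absolute dark currents are placeholders chosen only to represent a roughly 15-orders-of-magnitude gap, so treat the numbers as illustrative.

```python
# Why the dark current matters: a shot-noise-limited NEP comparison under
# assumed dark currents. The ~1 A/W responsivity is from the abstract; the
# dark current values below are placeholders, not measured figures.

from math import sqrt

Q = 1.602e-19  # electron charge, C

def shot_noise_nep(dark_current_a, responsivity_a_per_w, bandwidth_hz=1.0):
    """Noise-equivalent power (W for a 1 Hz bandwidth) from dark-current shot noise."""
    i_noise = sqrt(2 * Q * dark_current_a * bandwidth_hz)
    return i_noise / responsivity_a_per_w

print("CDW device (1 mA dark, 1 A/W):      NEP ~ %.1e W/sqrt(Hz)" % shot_noise_nep(1e-3, 1.0))
print("Si photodiode (1 aA dark, 0.5 A/W): NEP ~ %.1e W/sqrt(Hz)" % shot_noise_nep(1e-18, 0.5))
```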


Optics.org reports that a Columbia University team has created a very thin, wide-IR-band flat lens made of metamaterials, with a paper accepted in Nature:

"The Columbia Engineering researchers have created the first flat lens capable of correctly focusing a large range of wavelengths of any polarization to the same focal point without the need for any additional elements. At just 1µm thick, their revolutionary "flat" lens offers performance comparable to top-of-the-line compound lens systems.

The team’s next challenge is to improve these lenses' efficiency. The flat lenses currently are not optimal because a small fraction of the incident optical power is either reflected by the flat lens, or scattered into unwanted directions. The team is optimistic that the issue of efficiency is not fundamental, and they are busy inventing new design strategies to address the efficiency problem.
"




Globenewswire: Atomera licenses its Mears Silicon Technology (MST) to ST. MST is an additional non-silicon implant that reduces transistor variability at a given process node. Also, 1/f noise is reduced due to the elimination of the halo implant (also called a pocket implant by some fabs).

Why is this relevant for image sensors? First, if the Sony pixel-parallel ADC presented at ISSCC 2018 gains market traction, the reduction of mismatch between transistors might become more important for suppressing the pixel-level FPN coming from the multitude of ADCs. Second, a reduction of SF gain variations across the pixel array might reduce PRNU in regular image sensors. Although the 1/f noise reduction might not directly affect pixel transistors, which for the most part do not have a halo implant anyway, other parts of the image sensor might still benefit from it.
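A toy Monte Carlo makes the mismatch-to-FPN argument concrete: give every pixel its own small gain and offset error and look at the spread of a uniform frame. The mismatch sigmas below are arbitrary placeholders, not Atomera or Sony numbers.

```python
# Toy Monte Carlo of how per-pixel gain/offset mismatch shows up as FPN and
# PRNU, illustrating the argument above. All sigmas are assumed placeholders.

import numpy as np

rng = np.random.default_rng(1)
H, W = 512, 512
offset_sigma_dn = 2.0    # per-pixel ADC offset spread, DN (assumed)
gain_sigma_rel = 0.01    # 1% per-pixel gain (e.g. SF gain) spread (assumed)

offsets = rng.normal(0.0, offset_sigma_dn, (H, W))
gains = rng.normal(1.0, gain_sigma_rel, (H, W))

dark = offsets                          # uniform dark frame
flat = gains * 1000.0 + offsets         # uniform 1000 DN illumination

print("dark FPN : %.2f DN rms" % dark.std())
print("PRNU     : %.2f %% of signal" % (100 * flat.std() / flat.mean()))
# Reducing transistor mismatch (smaller sigmas above) directly shrinks both numbers.
```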

Atomera's seminar at IEDM 2017 explains their proposal:

Go to the original article...

Espros Delivers its ToF Sensors for Hypersen LiDAR

Image Sensors World        Go to the original article...

The Espros September 2018 Newsletter announces that the company has signed a contract with Hypersen Technologies (Shenzhen) Co. to supply them with mass deliveries of the epc635 ToF imagers. The epc635 sensors power Hypersen's recently launched solid-state LiDAR (HPS-3D series). Other Espros partners in China are Benewake (Beijing) and Shanghai Data Miracle Co.


Another notable quote from the Newsletter:

"By the way, the most dominant cost drivers in TOF cameras are the receiver lens and the illumination. The TOF camera chip typically ranks as third. Hence a very sensitive TOF imager allows cost reduction due to less spending on illumination. What's more, the camera will not heat up as much, increasing lifetime and reducing power consumption."

Update: The post has been corrected according to additional explanations from Espros. Espros is supplying the ToF sensors to Hypersen, not outsourcing the production.

Go to the original article...

A Curious Engineer Overturns Waymo Key LiDAR Patent

Image Sensors World        Go to the original article...

Ars Technica: Following a complaint by Eric Swildens, the USPTO has rejected all but three of 56 claims in Waymo's US9368936 patent. The USPTO found that some claims replicated technology described in an earlier patent from Velodyne, while another claim was simply "impossible" and "magic."

"The patent shouldn't have been filed in the first place," Swildens said. "It's a very well written patent. However, my personal belief is that the thing that they say they invented, they didn't invent."

The 936 patent played a key role in last year's lawsuit with Uber. In December 2016, a Waymo engineer was inadvertently copied on an email from one of its suppliers to Uber, showing a LiDAR circuit design that looked almost identical to the one shown in the 936 patent:


Swildens said to Wired in 2017: "I couldn't imagine the circuit didn't exist prior to this patent." He then spent $6,000 of his own money to launch a formal challenge to the 936 patent and won.

Thanks to AM for the pointer!

Go to the original article...

e2v Sampling 8.9MP 2/3-inch Global Shutter Sensor

Image Sensors World        Go to the original article...

Teledyne e2v announces that samples are now available for Emerald 8M9, the newest member of the Emerald CMOS sensor family dedicated to machine vision and Intelligent Traffic System (ITS) applications.

Emerald 8M9 features a 2.8µm global shutter pixel and provides an 8.9MP resolution in a 2/3-inch optical format. The sensor is available in two speed grades: a standard speed model (47fps @10 bits) and a high speed model (107fps @10 bits). The new sensor has a readout noise of 2.8e- combined with a 65% QE.

Vincent Richard, Marketing Manager at Teledyne e2v, said: “Emerald 8M9 is designed specifically to address the demands of machine vision, high resolution surveillance and traffic intelligence. The sensor is unmatched in the industry because of its versatile feature set. For example, real-time High Dynamic Range mode allows high resolution capture of fast moving situations from daylight to night-time with minimum artefacts and blur effects.”

Samples and demo kits are now available and mass production is planned for Q1 2019.

Go to the original article...

LiDAR Startup Aeva Raises $45m

Image Sensors World        Go to the original article...

Wired, Verge, Axios: Mountain View, CA-based automotive LiDAR startup Aeva demos its coherent LiDAR prototype and announces a $45m Series A financing round. The company was founded by two ex-Apple engineers in 2017. Not much is said about the technology side:
  • The LiDAR is able to measure Doppler shift with a few cm/s accuracy (see the quick estimate after the quote below)
  • Its range is 200m
  • The power consumption is less than 100W
  • No mechanical scanning
  • Also has a camera functionality
  • Costs in the range of a few hundred dollars
“We’re focused on delivering things now,” says Aeva cofounder Mina Rezk. “This is an architecture that we put together, that we know we can manufacture.” That means no exotic materials and using components that are well established and easy to acquire.
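To put the Doppler figure above in perspective, the sketch below computes the two-way Doppler shift a coherent LiDAR has to resolve for a given radial velocity. The 1550 nm operating wavelength is my assumption (Aeva does not state it) and is used only to attach numbers.

```python
# What "a few cm/s" Doppler accuracy means for a coherent LiDAR: the beat
# frequency shift it must resolve. The 1550 nm wavelength is an assumption.

WAVELENGTH_M = 1.55e-6   # assumed

def doppler_shift_hz(radial_velocity_m_s, wavelength_m=WAVELENGTH_M):
    """Two-way Doppler shift f_D = 2*v/lambda for a monostatic coherent LiDAR."""
    return 2.0 * radial_velocity_m_s / wavelength_m

for v_cm_s in (1, 5, 100):
    print(f"{v_cm_s:>4} cm/s -> {doppler_shift_hz(v_cm_s / 100):10.0f} Hz")
```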

Go to the original article...

Rode VideoMic Me-L review

Cameralabs        Go to the original article...

The Rode VideoMic ME-L is a small microphone designed for iPhones with Lightning ports. It's aimed at vloggers, interviewers and musicians who want to capture higher quality audio than the phone's built-in microphone. Hear the difference in Ben's review!…

The post Rode VideoMic Me-L review appeared first on Cameralabs.

Go to the original article...

Artilux and TSMC Develop Ge-on-Si ToF Sensor

Image Sensors World        Go to the original article...

Taiwan-based Artilux's paper "Proposal and demonstration of lock-in pixels for indirect time-of-flight measurements based on germanium-on-silicon technology" by N. Na, S.-L. Cheng, H.-D. Liu, M.-J. Yang, C.-Y. Chen, H.-W. Chen, Y.-T. Chou, C.-T. Lin, W.-H. Liu, C.-F. Liang, C.-L. Chen, S.-W. Chu, B.-J. Chen, Y.-F. Lyu, and S.-L. Chen reveals the company's plans to work with TSMC on a Ge-on-Si pixel:

"We propose the use of germanium-on-silicon technology for indirect time-of-flight depth sensing as well as three-dimensional imaging applications, and demonstrate a novel pixel featuring a high quantum efficiency and a large frequency bandwidth. Compared to conventional silicon pixels, our germanium-on-silicon pixels simultaneously maintain a high quantum efficiency and a high demodulation contrast deep into GHz frequency regime, which enable consistently superior depth accuracy in both indoor and outdoor scenarios. Device simulation, system performance comparison, and electrical/optical characterization of the fabricated pixels are presented. Our work paves a new path to high-performance time-of-flight sensors and imagers, as well as potential adoptions of eye-safe lasers (wavelengths > 1.4um) that fall outside of the operation window of conventional silicon pixels."


"These results might be surprising as the dark current of the Ge-on-Si pixel is set to be several orders of magnitude larger than that of the Si pixel (nearly no changes to Fig. 3(a) and 3(b) even if a lower Si pixel dark current is set). The reason lies in that, in an indirect TOF system, the dominant system noise is in fact due to the indoor/outdoor ambient light and the laser light instead of the dark current for various depth sensing and 3D imaging applications."

The company's patent applications, such as US20170040362, show the proposed pixel structures:

Go to the original article...

Valeo Shows In-Car Camera Use Cases

Image Sensors World        Go to the original article...

Valeo shows how in-car cameras can be useful in various scenarios:

Go to the original article...

IEDM 2018 Image Sensor Papers

Image Sensors World        Go to the original article...

IEDM 2018, to be held on Dec. 1-5 in San Francisco, has published its list of accepted papers, including some interesting image sensor entries:
  • High Performance 2.5um Global Shutter Pixel with New Designed Light-Pipe Structure
    Toshifumi Yokoyama, TowerJazz Panasonic Semiconductor Co.
  • Back-Illuminated 2.74 μm-Pixel-Pitch Global Shutter CMOS Image Sensor with Charge-Domain Memory Achieving 10k e- Saturation Signal
    Yoshimichi Kumagai, Sony Semiconductor
  • A 0.68e-rms Random-Noise 121dB Dynamic-Range Sub-pixel architecture CMOS Image Sensor with LED Flicker Mitigation
    Satoko Iida, Sony Semiconductor
  • A 24.3Me- Full Well Capacity CMOS Image Sensor with Lateral Overflow Integration Trench Capacitor for High Precision Near Infrared Absorption Imaging
    Maasa Murata, Tohoku University
  • A HDR 98dB 3.2µm Charge Domain Global Shutter CMOS Image Sensor
    Arnaud Tournier, STMicroelectronics
  • 1.5µm dual conversion gain, backside illuminated image sensor using stacked pixel level connections with 13ke- full-well capacitance and 0.8e- noise
    Vincent Venezia, Omnivision
  • Through-silicon-trench in back-side-illuminated CMOS image sensors for the improvement of gate oxide long term performance
    Andrea Vici, La Sapienza University of Rome
  • High-Performance Germanium-on-Silicon Lock-in Pixels for Indirect Time-of-Flight Applications
    Neil Na, Artilux Inc.
  • High Voltage Generation Using Deep Trench Isolated Photodiodes in a Back Side Illuminated Process
    Filip Kaklin, The University of Edinburgh
  • CMOS-Integrated Single-Photon-Counting X-Ray Detector using an Amorphous-Selenium Photoconductor with 11×11-μm2 Pixels
    Ahmet Camlica, University of Waterloo

Go to the original article...
