Archives for November 2018

WCP on ADAS/AV Trends

Image Sensors World        Go to the original article...

Woodside Capital Partners releases the November 2018 update of its "Autonomous Vehicles Technology Report." A few interesting slides present a somewhat pessimistic view of automotive LiDAR's short-term prospects:

Go to the original article...

Yole Interviews XenomatiX CEO

Image Sensors World        Go to the original article...

Yole Développement analyst Alexis Debray publishes an interview with Filip Geuens, CEO of LiDAR maker XenomatiX. Some interesting quotes:

"We see 2 main categories in the market: global illumination LiDARs (also called flash LiDAR) and scanning-beam LiDAR. The scanning can come from an optical phase array or from a rotating mirror, oscillating mirror or other mechanical device.
Our XenoLidar does not fit in any of these 2 categories. XenoLidar uses multi-beam. Just like global illumination, we measure the scene in one shot and with high resolution, but in a much more efficient way as we only need a fraction of the energy a flash system needs. This actually translates into the fact that we can cover a much larger range for the same energy.

In the end it is a balancing exercise. We believe we have the best mix of what is critical for automotive in terms of cost, reliability, resolution, efficiency and size (in order of importance).

Today, an important bottleneck is the lack of decision taking. Many people get confused by the diversity of make-believe solutions and by initiatives that failed to deliver on their promises. That is slowing down adoption. Too many parties are sitting on the fence and waiting for a leader to pick a solution.

We deal with this by putting evidence on the table. Being able to back-up performance statements with functional products is our response. However, early adopters are still needed to help the technology to mature further, moving from technology level to application level.
"

Go to the original article...

Fraunhofer Vision SoC vs Event-Based Sensors

Image Sensors World        Go to the original article...

A Fraunhofer presentation at the Vision Show, held in Stuttgart, Germany, on Nov 6-8, 2018, offers a different approach to data minimization in machine vision applications. To simplify use, the Fraunhofer embedded vision sensor even offers a Python-like scripting interface:


Prophesee also presented its event-driven sensor at the Vision Show:


Thanks to TL for the pointers!

Go to the original article...

ST Automotive HDR GS Sensor Presentation

Image Sensors World        Go to the original article...

An ST presentation, "Automotive In-cabin Sensing Solutions" by Nicolas Roux, details the company's GS HDR sensor technology, including its dual ADCs and dual pixel memory:

Go to the original article...

Call for Nominations for 2019 Walter Kosonocky Award

Image Sensors World        Go to the original article...

International Image Sensors Society calls for nominations for the 2019 Walter Kosonocky Award for significant advancement in solid-state image sensors.

The Walter Kosonocky Award is presented biennially for the best paper presented in any venue during the prior two years representing significant advancement in solid-state image sensors. The award commemorates the many important contributions made by the late Dr. Walter Kosonocky to the field of solid-state image sensors.

Founded in 1997 by his colleagues in industry, government and academia, the award is also funded by proceeds from the International Image Sensor Workshop. The award is selected from nominated papers by the Walter Kosonocky Award Committee, announced and presented at the International Image Sensor Workshop (IISW), and sponsored by the International Image Sensor Society (IISS).

Nominations for the 2019 award should be sent to Rihito Kuroda, Chair of the IISS Award Committee, with a PDF file of the nominated paper (the one you judge to be the best paper published or presented in calendar years 2017 and 2018) and a brief description (less than 100 words) of your reason for nominating it. Nomination of a paper from your own company or institute is also welcome.

The deadline for receiving nominations is February 18th, 2019.

Go to the original article...

ZTE Nubia X Smartphone Reverses Multi-Camera Trend

Image Sensors World        Go to the original article...

Tom's Guide: the ZTE Nubia X, with displays on both the front and back, eliminates the need for a front camera: the main rear camera is used for selfies too:


GSM Arena reports that Vivo, one of the largest smartphone manufacturers, is about to roll out a similar model with no separate selfie camera, the NEX 2:

Go to the original article...

Plasma Dicing Benefits

Image Sensors World        Go to the original article...

A Panasonic Industrial presentation explains the advantages of plasma dicing for image sensors:


Veeco Ultratech promotes its IR alignment system:

Go to the original article...

Yole on Consumer Biometrics

Image Sensors World        Go to the original article...

Yole Développement report "Consumer Biometrics: Market and Technologies Trends 2018" forecasts:

"As anticipated by Yole Développement (Yole) in mid-2016, biometry’s “second wave” began with the introduction of the iPhone X in September 2017, when Apple set the standard for technological advancement (and use-cases) for 3D sensing in consumer. Apple conceived a complex assembly of camera modules and VCSEL light sources using structured light principles, along with an innovative NIR global shutter image sensor from STMicroelectronics to perform secure 3D facial recognition. This second wave, led by biometry with 3D sensing, is ongoing and will increase market value toward $17B by 2022.

But biometry is not only a matter of fingerprint or face detection but also iris and voice recognition. Regarding the overall breakdown of biometry recognition, Yole estimates that the proportion of each type of detection will be quite unbalanced in the future, with 60% of biometric modules in volume coming from face recognition modules, while fingerprint (40%) will see its value decrease over time due to competition and alternative implementations leading to cost reduction.
"

Go to the original article...

RGB-IR CFA Optimizations

Image Sensors World        Go to the original article...

Tokyo Institute of Technology and Olympus publish a paper "Single-Sensor RGB-NIR Imaging: High-Quality System Design and Prototype Implementation" by Yusuke Monno, Hayato Teranaka, Kazunori Yoshizaki, Masayuki Tanaka, and Masatoshi Okutomi.

"In recent years, many applications using a set of RGB and near-infrared (NIR) images, also called an RGB-NIR image, have been proposed. However, RGB-NIR imaging, i.e., simultaneous acquisition of RGB and NIR images, is still a laborious task because existing acquisition systems typically require two sensors or shots. In contrast, single-sensor RGB-NIR imaging using an RGB-NIR sensor, which is composed of a mosaic of RGB and NIR pixels, provides a practical and low-cost way of one-shot RGB-NIR image acquisition. In this paper, we investigate high-quality system designs for single-sensor RGBNIR imaging. We first present a system evaluation framework using a new hyperspectral image dataset we constructed. Different from existing work, our framework takes both the RGB-NIR sensor characteristics and the RGB-NIR imaging pipeline into account. Based on the evaluation framework, we then design each imaging factor that affects the RGB-NIR imaging quality and propose the best-performed system design. We finally present the configuration of our developed prototype RGB-NIR camera, which was implemented based on the best system design, and demonstrate several potential applications using the prototype."
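The paper designs a complete sensor-plus-pipeline system; as a generic illustration of the single-sensor idea only (not the paper's optimized design), the sketch below splits a raw frame into sparse per-channel sample planes using a hypothetical 2×2 RGB-NIR pattern in which one green site of a Bayer cell is replaced by an NIR pixel:

```python
# Hypothetical 2x2 RGB-NIR mosaic: R G / N B (one Bayer green replaced by NIR).
# Splits a raw mosaic frame into per-channel sample planes (None = no sample);
# a real pipeline would then interpolate the missing positions per channel.
PATTERN = {(0, 0): "R", (0, 1): "G", (1, 0): "N", (1, 1): "B"}

def split_rgbn(raw):
    h, w = len(raw), len(raw[0])
    planes = {c: [[None] * w for _ in range(h)] for c in "RGBN"}
    for y in range(h):
        for x in range(w):
            c = PATTERN[(y % 2, x % 2)]
            planes[c][y][x] = raw[y][x]
    return planes

# Toy 4x4 raw frame: the value encodes the position for easy checking.
raw = [[10 * y + x for x in range(4)] for y in range(4)]
planes = split_rgbn(raw)
print(planes["N"])  # NIR samples sit at odd rows, even columns
```

The pattern, names and toy data above are illustrative assumptions; the paper's contribution is precisely in choosing such imaging factors for best quality.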

Go to the original article...

Espros ToF Face ID Module

Image Sensors World        Go to the original article...

Espros November 2018 Newsletter shows the company's ToF module for face recognition in smartphones:

"The USPs of the epc660 chip - very high NIR sensitivity (>80% @ 850nm) as well as the capability of suppressing strong ambient light in the charge domain - make it a favorite choice for miniaturized mobile applications. High sensitivity saves battery power and allows eye-safe operation, since the active illumination can be designed to be less powerful. Ambient light acceptance is a key factor and a challenge, as these devices are used outdoors in full-sunlight environments.

The slim bare-die chip-scale package (CSP), with an overall thickness of 0.23mm including solder balls, allows designing modules for the thinnest mobile applications. The package allows scaling down the complete module not just in size but also in cost.
"

Go to the original article...

Sigma 56mm f1.4 review

Cameralabs        Go to the original article...

The Sigma 56mm f1.4 is a short-telephoto lens available for Sony E (APSC) or Micro Four Thirds mounts, on which it delivers equivalent coverage of 84 or 112mm respectively. The focal length and bright f1.4 focal ratio make it ideal for events as well as street or tighter urban views. Find out why it's become a favourite in my review!…

The post Sigma 56mm f1.4 review appeared first on Cameralabs.

Go to the original article...

Four Generations of Camera Module Testers

Image Sensors World        Go to the original article...

Pamtek presents 4 generations of its testing systems for camera modules:



Go to the original article...

SiOnyx Camera Review

Image Sensors World        Go to the original article...

DPReview publishes a review of the SiOnyx Aurora night vision camera with a Black Silicon sensor. The conclusion is:

"Does the SiOnyx Aurora let me see things in the dark that I can't see with the unaided eye? Absolutely: the infrared sensitivity makes a big difference and, hence, my stress on the night vision capability of this device. The fact that you can also capture what you see is a plus. For me it was capturing Northern Lights, but I'm also looking forward to capturing surface lava flows in Hawaii, bioluminescence in Puerto Rico, as well as other phenomena around the world."

Go to the original article...

Teledyne IR Sensors for Space Missions

Image Sensors World        Go to the original article...

A Teledyne presentation on IR detectors for space missions by Paul Jerram and James Beletic shows examples of the company's projects:

Go to the original article...

3D Stacked SPAD Array in 45nm Process

Image Sensors World        Go to the original article...

IEEE Journal of Selected Topics in Quantum Electronics publishes an open access paper "High-Performance Back-Illuminated Three-Dimensional Stacked Single-Photon Avalanche Diode Implemented in 45-nm CMOS Technology" by Myung-Jae Lee, Augusto Ronchini Ximenes, Preethi Padmanabhan, Tzu-Jui Wang, Kuo-Chin Huang, Yuichiro Yamashita, Dun-Nian Yaung, and Edoardo Charbon from EPFL, Delft University of Technology, and TSMC.

"We present a high-performance back-illuminated three-dimensional stacked single-photon avalanche diode (SPAD), which is implemented in 45-nm CMOS technology for the first time. The SPAD is based on a P+/Deep N-well junction with a circular shape, for which N-well is intentionally excluded to achieve a wide depletion region, thus enabling lower tunneling noise and better timing jitter as well as a higher photon detection efficiency and a wider spectrum. In order to prevent premature edge breakdown, a P-type guard ring is formed at the edge of the junction, and it is optimized to achieve a wider photon-sensitive area. In addition, metal-1 is used as a light reflector to improve the detection efficiency further in backside illumination. With the optimized 3-D stacked 45-nm CMOS technology for back-illuminated image sensors, the proposed SPAD achieves a dark count rate of 55.4 cps/μm² and a photon detection probability of 31.8% at 600 nm and over 5% in the 420-920 nm wavelength range. The jitter is 107.7 ps full width at half-maximum with negligible exponential diffusion tail at 2.5 V excess bias voltage at room temperature. To the best of our knowledge, these are the best results ever reported for any back-illuminated 3-D stacked SPAD technologies."
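As a back-of-the-envelope check (my calculation, not from the paper): the reported 107.7 ps FWHM jitter bounds single-shot depth precision in a direct ToF system, since depth uncertainty is roughly c·Δt/2:

```python
# Convert the reported timing jitter into an approximate single-shot
# depth uncertainty for direct time-of-flight: dz ~ c * dt / 2.
C = 299_792_458.0          # speed of light, m/s
jitter_fwhm = 107.7e-12    # reported FWHM jitter, s
depth_fwhm = C * jitter_fwhm / 2
print(f"{depth_fwhm * 100:.1f} cm")  # ~1.6 cm
```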

Go to the original article...

SOI ToF Sensor for LiDAR

Image Sensors World        Go to the original article...

MDPI publishes a paper "A Back-Illuminated Time-of-Flight Image Sensor with SOI-Based Fully Depleted Detector Technology for LiDAR Application" by Sanggwon Lee, Keita Yasutomi, Ho Hai Nam, Masato Morita, and Shoji Kawahito from Shizuoka University.

"A back-illuminated time-of-flight (ToF) image sensor based on a 0.2 µm silicon-on-insulator (SOI) CMOS detector technology using a fully-depleted substrate is developed for light detection and ranging (LiDAR) applications. A fully-depleted 200 µm-thick bulk silicon is used for higher quantum efficiency (QE) in the near-infrared (NIR) region. The developed SOI pixel structure has a 4-tapped charge modulator with a draining function to achieve a higher range resolution and to cancel the background light signal. Distance is measured up to 27 m with a range resolution of 12 cm outdoors, with an average light power density of 150 mW/m² @ 30 m."
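The 4-tap charge modulator points to indirect ToF operation; as a generic sketch of 4-tap phase demodulation (not necessarily the paper's exact scheme), distance follows from four phase-shifted charge samples, with any constant ambient contribution cancelling in the tap differences:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(q0, q90, q180, q270, f_mod):
    """Distance from four phase-shifted charge samples; ambient light,
    which adds equally to all taps, cancels in the differences."""
    phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# Synthesize taps for a 10 m target at 10 MHz modulation plus an ambient offset
# (unambiguous range is c / (2 * f_mod) = ~15 m at this frequency).
f = 10e6
true_phase = 4 * math.pi * f * 10.0 / C
ambient, amplitude = 500.0, 100.0
taps = [ambient + amplitude * math.cos(true_phase - k * math.pi / 2)
        for k in range(4)]
print(round(itof_distance(*taps, f), 2))  # 10.0
```

The modulation frequency and signal levels are illustrative assumptions, not values from the paper.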

Go to the original article...

Isaiah Research Forecasts Triple Camera Adoption in Smartphones

Image Sensors World        Go to the original article...

IFNewsflash: Isaiah Research increases its previous forecast of dual and triple camera adoption in the 2019 smartphone market:

Go to the original article...

How to Explain Hyperspectral Imaging to Your Family and Relatives

Image Sensors World        Go to the original article...

Finnish company Specim publishes a nice 2-part video explanation of hyperspectral imaging principles. As in many popular videos, there are some minor mistakes, but the overall work is still quite good:




Go to the original article...

Image Sensing Content at ISSCC 2019

Image Sensors World        Go to the original article...

ISSCC 2019, to be held on February 17-21 in San Francisco, publishes its program with a number of image sensor papers. The image sensor session starts with a SmartSens presentation; SmartSens is probably the first image sensor company from China to present its work at ISSCC:

A Stacked Global-Shutter CMOS Imager with SC-Type Hybrid-GS Pixel and Self-Knee Point Calibration Single-Frame HDR and On-Chip Binarization Algorithm for Smart Vision Applications
C. Xu, Y. Mo, G. Ren, W. Ma, X. Wang, W. Shi, J. Hou, K. Shao, H. Wang, P. Xiao, Z. Shao, X. Xie, X. Wang, C. Yiu
SmartSens Technology

Energy-Efficient Low-Noise CMOS Image Sensor with Capacitor Array-Assisted Charge-Injection SAR ADC for Motion-Triggered Low-Power IoT Applications
K. D. Choo, L. Xu, Y. Kim, J-H. Seol, X. Wu, D. Sylvester, D. Blaauw
University of Michigan, Ann Arbor, MI

A Data-Compressive 1.5b/2.75b Log-Gradient QVGA Image Sensor with Multi-Scale Readout for Always-On Object Detection
C. Young, A. Omid-Zohoor, P. Lajevardi, B. Murmann
Stanford University, Stanford, CA; Robert Bosch, Sunnyvale, CA

A 76mW 500fps VGA CMOS Image Sensor with Time-Stretched Single-Slope ADCs Achieving 1.95e- Random Noise
I. Park, C. Park, J. Cheon, Y. Chae
Yonsei University, Seoul, Korea
Kumoh National Institute of Technology, Gyeongbuk, Korea

Dual-Tap Pipelined-Code-Memory Coded-Exposure-Pixel CMOS Image Sensor for Multi-Exposure Single-Frame Computational Imaging
N. Sarhangnejad, N. Katic, Z. Xia, M. Wei, N. Gusev, G. Dutta, R. Gulve, H. Haim, M. Moreno Garcia, D. Stoppa, K. N. Kutulakos, R. Genov
University of Toronto, Toronto, Canada; Synopsys, Toronto, Canada; Fondazione Bruno Kessler, Trento, Italy; ams AG, Ruschlikon, Switzerland

A 400×400-Pixel 6μm-Pitch Vertical Avalanche Photodiodes CMOS Image Sensor Based on 150ps-Fast Capacitive Relaxation Quenching in Geiger Mode for Synthesis of Arbitrary Gain Images
Y. Hirose, S. Koyama, T. Okino, A. Inoue, S. Saito, Y. Nose, M. Ishii, S. Yamahira, S. Kasuga, M. Mori, T. Kabe, K. Nakanishi, M. Usuda, A. Odagawa, T. Tanaka
Panasonic, Nagaokakyo, Japan

A 256×256 40nm/90nm CMOS 3D-Stacked 120dB-Dynamic-Range Reconfigurable Time-Resolved SPAD Imager
R. K. Henderson, N. Johnston, S. W. Hutchings, I. Gyongy, T. Al Abbas, N. Dutton, M. Tyler, S. Chan, J. Leach
University of Edinburgh, Edinburgh, United Kingdom; STMicroelectronics, Edinburgh, United Kingdom; Heriot-Watt University, Edinburgh, United Kingdom

A 32×32-Pixel 0.9THz Imager with Pixel-Parallel 12b VCO-Based ADC in 0.18μm CMOS
S. Yokoyama, M. Ikebe, Y. Kanazawa, T. Ikegami, P. Ambalathankandy, S. Hiramatsu, E. Sano, Y. Takida, H. Minamide
Hokkaido University, Sapporo, Japan; RIKEN, Sendai, Japan

A 512-Pixel 3kHz-Frame-Rate Dual-Shank Lensless Filterless Single-Photon-Avalanche-Diode CMOS Neural Imaging Probe
C. Lee, A. J. Taal, J. Choi, K. Kim, K. Tien, L. Moreaux, M. L. Roukes, K. L. Shepard
Columbia University, New York, NY; KIST, Seoul, Korea; California Institute of Technology, Pasadena

The Industry Showcase event includes:
  • ams AG, Premstätten, Austria, Direct Time-of-Flight Module in CMOS 55nm HV for Mobile Applications
  • Ouster, San Francisco, CA, Native camera imaging on LiDAR and deep learning enablement
  • Samsung Electronics, Hwaseong, Korea, Motion Artifact Free Dynamic Vision Sensor for Machine Vision

Go to the original article...

Automotive Gesture Recognition Market

Image Sensors World        Go to the original article...

GlobeNewswire: Global Market Insights forecasts that the automotive gesture recognition market will grow at about 44% CAGR from 2018 to 2024, led by the rising trend toward customer comfort and an advanced driving experience. The market is expected to reach $13.6bn by 2024, up from $1bn in 2017:
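As a sanity check (my arithmetic, not the report's): growing from $1bn in 2017 to $13.6bn in 2024, i.e. over seven years, implies a CAGR of (13.6/1)^(1/7) − 1 ≈ 45%, consistent with the quoted ~44% for 2018-2024:

```python
# CAGR = (end/start)**(1/years) - 1; here $1bn (2017) -> $13.6bn (2024).
start, end, years = 1.0, 13.6, 2024 - 2017
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~45.2%
```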

Go to the original article...

Event-Based Sensor Use Case

Image Sensors World        Go to the original article...

Neuro Vision, a spin-off from the Institut de la Vision, Paris, shows a use case for an event-based camera:

Go to the original article...

Aeye LiDAR Shows 1000m Track Detection, Raises $40m

Image Sensors World        Go to the original article...

TechCrunch, Optics.org, VentureBeat: AEye raises $40m in a Series B round led by Taiwania Capital, the investment firm created and backed by Taiwan's National Development Council, with returning investors Kleiner Perkins, Intel Capital, Airbus Ventures and Tychee Partners.

This brings the LiDAR startup’s total funding to about $61m. In the announcement, founder and CEO Luis Dussan said Taiwania’s investment is a strategic one and will give AEye more access to manufacturing, logistics and tech resources in Asia. AEye also plans to launch a new product at CES in January.

In tests monitored and validated by VSI Labs, a research company that focuses on autonomous-vehicle technology, AEye said that its iDAR sensor, which combines a solid-state lidar and a high-resolution camera in one device, was able to detect and track a moving white truck from one kilometer away. AEye claims this is four to five times the distance current lidar systems can detect.

In a press statement, AEye chief of staff Blair LaCorte said the company believes iDAR can potentially track moving objects, including trucks and drones, from 5km to 10km away.

Go to the original article...

Sony Adds Some Data on its DSLR/ILC Sensors

Image Sensors World        Go to the original article...

Sony publishes flyers for 8 new sensors for DSLR/ILC cameras, spanning from the 150MP medium format IMX411 to the 20MP 60fps MFT IMX272, including full-frame and APS-C sensors:

Go to the original article...

IWISS2018 Posters List

Image Sensors World        Go to the original article...

The 4th International Workshop on Image Sensors and Imaging Systems (IWISS2018), to be held on Nov. 28-29 in Tokyo, publishes its list of 22 posters:


Go to the original article...

Dual-Gate Organic Phototransistor for Image Sensing

Image Sensors World        Go to the original article...

Nature publishes a paper "Dual-gate organic phototransistor with high-gain and linear photoresponse" by Philip C. Y. Chow, Naoji Matsuhisa, Peter Zalar, Mari Koizumi, Tomoyuki Yokota, and Takao Someya from Hong Kong University of Science and Technology, Holst Centre (The Netherlands), and University of Tokyo.

"The conversion of light into electrical signal in a photodetector is a crucial process for a wide range of technological applications. Here we report a new device concept of dual-gate phototransistor that combines the operation of photodiodes and phototransistors to simultaneously enable high-gain and linear photoresponse without requiring external circuitry. In an oppositely biased, dual-gate transistor based on a solution-processed organic heterojunction layer, we find that the presence of both n- and p-type channels enables both photogenerated electrons and holes to efficiently separate and transport in the same semiconducting layer. This operation enables effective control of trap carrier density that leads to linear photoresponse with high photoconductive gain and a significant reduction of electrical noise. As we demonstrate using a large-area, 8 × 8 imaging array of dual-gate phototransistors, this device concept is promising for high-performance and scalable photodetectors with tunable dynamic range."

Go to the original article...

Human Eye Resolution in Megapixels

Image Sensors World        Go to the original article...

Quora publishes an answer to a question about human eye resolution, written by Michael Bross, former Psychology Professor at Concordia University, Montreal, alongside 93 other answers. A few interesting quotes:

"...if you look at what is going on in the eye, it looks messy; the 'seeing' is done by the visual cortex.

Note that the light has to pass through several structures before it gets to the retina - cornea, aqueous humor, lens, vitreous humor (humors are a translucent gel/watery-like medium), blood vessels - and then it has to traverse 4 layers of nerve cells before it gets to the light receptors (rods and cones) at the back of the retina.

So plenty of photons get absorbed before reaching the receptors, add to this that quite a few of them will be bouncing around in the eye ball, and it has been estimated that only around 20–25% of light entering the eye reaches the receptors.

So to put that into pixel estimates (I'm relying here on data from Hendrik Lensch at the Max Planck Institut für Informatik), given a 19″ LED display viewed at 60 cm: without hyperacuity the visual cortex would process 3,000x3,000 pixels; with hyperacuity, 18,000x18,000.
"
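Converting the quoted figures to megapixel counts (simple arithmetic, not part of the original answer):

```python
# Pixel counts implied by the quoted estimates.
without_hyperacuity = 3000 * 3000    # i.e. 9 megapixels
with_hyperacuity = 18000 * 18000     # i.e. 324 megapixels
print(without_hyperacuity / 1e6, with_hyperacuity / 1e6)  # 9.0 324.0
```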

Go to the original article...

Black Friday best camera deals 2018

Cameralabs        Go to the original article...

Black Friday is approaching and many stores have already started discounting! I've sifted through the deals to find the best bargains on cameras and photography gear. So if you're shopping for a new camera, lens or accessory, check out my guide to the best deals this Holiday Season. PS - check back for updates!…

The post Black Friday best camera deals 2018 appeared first on Cameralabs.

Go to the original article...

High Photon Throughput SPAD Imager

Image Sensors World        Go to the original article...

The MDPI Special Issue on the International SPAD Sensor Workshop publishes a paper "A CMOS SPAD Imager with Collision Detection and 128 Dynamically Reallocating TDCs for Single-Photon Counting and 3D Time-of-Flight Imaging" by Chao Zhang, Scott Lindner, Ivan Michel Antolovic, Martin Wolf, and Edoardo Charbon from Delft University of Technology, University of Zurich, EPFL, and Kavli Institute of Nanoscience.

"Per-pixel time-to-digital converter (TDC) architectures have been exploited by single-photon avalanche diode (SPAD) sensors to achieve high photon throughput, but at the expense of fill factor, pixel pitch and readout efficiency. In contrast, a TDC-sharing architecture usually features high fill factor at small pixel pitch and energy-efficient event-driven readout, while the photon throughput is not necessarily lower than that of per-pixel TDC architectures, since throughput is decided not only by the TDC number but also by the readout bandwidth. In this paper, a SPAD sensor with 32 × 32 pixels fabricated with a 180 nm CMOS image sensor technology is presented, where dynamically reallocating TDCs were implemented to achieve the same photon throughput as that of per-pixel TDCs. Each 4 TDCs are shared by 32 pixels via a collision detection bus, which enables a fill factor of 28% with a pixel pitch of 28.5 μm. The TDCs were characterized, obtaining the peak-to-peak differential and integral non-linearity of −0.07/+0.08 LSB and −0.38/+0.75 LSB, respectively. The sensor was demonstrated in a scanning light-detection-and-ranging (LiDAR) system equipped with an ultra-low power laser, achieving depth imaging up to 10 m at 6 frames/s with a resolution of 64 × 64 with 50 lux background light."
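To build intuition for why 4 shared TDCs per 32 pixels can suffice (a rough model of mine, not the paper's analysis): if each pixel fires independently with probability p per conversion window, the chance that more than 4 of the 32 pixels fire simultaneously, so that some events find no free TDC, stays small at low activity:

```python
from math import comb

def p_overflow(n_pixels=32, n_tdcs=4, p=0.02):
    """Probability that more pixels fire in a window than there are TDCs,
    assuming independent firing with per-pixel probability p (binomial model)."""
    p_at_most = sum(comb(n_pixels, k) * p**k * (1 - p)**(n_pixels - k)
                    for k in range(n_tdcs + 1))
    return 1 - p_at_most

print(f"{p_overflow():.4f}")  # well under 0.1% at 2% per-pixel activity
```

At high activity (e.g. p = 0.5) the overflow probability approaches 1, which is where collision detection and dynamic reallocation matter.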

Go to the original article...
