International Image Sensors Workshop (IISW) 2023 Program and Pre-Registration Open


The 2023 International Image Sensors Workshop has announced its technical programme and opened pre-registration to attend the workshop.

Technical Programme announced: The workshop programme runs from May 22nd to 25th, with attendees arriving on May 21st. It features 54 regular presentations and 44 posters from presenters in industry and academia, organised into 10 sessions across 4 days in a single-track format. On one afternoon there are social trips to Stirling Castle or the Glenturret Whisky Distillery. Click here to see the technical programme.

Pre-Registration is Open: Pre-registration is now open until Monday 6th February. Click here to pre-register and express your interest in attending.











SD Optics releases MEMS-based system "WiseTopo" for 3D microscopy


SD Optics has released WiseTopo, a MEMS-based microarray lens system that transforms a 2D microscope into a 3D microscope.
 
Attendees at Photonics West can see a demonstration at booth #4128 from Jan 31 to Feb 2, 2023, at the Moscone Center in San Francisco, California.
 


SD Optics introduces WiseTopo, built around its core technology MALS, a MEMS-based microarray lens system. WiseTopo transforms a 2D microscope into a 3D microscope with a simple plug-in installation and fits all microscopes. A conventional system has a limited depth of field, so the user has to adjust focus manually by moving the z-axis, which makes it difficult to identify the exact shape of an object instantly. These manual movements can cause deviations in the observation, missing information, incomplete inspection, and increased user workload.

SD Optics' WiseTopo is an innovative 3D microscope module empowered by the patented core technology MALS. WiseTopo converts a 2D microscope into a 3D microscope by replacing the image sensor; with this simple installation, it resolves the depth-of-field issue without Z-axis movement. MALS is an optical MEMS-based, ultra-fast variable-focus lens that implements curvature changes with the motion of individual micro-mirrors. It refocuses at a speed of 12 kHz without mechanical z-axis movement, and it is a semi-permanent digital lens technology that operates at any temperature and has no life-cycle limit.

WiseTopo provides several features in combination with SD Optics' software that give users a better understanding of an object in real time. The All-in-focus function keeps everything in focus. The Auto-focus function automatically focuses on a region of interest. Focus lock maintains focus; when multiple focus ROIs are set along the z-axis, Multi-focus lock stays in focus even when moving along the X- and Y-axes, and Auto-focus lock retains auto-focus during Z-axis movement. These functions maximize user convenience.

WiseTopo and its 3D images reveal information that is hidden when using a 2D microscope. It obtains in-focus images with fast focus-sweeping technology and instantly computes 3D attributes such as shape matching and point clouds. WiseTopo supports various 3D data formats for analysis; for example, reference 3D data can easily be compared with real-time 3D data. In a microscope, objective lenses with different magnifications are mounted on the turret, and WiseTopo provides all functions even when the magnification is changed. It provides all 3D features in any microscope and can be used with all of them, regardless of brand.

3D images created in WiseTopo can be viewed in AR/VR, letting users observe 3D data in a metaverse space.
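
For readers curious how an all-in-focus image can be assembled from a rapidly swept focal stack like the one WiseTopo captures, below is a minimal illustrative sketch in Python (not SD Optics code; the Laplacian-based sharpness measure and function names are assumptions). It picks, for each pixel, the focal slice with the highest local sharpness, which also yields a coarse depth map.

import numpy as np
from scipy.ndimage import laplace, uniform_filter

def all_in_focus(stack):
    """Fuse a focal stack of shape (N, H, W) into one all-in-focus image.

    Focus measure: locally averaged squared Laplacian, a common sharpness
    heuristic in focus stacking (an assumption here, not SD Optics' method).
    """
    stack = stack.astype(np.float32)
    sharpness = np.stack([uniform_filter(laplace(s) ** 2, size=9) for s in stack])
    best = np.argmax(sharpness, axis=0)                    # sharpest slice index per pixel
    fused = np.take_along_axis(stack, best[None], axis=0)[0]
    return fused, best                                     # 'best' doubles as a coarse depth map

Because the lens sweeps focus at roughly 12 kHz, an entire stack can be collected within a single video frame time, which is what allows all-in-focus images and depth to be produced in real time without moving the z-axis.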
 


Videos of the day [TinyML and WACV]


Event-based sensing and computing for efficient edge artificial intelligence and TinyML applications
Federico CORRADI, Senior Neuromorphic Researcher, IMEC

The advent of neuro-inspired computing represents a paradigm shift for edge Artificial Intelligence (AI) and TinyML applications. Neurocomputing principles enable the development of neuromorphic systems with strict energy and cost reduction constraints for signal processing applications at the edge. In these applications, the system needs to accurately respond to the data sensed in real-time, with low power, directly in the physical world, and without resorting to cloud-based computing resources.
In this talk, I will introduce key concepts underpinning our research: on-demand computing, sparsity, time-series processing, event-based sensory fusion, and learning. I will then showcase some examples of a new sensing and computing hardware generation that employs these neuro-inspired fundamental principles for achieving efficient and accurate TinyML applications. Specifically, I will present novel computer architectures and event-based sensing systems that employ spiking neural networks with specialized analog and digital circuits. These systems use an entirely different model of computation than our standard computers. Instead of relying upon software stored in memory and fast central processing units, they exploit real-time physical interactions among neurons and synapses and communicate using binary pulses (i.e., spikes). Furthermore, unlike software models, our specialized hardware circuits consume low power and naturally perform on-demand computing only when input stimuli are present. These advancements offer a route toward TinyML systems composed of neuromorphic computing devices for real-world applications.
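
As a toy illustration of the event-driven, spiking style of computation described above (my own sketch in Python, not IMEC hardware or code), the leaky integrate-and-fire neuron below only performs work when an input spike arrives and emits a binary spike when its membrane potential crosses a threshold:

import numpy as np

def lif_neuron(events, weights, tau=20e-3, threshold=1.0):
    """Leaky integrate-and-fire neuron driven by input spike events.

    events: list of (time_in_seconds, input_index) tuples
    weights: synaptic weight per input channel
    Returns the list of output spike times.
    """
    v, t_prev, out_spikes = 0.0, 0.0, []
    for t, idx in sorted(events):
        v *= np.exp(-(t - t_prev) / tau)   # membrane leak, evaluated only at event times
        v += weights[idx]                  # integrate the incoming spike
        if v >= threshold:                 # threshold crossing -> emit an output spike
            out_spikes.append(t)
            v = 0.0                        # reset after firing
        t_prev = t
    return out_spikes

# Sparse input: two closely spaced spikes fire the neuron, a lone later spike does not.
print(lif_neuron([(0.010, 0), (0.012, 1), (0.300, 0)], weights=[0.6, 0.6]))  # -> [0.012]

Because the state is updated only when spikes occur, compute and power naturally scale with input activity rather than with a fixed frame or clock rate, which is the "on-demand computing" property highlighted in the talk.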



Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help Through Multi-Task Learning

Authors: Abdullah Abuolaim (York University)*; Mahmoud Afifi (Apple); Michael S Brown (York University) 
 
Many camera sensors use a dual-pixel (DP) design that operates as a rudimentary light field providing two sub-aperture views of a scene in a single capture. The DP sensor was developed to improve how cameras perform autofocus. Since the DP sensor's introduction, researchers have found additional uses for the DP data, such as depth estimation, reflection removal, and defocus deblurring. We are interested in the latter task of defocus deblurring. In particular, we propose a single-image deblurring network that incorporates the two sub-aperture views into a multi-task framework. Specifically, we show that jointly learning to predict the two DP views from a single blurry input image improves the network's ability to learn to deblur the image. Our experiments show this multi-task strategy achieves +1dB PSNR improvement over state-of-the-art defocus deblurring methods. In addition, our multi-task framework allows accurate DP-view synthesis (e.g., ~39dB PSNR) from the single input image. These high-quality DP views can be used for other DP-based applications, such as reflection removal. As part of this effort, we have captured a new dataset of 7,059 high-quality images to support our training for the DP-view synthesis task.
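
A rough sketch of the multi-task idea in PyTorch (my paraphrase of the abstract, not the authors' released code; the tiny encoder and L1 losses are placeholders): a shared encoder feeds three heads, one predicting the sharp image and two predicting the left and right dual-pixel views, so DP-view synthesis acts as an auxiliary task for deblurring.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DPMultiTaskNet(nn.Module):
    """Toy stand-in for a multi-task single-image defocus deblurring network."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.head_sharp = nn.Conv2d(ch, 3, 3, padding=1)   # deblurred image
        self.head_left = nn.Conv2d(ch, 3, 3, padding=1)    # synthesized left DP view
        self.head_right = nn.Conv2d(ch, 3, 3, padding=1)   # synthesized right DP view

    def forward(self, blurry):
        feat = self.encoder(blurry)
        return self.head_sharp(feat), self.head_left(feat), self.head_right(feat)

def multitask_loss(pred, gt_sharp, gt_left, gt_right, aux_weight=0.5):
    sharp, left, right = pred
    # Main deblurring term plus auxiliary DP-view synthesis terms
    return (F.l1_loss(sharp, gt_sharp)
            + aux_weight * (F.l1_loss(left, gt_left) + F.l1_loss(right, gt_right)))

Per the abstract, jointly learning the DP-view prediction is what yields the roughly +1 dB PSNR deblurring gain over single-task baselines, and the synthesized views themselves are accurate enough (~39 dB PSNR) to feed other DP-based applications.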





2023 International Solid-State Circuits Conference (ISSCC) Feb 19-23, 2023


ISSCC will be held as an in-person conference Feb 19-23, 2023 in San Francisco. 

An overview of the program is available here: https://www.isscc.org/program-overview

Some sessions of interest to image sensors audience below:


Tutorial on  "Solid-State CMOS LiDAR Sensors" (Feb 19)
Seong-Jin Kim, Ulsan National Institute of Science and Technology, Ulsan, Korea

This tutorial will present the technologies behind single-photon avalanche-diode (SPAD)-based solid-state CMOS LiDAR sensors, which have emerged to enable Level-5 autonomous vehicles and metaverse AR/VR in mobile devices. It will begin with the fundamentals of direct and indirect time-of-flight (ToF) techniques, followed by the structures and operating principles of three key building blocks: SPAD devices, time-to-digital converters (TDCs), and signal-processing units for histogram derivation. The tutorial will conclude with recent developments in on-chip histogramming TDCs, with some state-of-the-art examples.
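
To make the direct-ToF pipeline concrete, here is a minimal Python sketch (illustrative only, not from the tutorial; the 100 ps TDC bin and photon counts are assumed values) showing how a histogramming receiver turns SPAD photon timestamps into a range estimate: bin the arrival times at the TDC resolution, locate the histogram peak above the ambient background, and convert the round-trip time to distance.

import numpy as np

C = 3.0e8              # speed of light, m/s
TDC_BIN = 100e-12      # assumed TDC resolution: 100 ps per bin
N_BINS = 1024

def depth_from_timestamps(timestamps_s):
    """Direct ToF: histogram SPAD photon arrival times and read range off the peak."""
    bins = np.clip((np.asarray(timestamps_s) / TDC_BIN).astype(int), 0, N_BINS - 1)
    hist = np.bincount(bins, minlength=N_BINS)   # what an on-chip histogramming TDC accumulates
    peak = np.argmax(hist)                       # coarse peak detection (signal return)
    tof = (peak + 0.5) * TDC_BIN                 # round-trip time of flight
    return 0.5 * C * tof                         # one-way range in metres

# Example: 200 signal photons around 66.7 ns (a ~10 m target) buried in 500 ambient photons
rng = np.random.default_rng(0)
signal = rng.normal(66.7e-9, 0.2e-9, 200)
ambient = rng.uniform(0, N_BINS * TDC_BIN, 500)
print(round(depth_from_timestamps(np.concatenate([signal, ambient])), 2), "m")

On-chip histogramming TDCs, the topic of the tutorial's final part, essentially perform this accumulation in hardware over many laser cycles, so only the compressed histogram rather than every photon timestamp has to leave the sensor.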

Seong-Jin Kim received a Ph.D. degree from KAIST, Daejeon, South Korea, in 2008 and joined the Samsung Advanced Institute of Technology to develop 3D imagers. From 2012 to 2015, he was with the Institute of Microelectronics, A*STAR, Singapore, where he was involved in designing various sensing systems. He is currently an associate professor at Ulsan National Institute of Science and Technology, Ulsan, South Korea, and a co-founder of SolidVUE, a LiDAR startup company in South Korea. His current research interests include high-performance imaging devices, LiDAR systems, and biomedical interface circuits and systems.



2023 International Image Sensors Workshop – Call for Papers


The 2023 International Image Sensors Workshop (IISW) will be held in Scotland from 22-25 May 2023. The first call for papers is now available at this link: 2023 IISW CFP.



FIRST CALL FOR PAPERS

ABSTRACTS DUE DEC 9, 2022
 

2023 International Image Sensor Workshop

Crieff Hydro Hotel, Scotland, UK

22-25 May, 2023


The 2023 International Image Sensor Workshop (IISW) provides a biennial opportunity to present innovative work in the area of solid-state image sensors and share new results with the image sensor community. Now in its 35th year, the workshop will return to an in-person format. The event is intended for image sensor technologists; in order to encourage attendee interaction and a shared experience, attendance is limited, with strong acceptance preference given to workshop presenters. As is the tradition, the 2023 workshop will emphasize an open exchange of information among participants in an informal, secluded setting beside the Scottish town of Crieff. The scope of the workshop includes all aspects of electronic image sensor design and development. In addition to regular oral and poster papers, the workshop will include invited talks and announcement of International Image Sensors Society (IISS) Award winners.

Papers on the following topics are solicited:

Image Sensor Design and Performance
CMOS imagers, CCD imagers, SPAD sensors
New and disruptive architectures
Global shutter image sensors
Low noise readout circuitry, ADC designs
Single photon sensitivity sensors
High frame rate image sensors
High dynamic range sensors
Low voltage and low power imagers
High image quality; Low noise; High sensitivity
Improved color reproduction
Non-standard color patterns with special digital processing
Imaging system-on-a-chip, On-chip image processing

Pixels and Image Sensor Device Physics
New devices and pixel structures
Advanced materials
Ultra miniaturized pixels development, testing, and characterization
New device physics and phenomena
Electron multiplication pixels and imagers
Techniques for increasing QE, well capacity, reducing crosstalk, and improving angular response
Front side illuminated, back side illuminated, and stacked pixels and pixel arrays
Pixel simulation: Optical and electrical simulation, 2D and 3D, CAD for design and simulation, improved models

Application Specific Imagers
Image sensors and pixels for range sensing: LIDAR, TOF, RGBZ, Structured light, Stereo imaging, etc.
Image sensors with enhanced spectral sensitivity (NIR, UV, IR)
Sensors for DSC, DSLR, mobile, digital video cameras and mirror-less cameras
Array imagers and sensors for multi-aperture imaging, computational imaging, and machine learning
Sensors for medical applications, microbiology, genome sequencing
High energy photon and particle sensors (X-ray, radiation)
Line arrays, TDI, Very large format imagers
Multi and hyperspectral imagers
Polarization sensitive imagers

Image sensor manufacturing and testing
New manufacturing techniques
Backside thinning
New characterization methods
Defects & leakage current

On-chip optics and imaging process technology
Advanced optical path, Color filters, Microlens, Light guides
Nanotechnologies for Imaging
Wafer level cameras
Packaging and testing: Reliability, Yield, Cost
Stacked imagers, 3D integration
Radiation damage and radiation hard imager



ORGANIZING COMMITTEE

General Workshop Co-Chairs
Robert Henderson – The University of Edinburgh
Guy Meynants – Photolitics and KU Leuven

Technical Program Chair
Neale Dutton – ST Microelectronics

Technical Program Committee
Jan Bogaerts - GPixel, Belgium
Calvin Yi-Ping Chao - TSMC, Taiwan
Edoardo Charbon - EPFL, Switzerland
Bart Dierickx - Caeleste, Belgium
Amos Fenigstein - TowerJazz, Israel
Manylun Ha -  DB Hitek, South Korea
Vladimir Korobov - ON Semiconductor, USA
Bumsuk Kim - Samsung, South Korea
Alex Krymski - Alexima, USA
Jiaju Ma - Gigajot, USA
Pierre Magnan - ISAE, France
Robert Daniel McGrath - Goodix Technology, US 
Preethi Padmanabhan - AMS-Osram, Austria
Francois Roy - STMicroelectronics, France
Andreas Suess - Omnivision Technologies, USA

IISS Board of Directors
Boyd Fowler – OmniVision
Michael Guidash – R.M. Guidash Consulting
Robert Henderson – The University of Edinburgh
Shoji Kawahito – Shizuoka University and Brookman Technology
Vladimir Koifman – Analog Value
Rihito Kuroda – Tohoku University
Guy Meynants – Photolitics
Junichi Nakamura – Brillnics
Yusuke Oike – Sony (Japan)
Johannes Solhusvik – Sony (Norway)
Daniel Van Blerkom – Forza Silicon-Ametek
Yibing Michelle Wang – Samsung Semiconductor

IISS Governance Advisory Committee:
Eric Fossum - Thayer School of Engineering at Dartmouth, USA
Nobukazu Teranishi - University of Hyogo, Japan
Albert Theuwissen - Harvest Imaging, Belgium / Delft University of Technology, The Netherlands


CFP: International Workshop on Image Sensors and Imaging Systems 2022


The 5th International Workshop on Image Sensors and Imaging Systems (IWISS2022) will be held in December 2022 in Japan. This workshop is co-sponsored by IISS.


-Frontiers in image sensors based on conceptual breakthroughs inspired by applications-

Date: December 12 (Mon) and 13 (Tue), 2022

Venue: Sanaru Hall, Hamamatsu Campus, Shizuoka University 

Access: see https://www.eng.shizuoka.ac.jp/en_other/access/

Address: 3-5-1 Johoku, Naka-ku, Hamamatsu, 432-8561 JAPAN

Official language: English


Overview

In this workshop, people from various research fields, such as image sensing, imaging systems, optics, photonics, computer vision, and computational photography/imaging, come together to discuss the future and frontiers of image sensor technologies, exploring the continuous progress and diversity in image sensor engineering as well as state-of-the-art and emerging imaging-system technologies. The workshop is composed of invited talks and a poster session.

We are accepting approximately 20 poster papers; submission starts in August, with a deadline of October 14 (Fri), 2022. A Poster Presentation Award will be given to a selected excellent paper. We encourage everyone to submit their latest original work.

Every participant is required to register online by December 5 (Mon), 2022. On-site registration is NOT accepted. Since the workshop is operated by a limited number of volunteers, we can offer only minimal service; therefore, no invitation letters for visa applications to enter Japan can be issued.

Latest Information: Call for Paper, Advance Program
http://www.i-photonics.jp/meetings.html#20221212IWISS

Poster Session
Submit a paper: https://www.ite.or.jp/ken/form/index.php?tgs_regid=faf9bc5bde5e430962d98b110ccac65c5ddc6ca5718edb7c80089461c48b9cfa&tgid=ITE-IST&lang=eng&now=20220719133618
Submission deadline: Oct. 14(Fri), 2022 (Only title, authors, and short abstract are required)
Please use the above English page. DO NOT follow the Japanese instructions at the bottom of the page.
Notification of acceptance: by Oct. 21 (Fri)

Manuscript submission deadline: Nov. 21 (Mon), 2022 (2-page English proceeding is required)
One excellent poster will be awarded.

Plenary and Invited Speakers

[Plenary] 

“Deep sensing – Jointly optimize imaging and processing” by
Hajime Nagahara (Osaka University, Japan)


[Invited Talks]
- Image Sensors
“InGaAs/InP and Ge-on-Si SPADs for SWIR applications” by Alberto Tosi (Politecnico di Milano, Italy)
“CMOS SPAD-Based LiDAR Sensors with Zoom Histogramming TDC Architectures” by Seong-Jin Kim et al. (UNIST, Korea)
"TBD" by Min-Sun Keel (Samsung Electronics, Korea)
“Modeling and verification of a photon-counting LiDAR” by Sheng-Di Lin (National Yang Ming Chiao Tung Univ., Taiwan)
- Computational Photography/Imaging and Applications
“Computational lensless imaging by coded optics” by Tomoya Nakamura (Osaka Univ., Japan)
“TBD” by Miguel H. Conde (Siegen Univ.)
“TBD” by TBD (Toronto Univ.)
 

- Optics and Photonics
“Optical system integrated time-of-flight and optical coherence tomography for high-dynamic range distance measurement” by Yoshio Hayasaki et al. (Utsunomiya Univ., Japan)
“High-speed/ultrafast holographic imaging using an image sensor” by Yasuhiro Awatsuji et al. (Kyoto Institute of Technology, Japan)
“Near-infrared sensitivity improvement by plasmonic diffraction technology” by Nobukazu Teranishi et al. (Shizuoka Univ, Japan)


Scope
- Image sensor technologies: fabrication process, circuitry, architectures
- Imaging systems and image sensor applications
- Optics and photonics: nanophotonics, plasmonics, microscopy, spectroscopy
- Computational photography/ imaging
- Applications and related topics on image sensors and imaging systems: e.g., multi-spectral imaging, ultrafast imaging, biomedical imaging, IoT, VR/AR, deep learning, ...

Online Registration for Audience
Registration is necessary due to the limited number of available seats.
Registration deadline is Dec. 5 (Mon).
Register and pay online from the following website: <to appear>

Registration Fee
Regular and student: approximately 2,000 yen (~15 USD)
Note: This fee covers the online proceedings of IWISS2022 through the ITE. If you cannot join the workshop for any reason, no refund will be provided.

Collaboration with MDPI Sensors Special Issue
Special Issue on "Recent Advances in CMOS Image Sensor"
Special issue editor: Dr. De Xing Lioe
Paper submission deadline: Feb. 25 (Sat), 2023
https://www.mdpi.com/journal/sensors/special_issues/CMOS_image_sensor
The poster presenters are encouraged to submit a paper to this special issue!
Note-1: Those who do not give a presentation in the IWISS2022 poster session are also welcome to submit a paper!
Note-2: Sensors is an open-access journal; article processing charges (APC) will be applied to accepted papers.
Note-3: For poster presenters of IWISS2022, please satisfy the following conditions.

Extended papers submitted to the special issue should contain more than 50% new data and/or extended content so that they constitute complete journal papers. It is preferable that the title and abstract differ from those of the conference paper so that the two can be distinguished in databases. Authors are asked to disclose in their cover letter that the work is based on a conference paper and to include a statement of what has changed compared to the original conference paper.
 


Sponsored by the Technical Group on Information Sensing Technologies (IST), the Institute of Image Information and Television Engineers (ITE)
Co-sponsored by the International Image Sensor Society (IISS), the Group of Information Photonics (IPG) + CMOS Working Group, the Optical Society of Japan, and the innovative Photonics Evolution Research Center (iPERC)
[General Chair] Keiichiro Kagawa (Shizuoka Univ., Japan)
[Technical Program Committee (alphabetical order)]
Chih-Cheng Hsieh (National Tsing Hua Univ., Taiwan)
Keiichiro Kagawa (Shizuoka Univ., Japan)
Takashi Komuro (Saitama Univ., Japan)
De Xing Lioe (Shizuoka Univ., Japan)
Hajime Nagahara (Osaka Univ., Japan)
Atsushi Ono (Shizuoka Univ., Japan)
Min-Woong Seo (Samsung Electronics, Korea)
Hiroyuki Suzuki (Gunma Univ., Japan)
Hisayuki Taruki (Toshiba Electronic Devices & Storage Corporation, Japan)
Franco Zappa (Politecnico di Milano, Italy)

Contact for any question about IWISS2022
E-mail: iwiss2022@idl.rie.shizuoka.ac.jp
(Keiichiro Kagawa, Shizuoka Univ., Japan)


IEEE International Conference on Computational Photography 2022 in Pasadena (Aug 1-3)



[Jul 16, 2022] Update from program chair Prof. Ioannis Gkioulekas: All paper presentations will be live-streamed on the ICCP YouTube channel: https://www.youtube.com/channel/UClptqae8N3up_bdSMzlY7eA

You can watch them for free, no registration required. You can also use the live stream to ask the presenting author questions.

ICCP will take place in person at Caltech (Pasadena, CA) from August 1 to 3, 2022. The final program is now available here: https://iccp2022.iccp-conference.org/program/

There will be an exciting line up of:
  • three keynote speakers, Shree Nayar, Changhuei Yang, Joyce Farrell;
  • ten invited speakers, spanning areas from acousto-optics and optical computing, to space exploration and environment conservation; and 
  • 24 paper presentations and more than 80 poster and demo presentations.


List of accepted papers with oral presentations:

#16: Learning Spatially Varying Pixel Exposures for Motion Deblurring
Cindy Nguyen (Stanford University); Julien N. P. Martel (Stanford University); Gordon Wetzstein (Stanford University)

#43: MantissaCam: Learning Snapshot High-dynamic-range Imaging with Perceptually-based In-pixel Irradiance Encoding
Haley M So (Stanford University); Julien N. P. Martel (Stanford University); Piotr Dudek (School of Electrical and Electronic Engineering, The University of Manchester, UK); Gordon Wetzstein (Stanford University)

#47: Rethinking Learning-based Demosaicing, Denoising, and Super-Resolution Pipeline
Guocheng Qian (KAUST); Yuanhao Wang (KAUST); Jinjin Gu (The University of Sydney); Chao Dong (SIAT); Wolfgang Heidrich (KAUST); Bernard Ghanem (KAUST); Jimmy Ren (SenseTime Research; Qing Yuan Research Institute, Shanghai Jiao Tong University)

#54: Physics vs. Learned Priors: Rethinking Camera and Algorithm Design for Task-Specific Imaging
Tzofi M Klinghoffer (Massachusetts Institute of Technology); Siddharth Somasundaram (Massachusetts Institute of Technology); Kushagra Tiwary (Massachusetts Institute of Technology); Ramesh Raskar (Massachusetts Institute of Technology)

#6: Analyzing phase masks for wide etendue holographic displays
Sagi Monin (Technion – Israel Institute of Technology); Aswin Sankaranarayanan (Carnegie Mellon University); Anat Levin (Technion)

#7: Wide etendue displays with a logarithmic tilting cascade
Sagi Monin (Technion – Israel Institute of Technology); Aswin Sankaranarayanan (Carnegie Mellon University); Anat Levin (Technion)

#65: Towards Mixed-State Coded Diffraction Imaging
Benjamin Attal (Carnegie Mellon University); Matthew O’Toole (Carnegie Mellon University)

#19: A Two-Level Auto-Encoder for Distributed Stereo Coding
Yuval Harel (Tel Aviv University); Shai Avidan (Tel Aviv University)

#35: First Arrival Differential LiDAR
Tianyi Zhang (Rice University); Akshat Dave (Rice University); Ashok Veeraraghavan (Rice University); Mel J White (Cornell); Shahaboddin Ghajari (Cornell University); Alyosha C Molnar (Cornell University); Ankit Raghuram (Rice University)

#46: PS2F: Polarized Spiral PSF for single-shot 3D sensing
Bhargav Ghanekar (Rice University); Vishwanath Saragadam (Rice University); Dushyant Mehra (Rice University); Anna-Karin Gustavsson (Rice University); Aswin Sankaranarayanan (Carnegie Mellon University); Ashok Veeraraghavan (Rice University)

#56: Double Your Corners, Double Your Fun: The Doorway Camera
William Krska (Boston University); Sheila Seidel (Boston University); Charles Saunders (Boston University); Robinson Czajkowski (University of South Florida); Christopher Yu (Charles Stark Draper Laboratory); John Murray-Bruce (University of South Florida); Vivek K Goyal (Boston University)

#8: Variable Imaging Projection Cloud Scattering Tomography
Roi Ronen (Technion); Schechner Yoav (Technion); Vadim Holodovsky (Technion)

#31: DIY hyperspectral imaging via polarization-induced spectral filters
Katherine Salesin (Dartmouth College); Dario R Seyb (Dartmouth College); Sarah Friday (Dartmouth College); Wojciech Jarosz (Dartmouth College)

#57: Wide-Angle Light Fields
Michael De Zeeuw (Carnegie Mellon University); Aswin Sankaranarayanan (Carnegie Mellon University)

#55: Computational Imaging using Ultrasonically-Sculpted Virtual Lenses
Hossein Baktash (Carnegie Mellon University); Yash Belhe (University of California, San Diego); Matteo Scopelliti (Carnegie Mellon University); Yi Hua (Carnegie Mellon University); Aswin Sankaranarayanan (Carnegie Mellon University); Maysamreza Chamanzar (Carnegie Mellon University)

#38: Dynamic structured illumination microscopy with a neural space-time model
Ruiming Cao (UC Berkeley); Fanglin Linda Liu (UC Berkeley); Li-Hao Yeh (Chan Zuckerberg Biohub); Laura Waller (UC Berkeley)

#39: Tensorial tomographic differential phase-contrast microscopy
Shiqi Xu (Duke University); Xiang Dai (University of California San Diego); Xi Yang (Duke University); Kevin Zhou (Duke University); Kanghyun Kim (Duke University); Vinayak Pathak (Duke University); Carolyn Glass (Duke University); Roarke Horstmeyer (Duke University)

#42: Style Transfer with Bio-realistic Appearance Manipulation for Skin-tone Inclusive rPPG
Yunhao Ba (UCLA); Zhen Wang (UCLA); Doruk Karinca (University of California, Los Angeles); Oyku Deniz Bozkurt (UCLA); Achuta Kadambi (UCLA)

#4: Robust Scene Inference under Dual Image Corruptions
Bhavya Goyal (University of Wisconsin-Madison); Jean-Francois Lalonde (Université Laval); Yin Li (University of Wisconsin-Madison); Mohit Gupta (University of Wisconsin-Madison)

#9: Time-of-Day Neural Style Transfer for Architectural Photographs
Yingshu Chen ( The Hong Kong University of Science and Technology); Tuan-Anh Vu (The Hong Kong University of Science and Technology); Ka-Chun Shum (The Hong Kong University of Science and Technology); Binh-Son Hua (VinAI Research); Sai-Kit Yeung (Hong Kong University of Science and Technology)

#25: MPS-NeRF: Generalizable 3D Human Rendering from Multiview Images
Xiangjun Gao (Beijing institute of technology); Jiaolong Yang (Microsoft Research); Jongyoo Kim (Microsoft Research Asia); Sida Peng (Zhejiang University); Zicheng Liu (Microsoft); Xin Tong (Microsoft)

#26: Differentiable Appearance Acquisition from a Flash/No-flash RGB-D Pair
Hyun Jin Ku (KAIST); Hyunho Ha (KAIST); Joo-Ho Lee (Sogang University); Dahyun Kang (KAIST); James Tompkin (Brown University); Min H. Kim (KAIST)

#17: HiddenPose: Non-line-of-sight 3D Human Pose Estimation
Ping Liu (ShanghaiTech University); Yanhua Yu (ShanghaiTech University); Zhengqing Pan (ShanghaiTech University); Xingyue Peng (ShanghaiTech University); Ruiqian Li (ShanghaiTech University); wang yh (ShanghaiTech University ); Shiying Li (ShanghaiTech University); Jingyi Yu (Shanghai Tech University)

#61: Physics to the Rescue: A Physically Inspired Deep Model for Rapid Non-line-of-sight Imaging
Fangzhou Mu (University of Wisconsin-Madison); SICHENG MO (University of Wisconsin-Madison); Jiayong Peng (University of Science and Technology of China); Xiaochun Liu (University of Wisconsin-Madison); Ji Hyun Nam (University of Wisconsin-Madison); Siddeshwar Raghavan (Purdue University); Andreas Velten (University of Wisconsin-Madison); Yin Li (University of Wisconsin-Madison)


Embedded Vision Summit 2022


The Edge AI and Vision Alliance, a 118-company worldwide industry partnership, is organizing the 2022 Embedded Vision Summit, May 16-19 at the Santa Clara Convention Center, Santa Clara, California.

The premier conference and tradeshow for practical, deployable computer vision and edge AI, the Summit focuses on empowering product creators to bring perceptual intelligence to products. This year’s Summit will attract more than 1,000 innovators and feature 90+ expert speakers and 60+ exhibitors across four days of presentations, exhibits and deep-dive sessions. Registration is now open.

Highlights of this year’s program include:
  • Keynote speaker Prof. Ryad Benosman of University of Pittsburgh and the CMU Robotics Institute will speak on “Event-based Neuromorphic Perception and Computation: The Future of Sensing and AI”
  • General session speakers include:
  • Zach Shelby, co-founder and CEO of Edge Impulse, speaking on “How Do We Enable Edge ML Everywhere? Data, Reliability, and Silicon Flexibility”
  • Ziad Asghar, Vice President of Product Management at Qualcomm, speaking on “Powering the Connected Intelligent Edge and the Future of On-Device AI”
  • 90+ sessions across four tracks—Fundamentals, Technical Insights, Business Insights, and Enabling Technologies
  • 60+ exhibitors including Premier Sponsors Edge Impulse and Qualcomm, Platinum Sponsors FlexLogix and Intel, and Gold Sponsors Arm, Arrow, Avnet, BDTi, City of Oulu, Cadence, Hailo, Lattice, Luxonis, Network Optics, Nota, Perceive, STMicroelectronics, Synaptics and AMD Xilinx
  • Deep Dive Sessions — offering opportunities to explore cutting-edge topics in-depth — presented by Edge Impulse, Qualcomm, Intel, and Synopsys

“We are delighted to return to being in-person for the Embedded Vision Summit after two years of online Summits,” said Jeff Bier, founder of the Edge AI and Vision Alliance. “Innovation in visual and edge AI continues at an astonishing pace, so it’s more important than ever to be able to see, in one place, the myriad of practical applications, use cases and building-block technologies. Attendees with diverse technical and business backgrounds tell us this is the one event where they get a complete picture and can rapidly sort out the hype from what’s working. A whopping 98% of attendees would recommend attending to a colleague.”
Registration is now open at https://embeddedvisionsummit.com.

The Embedded Vision Summit is operated by the Edge AI and Vision Alliance, a worldwide industry partnership bringing together technology providers and end-product companies to accelerate the adoption of edge AI and vision in products. More at https://edge-ai-vision.com.


EETimes Article

EETimes has published a "teaser" article written by the general chair of this year's summit.

Half a billion years ago something remarkable occurred: an astonishing, sudden increase in new species of organisms. Paleontologists call it the Cambrian Explosion, and many of the animals on the planet today trace their lineage back to this event.

A similar thing is happening in processors for embedded vision and artificial intelligence (AI) today, and nowhere will that be more evident than at the Embedded Vision Summit, which will be an in-person event held in Santa Clara, California, from May 16-19. The Summit focuses on practical know-how for product creators incorporating AI and vision in their products. These products demand AI processors that balance conflicting needs for high performance, low power, and cost sensitivity. The staggering number of embedded AI chips that will be on display at the Summit underscores the industry’s response to this demand.

While the sheer number of processors targeting computer vision and ML is overwhelming, there are some natural groupings that make the field easier to comprehend. Here are some themes we’re seeing.
Founded in 2011, the Edge AI and Vision Alliance is a worldwide industry partnership that brings together technology providers who are enabling innovative and practical applications for edge AI and computer vision. Its 100+ Member companies include suppliers of processors, sensors, software and services.

First, some processor suppliers are thinking about how to best serve applications that simultaneously apply machine learning (ML) to data from diverse sensor types — for example, audio and video. Synaptics’ Katana low-power processor, for example, fuses inputs from a variety of sensors, including vision, sound, and environmental. Xperi’s talk on smart toys for the future touches on this, as well.

Second, a subset of processor suppliers are focused on driving power and cost down to a minimum. This is interesting because it enables new applications. For example, Cadence will be presenting on additions to their Tensilica processor portfolio that enable always-on AI applications. Arm will be presenting low-power vision and ML use cases based on their Cortex-M series of processors. And Qualcomm will be covering tools for creating low-power computer vision apps on their Snapdragon family.

Third, although many processor suppliers are focused mainly or exclusively on ML, a few are addressing other kinds of algorithms typically used in conjunction with deep neural networks, such as classical computer vision and image processing.  A great example is quadric, whose new q16 processor is claimed to excel at a wide range of algorithms, including both ML and conventional computer vision.

Finally, an entirely new species seems to be coming to the fore: neuromorphic processors. Neuromorphic computing refers to approaches that mimic the way the brain processes information. For example, biological vision systems process events in the field of view, as opposed to classical computer vision approaches that typically capture and process all the pixels in a scene at a fixed frame rate that has no relation to the source of the visual information. The Summit’s keynote talk, “Event-based Neuromorphic Perception and Computation: The Future of Sensing and AI” by Prof. Ryad Benosman, will give an overview of the advantages to be gained by neuromorphic approaches. Opteran will be presenting on their neuromorphic processing approach to enable vastly improved vision and autonomy, the design of which was inspired by insect brains.

Whatever your application is, and whatever your requirements are, somewhere out there is an embedded AI or vision processor that’s the best fit for you. At the Summit, you’ll be able to learn about many of them, and speak with the innovative companies developing them. Come check them out, and be sure to check back in 10 years, when we will see how many of 2032’s AI processors trace their lineage to this modern-day Cambrian Explosion!

—Jeff Bier is the president of consulting firm BDTI, founder of the Edge AI and Vision Alliance, and the general chair of the Embedded Vision Summit.

About the Edge AI and Vision Alliance

The mission of the Alliance is to accelerate the adoption of edge AI and vision technology by:
  • Inspiring and empowering product creators to incorporate AI and vision technology into new products and applications
  • Helping Member companies achieve success with edge AI and vision technology by:
      • Building a vibrant AI and vision ecosystem by bringing together suppliers, end-product designers, and partners
      • Delivering timely insights into AI and vision market research, technology trends, standards and application requirements
      • Assisting in understanding and overcoming the challenges of incorporating AI in their products and businesses


Videos du jour – CICC, PhotonicsNXT and EPIC


IEEE CICC 2022 best paper candidates present their work

Solid-State dToF LiDAR System Using an Eight-Channel Addressable, 20W/Ch Transmitter, and a 128x128 SPAD Receiver with SNR-Based Pixel Binning and Resolution Upscaling
Shenglong Zhuo, Lei Zhao, Tao Xia, Lei Wang, Shi Shi, Yifan Wu, Chang Liu, et al.
Fudan University, PhotonIC Technologies, Southern Univ. of S&T

A 93.7%-Efficiency 5-Ratio Switched-Photovoltaic DC-DC Converter
Sandeep Reddy Kukunuru, Yashar Naeimi, Loai Salem
University of California, Santa Barbara

A 23-37GHz Autonomous Two-Dimensional MIMO Receiver Array with Rapid Full-FoV Spatial Filtering for Unknown Interference Suppression
Boce Lin, Tzu-Yuan Huang, Amr Ahmed, Min-Yu Huang, Hua Wang
Georgia Institute of Technology


PhotonicsNXT Fall Summit keynote discusses automotive lidar

This keynote session by Pierrick Boulay of Yole Developpement at the PhotonicsNXT Fall Summit held on October 28, 2021 provides an overview of the lidar ecosystem and shows how lidar is being used within the auto industry for ranging and imaging.




EPIC Online Technology Meeting on Single Photon Sources and Detectors

The power hidden in one single photon is unprecedented. But we need to find ways to harness that power. This meeting will discuss cutting-edge technologies paving the way for versatile and efficient pure single-photon sources and detection schemes with low dark count rates, high saturation levels, and high detection efficiencies. This meeting will gather the key players in the photonic industry pushing the development of these technologies towards commercializing products that harness the intrinsic properties of photons.




[Updated] 2022 International SPAD Sensor Workshop Final Program Available


About ISSW 2022

Devices | Architectures | Applications

The International SPAD Sensor Workshop focuses on the study, modeling, design, fabrication, and characterization of SPAD sensors. The workshop welcomes all researchers, practitioners, and educators interested in SPADs, SPAD imagers, and associated applications, not only in imaging but also in other fields.

The third edition of the workshop will gather experts in all areas of SPADs and SPAD-related applications using internet virtual conference technology. The program is under development; expect three full days with over 40 speakers from all over the world. This edition is sponsored by ams OSRAM.

Workshop website: https://issw2022.at/

Final program: https://issw2022.at/wp-content/uploads/2022/03/amsOSRAM_ISSW22_Program_3003.pdf












Low Power Edge-AI Vision Sensor


Another interesting article from the upcoming tinyML conference. This one is titled "P2M: A Processing-in-Pixel-in-Memory Paradigm for Resource-Constrained TinyML Applications" and is work done by a team from the University of Southern California.

The demand to process vast amounts of data generated from state-of-the-art high resolution cameras has motivated novel energy-efficient on-device AI solutions. Visual data in such cameras are usually captured in the form of analog voltages by a sensor pixel array, and then converted to the digital domain for subsequent AI processing using analog-to-digital converters (ADC). Recent research has tried to take advantage of massively parallel low-power analog/digital computing in the form of near- and in-sensor processing, in which the AI computation is performed partly in the periphery of the pixel array and partly in a separate on-board CPU/accelerator. Unfortunately, high-resolution input images still need to be streamed between the camera and the AI processing unit, frame by frame, causing energy, bandwidth, and security bottlenecks. To mitigate this problem, we propose a novel Processing-in-Pixel-in-memory (P2M) paradigm, that customizes the pixel array by adding support for analog multi-channel, multi-bit convolution and ReLU (Rectified Linear Units). Our solution includes a holistic algorithm-circuit co-design approach and the resulting P2M paradigm can be used as a drop-in replacement for embedding memory-intensive first few layers of convolutional neural network (CNN) models within foundry-manufacturable CMOS image sensor platforms. Our experimental results indicate that P2M reduces data transfer bandwidth from sensors and analog to digital conversions by ~21x, and the energy-delay product (EDP) incurred in processing a MobileNetV2 model on a TinyML use case for visual wake words dataset (VWW) by up to ~11x compared to standard near-processing or in-sensor implementations, without any significant drop in test accuracy.
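
As a back-of-the-envelope illustration of why computing the first convolutional layers and ReLU inside the pixel array reduces the data that must leave the sensor (my sketch in Python; the frame size, downsampling factor, channel count, and activation bit width are placeholders, not the paper's exact MobileNetV2 configuration):

H, W, BITS_PIXEL = 560, 560, 8            # assumed raw frame size and ADC bit depth
DOWNSAMPLE, C_OUT, BITS_ACT = 8, 8, 4     # assumed net spatial downsampling, output channels,
                                          # and activation bit width of the in-pixel layers

# Classic pipeline: the full digitized frame is streamed off the sensor every frame.
raw_bits = H * W * BITS_PIXEL

# P2M-style pipeline: only the early-layer activations leave the pixel array.
act_bits = (H // DOWNSAMPLE) * (W // DOWNSAMPLE) * C_OUT * BITS_ACT

print(f"raw frame: {raw_bits / 8 / 1024:.0f} KiB, "
      f"activations: {act_bits / 8 / 1024:.0f} KiB, "
      f"bandwidth reduction: {raw_bits / act_bits:.1f}x")

The ~21x reduction reported in the paper follows from its specific layer configuration and quantization; the sketch simply shows that the saving scales with spatial downsampling and activation bit width.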






arXiv preprint: https://arxiv.org/pdf/2203.04737.pdf

tinyML conference information: https://www.tinyml.org/event/summit-2022/


Ultra-Low Power Camera for Intrusion Monitoring


An interesting paper titled "Millimeter-Scale Ultra-Low-Power Imaging System for Intelligent Edge Monitoring"  will be presented at the upcoming tinyML Research Symposium. This symposium is colocated with the tinyML Summit 2022 to be held from March 28-30 in Burlingame, CA (near SFO).

Millimeter-scale embedded sensing systems have unique advantages over larger devices as they are able to capture, analyze, store, and transmit data at the source while being unobtrusive and covert. However, area-constrained systems pose several challenges, including a tight energy budget and peak power, limited data storage, costly wireless communication, and physical integration at a miniature scale. This paper proposes a novel 6.7×7×5mm imaging system with deep-learning and image processing capabilities for intelligent edge applications, and is demonstrated in a home-surveillance scenario. The system is implemented by vertically stacking custom ultra-low-power (ULP) ICs and uses techniques such as dynamic behavior-specific power management, hierarchical event detection, and a combination of data compression methods. It demonstrates a new image-correcting neural network that compensates for non-idealities caused by a mm-scale lens and ULP front-end. The system can store 74 frames or offload data wirelessly, consuming 49.6μW on average for an expected battery lifetime of 7 days.

Preprint is up on arXiv: https://arxiv.org/abs/2203.04496
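
As a quick sanity check on the headline numbers (my arithmetic; the battery capacity and voltage are assumptions, since the exact cell is not named in the abstract), average power and stored battery energy determine the expected lifetime:

AVG_POWER_W = 49.6e-6      # average power quoted in the abstract
BATTERY_MAH = 2.0          # assumed mm-scale thin-film cell capacity (placeholder)
BATTERY_V = 4.1            # assumed nominal cell voltage (placeholder)

energy_j = BATTERY_MAH * 1e-3 * 3600 * BATTERY_V   # mAh -> coulombs -> joules
lifetime_days = energy_j / AVG_POWER_W / 86400

print(f"battery energy: {energy_j:.1f} J, expected lifetime: {lifetime_days:.1f} days")
# ~29.5 J at 49.6 uW gives roughly 7 days, consistent with the abstract's claim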



Personally, I find such work quite fascinating. With recent advances in learning-based approaches for computer vision, we're seeing a "race to the top" --- larger neural networks, humongous datasets, and even beefier GPUs drawing hundreds of watts of power. On the other hand, there's also a "race to the bottom" driven by edge computing/IoT applications that are extremely resource constrained --- microwatts of power, low image resolutions, and splitting hairs over every bit and every byte of data transferred.


Telluride Neuromorphic Workshop 2022


The 2022 edition of the Telluride Neuromorphic Workshop series will be held in person from June 26 to July 16 in beautiful Telluride, Colorado. The topics of interest are broadly in "neuromorphic engineering", with neuromorphic vision sensors (including event cameras and other "spiking"-based vision sensors) being a key area of interest.

Neuromorphic engineers design and fabricate artificial neural systems whose organizing principles are based on those of biological nervous systems. Over the past 27 years, the neuromorphic engineering research community focused on the understanding of low-level sensory processing and systems infrastructure; efforts are now expanding to apply this knowledge and infrastructure to addressing higher-level problems in perception, cognition, and learning. In this 3-week intensive workshop and through the Institute for Neuromorphic Engineering (INE), the mission is to promote interaction between senior and junior researchers; to educate new members of the community; to introduce new enabling fields and applications to the community; to promote ongoing collaborative activities emerging from the Workshop, and to promote a self-sustaining research field.

The workshop will be organized in four topic areas

  • Neuromorphic Tactile Exploration (Enhance the tactile exploration capabilities of robots)
  • Lifelong Learning at Scale: From Neuroscience Theory to Robotic Applications (Apply neuro-inspired principles of lifelong learning to autonomous systems.)
  • Cross-modality brain signals: auditory, visual and motor 
  • Neuromorphics Tools, Techniques and Hardware (SpiNNaker 2 and FPAAs)

Researchers from academia, industry and national labs are all encouraged to apply, in particular if they are prepared to work on specific projects, talk about their own work, or bring demonstrations to Telluride (e.g. robots, chips, software).

An application is required to attend, and financial support is available. Application deadline is April 8, 2022.

Call for applications.

Application submission page.
