A Job Opening with Euresys

Image Sensors World        Go to the original article...

Euresys

Sales Manager - Europe            Liège, Belgium or Schongau, Germany           Link

Go to the original article...

Artilux announces room temperature GeSi SPAD

Image Sensors World        Go to the original article...

 
HSINCHU, Feb. 22, 2024 /PRNewswire/ -- Artilux, the renowned leader of GeSi (germanium-silicon) photonics technology for CMOS (complementary metal-oxide-semiconductor) based SWIR (short-wavelength infrared) sensing and imaging, announced today that the research team at Artilux has made a breakthrough in advancing SWIR GeSi SPAD (single-photon avalanche diode) technology, which has been recognized and published by Nature, one of the world's most prestigious scientific journals. The paper, titled "Room temperature operation of germanium-silicon single-photon avalanche diode," presented the Geiger-mode operation of a high-performing GeSi avalanche photodiode at room temperature, which in the past was limited to operation at low temperatures below 200 kelvin. Nature's rigorous peer-review process ensures that only research of the highest caliber and broadest interest is published, and the acceptance and publication of the paper in Nature is another pivotal milestone exemplifying Artilux's leadership in CMOS-based SWIR sensing and imaging.

The research work, led by Dr. Neil Na, CTO of Artilux, has unveiled a CMOS-compatible GeSi SPAD operated at room temperature and elevated temperatures, featuring a noise-equivalent power improvement over previously demonstrated Ge-based SPADs by several orders of magnitude. The paper showcases key parameters of the GeSi SPAD, including dark count rate, single-photon detection probability in the SWIR spectrum, timing jitter, after-pulsing characteristic time, and after-pulsing probability, at a low breakdown voltage and a small excess bias. As a proof of concept, three-dimensional point-cloud images were captured with a direct time-of-flight (dToF) technique using the GeSi SPAD. "When we started the project, there was overwhelming evidence in the literature indicating that room-temperature operation of a GeSi SPAD is simply not possible," said Dr. Na, "and I am proud of our team turning the scientific research into a commercial reality against all odds."

The findings set a new milestone in CMOS photonics. The potential deployment of single-photon sensitive SWIR sensors, imagers, and photonic integrated circuits shall unlock critical applications in TOF sensors and imagers, LiDAR (light detection and ranging), bio-photonics, quantum computing and communication, artificial intelligence, robotics, and more. Artilux is committed to continuing its leadership in CMOS photonics technology, aiming to further contribute to the scientific community and photonics industry.

Abstract of article in Nature (Feb 2024): https://www.nature.com/articles/s41586-024-07076-x
The ability to detect single photons has led to the advancement of numerous research fields. Although various types of single-photon detector have been developed, because of two main factors—that is, (1) the need for operating at cryogenic temperature and (2) the incompatibility with complementary metal–oxide–semiconductor (CMOS) fabrication processes—so far, to our knowledge, only Si-based single-photon avalanche diode (SPAD) has gained mainstream success and has been used in consumer electronics. With the growing demand to shift the operation wavelength from near-infrared to short-wavelength infrared (SWIR) for better safety and performance, an alternative solution is required because Si has negligible optical absorption for wavelengths beyond 1 µm. Here we report a CMOS-compatible, high-performing germanium–silicon SPAD operated at room temperature, featuring a noise-equivalent power improvement over the previous Ge-based SPADs by 2–3.5 orders of magnitude. Key parameters such as dark count rate, single-photon detection probability at 1,310 nm, timing jitter, after-pulsing characteristic time and after-pulsing probability are, respectively, measured as 19 kHz µm−2, 12%, 188 ps, ~90 ns and <1%, with a low breakdown voltage of 10.26 V and a small excess bias of 0.75 V. Three-dimensional point-cloud images are captured with direct time-of-flight technique as proof of concept. This work paves the way towards using single-photon-sensitive SWIR sensors, imagers and photonic integrated circuits in everyday life.
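
To relate the abstract's figures, below is a rough back-of-the-envelope Python sketch of how a SPAD's noise-equivalent power (NEP) can be estimated from its dark count rate (DCR) and single-photon detection probability (PDP). The active-area value is an illustrative assumption for the sketch, not a figure from the paper.

import math

h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s

wavelength = 1310e-9   # m, from the abstract
pdp = 0.12             # single-photon detection probability at 1310 nm, from the abstract
dcr_density = 19e3     # dark counts per second per um^2, from the abstract
area_um2 = 100.0       # ASSUMED active area (um^2), purely for illustration

dcr = dcr_density * area_um2           # total dark counts per second
photon_energy = h * c / wavelength     # energy per photon at 1310 nm, J

# Common SPAD figure of merit: NEP = (E_photon / PDP) * sqrt(2 * DCR)
nep = (photon_energy / pdp) * math.sqrt(2.0 * dcr)

print(f"Photon energy at 1310 nm: {photon_energy:.3e} J")
print(f"Assumed total DCR:        {dcr:.3e} counts/s")
print(f"Estimated NEP:            {nep:.3e} W/Hz^0.5")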


Go to the original article...

Nikon to acquire RED.com

Image Sensors World        Go to the original article...

From Nikon newsroom: https://www.nikon.com/company/news/2024/0307_01.html

Nikon to Acquire US Cinema Camera Manufacturer RED.com, LLC

March 7, 2024

TOKYO - Nikon Corporation (Nikon) hereby announces its entry into an agreement to acquire 100% of the outstanding membership interests of RED.com, LLC (RED) whereby RED will become a wholly-owned subsidiary of Nikon, pursuant to a Membership Interest Purchase Agreement with Mr. James Jannard, its founder, and Mr. Jarred Land, its current President, subject to the satisfaction of certain closing conditions thereunder.

Since its establishment in 2005, RED has been at the forefront of digital cinema cameras, introducing industry-defining products such as the original RED ONE 4K to the cutting-edge V-RAPTOR [X] with its proprietary RAW compression technology. RED's contributions to the film industry have not only earned it an Academy Award but have also made it the camera of choice for numerous Hollywood productions, celebrated by directors and cinematographers worldwide for its commitment to innovation and image quality optimized for the highest levels of filmmaking and video production.

This agreement was reached as a result of the mutual desires of Nikon and RED to meet the customers’ needs and offer exceptional user experiences that exceed expectations, merging the strengths of both companies. Nikon's expertise in product development, exceptional reliability, and know-how in image processing, as well as optical technology and user interface along with RED’s knowledge in cinema cameras, including unique image compression technology and color science, will enable the development of distinctive products in the professional digital cinema camera market.

Nikon will leverage this acquisition to expand the fast-growing professional digital cinema camera market, building on both companies' business foundations and networks, promising an exciting future of product development that will continue to push the boundaries of what is possible in film and video production.

Go to the original article...

Job Postings – Week of 17 March 2024

Image Sensors World        Go to the original article...

WeRide

Camera Sensor Engineer

San Jose, California, USA

Link

ISDI

Image Sensor Engineer

London, England, UK

Link

HRL Laboratories

Focal Plane Engineer

Camarillo, California, USA

Link

HRL Laboratories

Senior Infrared Detector Research Scientist

Camarillo, California, USA

Link

Paul Scherrer Institute

Postdoctoral Fellow in detector development

Villigen, Switzerland

Link

Kappa Optronics

Engineer for image sensor and camera technology

Göttingen, Germany

Link

Caeleste

Characterization Engineer

Mechelen, Belgium

Link

University of Amsterdam - NIKHEF

Postdoc position in ALICE and Detector R&D for Experimental Particle Physics

Amsterdam, Netherlands

Link

GE Healthcare

Detector Mechanical Engineer

Hino, Tokyo, Japan

Link

Go to the original article...

Three New Videos from Photonis

Image Sensors World        Go to the original article...

Photonis has released new videos describing the latest improvements in its image intensifiers.  

A little background might be useful to those with little exposure to image intensifiers.

First, Photonis itself. Those of you who are interested in the whole complex story can find it here. The original Photonis was a renamed spinoff of Philips that subsequently acquired a few other companies including Burle, the renamed spinoff of RCA's vacuum tube operation. Recently, the Photonis Group renamed itself Exosens but still uses Photonis as the brand for its image intensifiers.

Image intensifiers are vacuum tubes with a photocathode at one end that emits electrons when photons strike it, some form of electron acceleration and multiplication in between, and a phosphor screen at the other end that produces a brighter visible image. As new developments have been applied to intensifiers, successive generation labels have been assigned.

Gen 0 - See this (somewhat irreverent) link. (Not real, of course.) Sometimes the first low-gain tubes are called Gen 0.

Gen 1 - Light hitting an alkali photocathode produces electrons that are accelerated and electrostatically focused by a metal cone onto a curved phosphor. These tubes invert the image, which is re-inverted by the optics. 1930s-1960s

Gen 2 - Proximity-focused electrons from the photocathode hit a microchannel plate in which they are multiplied. The electron output is proximity-focused on a flat phosphor. Some of these still have the focusing cone to provide image inversion. 1970s

Gen 3 - The alkali photocathode is replaced by a cesium-coated gallium arsenide membrane. 1970s-1990s

Gen 4 - Photocathode improvements of various types and, typically, electronic gating. Strictly speaking, these are still Gen 3. 2000s+
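
As a rough illustration of the photon-gain chain described above (photocathode, multiplication stage, phosphor), here is a minimal Python sketch. All three numbers are generic assumptions for a Gen 2/3-style tube, not Photonis specifications.

# Estimate overall photon gain of an intensifier from its three stages.
quantum_efficiency = 0.25   # ASSUMED photocathode QE (fraction of photons converted to electrons)
mcp_gain = 1.0e3            # ASSUMED electron multiplication in the microchannel plate
phosphor_yield = 100.0      # ASSUMED visible photons emitted per electron reaching the phosphor

photon_gain = quantum_efficiency * mcp_gain * phosphor_yield
print(f"Approximate photons out per photon in: {photon_gain:,.0f}")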

The videos showing tubes Photonis characterizes as Gen 4+:

1 - Demonstration of electronic gating

2 - Demonstration of performance

3 - Demonstration of halo improvements





Go to the original article...

IEEE ICCP 2024 Call for Papers, Submission Deadline March 22, 2024

Image Sensors World        Go to the original article...

Call for Papers: IEEE International Conference on Computational Photography (ICCP) 2024
https://iccp-conference.org/iccp2024/call-for-papers/
Submission Deadline: March 22, 2024 @ 23:59 CET

ICCP is an international venue for disseminating and discussing new scholarly work in computational photography and novel imaging, sensor, and optics techniques. This year, ICCP will take place at EPFL in Lausanne, Switzerland, on July 22-24!

As in previous years, ICCP is coordinating with the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) for a special issue on Computational Photography to be published after the conference.

 ICCP 2024 seeks novel and high-quality submissions in all areas of computational photography, including, but not limited to:

  •  High-performance imaging.
  •  Computational cameras, illumination, and displays.
  •  Advanced image and video processing.
  •  Integration of imaging, physics, and machine learning.
  •  Organizing and exploiting photo / video collections.
  •  Structured light and time-of-flight imaging.
  •  Appearance, shape, and illumination capture.
  •  Computational optics (wavefront coding, digital holography, compressive sensing, etc.).
  •  Sensor and illumination hardware.
  •  Imaging models and limits.
  •  Physics-based rendering, neural rendering, and differentiable rendering.
  •  Applications: imaging on mobile platforms, scientific imaging, medicine and biology, user interfaces, AR/VR systems.

Learn more on the ICCP 2024 website, and submit your latest advancements by Friday, 22nd March, 2024.

The call for posters and demos will be published soon, with a deadline at the end of April. It will also be a great opportunity to advertise your work.

 



Go to the original article...

Sigma 500mm f5.6 DG DN Sports review

Cameralabs        Go to the original article...

The Sigma 500mm f5.6 DG DN OS Sports is a light and compact super-telephoto prime lens aimed at sports, wildlife and aviation photographers. Here's my full review!…

Go to the original article...

Prophesee Qualcomm demo at Mobile World Congress

Image Sensors World        Go to the original article...

Prophesee and Qualcomm recently showcased their "blur free" mobile photography technology at the Mobile World Congress in Barcelona.

Press release: https://prophesee-1.reportablenews.com/pr/prophesee-s-metavision-image-deblur-solution-for-smartphones-is-now-production-ready-seamlessly-optimized-for-the-snapdragon-8-gen-3-mobile-platform

February 27, 2024 – Paris, France - Prophesee SA, inventor of the most advanced neuromorphic vision systems, today announced that the progress achieved through its collaboration with Qualcomm Technologies, Inc. has now reached production stage. A live demo during Mobile World Congress Barcelona is showcasing Prophesee’s native compatibility with premium Snapdragon® mobile platforms, bringing the speed, efficiency, and quality of neuromorphic-enabled vision to cameras in mobile devices.

Prophesee’s event-based Metavision sensors and AI, optimized for use with Snapdragon platforms, now bring motion blur cancellation and overall image quality to unprecedented levels, especially in the scenarios most challenging for conventional frame-based RGB sensors: fast-moving and low-light scenes.

“We have made significant progress since we announced this collaboration in February 2023, achieving the technical milestones that demonstrate the impressive impact our event-based technology has on image quality in mobile devices containing Snapdragon mobile platforms. As a result, our Metavision Deblur solution has now reached production readiness,” said Luca Verre, CEO and co-founder of Prophesee. “We look forward to unleashing the next generation of smartphone photography and video with Prophesee's Metavision.”

“Qualcomm Technologies is thrilled to continue our strong collaboration with Prophesee, joining efforts to efficiently optimize Prophesee’s event-based Metavision technology for use with our flagship Snapdragon 8 Gen 3 Mobile Platform. This will deliver significant enhancements to image quality and bring new features enabled by event cameras’ shutter-free capability to devices powered by Snapdragon mobile platforms,” said Judd Heape, VP of Product Management at Qualcomm Technologies, Inc.

How it works
Prophesee’s breakthrough sensors add a new sensing dimension to mobile photography. They change the paradigm in traditional image capture by focusing only on changes in a scene, pixel by pixel, continuously, at extreme speeds.

Each pixel in the Metavision sensor embeds a logic core, enabling it to act as a neuron. Each pixel activates itself intelligently and asynchronously depending on the number of photons it senses. A pixel activating itself is called an event. In essence, events are driven by the scene's dynamics rather than an arbitrary clock, so the acquisition speed always matches the actual scene dynamics.

High-performance event-based deblurring is achieved by synchronizing a frame-based and Prophesee’s event-based sensor. The system then fills the gaps between and inside the frames with microsecond events to algorithmically extract pure motion information and repair motion blur.
Learn more: https://www.prophesee.ai/event-based-vision-mobile/
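
Prophesee has not published its production algorithm, but the general idea of frame-plus-event deblurring can be sketched with the event-based double-integral model from the academic literature: events record how each pixel's log-intensity moved during the exposure, so the blurry time-average can be inverted back to a sharp latent value. The Python sketch below is a single-pixel illustration under assumed parameters (contrast threshold, exposure times), not Prophesee's Metavision Deblur implementation.

import numpy as np

def deblur_pixel(blurry_value, event_times, event_polarities,
                 exposure, contrast_threshold=0.2, n_steps=200):
    """Recover the latent intensity at the start of the exposure for one pixel.

    blurry_value       : measured (time-averaged) intensity over the exposure
    event_times        : times of events within [0, exposure], seconds
    event_polarities   : +1 / -1 per event (brightness up / down)
    contrast_threshold : log-intensity change represented by one event (ASSUMED)
    """
    t = np.linspace(0.0, exposure, n_steps)
    # Cumulative log-intensity change contributed by events up to each time t.
    log_change = np.array([
        contrast_threshold * np.sum(event_polarities[event_times <= ti]) for ti in t
    ])
    # Blurry value = L0 * mean(exp(log_change)) over the exposure, so invert that.
    return blurry_value / np.mean(np.exp(log_change))

# Tiny usage example with made-up events for a single pixel:
events_t = np.array([0.002, 0.005, 0.008])
events_p = np.array([+1, +1, -1])
print(deblur_pixel(blurry_value=0.5, event_times=events_t,
                   event_polarities=events_p, exposure=0.01))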

Go to the original article...

Preprint on "Skipper-in-CMOS" image sensor

Image Sensors World        Go to the original article...

A recent preprint on arXiv (https://arxiv.org/abs/2402.12516) presents a new CMOS image sensor designed to achieve sub-electron read noise and photon number resolving capability.

Skipper-in-CMOS: Non-Destructive Readout with Sub-Electron Noise Performance for Pixel Detectors

Abstract: The Skipper-in-CMOS image sensor integrates the non-destructive readout capability of Skipper Charge Coupled Devices (Skipper-CCDs) with the high conversion gain of a pinned photodiode in a CMOS imaging process, while taking advantage of in-pixel signal processing. This allows both single photon counting as well as high frame rate readout through highly parallel processing. The first results obtained from a 15 x 15 um^2 pixel cell of a Skipper-in-CMOS sensor fabricated in Tower Semiconductor's commercial 180 nm CMOS Image Sensor process are presented. Measurements confirm the expected reduction of the readout noise with the number of samples down to deep sub-electron noise of 0.15 e- rms, demonstrating the charge transfer operation from the pinned photodiode and the single photon counting operation when the sensor is exposed to light. The article also discusses new testing strategies employed for its operation and characterization.
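
The key scaling behind the Skipper approach is simple: averaging N non-destructive samples of the same charge packet reduces uncorrelated read noise by sqrt(N). A minimal Python sketch follows; the single-sample noise value is an assumption for illustration (the paper's actual single-sample figure may differ).

import math

single_sample_noise_e = 3.5   # ASSUMED single-read noise, electrons rms
target_noise_e = 0.15         # deep sub-electron figure reported in the preprint

def noise_after_n_samples(sigma1, n):
    # Uncorrelated noise averages down as 1/sqrt(N).
    return sigma1 / math.sqrt(n)

n_needed = math.ceil((single_sample_noise_e / target_noise_e) ** 2)
print(f"Samples needed: {n_needed}")
print(f"Noise after {n_needed} samples: "
      f"{noise_after_n_samples(single_sample_noise_e, n_needed):.3f} e- rms")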







Go to the original article...

Job Postings – Week of 10 March 2024

Image Sensors World        Go to the original article...

Qualcomm

ADAS camera Engineer

Farnborough, UK

Link

Onsemi

Product Engineer

Meridian, Idaho, USA

Link

University of Warwick

Towards Silicon Photonics Based Gas Sensors

Coventry, UK

Link

Johnson & Johnson

Sr. Manager Visualization Hardware

Santa Clara, CA, USA

Link

NASA

Development of infrared detectors and focal plane arrays for space instruments

Pasadena, CA, USA

Link

Apple

Hardware Sensing Systems Engineer

Cupertino, CA, USA

Link

Sony

Software Engineer/Researcher for Image Sensors

Tokyo, Japan

Link

Meta

Image Sensor Architect

Redmond, Washington, USA

Link

Queen Mary University

Silicon Detector Technician

London, England, UK

Link

Go to the original article...

Samsung defends AI editing on photos

Image Sensors World        Go to the original article...

From TechRadar: https://www.techradar.com/phones/samsung-galaxy-phones/there-is-no-such-thing-as-a-real-picture-samsung-defends-ai-photo-editing-on-galaxy-s24

"There is no such thing as a real picture": Samsung defends AI photo editing on Galaxy S24

Like most technology conferences in recent months, Samsung’s latest Galaxy Unpacked event was dominated by conversations surrounding AI. From two-way call translation to gesture-based search, the Samsung Galaxy S24 launched with several AI-powered tricks up its sleeve – but one particular feature is already raising eyebrows.

Set to debut on the Galaxy S24 and its siblings, Generative Edit will allow users to artificially erase, recompose and remaster parts of an image in a bid to achieve photographic perfection. This isn’t a new concept, and any edits made using this generative AI tech will result in a watermark and metadata changes. But the seamlessness with which the Galaxy S24 enables such edits has understandably left some Unpacked-goers concerned.

Samsung, however, is confident that its new Generative Edit feature is ethical, desirable and even necessary in today’s misinformation-filled world. In a revealing interview with TechRadar, Samsung’s Head of Customer Experience, Patrick Chomet, defended the company’s position on AI and its implications.

“There was a very nice video by Marques Brownlee last year on the moon picture,” Chomet told us. “Everyone was like, ‘Is it fake? Is it not fake?’ There was a debate around what constitutes a real picture. And actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture. [...] You can try to define a real picture by saying, ‘I took that picture’, but if you used AI to optimize the zoom, the autofocus, the scene – is it real? Or is it all filters? There is no real picture, full stop.”
“But still, questions around authenticity are very important,” Chomet continued, “and we [Samsung] go about this by recognizing two consumer needs; two different customer intentions. Neither of them are new, but generative AI will accelerate one of them.

“One intention is wanting to capture the moment – wanting to take a picture that’s as accurate and complete as possible. To do that, we use a lot of AI filtering, modification and optimization to erase shadows, reflections and so on. But we are true to the user's intention, which was to capture that moment.

“Then there is another intention, which is wanting to make something. When people go on Instagram, they add a bunch of funky black and white stuff – they create a new reality. Their intention isn’t to recreate reality, it’s to make something new. So [Generative Edit] isn’t a totally new idea. Generative AI tools will accelerate that intention exponentially in the next few years [...] so there is a big customer need to distinguish between the real and the new. That’s why our Generative Edit feature adds a watermark and edits the metadata, and we’re working with regulatory bodies to ensure people understand the difference.”

On the subject of AI regulation, Chomet said that Samsung "is very aligned with European regulations on AI," noting that governments are right to express early concerns around the potential implications of widespread AI use.

"The industry needs to be responsible and it needs to be regulated," added Chomet, noting that Samsung is actively working on that. "Our new technology is amazing and powerful – but like anything, it can be used in good and bad ways. So, it’s appropriate to think deeply about the bad ways.”

As for how Generative Edit will end up being used on Samsung's new Galaxy phones, only time will tell. Perhaps the feature will simply help average smartphone users (i.e. those unfamiliar with Photoshop) get the photos they really want, rather than facilitate mass photo fakery. Indeed, it still remains to be seen whether generative AI tech as a whole will be a benefit or a hindrance to society as we know it.


Go to the original article...

Hasselblad 907X & CFV 100C review

Cameralabs        Go to the original article...

The Hasselblad 907X & CFV 100C is a medium format camera with 100 Megapixels, built-in 1TB SSD, tilting screen, and a modular design that lets you breathe new life into vintage lenses or use modern optics for some of the best-looking images I’ve seen. Check out my in-depth review!…

Go to the original article...

Lomography Lomomatic 110 review

Cameralabs        Go to the original article...

The Lomomatic 110 from Lomography is a new pocket-sized camera that takes 110 film cartridges. Find out how I got on re-living my 1970's film fantasy in my review!…

Go to the original article...

GPixel on the verge of IPO?

Image Sensors World        Go to the original article...

From: http://www.myzaker.com/article/65d3ce24b15ec01a56438179

(Translated with Google Translate)

...

Against the backdrop of an improving market, Changchun Changguangchenxin Microelectronics Co., Ltd. (hereinafter referred to as "Changguangchenxin"), a domestic company specializing in CMOS image sensors, has recently seen its IPO application on the Shanghai Stock Exchange's Science and Technology Innovation Board advance to the inquiry stage.

In this IPO, Changguangchenxin plans to raise 1.557 billion yuan, to be invested in R&D and industrialization projects for CMOS image sensors in several directions, including machine vision, scientific instruments, and professional imaging. Funds are also planned for the construction of a high-end CMOS image sensor R&D center and to supplement working capital.

However, in recent years Changguangchenxin has swung from profit to loss during the reporting period. Moreover, for a company seeking listing on the Science and Technology Innovation Board, its R&D expense ratio has been decreasing year by year, and the detailed breakdown of R&D expenses has drawn scrutiny from the Shanghai Stock Exchange.

...

Go to the original article...

Canon designs recognized with internationally renowned iF Design Awards for 30th consecutive year

Newsroom | Canon Global        Go to the original article...

Go to the original article...

Canon converts 100% of power to renewable energy at five manufacturing sites for printing business

Newsroom | Canon Global        Go to the original article...

Go to the original article...


Andes and MetaSilicon collaborate on automotive CIS

Image Sensors World        Go to the original article...

From Yahoo Finance news:

Andes Technology and MetaSilicon Collaborate to Build the World’s First Automotive-Grade CMOS Image Sensor Product Using RISC-V IP SoC

Hsinchu, Taiwan, Feb. 22, 2024 (GLOBE NEWSWIRE) -- RISC-V IP vendor Andes Technology and edge computing chip provider MetaSilicon jointly announced that the MetaSilicon MAT Series is the world's first automotive-grade CMOS image sensor series built on a RISC-V IP SoC, using Andes' AndesCore™ N25F-SE processor. The sensors are designed in accordance with the ISO 26262 functional safety standard to achieve ASIL-B level and follow AEC-Q100 Grade 2 to achieve a high level of safety and reliability. By using technologies such as HDR, advanced imaging can be achieved in a simple, economical, and efficient system. The sensors not only deliver high dynamic range, high sensitivity, and high color reproduction, but also meet the application requirements of ADAS decision-making.

The N25F-SE from Andes Technology is a 32-bit RISC-V CPU core that supports the standard IMACFD instruction set, which includes an efficient integer instruction set and single/double precision floating point instructions. The N25F-SE's high-efficiency five-stage pipeline achieves a good balance between high operating frequency and a streamlined design. It also has rich configurable options and flexible interface configuration, which greatly simplify SoC development. In addition, the N25F-SE has obtained ISO 26262 ASIL-B full compliance certification, which enables the image sensor chip to meet vehicle-level safety requirements. For the development of MetaSilicon's automotive-grade chips, the N25F-SE and its safety package provide a well-suited CPU solution and, together with Andes' technical support, significantly shorten the chip development time.

MetaSilicon has first-class innovative R&D capabilities and has developed several cutting-edge technologies, including LOFIC (lateral overflow integration capacitor) + DCG (dual conversion gain) HDR (high dynamic range), which meet the high-quality image requirements of smart-car vision applications. The MAT Series 1MP CMOS image sensor has low power consumption and high dynamic range (HDR); its effective resolution is 1280 (H) x 960 (V), and it supports high dynamic range image output at up to 60 fps @ 120 dB. The MAT Series 3MP CIS adds capabilities such as low power consumption, ultra-high dynamic range (HDR), on-chip ISP, and LFM; its effective resolution is 1920 (H) x 1536 (V), it supports frame rates up to 60 fps, and its dynamic range reaches an industry-leading 140 dB+. These chips provide reliable, high-quality image information for intelligent automotive applications.
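
As a quick sanity check on what a "140 dB+" dynamic range claim implies, the Python sketch below applies the usual definition DR(dB) = 20 * log10(max signal / noise floor). The well capacities and noise floor are illustrative assumptions, not MetaSilicon specifications.

import math

read_noise_e = 1.0            # ASSUMED effective noise floor, electrons rms
standard_full_well_e = 10e3   # ASSUMED full well without LOFIC, electrons
lofic_full_well_e = 10e6      # ASSUMED effective full well with LOFIC overflow, electrons

def dynamic_range_db(full_well, noise_floor):
    return 20.0 * math.log10(full_well / noise_floor)

print(f"Without LOFIC: {dynamic_range_db(standard_full_well_e, read_noise_e):.0f} dB")
print(f"With LOFIC:    {dynamic_range_db(lofic_full_well_e, read_noise_e):.0f} dB")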

"The N25F-SE provides a safety package, which includes a safety manual, safety analysis report and a development interface outline. The N25F-SE and its safety package are effective, high-performance and flexible automotive solutions. They can significantly reduce the time required to design automotive grade SoCs and to comply with the ISO 26262 standard", said Dr. Charlie Su, President and CTO of Andes Technology. "We are very pleased that N25F-SE's IP and safety package efficiently support MetaSilicon shorten the development time for its two automotive-grade chips. We also look forward to more cooperation between the two companies in the future to create more innovative products."

Jianhua Zheng, CTO of MetaSilicon, said, “Among the various sensors used in automotive ADAS applications, visual image processing is particularly important. If the image is not accurate and timely enough, it will directly lead to errors in the judgment of the back-end algorithm, so HDR performance requirements are extremely high. MetaSilicon's LOFIC+DCG HDR technology can achieve an ultra-high dynamic range of 140dB+ to meet practical application needs in the automotive ADAS field. We are honored to work closely with Andes Technology on two high-performance chips, using the world's first ISO 26262 certified RISC-V core, the N25F-SE, which meets the functional safety standards. As a result, we can shorten the product development time and achieve functional safety goals."

Go to the original article...

Job Postings – Week of 3 March 2024

Image Sensors World        Go to the original article...

Prophesee

Internship - R&D - Sensor Simulation

Paris, France

Link

Sandia National Laboratories

Postdoctoral Appointee Optoelectronics, HI & Quantum Device

Albuquerque, New Mexico, USA

Link

Sandia National Laboratories

Intern - Photonic & Phononic Microsystems - R&D Undergrad Summer

Albuquerque, New Mexico, USA

Link

Tesla

Engineering Technician, Camera

Palo Alto, California, USA

Link

University of London

Silicon Detector Technician

Mile End, London, England, UK

Link

Turion Space

Advanced Sensors Engineer

Irvine, California, USA

Link

SmartRay

Sensor Application Engineer

Penang, Malaysia

Link

Joint Institute for Nuclear Research

Postdoctoral Programme in Novel Cherenkov Detector Development

Dubna, Russia

Link

Go to the original article...

VPS Semi presents a 600MP image sensor

Image Sensors World        Go to the original article...

From: http://www.vpssemi.com/NewsDetail?id=72 (Translated to English with Google Translate)

New product release 
VPS800 - New large area array image sensor chip released for wide-area surveillance
 


On September 6, the 24th China International Optoelectronics Expo kicked off at the Shenzhen Baoan International Convention and Exhibition Center. At this expo, Nanjing VPS Semiconductor Technology Co., Ltd. released a new product for the wide-area monitoring field: the VPS800 large area array image sensor. This series of chips has a pixel count of over 600 million, a pixel size of 0.7 microns, and supports 16 ROIs (regions of interest). It can provide imaging at longer distances and over a wider area, expanding the boundaries of existing wide-area monitoring solutions.

The VPS800 large area array imaging chip is based on the internally developed vertical charge transfer imaging device (VPS) as its core. It has a single-chip pixel count of more than 600 million, which addresses the complexity, large volume, and high power consumption of existing large area array camera systems. It achieves long range and a large field of view while reducing size, weight, power consumption, and cost, allowing it to cover a wider area clearly while capturing finer details. Currently, it is mainly used in security monitoring, commercial satellites, industrial inspection, and similar applications.

For scenarios that require both large-scale observation and the acquisition of many fine details, the VPS800 large-area image sensor chip supports long-distance fixed-point shooting. With a single capture, large-scale observation can be achieved while the fine details of the entire image are retained.
 
For scenarios with large target areas and high resolution requirements, such as commercial satellite surveillance, imaging sensors need to be small and light. The VPS800 large-area imaging chip supports a single-chip pixel count of more than 600 million without the need for stitching. It is small and light, better suited to the demands of micro/nano satellites.
 
It is worth mentioning that the chip supports 16 ROI (region of interest) readouts, which allows users to read out sensor data from any area, reducing the amount of information read. A target can be observed continuously within a single frame, and multiple targets can be tracked synchronously. It can serve as a supplementary solution to existing security monitoring systems, expanding their observation scope and application boundaries.
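
To see why ROI readout matters at this resolution, here is a back-of-the-envelope Python sketch; the bit depth and ROI size are assumptions for illustration, not VPS specifications.

full_pixels = 600e6           # ~600 MP full array, per the announcement
bit_depth = 12                # ASSUMED bits per pixel
rois = 16                     # ROIs supported by the sensor
roi_pixels = 1920 * 1080      # ASSUMED size of each ROI

full_frame_bits = full_pixels * bit_depth
roi_frame_bits = rois * roi_pixels * bit_depth

print(f"Full frame: {full_frame_bits / 8 / 1e9:.2f} GB per frame")
print(f"16 ROIs:    {roi_frame_bits / 8 / 1e6:.1f} MB per frame")
print(f"Reduction:  {full_frame_bits / roi_frame_bits:.0f}x")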

Note: This startup was previously featured in a blogpost from 2022: https://image-sensors-world.blogspot.com/2022/01/vps-semiconductor-raises-100m-rmb-in.html

Go to the original article...

STMicroelectronics announces new ToF Sensors

Image Sensors World        Go to the original article...

VD55H1 Low-Noise Low-Power iToF Sensor
-- New design feat: packs 672 x 804 sensing pixels into a tiny chip and can map a three-dimensional surface in great detail by measuring distance to over half a million points.
-- Lanxin Technology will use the VD55H1 for intelligent obstacle avoidance and high-precision docking in mobile robots; MRDVS will enhance its 3D cameras by adding high-accuracy depth sensing.



VL53L9 dToF 3D Lidar Module
-- New high-resolution sensor with 5cm – 9m ranging distance ensures accurate depth measurements for camera assistance, hand tracking, and gesture recognition.
-- VR systems use the VL53L9 to depict depth more accurately within 2D and 3D imaging, improving mapping for immersive gaming and other applications like 3D avatars.

The two newly announced products will enable safer mobile robots in industrial environments and smart homes, as well as advanced VR applications.



The VL53L9CA is a state-of-the-art dToF 3D lidar (light detection and ranging) module with market-leading resolution of up to 2.3k zones and accurate ranging from 5 cm to 10 m.
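
For context, the two ranging principles mentioned here (direct and indirect ToF) can be sketched in a few lines of Python; the timing and modulation-frequency values below are illustrative assumptions, not ST datasheet figures.

import math

C = 2.998e8  # speed of light, m/s

def dtof_distance(round_trip_time_s):
    """Direct ToF: distance from the measured photon round-trip time."""
    return C * round_trip_time_s / 2.0

def itof_distance(phase_shift_rad, mod_freq_hz):
    """Indirect ToF: distance from the phase shift of a modulated signal."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# Examples: a ~33 ns round trip corresponds to ~5 m; an iToF sensor modulated
# at 100 MHz has an unambiguous range of c / (2 * f_mod) = 1.5 m per 2*pi of phase.
print(f"dToF, 33.3 ns round trip:   {dtof_distance(33.3e-9):.2f} m")
print(f"iToF, pi/2 phase @ 100 MHz: {itof_distance(math.pi / 2, 100e6):.3f} m")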


Full press release:

STMicroelectronics expands into 3D depth sensing with latest time-of-flight sensors

STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications, announced an all-in-one, direct Time-of-Flight (dToF) 3D LiDAR (Light Detection And Ranging) module with market-leading 2.3k resolution, and revealed an early design win for the world’s smallest 500k-pixel indirect Time-of-Flight (iToF) sensor.
 
“ToF sensors, which can accurately measure the distance to objects in a scene, are driving exciting new capabilities in smart devices, home appliances, and industrial automation. We have already delivered two billion sensors into the market and continue to extend our unique portfolio, which covers all types from the simplest single-zone devices up to our latest high-resolution 3D indirect and direct ToF sensors,” said Alexandre Balmefrezol, General Manager, Imaging Sub-Group at STMicroelectronics. “Our vertically integrated supply chain, covering everything from pixel and metasurface lens technology and design to fabrication, with geographically diversified in-house high-volume module assembly plants, lets us deliver extremely innovative, highly integrated, and high-performing sensors.”
 
The VL53L9, announced today, is a new direct ToF 3D LiDAR device with a resolution of up to 2.3k zones. Integrating a dual scan flood illumination, unique in the market, the LiDAR can detect small objects and edges and captures both 2D infrared (IR) images and 3D depth map information. It comes as a ready-to-use low power module with its on-chip dToF processing, requiring no extra external components or calibration. Additionally, the device delivers state-of-the-art ranging performance from 5cm to 10 meters.
 
VL53L9’s suite of features elevates camera-assist performance, supporting macro up to telephoto photography. It enables features such as laser autofocus, bokeh, and cinematic effects for still and video at 60fps (frame per second). Virtual reality (VR) systems can leverage accurate depth and 2D images to enhance spatial mapping for more immersive gaming and other VR experiences like virtual visits or 3D avatars. In addition, the sensor’s ability to detect the edges of small objects at short and ultra-long ranges makes it suitable for applications such as virtual reality or SLAM (simultaneous localization and mapping).
 
ST is also announcing news of its VD55H1 ToF sensor, including the start of volume production and an early design win with Lanxin Technology, a China-based company focusing on mobile-robot deep-vision systems. MRDVS, a subsidiary company, has chosen the VD55H1 to add high-accuracy depth-sensing to its 3D cameras. The high-performance, ultra-compact cameras with ST’s sensor inside combine the power of 3D vision and edge AI, delivering intelligent obstacle avoidance and high-precision docking in mobile robots.

In addition to machine vision, the VD55H1 is ideal for 3D webcams and PC applications, 3D reconstruction for VR headsets, people counting and activity detection in smart homes and buildings. It packs 672 x 804 sensing pixels in a tiny chip size and can accurately map a three-dimensional surface by measuring distance to over half a million points. ST’s stacked-wafer manufacturing process with backside illumination enables unparalleled resolution with smaller die size and lower power consumption than alternative iToF sensors in the market. These characteristics give the sensors their excellent credentials in 3D content creation for webcams and VR applications including virtual avatars, hand modeling and gaming.

First samples of the VL53L9 are already available for lead customers and mass production is scheduled for early 2025. The VD55H1 is in full production now.

Pricing information and sample requests are available at local ST sales offices. ST will showcase a range of ToF sensors including the VL53L9 and explain more about its technologies at Mobile World Congress 2024, in Barcelona, February 26-29, at booth 7A61.
 

Go to the original article...

Canon requests removal of toner cartridges from Amazon.com, including Cool Toner brand cartridges sold by Epicartridges US based on Canon patent relating to toner cartridges having certain internal configurations

Newsroom | Canon Global        Go to the original article...

Go to the original article...


Five Jobs from Omnivision in Norway and Belgium

Image Sensors World        Go to the original article...

 Omnivision has sent us the following list of openings in their CMOS sensor development teams -

In Oslo, Norway:

Analog Characterization Engineer   Link

Functional Safety Verification Engineer   Link

Sr. Digital Design Engineer   Link

Staff Digital Design Engineer   Link

In Mechelen, Belgium:

Staff Characterization Engineer   Link

Go to the original article...

Jobs Submitted by Employers

Image Sensors World        Go to the original article...

Sony EUTDC (31 Mar 2024)   Link

Euresys (21 Mar 2024)   Link

onsemi (11 Feb 2024)   Link 

Sony (7 Feb 2024)    Link

Qurv (3 Feb 2024)   Link 

Photonis - (31 Jan 2024)   Link 

Sony Semiconductor Solutions - America (25 Jan 2024)   Link

CEA Leti (23 Jan 2024)   Link 

ISAE SUPAERO (23 Jan 2024)   Link  

Transformative Optics (20 Jan 2024)   Link

Teledyne (13 Dec 2023)   Link

Go to the original article...

Job Postings – Week of 25 February 2024

Image Sensors World        Go to the original article...

Sony UK Technology Centre

Industrial Engineer

Pencoed, Wales, UK

Link

Apple

Image Quality Analyst

San Diego, California, USA

Link

Jenoptik

Imaging Engineer

Camberley, England, UK

Link

NASA

Postdoc - Materials and Process Development for Ultraviolet Detector Technologies (apply by 1 Mar 2024)

Pasadena, California, USA

Link

ASML

Research Group Lead Sensor Modelling and Computational Imaging

Veldhoven, Netherlands

Link

Brookhaven National Laboratory

Deputy Director-Instrumentation Division

Upton, New York, USA

Link

Axon

Principal Systems Engineer (Remote)

Scottsdale, Arizona, USA

Link

Rochester Institute of Technology

Tenure Track Faculty – Center for Imaging Science

Rochester, New York, USA

Link

Science and Technology Facilities Council – Rutherford Appleton

Detector Scientist Industrial Placement

Didcot, Oxfordshire, England, UK

Link


Go to the original article...


Conference List – August 2024

Image Sensors World        Go to the original article...

International Symposium on Sensor Science - 1-4 Aug 2024 - Singapore - Website

Quantum Structure Infrared Photodetector (QSIP)  International Conference - 12-16 Aug 2024 - Santa Barbara, California, USA - Website

SPIE Optics & Photonics - 18-22 Aug 2024 - San Diego, California, USA - Website

International Conference on Sensors and Sensing Technology - 29-31 August 2024 - Valencia, Spain - Website

If you know about additional local conferences, please add them as comments.

Return to Conference List index

 

Go to the original article...
