Photonis acquires El-Mul


Photonis announces the acquisition of El-Mul, a leader in ion and electron detection solutions

Mérignac, France and Rehovot, Israel – July 19, 2023
 
Photonis, a global leader in electro-optical detection and imaging technologies for defense and industrial markets, held by HLD since 2021, is pleased to announce the acquisition of the Israeli company El-Mul, a specialist developer and manufacturer of advanced charged-particle detectors and devices.

By welcoming El-Mul, along with Xenics, Telops and Proxivision, all acquired in the last eight months, Photonis Group continues its diversification and establishes itself as the sole sizeable European technology platform providing differentiated detection and imaging solutions across the electromagnetic spectrum to a variety of high-growth end markets worldwide.

“With the acquisition of El-Mul, Photonis Group will gain access to the Electron Microscopy and Semiconductor inspection markets from a strong leading position, will reinforce its technology leadership in the Mass Spectrometry market and will accelerate its growth into industrial and commercial markets,” said Jérôme Cerisier, CEO of Photonis Group.

El-Mul, based in Israel with 50 employees, is a well-established technology leader in the field of detection systems for Scanning Electron Microscopes for both the Analytical and Semiconductor industries as well as the field of electron and ion optics for Mass Spectrometry, having a strong position in the worldwide high-end markets.

“El-Mul has emerged as an innovative leader in electron and ion detection with the continued support of its founders and shareholders, the Cheifez family, since 1992. Joining Photonis Group is a real opportunity to accelerate our growth. We will benefit from the group’s expertise, technological and commercial base, and international reach. There are also very promising synergies between our companies in terms of market, product range and R&D. In particular, new R&D co-developments should bring significant added value to our customers,” said Sasha Kadyshevitch, CEO of El-Mul.

The transaction is finalized. Terms of the transaction are not being disclosed.
 
 
ABOUT PHOTONIS:
 
Accompanied by HLD since 2021, Photonis is a high-tech company with more than 85 years of experience in the innovation, development, manufacture and sale of technologies in the field of photodetection and imaging. Today, it offers its customers detectors and detection solutions: its power tubes, digital cameras, neutron and gamma detectors, scientific detectors and intensifier tubes allow Photonis to address complex problems in extremely demanding environments by offering tailor-made solutions to its customers. Thanks to its sustained investment, Photonis is internationally recognized as a major innovator in optoelectronics, with production and R&D carried out at 8 sites in Europe and the USA and more than 1,200 employees.

For more information: photonis.com
 
ABOUT EL-MUL
 
Since its founding in 1992, El-Mul Technologies has established itself as a leading supplier of advanced, high-performance particle detectors that meet the most challenging needs of its customers. El-Mul excels at tailoring solutions to customers’ requirements. Complex detection solutions incorporating mechanical, optical and electronic components are taken from initial concept through full development, prototyping and serial manufacturing. El-Mul’s products range from traditional detection modules to state-of-the-art systems. An emphasis on innovation, confidentiality and personal service drives its business philosophy. A key strategic business goal for El-Mul is to build long-term and fruitful relationships with its customers, delivering performance, high confidence and clear value.

For more information: el-mul.com


Paper on "Charge-sweep" CIS Pixel


In a recent paper titled "Design and Characterization of a Burst Mode 20 Mfps Low Noise CMOS Image Sensor" (https://www.mdpi.com/1424-8220/23/14/6356) Xin Yue and Eric Fossum write:

This paper presents a novel ultra-high speed, high conversion-gain, low noise CMOS image sensor (CIS) based on charge-sweep transfer gates implemented in a standard 180 nm CIS process. Through the optimization of the photodiode geometry and the utilization of charge-sweep transfer gates, the proposed pixels achieve a charge transfer time of less than 10 ns without requiring any process modifications. Moreover, the gate structure significantly reduces the floating diffusion capacitance, resulting in an increased conversion gain of 183 µV/e−. This advancement enables the image sensor to achieve the lowest reported noise of 5.1 e− rms. To demonstrate the effectiveness of both optimizations, a proof-of-concept CMOS image sensor is designed, taped-out and characterized.
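As a quick sanity check on how the reported conversion gain relates output-referred voltage noise to input-referred electron noise, here is a minimal Python sketch. Only the 183 µV/e− conversion gain, the 5.1 e− rms noise figure and the 20 Mfps frame rate come from the abstract; the 933 µV output-referred noise value is a hypothetical number chosen solely to illustrate the arithmetic.

    # Minimal sketch: relate output-referred voltage noise to input-referred
    # electron noise via the pixel conversion gain. Only the 183 uV/e- value
    # is taken from the abstract; the voltage-noise figure below is hypothetical.
    CONVERSION_GAIN_UV_PER_E = 183.0  # uV per electron (from the abstract)

    def input_referred_noise_e(output_noise_uv: float) -> float:
        """Convert output-referred voltage noise (uV rms) into electrons rms."""
        return output_noise_uv / CONVERSION_GAIN_UV_PER_E

    # A hypothetical ~933 uV rms of output-referred noise corresponds to the
    # ~5.1 e- rms quoted in the abstract.
    print(f"{input_referred_noise_e(933.0):.2f} e- rms")

    # At 20 Mfps the per-frame time budget is only 1 / 20e6 = 50 ns, which is
    # why a charge transfer time below 10 ns matters.
    print(f"frame period = {1 / 20e6 * 1e9:.0f} ns")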

Edgehog Glass: Flare-Free Imaging with Next-Generation Anti-Reflection


Edgehog is a Montreal-based startup that has developed a solution to the stray-light problem caused by the cover glass of camera and LiDAR sensors.

Edgehog glass, a next-generation anti-reflection technology, removes image artifacts through the innovative process of glass nanotexturing by creating a gradient of refractive index on filters and image sensor covers. This enables uncompromised visuals from cameras and flare-free imaging with CMOS image sensors even in challenging lighting conditions. The result is a cleaner raw signal from the hardware without expensive image processing, laying the foundation for superior computer vision applications. The advanced nanotextured Edgehog glass enables camera optics designers to achieve unparalleled image clarity for a wide viewing angle.
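For intuition on why a graded refractive index suppresses reflections, here is a minimal Python sketch comparing the normal-incidence Fresnel reflectance of an abrupt air-to-glass interface with an incoherent sum over many small index steps. The index values and step count are illustrative assumptions, not Edgehog's actual profile, and interference effects are ignored.

    # Minimal sketch: normal-incidence Fresnel reflectance R = ((n1-n2)/(n1+n2))^2
    # for an abrupt air-glass step versus a stack of small, graded index steps.
    # Index values are illustrative and do not describe Edgehog's actual profile.
    import numpy as np

    def reflectance(n1: float, n2: float) -> float:
        return ((n1 - n2) / (n1 + n2)) ** 2

    # Abrupt interface: air (n=1.0) to glass (n=1.5) gives ~4% per surface.
    print(f"abrupt step: {reflectance(1.0, 1.5):.3%}")

    # Graded profile approximated by 20 small steps: each step reflects far less,
    # and (ignoring interference) the summed single-pass reflectance is much lower.
    steps = np.linspace(1.0, 1.5, 21)
    graded = sum(reflectance(a, b) for a, b in zip(steps[:-1], steps[1:]))
    print(f"graded profile (20 steps): {graded:.3%}")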

Email: info@edgehogtech.com
Phone: +1 (438) 230 0101
Web: http://www.edgehogtech.com


onsemi Analyst Day 2023


onsemi held its annual Analyst Day on May 16, 2023. A video recording is available in the original post.

PDF slides are also available here: https://www.onsemi.com/site/pdf/2023_Analyst_Day_Presentation.pdf

Image sensor-related slides start around slide 63.

12 ps resolution Vernier time-to-digital converter


Huang et al. from Shanghai Advanced Research Institute recently published a paper titled "A 13-Bit, 12-ps Resolution Vernier Time-to-Digital Converter Based on Dual Delay-Rings for SPAD Image Sensor" in Sensors journal.

Link: https://www.mdpi.com/1424-8220/21/3/743

Abstract:
A three-dimensional (3D) image sensor based on Single-Photon Avalanche Diode (SPAD) requires a time-to-digital converter (TDC) with a wide dynamic range and fine resolution for precise depth calculation. In this paper, we propose a novel high-performance TDC for a SPAD image sensor. In our design, we first present a pulse-width self-restricted (PWSR) delay element that is capable of providing a steady delay to improve the time precision. Meanwhile, we employ the proposed PWSR delay element to construct a pair of 16-stages vernier delay-rings to effectively enlarge the dynamic range. Moreover, we propose a compact and fast arbiter using a fully symmetric topology to enhance the robustness of the TDC. To validate the performance of the proposed TDC, a prototype 13-bit TDC has been fabricated in the standard 0.18-µm complementary metal–oxide–semiconductor (CMOS) process. The core area is about 200 µm × 180 µm and the total power consumption is nearly 1.6 mW. The proposed TDC achieves a dynamic range of 92.1 ns and a time precision of 11.25 ps. The measured worst integral nonlinearity (INL) and differential nonlinearity (DNL) are respectively 0.65 least-significant-bit (LSB) and 0.38 LSB, and both of them are less than 1 LSB. The experimental results indicate that the proposed TDC is suitable for SPAD-based 3D imaging applications.
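As a rough numerical check on the reported figures: in a Vernier TDC the LSB is the difference between the slow and fast stage delays, and the full-scale range is roughly 2^N times the LSB. The sketch below uses illustrative stage delays (assumptions, not values from the paper) chosen to give the reported 11.25 ps precision.

    # Minimal sketch of Vernier TDC arithmetic. The slow/fast stage delays are
    # illustrative assumptions; only the 13-bit width and ~11.25 ps LSB relate
    # to the paper's reported numbers.
    T_SLOW_PS = 111.25  # delay of one slow-ring stage (assumed)
    T_FAST_PS = 100.00  # delay of one fast-ring stage (assumed)

    lsb_ps = T_SLOW_PS - T_FAST_PS        # Vernier resolution = delay difference
    bits = 13
    full_scale_ns = (2 ** bits) * lsb_ps / 1000.0

    print(f"LSB = {lsb_ps:.2f} ps")                # 11.25 ps
    print(f"full scale ~ {full_scale_ns:.1f} ns")  # ~92.2 ns vs. 92.1 ns measured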
 

Figure captions from the paper:

  •  Structure and operation of a typical Single-Photon Avalanche Diode (SPAD)-based direct time-of-flight (D-ToF) system.
  •  Principle block diagram of the proposed Vernier time-to-digital converter (TDC).
  •  Architecture of the TDC core implemented with the 16-stage dual delay-rings.
  •  Timing diagram of the TDC core.
  •  Schematic of the proposed pulse-width self-restricted (PWSR) delay element.
  •  Simulated results of the proposed PWSR delay element: (a) dependence of the delay time on the control voltage VNL/VNS and (b) dependence of the delay time on temperature.
  •  Block diagram of the 3D image sensor based on the proposed TDC (right) and its pixel circuit schematic (left).


IDQuantique provides QRNG capabilities to Samsung Galaxy phones


From IDQuantique: https://www.idquantique.com/sk-telecom-and-samsung-unveil-the-galaxy-quantum-4/

SK Telecom and Samsung unveil the Galaxy Quantum 4, providing more safety and performance with IDQ’s QRNG Chip

Geneva, June 12th 2023

ID Quantique (IDQ), the world leader in quantum-safe security solutions, SK Telecom and Samsung Electronics have worked together to release the ‘Galaxy Quantum 4’, the fourth Samsung smartphone equipped with quantum technology designed to protect customers’ information.

With features matching those of Samsung’s flagship smartphones of the S23 series – i.e. waterdrop camera with image stabilization (OIS) and nightography (night/low-light shooting), rear glass design, large capacity battery – along with strengthened quantum-safe technology, the Galaxy Quantum 4 will be a new choice for customers who value both high performance and security.

Like its predecessor, the Galaxy Quantum 4 is equipped with the world’s smallest (2.5 mm x 2.5 mm) Quantum Random Number Generator (QRNG) chipset, designed by ID Quantique, enabling trusted authentication and encryption of information. It allows smartphone holders to use an even wider range of applications and services in a safer and more secure manner by generating unpredictable true random numbers.

IDQ’s QRNG chip enhances the security of a very large number of services provided by the operator. The QRNG protects processes such as log-in, authentication, payment, unlock and OTP generation in service apps ranging from financial apps to social media apps and games, offering a much higher level of trust to users.

As an example, when an application provides authentication services, sensitive data such as fingerprints and facial images must be protected. Our QRNG, embedded in this new smartphone, can therefore be leveraged to generate encryption keys and, in conjunction with the keystore of the terminal, provide quantum enhanced security every time a user logs in to the app. The QRNG is also used to encrypt data stored in the external memory card.
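To make the key-generation flow concrete, here is a hedged Python sketch: os.urandom stands in for the phone's QRNG chip, and the surrounding pattern is a generic illustration, not IDQ's or Samsung's actual API.

    # Hedged sketch: deriving a symmetric key from hardware-supplied randomness.
    # os.urandom stands in for the QRNG chip; the flow is a generic illustration,
    # not IDQ's or Samsung's actual implementation.
    import hashlib
    import os

    def generate_key(entropy_source=os.urandom, key_bytes: int = 32) -> bytes:
        """Draw raw random bytes and condition them into a fixed-size key."""
        raw = entropy_source(64)  # on the phone, this would come from the QRNG
        return hashlib.sha256(raw).digest()[:key_bytes]

    key = generate_key()
    print(len(key), "byte key, e.g. for encrypting data on the external memory card")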

Like the previous versions, the ‘Galaxy Quantum 4’ offers a differentiated security experience by providing a ‘quantum indicator’ in the status bar, so customers can see that they are using a quantum security service. Its price point is comparable to previous versions, but with increased performance and security.

“Protecting one’s private data is a priority for users. The Galaxy Quantum 4 is the latest in the Quantum series, which offers strong quantum security and premium performance. As a leading player in this area, we will continue to expand the use of quantum cryptography technology to provide users with greater security and safety,” said Moon Kab-in, Vice President and Head of Smart Device Center at SKT.

“Mobile phone users don’t want to get their data stolen. The Galaxy Quantum 4 includes top performances and more quantum-secured applications than ever before, bringing applications and services to a new level of security in the mobile phone industry” said Grégoire Ribordy, CEO and co-founder of ID Quantique.


Article on Machine Vision + AI Opportunities


From Semiconductor Engineering https://semiengineering.com/machine-vision-plus-ai-ml-opens-huge-opportunities/

Machine Vision Plus AI/ML Adds Vast New Opportunities

Traditional technology companies and startups are racing to combine machine vision with AI/ML, enabling it to “see” far more than just pixel data from sensors, and opening up new opportunities across a wide swath of applications.

In recent years, startups have been able to raise billions of dollars as new MV ideas come to light in markets ranging from transportation and manufacturing to health care and retail. But to fully realize its potential, the technology needs to address challenges on a number of fronts, including improved performance and security, and design flexibility.

Fundamentally, a machine vision system is a combination of software and hardware that can capture and process information in the form of digital pixels. These systems can analyze an image, and take certain actions based on how it is programmed and trained. A typical vision system consists of an image sensor (camera and lens), image and vision processing components (vision algorithm) and SoCs, and the network/communication components.
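A minimal Python sketch of the capture, process, decide, act loop described above, using OpenCV for capture and a placeholder "vision algorithm"; the camera index and the darkness threshold are illustrative assumptions.

    # Minimal sketch of a machine vision loop: sensor -> processing -> action.
    # The defect criterion and camera index are illustrative assumptions.
    import cv2

    def classify(frame) -> bool:
        """Placeholder vision algorithm: flag frames that are mostly dark."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return gray.mean() < 50  # hypothetical defect threshold

    cap = cv2.VideoCapture(0)  # image sensor (camera and lens)
    try:
        ok, frame = cap.read()
        if ok and classify(frame):          # image/vision processing component
            print("action: reject part")    # downstream network/actuator step
    finally:
        cap.release()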

Both still and video digital cameras contain image sensors. So do automotive sensors such as lidar, radar and ultrasound, which deliver an image in digital pixel form, although not at the same resolution. While most people are familiar with these types of images, a machine also can “see” heat and audio signal data, and it can analyze that data to create a multi-dimensional image.

“CMOS image sensors have seen drastic improvement over the last few years,” said Ron Lowman, strategic marketing manager at Synopsys. “Sensor bandwidth is not being optimized for human sight anymore, but rather for the value AI can provide. For instance, MIPI CSI, the dominant vision sensor interface, is not only increasing bandwidths, but also adding AI features such as Smart Region of Interest (SROI) and higher color depth. Although these color depth increases can’t be detected by the human eye, for machine vision it can improve the value of a service dramatically.”

Machine vision is a subset of the broader computer vision. “While both disciplines rely on looking at primarily image data to deduce information, machine vision implies ‘inspection type’ applications in an industry or factory setting,” said Amol Borkar, director of product management, marketing and business development, Tensilica Vision and AI DSPs at Cadence. “Machine vision relies heavily on using cameras for sensing. However, ‘cameras’ is a loaded term because we are typically familiar with an image sensor that produces RGB images and operates in the visible light spectrum. Depending on the application, this sensor could operate in infrared, which could be short wave, medium wave, long wave IR, or thermal imaging, to name a few variants. Event cameras, which are very hyper-sensitive to motion, were recently introduced. On an assembly line, line scan cameras are a slightly different variation from typical shutter-based cameras. Most current applications in automotive, surveillance, and medical rely on one or more of these sensors, which are often combined to do some form of sensor fusion to produce a result better than a single camera or sensor.”

Benefits
Generally speaking, MV can see better than people. The MV used in manufacturing can improve productivity and quality, lowering production costs. Paired with ADAS for autonomous driving, MV can take over some driving functions. Together with AI, MV can help analyze medical images.
The benefits of using machine vision include higher reliability and consistency, along with greater precision and accuracy (depending on camera resolution). And unlike humans, machines do not get tired, provided they receive routine maintenance. Vision system data can be stored locally or in the cloud, then analyzed in real-time when needed. Additionally, MV reduces production costs by detecting and screening out defective parts, and increases inventory control efficiency with OCR and bar-code reading, resulting in lower overall manufacturing costs.

Today, machine vision usually is deployed in combination with AI, which greatly enhances the power of data analysis. In modern factories, automation equipment, including robots, is combined with machine vision and AI to increase productivity.

How AI/ML and MV interact
With AI/ML, MV can self-learn and improve after capturing digital pixel data from sensors.
“Machine vision (MV) and artificial intelligence (AI) are closely related fields, and they often interact in various ways,” said Andy Nightingale, vice president of product marketing at Arteris IP. “Machine vision involves using cameras, sensors, and other devices to capture images or additional data, which is then processed and analyzed to extract useful information. Conversely, AI involves using algorithms and statistical models to recognize patterns and make predictions based on large amounts of data.”
This also can include deep learning techniques. “Deep learning is a subset of AI that involves training complex neural networks on large datasets to recognize patterns and make predictions,” Nightingale explained. “Machine vision systems can use deep learning algorithms to improve their ability to detect and classify objects in images or videos. Another way that machine vision and AI interact is through the use of computer vision algorithms. Computer vision is a superset of machine vision that uses algorithms and techniques to extract information from images and videos. AI algorithms can analyze this information and predict what is happening in the scene. For example, a computer vision system might use AI algorithms to analyze traffic patterns and predict when a particular intersection will likely become congested. Machine vision and AI can also interact in the context of autonomous systems, such as self-driving cars or drones. In these applications, machine vision systems are used to capture and process data from sensors. In contrast, AI algorithms interpret this data and make decisions about navigating the environment.”
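As a concrete illustration of the "deep learning for classification" step described in the quote, here is a minimal sketch using a pretrained torchvision model; the model choice, preprocessing and the image path are generic assumptions, not specific to any system mentioned here.

    # Minimal sketch: classifying a single captured frame with a pretrained CNN.
    # Model choice, preprocessing and the image path are generic illustrations.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open("frame.jpg")  # hypothetical frame from the vision system
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    print("predicted class index:", logits.argmax(dim=1).item())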

AI/ML, MV in autonomous driving
AI has an increasing number of roles in modern vehicles, but the two major roles are in perception and decision making.

“Perception is the process of understanding one’s surroundings through onboard and external sensor arrays,” said David Fritz, vice president of hybrid and virtual systems at Siemens Digital Industries Software. “Decision-making first takes the understanding of the surrounding state and a goal such as moving toward the destination. Next, the AI decides the safest, most effective way to get there by controlling the onboard actuators for steering, braking, accelerating, etc. These two critical roles address very different problems. From a camera or other sensor, the AI algorithms will use raw data from the sensors to perform object detection. Once an object is detected, the perception stack will classify the object, for example, whether the object is a car, a person, or an animal. The training process is lengthy and requires many training sets presenting objects from many different angles. After training, the AI network can be loaded into the digital twin or physical vehicle. Once objects are detected and classified decisions can be made by another trained AI network to control steering, braking, and acceleration. Using a high-fidelity digital twin to validate the process virtually has been shown to result in safer, more effective vehicles faster than simply using open road testing.”

How much AI/ML is needed is a question frequently asked by developers. In the case of modern factories, MV can be used to simply detect and pick out defective parts in an assembly line or employed to assemble automobiles. Doing the latter requires advanced intelligence and a more sophisticated design to ensure timing, precision, and calculation of motion and distance in the assembly process.
“Automation using robotics and machine vision has increased productivity in modern factories,” observed Geoff Tate, CEO of Flex Logix. “Many of these applications use AI. A simple application — for instance, detecting if a label is applied correctly — does not require a great deal of intelligence. On the other hand, a sophisticated, precision robot arm performing 3D motion requires much more GPU power. In the first application, one tile of AI IP will be sufficient, while the second application may need multiple tiles. Having flexible and scalable AI IPs would make designing robotics and machine vision much easier.”

Applications
Machine vision applications are limited only by one’s imagination. MV can be used in almost any industrial and commercial segment, so long as it requires vision and processing. Here is a partial list:

  •  Transportation (autonomous driving, in-cabin monitoring, traffic flow analysis, moving violation and accident detection);
  •  Manufacturing and automation (productivity analysis, quality management);
  •  Surveillance (motion detection and intrusion monitoring);
  •  Health care (imaging, cancer and tumor detection, cell classification);
  •  Agriculture (farm automation, plant disease and insect detection);
  •  Retail (customer tracking, empty shelf detection, theft detection), and
  •  Insurance (accident scene analysis from images).

There are many other applications. Consider drinking water or soft drink bottling. A machine vision system can be used to inspect fill levels, which typically is done by highly efficient robots. But robots occasionally make mistakes. MV can ensure the fill level is consistent and the labels are applied correctly.

Detecting any machine parts that deviate from measurement specification limits is another job for MV. Once the MV is trained on the specification, it can detect the parts that are outside the specification limits.
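The pass/fail logic at the end of such a measurement-inspection pipeline is straightforward; here is a minimal Python sketch in which the nominal dimension, tolerance and measured values are all hypothetical.

    # Minimal sketch of a tolerance check on dimensions reported by a vision system.
    # Nominal value, tolerance and measurements are hypothetical.
    NOMINAL_MM = 25.00
    TOLERANCE_MM = 0.05

    def within_spec(measured_mm: float) -> bool:
        return abs(measured_mm - NOMINAL_MM) <= TOLERANCE_MM

    for d in (24.97, 25.02, 25.08):  # diameters measured by the vision system
        print(d, "PASS" if within_spec(d) else "REJECT")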

MV can detect uniform shapes such as squares or circles as well as odd-shaped parts, so it can be used to identify, detect, measure, count, and (with robots) pick and place.

Finally, combined with AI, MV can perform tire assembly with precision and efficiency. Nowadays, OEMs automate vehicle assembly with robots. One of these processes is installing the four wheels on a new vehicle. Using MV, a robotic arm can detect the correct distance and apply just the right amount of pressure to prevent any damage.

Types of MV
MV technologies can be divided into one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D).

1D systems analyze data one line at a time, comparing variations among groups. They are usually used in continuous production of items such as plastics and paper. 2D systems, in contrast, use a camera to scan line by line to form an area or 2D image. In some cases, the whole area is scanned and the object image can then be unwrapped for detailed inspection.

3D systems consist of multiple cameras or laser sensors to capture the 3D view of an object. During the training process, the object or the cameras need to be moved to capture the entire product. Recent technology can produce accuracy within micrometers. 3D systems produce higher resolution but are also more expensive.

Emerging MV startups and new innovations
Tech giants, including IBM, Intel, Qualcomm, and NVIDIA, have publicly discussed investments in MV. In addition, many startups are developing new MV solutions, such as Airobotics, Arcturus Networks, Deep Vision AI, Hawk-Eye Innovations, Instrumental, Landing AI, Kinara, Mech-Mind, Megvii, NAUTO, SenseTime, Tractable, ViSenze, Viso, and others. Some of these companies have been able to raise funding in excess of $1 billion.

In transportation, insurance companies can use MV to scan photographs and videos of scenes of accidents and disasters for financial damage analysis. Additionally, AI-based MV can power safety platforms to analyze driver behavior.

In software, computer vision platforms can be created without coding knowledge. Other startups have developed MV authentication software. And in the field of sports, AI, vision, and data analysis could give coaches the ability to understand how decisions are made by players during a game. Also, one startup devised a cost-reduction idea for surveillance by combining AI and MV in unmanned aerial drone design.

Both MV and AI are changing quickly and will continue to improve in performance, including precision and accuracy, while the cost of GPU and ML compute will come down, propelling new MV applications.

Arteris’ Nightingale noted there will be further improvements in accuracy and speed. “Machine vision systems will likely become more accurate and faster. This will be achieved through advancements in hardware, such as sensors, cameras, and processors, as well as improvements in algorithms and machine learning models,” he said, pointing to an increased use of deep learning, as well. “Deep learning has been a significant driver of progress in machine vision technology in recent years, and it is likely to play an even more substantial role in the future. Deep learning algorithms can automatically learn data features and patterns, leading to better accuracy and performance. There will be an enhanced ability to process and analyze large amounts of data, as machine vision technology can process and analyze large amounts of data quickly and accurately. We may also see advancements in machine vision systems that can process significantly larger datasets, leading to more sophisticated and intelligent applications.”

Further, MV and AI are expected to integrate with other technologies to provide additional high-performance, real-time applications.


“Machine vision technology is already integrated with other technologies, such as robotics and automation,” he said. “This trend will likely continue, and we may see more machine vision applications in health care, transportation, and security. As well, there will be more real-time applications. Machine vision technology is already used for real-time applications, such as facial recognition and object tracking. In the future, we may see more applications that require real-time processing, such as self-driving cars and drones.”

MV design challenges
Still, there are challenges in training an MV system. Its accuracy and performance depend on how well the MV is trained. Inspection can encompass parameters such as orientation, variation of the surfaces, contamination, and accuracy tolerances such as diameter, thickness, and gaps. 3D systems can perform better than 1D or 2D systems when detecting cosmetic and surface variation effects. In other cases, when seeing an unusual situation, human beings can draw on knowledge from a different discipline, while MV and AI may not have that ability.

“Some of today’s key challenges include data flow management and control – especially with real-time latency requirements such as those in automotive applications — while keeping bandwidth to a minimum,” said Alexander Zyazin, senior product manager in Arm‘s Automotive Line of Business. “In camera-based systems, image quality (IQ) remains critical. It requires a hardware design to support ultra-wide dynamic range and local tone mapping. But it also requires IQ tuning, where traditionally subjective evaluation by human experts was necessary, making the development process lengthy and costly. The new challenge for MV is that this expertise might not result in the best system performance, as perception engines might prefer to see images differently to humans and to one another, depending on the task.”

In general, machines can do a better job when doing mundane tasks over and over again, or when recognizing an image with more patterns than humans can typically process. “As an example, a machine may do a better job recognizing an anomaly in a medical scan than a human, simply because the doctor may make a mistake, be distracted or tired,” said Thomas Andersen, vice president for AI and machine learning at Synopsys. “When inspecting high-precision circuits, a machine can do a much better job analyzing millions of patterns and recognizing errors, a task a human could not do, simply due to the size of the problem. On the other hand, machines have not yet reached the human skill of recognizing the complex scenes that can occur while driving a car. It may seem easy for a human to recognize and anticipate certain reactions, while the machine may be better in ‘simple’ situations that a human easily could deal with, but did not due to a distraction, inattention or incapacitation – for example auto stop safety systems to avoid an imminent collision. A machine can always react faster than a human, assuming it interprets the situation correctly.”

Another challenge is making sure MV is secure. With cyberattacks increasing constantly, it will be important to ensure no production disruption or interference from threat actors.

“Security is critical to ensuring the output of MV technology isn’t compromised,” said Arm’s Zyazin. “Automotive applications are a good example of the importance of security in both hardware and software. For instance, the information processed and extracted from the machine is what dictates decisions such as braking or lane-keep assist, which can pose a risk to those inside the vehicle if done incorrectly.”

Conclusion
MV designs include a mixture of chips (processors, memories, security), IPs, modules, firmware, hardware and software. The rollout of chiplets and multi-chip packaging will allow those systems to be combined in novel ways more easily and more quickly, adding new features and functions and improving the overall efficiency and capabilities of these systems.

“Known good die (KGD) solutions can provide cost- and space-efficient alternatives to packaged products with limited bonding pads and wires,” said Tetsu Ho, DRAM manager at Winbond. “That helps improve design efficiency, provides enhanced hardware security performance, and especially time-to-market for product launch. These die go through 100% burn-in and are tested to the same extent as discrete parts. KGD 2.0 is needed to assure end-of-line yield in 2.5D/3D assembly and 2.5D/3D multichip devices to realize improvements in PPA (bandwidth performance, power efficiency, and area) as miniaturization is driven by the explosion of technologies such as edge-computing AI.”

This will open new options for MV in new and existing markets. It will be used to support humans in autonomous driving, help robots perform with precision and efficiency in manufacturing, and perform surveillance with unmanned drones. In addition, MV will be able to explore places that are considered dangerous for humans, and provide data input and analysis for many fields, including insurance, sports, transportation, defense, medicine, and more.


GPixel (Changguang Chenxin) files for an IPO


Original article (in Chinese): https://finance.eastmoney.com/a/202307032768592891.html

English translation using Google Translate:

In this IPO, Changguang Chenxin intends to raise 1.557 billion yuan to fund R&D and industrialization projects for serialized CMOS image sensors in the fields of machine vision, scientific instruments, professional imaging and medical imaging, as well as the construction of a high-end CMOS image sensor R&D center and supplementary working capital.

According to the prospectus, Changguang Chenxin focuses on the research and development, design, testing and sales of high-performance CMOS image sensors, as well as related customized services.

The company's customers include overseas manufacturers such as Customer D, Teledyne, Vieworks and Adimec; domestic manufacturers such as Hikvision Robotics, Huarui Technology, Xintu Optoelectronics and Eco Optoelectronics; as well as scientific research institutes such as the Changchun Institute of Optics and Mechanics of the Chinese Academy of Sciences, the Shanghai Institute of Technology of the Chinese Academy of Sciences, the Xi’an Institute of Optics and Mechanics of the Chinese Academy of Sciences, and the National Astronomical Observatory of the Chinese Academy of Sciences.

In terms of performance, from 2020 to 2022 the company's operating income was 198 million yuan, 411 million yuan and 604 million yuan, respectively; net profit attributable to the parent over the same period was 59.3872 million yuan, -33.1685 million yuan and -83.1481 million yuan.

It is worth noting that Changguang Chenxin has overseas business risks. In the context of global cooperation in the integrated circuit supply chain, overseas procurement and overseas sales are an important part of the company's business activities. During the reporting period, the company's overseas procurement accounted for more than 80%, and overseas sales accounted for more than 30%.

In addition, the company carries a high proportion of inventory and faces the risk of price declines. At the end of each reporting period, the book value of inventories was 80.1680 million yuan, 224 million yuan and 304 million yuan, respectively, accounting for 23.59%, 40.94% and 29.05% of total assets and remaining at a relatively high level overall.


Report predicts large growth in CCD image sensor market


[(July 5, 2023): There is a strong suspicion that this is a machine generated article and so its veracity is questionable.]

Experts Predict Stunning Growth for Global Image Sensor Market, Reaching USD 55.8 Billion by 2032

market.us recently published a research report, "Global Image Sensor Market By Technology, By Type, By Application, By Region and Companies - Industry Segment Outlook, Market Assessment, Competition Scenario, Trends, and Forecast 2023-2032". According to the report, the global image sensor market was valued at USD 26.1 billion in 2022 and is projected to reach USD 55.8 billion by 2032, growing at a CAGR of 8.1% from 2023 to 2032.
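As a quick check on the projection, the implied compound annual growth rate from USD 26.1 billion in 2022 to USD 55.8 billion in 2032 works out to roughly 7.9% over ten years, close to the quoted 8.1% (which the report states for 2023-2032); a short Python computation:

    # Verify the implied CAGR from the report's 2022 and 2032 market-size figures.
    start, end, years = 26.1, 55.8, 10
    cagr = (end / start) ** (1 / years) - 1
    print(f"implied CAGR over 2022-2032: {cagr:.1%}")  # ~7.9%, close to the quoted 8.1%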

Rising security spending in public places worldwide, combined with technology designed to bolster anti-terror equipment and prevent security breaches, is expected to drive this sector of the industry forward.

The global image sensors market is poised for significant growth as technological advancements and expanding applications continue to fuel demand. With an increasing need for high-quality imaging solutions across industries such as automotive, consumer electronics, healthcare, and security, the image sensors market is expected to reach new milestones in the coming years.

Key Takeaways:

  • In 2022, the 2D segment emerged as the top revenue generator in the Global Image Sensors Market.
  • The Automotive Sector segment is dominating the market in terms of application and is expected to grow significantly from 2023 to 2032.
  • The Asia-Pacific Region held the largest revenue share of 41% in 2022, establishing its dominance in the market.
  • Europe secured the second position in revenue share in 2022 and is projected to experience substantial growth from 2023 to 2032.

Sensors have quickly become an indispensable element of modern vehicles, Advanced Driver Assistance Systems (ADAS), medical devices and automated production technologies. They have also become more affordable, robust, precise, specific, smarter and more communicative. These benefits make them attractive for deployment in future smart infrastructure systems. Thanks to its superior image quality and sensitivity, Charge-Coupled Device (CCD) technology was previously the dominant solution.

Due to technological developments, CMOS image sensors have outsold CCD image sensors in shipment volume since 2004. CCD sensors use high-voltage analog circuitry, while CMOS sensors use less power and have smaller dimensions; nevertheless, CCD technology continues to generate revenue that contributes to the growth of the image sensor market.

Firstly, the continuous evolution of camera technologies and the proliferation of smartphones have revolutionized the consumer electronics sector. The demand for high-resolution imaging capabilities, augmented reality (AR) applications, and enhanced camera features in smartphones has been a major driving force behind the growth of the image sensors market.

Additionally, the automotive industry has witnessed a rapid integration of advanced driver-assistance systems (ADAS) and autonomous driving technologies. Image sensors play a crucial role in enabling these systems by providing accurate and real-time information for object detection, lane departure warnings, and adaptive cruise control. The increasing adoption of electric vehicles and the rising trend of in-car entertainment systems further contribute to the demand for image sensors in the automotive sector.

Furthermore, the healthcare industry has embraced the use of image sensors in medical devices such as endoscopes, surgical cameras, and X-ray machines. These sensors facilitate precise imaging, aiding medical professionals in diagnostics, minimally invasive surgeries, and patient monitoring. The growing emphasis on telemedicine and remote patient monitoring is also expected to drive the demand for image sensors in the healthcare sector.

In the realm of security and surveillance, image sensors have become indispensable components in surveillance cameras, facial recognition systems, and biometric scanners. The need for enhanced security measures across residential, commercial, and public sectors, coupled with the increasing adoption of smart city initiatives, is propelling the image sensors market forward.

To cater to the evolving market demands, leading companies in the image sensors industry are heavily investing in research and development activities to develop advanced sensor technologies. Innovations such as backside-illuminated (BSI) sensors, stacked CMOS sensors, and time-of-flight (ToF) sensors are gaining prominence, enabling improved image quality, higher resolutions, and faster data processing.

Top Trends in Global Image Sensors Market

Many vendors are now adopting CMOS image sensor technology, signalling its rapid advancement into low-cost camera designs. Although CCD sensors can offer comparable image quality at similar price points, CMOS sensors have grown increasingly popular due to their on-chip functionality in cost-sensitive markets such as consumer electronics, automotive, security, surveillance and others.

Consumer electronics has seen an explosion of demand for smartphones equipped with both rear- and front-facing cameras, while the automotive sector adds further demand through vehicles equipped with Advanced Driver Assistance Systems (ADAS) that enhance driver safety. Furthermore, because CMOS image sensors can be used in security applications even under low-light or dim lighting conditions, their usage has skyrocketed as security becomes ever more critical in business operations.

Sony Corporation of Japan holds an unparalleled position in the CMOS sensor market and was a pioneer in commercializing automotive cameras equipped with its sensors. To increase production capacity of stacked image sensors for automotive cameras, Sony invested USD 895 million (105 billion JPY).

SmartSens is an industry leader in CMOS image sensors. It recently unveiled the SC550XS, an ultra-high-resolution 50MP image sensor featuring 1.0-micrometer pixels that uses SmartSens' proprietary SmartClarity-2, SFCPixel and PixGainHDR technologies to produce superior picture quality, built on a 22nm HKMG stack process for outstanding imaging performance.

Metalenz, an international start-up that develops meta-optic lens technology, recently unveiled an innovation that embeds polarization sensing capabilities directly into mobile and consumer devices, with the potential to improve and ultimately revolutionise healthcare management features.

Competitive Landscape

The leaders in image sensor market share include:

  • Sony Corporation
  • Samsung Electronics Co. Ltd
  • ON Semiconductor Corporation
  • STMicroelectronics Co. Ltd

Image sensor manufacturers are continuously making innovations to their products that offer more robust, accurate sensing at lower costs - Time-of-Flight (ToF) technology has emerged as a game-changer here.

Recent Trends of Image Sensor Market

In February 2022, Realme announced the European availability of its 9 Pro Series smartphones equipped with Sony's IMX766 image sensor. The sensor measures 1/1.56", offering large pixels for photography with optical image stabilisation (OIS), along with an aperture size of 0.88 that facilitates taking clear photos even at long distances.

In January 2022, Sony Interactive Entertainment LLC (SIE) purchased Bungie Inc, an independent videogame developer that had long collaborated with SIE and is the studio behind iconic titles like Halo and Destiny. SIE gained access to Bungie's technical knowledge as well as its world-class live games, extending SIE's potential reach to billions of gamers around the globe.

Go to the original article...

Random number generation from image sensor noise

Image Sensors World        Go to the original article...

A recent preprint titled "Practical Entropy Accumulation for Random Number Generators with Image Sensor-Based Quantum Noise Sources" by Choi et al. is available here:  https://www.preprints.org/manuscript/202306.1169/v1

Abstract: The efficient generation of high-quality random numbers is essential in the operation of cryptographic modules. The quality of a random number generator is evaluated by the min-entropy of its entropy source. A typical method used to achieve high min-entropy in the output sequence is entropy accumulation based on a hash function. This is grounded in the famous Leftover Hash Lemma, which guarantees a lower bound on the min-entropy of the output sequence. However, hash-function-based entropy accumulation is generally slow. From a practical perspective, we need a new, efficient entropy accumulation method with a theoretical foundation for the min-entropy of the output sequence. In this work, we obtain a theoretical bound for the min-entropy of the output random sequence through a very efficient entropy accumulation using only bitwise XOR operations, where the input sequences from the entropy source are independent. Moreover, we examine our theoretical results by applying them to a quantum random number generator that uses dark noise arising from image sensor pixels as its entropy source.
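
The accumulation step the authors analyze is easy to picture in code. The following is a minimal illustrative sketch (not the authors' implementation): it XOR-folds several independent raw blocks from an entropy source into one output block, with os.urandom standing in for image-sensor dark-noise readouts and the 8-to-1 folding ratio chosen arbitrarily for the example.

import os  # os.urandom stands in for raw dark-noise samples in this illustration

def xor_accumulate(raw_blocks):
    """XOR together independent raw blocks of equal length to raise per-bit min-entropy."""
    out = bytearray(len(raw_blocks[0]))
    for block in raw_blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Example: fold 8 independent 16-byte raw samples into one 16-byte output block.
raw_samples = [os.urandom(16) for _ in range(8)]  # placeholder for sensor dark-noise readouts
print(xor_accumulate(raw_samples).hex())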





Go to the original article...

Image Sensors World Blog Feedback Survey 2023 is open until July 7, 2023

Image Sensors World        Go to the original article...

We would like to know more about our readership and get feedback on how this blog can better serve you.

Please fill out the form below (or use this Microsoft Form link: https://forms.office.com/r/n2Z4vvYYBN)
 
This survey is completely anonymous; we do not collect any personally identifying information (name, email, etc.)

There are 5 required questions. It won't take more than a few minutes.

Please respond by midnight your local time on July 7, 2023.

Thank you so much for your time!


Go to the original article...

Coherent – TriEye collaboration on SWIR imaging

Image Sensors World        Go to the original article...

PRESS RELEASE

COHERENT AND TRIEYE DEMONSTRATE LASER-ILLUMINATED SHORTWAVE INFRARED IMAGING SYSTEM FOR AUTOMOTIVE AND ROBOTIC APPLICATIONS

PITTSBURGH and TEL AVIV, Israel, June 26, 2023 (GLOBE NEWSWIRE) – Coherent Corp. (NYSE: COHR), a leader in semiconductor lasers, and TriEye Ltd., a pioneer in mass-market shortwave infrared (SWIR) sensing technology, today announced their successful joint demonstration of a laser-illuminated SWIR imaging system for automotive and robotic applications. 

The growing number of use cases for SWIR imaging, which expands vision in automotive and robotics beyond the visible spectrum, is driving demand for low-cost mass-market SWIR cameras. The companies leveraged TriEye’s spectrum enhanced detection and ranging (SEDAR) product platform and Coherent’s SWIR semiconductor laser to jointly design a laser-illuminated SWIR imaging system, the first of its kind that is able to reach lower cost points while achieving very high performance over a wide range of environmental conditions. The combination of these attributes is expected to enable wide deployment in applications such as front and rear cameras in cars as well as vision systems in industrial and autonomous robots. 

“This new solution combines best-in-class SWIR imaging and laser illumination technologies that will enable next-generation cameras to provide images through rain or fog, and in any lighting condition, from broad daylight to total darkness at night,” said Dr. Sanjai Parthasarathi, Chief Marketing Officer at Coherent Corp. “Both technologies are produced leveraging high-volume manufacturing platforms that will enable them to achieve the economies of scale required to penetrate markets in automotive and robotics.”

“We are happy to collaborate with a global leader in semiconductor lasers and to establish an ecosystem that the automotive and industrial robotics industries can rely on to build next-generation solutions,” said Avi Bakal, CEO and co-founder of TriEye. “This is the next step in the evolution of our technology innovation, which will enable mass-market applications. Our collaboration will allow us to continue revolutionizing sensing capabilities and machine vision by allowing the incorporation of SWIR technology into a greater number of emerging applications.”

The SEDAR product platform integrates TriEye’s next-generation CMOS-based SWIR sensor and illumination source with Coherent’s 1375 nm edge-emitting laser on surface-mount technology (SMT). The laser-illuminated imaging systems will enable the next generation of automotive cameras that can provide images through inclement weather. They will also enable autonomous robots to operate around the clock in any lighting conditions and move seamlessly between indoor and outdoor environments.

Coherent and TriEye will exhibit the imaging system at Laser World of Photonics in Munich, Germany, June 27-30, at Coherent’s stand B3.321. 





About TriEye

TriEye is the pioneer of the world’s first CMOS-based Shortwave Infrared (SWIR) image-sensing solutions. Based on advanced academic research, TriEye’s breakthrough technology enables HD SWIR imaging and accurate deterministic 3D sensing in all weather and ambient lighting conditions. The company’s semiconductor and photonics technology enabled the development of the SEDAR (Spectrum Enhanced Detection And Ranging) platform, which allows perception systems to operate and deliver reliable image data and actionable information while reducing expenditure by up to 100x compared with existing industry rates. For more information, visit www.TriEye.tech.


About Coherent

Coherent empowers market innovators to define the future through breakthrough technologies, from materials to systems. We deliver innovations that resonate with our customers in diversified applications for the industrial, communications, electronics, and instrumentation markets. Headquartered in Saxonburg, Pennsylvania, Coherent has research and development, manufacturing, sales, service, and distribution facilities worldwide. For more information, please visit us at coherent.com. 


Contacts

TriEye Ltd.
Nitzan Yosef Presburger
Head of Marketing
news@trieye.tech

Coherent Corp.
Mark Lourie
Vice President, Corporate Communications
corporate.communications@coherent.com 

Go to the original article...

RADOPT 2023: workshop on radiation effects on optoelectronics and photonics technologies

Image Sensors World        Go to the original article...



RADOPT 2023: Workshop on Radiation Effects on Optoelectronic Detectors and Photonics Technologies

28-30 Nov 2023 Toulouse (France)

 

 

First Call for Papers

You are cordially invited to participate in the second edition of the RADECS Workshop on Radiation Effects on Optoelectronics and Photonics Technologies (RADOPT 2023), to be held on 28th-30th November 2023 in Toulouse, France.

After the success of RADOPT 2021, this second edition of the workshop will continue to combine and replace two well-known events from the photonic devices and ICs community: the "Optical Fibers in Radiation Environments Days" (FMR) and the "Radiation Effects on Optoelectronic Detectors Workshop", traditionally organized every two years by the COMET OOE of CNES.

The objective of the workshop is to provide a forum for the presentation and discussion of recent developments regarding the use of optoelectronics and photonics technologies in radiation-rich environments. The workshop also offers the opportunity to highlight future prospects in the fast-moving space, high energy physics, fusion and fission research fields and to enhance exchanges and collaborations between scientists. Participation of young researchers (PhD students) is especially encouraged.
Oral and poster communications are solicited, reporting on original research (both experimental and theoretical) in the following areas:

  • Basic Mechanisms of radiation effects on optical properties of materials, devices and systems
  • Silicon Photonics, Photonic Integrated Circuits
  • Solar Cells
  • Cameras, Image sensors and detectors
  • Optically based dosimetry and beam monitoring techniques
  • Fiber optics and fiber-based sensors
  • Optoelectronics components and systems

Abstract Submission and Decision Notification:

Abstracts for both oral and poster presentations can be submitted. The final decision will be taken by the RADOPT Scientific Committee.

  • Abstract submission open: Monday April 3rd, 2023
  • Abstract submission deadline: Friday July 9th, 2023

Send abstracts to clementine.durnez@cnes.fr

Industrial Exhibition

An industrial exhibition will be organized during RADOPT 2023. The exhibits will be located adjacent to the auditorium where the oral sessions will be delivered. Please contact us for more details.



Go to the original article...

Canon presentation on CIS PPA Optimization

Image Sensors World        Go to the original article...

Canon presentation on "PPA Optimization Using Cadence Cerebrus for CMOS Image Sensor Designs" is available here: https://vimeo.com/822031091

Some slides:







Go to the original article...

ICCP Program Available, Early Registration Ends June 22

Image Sensors World        Go to the original article...

The IEEE International Conference on Computational Photography (ICCP) program is now available online: https://iccp2023.iccp-conference.org/conference-program/

ICCP is an in-person conference to be held at the Monona Terrace Convention Center in Madison, WI (USA) from July 28-30, 2023.

Early registration ends June 22: https://iccp2023.iccp-conference.org/registration/

Friday, July 28th

09:00 Opening Remarks

09:30 Session 1: Polarization and HDR Imaging
1) Learnable Polarization-multiplexed Modulation Imager for Depth from Defocus
2) Polarization Multi-Image Synthesis with Birefringent Metasurfaces
3) Glare Removal for Astronomical Images with High Local Dynamic Range
4) Polarimetric Imaging Spaceborne Calibration Using Zodiacal Light

10:30 Invited Talk: Melissa Skala (UW-Madison)
Unraveling Immune Cell Metabolism and Function at Single-cell Resolution

11:00 Coffee break

11:30 Keynote: Aki Roberge (NASA)
Towards Earth 2.0: Exoplanets and Future Space Telescopes

12:30 Lunch; Industry Consortium Mentorship Event

14:00 Invited Talk: Lei Li (Rice)
New Generation Photoacoustic Imaging: From Benchtop Wholebody Imagers to Wearable Sensors

14:30 Session 2: Emerging and Unconventional Computational Sensing
1) CoIR: Compressive Implicit Radar
2) Parallax-Driven Denoising of Passively-Scattered Thermal Imagery
3) Moiré vision: A signal processing technology beyond pixels using the Moiré coordinate

15:15 Poster and demo Spotlights

15:30 Coffee break

16:00 Poster and demo Session 1

17:30 Community Poster and Demo Session


Saturday, July 29th

09:00 Invited Talk: Ellen Zhong (Princeton)
Machine Learning for Determining Protein Structure and Dynamics from Cryo-EM Images

09:30 Session 3: Neural and Generative Methods in Imaging
1) Learn to Synthesize Photorealistic Dual-pixel Images from RGBD frames
2) Denoising Diffusion Probabilistic Model for Retinal Image Generation and Segmentation
3) NeReF: Neural Refractive Field for Fluid Surface Reconstruction and Rendering
4) Supervision by Denoising

10:30 Invited Talk: Karen Schloss (UW-Madison)

11:00 Coffee break

11:30 Keynote: Aaron Hertzmann (Adobe)
A Perceptual Theory of Perspective

12:30 Lunch; Affinity Group Meetings

14:00 Invited Talk: Na Ji (UC Berkeley)

14:30 Session 4: Measuring Spectrum and Reflectance
1) Spectral Sensitivity Estimation Without a Camera
2) A Compact BRDF Scanner with Multi-conjugate Optics
3) Measured Albedo in the Wild: Filling the Gap in Intrinsics Evaluation
4) Compact Self-adaptive Coding for Spectral Compressive Sensing

15:30 Industry Consortium Talk: Tomoo Mitsunaga (Sony)
Computational Image Sensing at Sony

16:00 Poster and Demo Spotlights

16:15 Coffee Break

16:45 Poster and Demo Session 2

18:15 Reception


Sunday, July 30th

09:00 Session 5: Depth and 3D Imaging
1) Near-light Photometric Stereo with Symmetric Lights
2) Aberration-Aware Depth-from-Focus
3) Count-Free Single-Photon 3D Imaging with Race Logic

09:45 Invited Talk: Jules Jaffe (Scripps & UCSD)
Welcome to the Underwater Micro World: The Art and Science of Underwater Microscopy

10:15 Coffee Break

10:45 Invited Talk: Hooman Mohseini (Northwestern University)
New Material and Devices for Imaging

11:15 Keynote: Eric Fossum (Dartmouth)
Quanta Image Sensors and Remaining Challenges

12:15 Lunch; Industry Consortium Mentorship Event

12:45 Lunch (served)

14:00 Session 6: NLOS Imaging and Imaging Through Scattering Media
1) Isolating Signals in Passive Non-Line-of-Sight Imaging using Spectral Content
2) Fast Non-line-of-sight Imaging with Non-planar Relay Surfaces
3) Neural Reconstruction through Scattering Media with Forward and Backward Losses

14:45 Invited Talk: Jasper Tan (Glass Imaging)
Towards the Next Generation of Smartphone Cameras

15:15 Session 7: Holography and Phase-based Imaging
1) Programmable Spectral Filter Arrays using Phase Spatial Light Modulators
2) Scattering-aware Holographic PIV with Physics-based Motion Priors
3) Stochastic Light Field Holography

16:00 Closing Remarks

Go to the original article...

NEC uncooled IR camera uses carbon nanotubes

Image Sensors World        Go to the original article...

From JCN Newswire: https://www.jcnnewswire.com/english/pressrelease/82919/3/NEC-develops-the-world&aposs-first-highly-sensitive-uncooled-infrared-image-sensor-utilizing-carbon-

NEC develops the world's first highly sensitive uncooled infrared image sensor utilizing carbon nanotubes

- More than three times the sensitivity of conventional uncooled infrared image sensors -
TOKYO, Apr 10, 2023 - (JCN Newswire) - NEC Corporation (TSE: 6701) has succeeded in developing the world's first high-sensitivity uncooled infrared image sensor that uses high-purity semiconducting carbon nanotubes (CNTs) in the infrared detection area. This was accomplished using NEC's proprietary extraction technology. NEC will work toward the practical application of this image sensor in 2025.

Infrared image sensors convert infrared rays into electrical signals to acquire necessary information, and can detect infrared rays emitted from people and objects even in the dark. Therefore, infrared image sensors are utilized in various fields to provide a safe and secure social infrastructure, such as night vision to support automobiles driving in the darkness, aircraft navigation support systems and security cameras.

There are two types of infrared image sensors, the "cooled type," which operates at extremely low temperatures, and the "uncooled type," which operates near room temperature. The cooled type is highly sensitive and responsive, but requires a cooler, which is large, expensive, consumes a great deal of electricity, and requires regular maintenance. On the other hand, the uncooled type does not require a cooler, enabling it to be compact, inexpensive, and to consume low power, but it has the issues of inferior sensitivity and resolution compared to the cooled type.






 


In 1991, NEC was the first in the world to discover CNTs, and it is now a leader in research and development related to nanotechnology. In 2018, NEC developed a proprietary technology to extract only semiconducting-type CNTs at high purity from single-walled CNTs that have a mixture of metallic and semiconducting types. NEC then discovered that thin films of semiconducting-type CNTs extracted with this technology have a large temperature coefficient of resistance (TCR) near room temperature.

The newly developed infrared image sensor is the result of these achievements and know-how. NEC applied semiconductor-type CNTs based on its proprietary technology that features a high TCR, which is an important index for high sensitivity. As a result, the new sensor achieves more than three times higher sensitivity than mainstream uncooled infrared image sensors using vanadium oxide or amorphous silicon.
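
As a rough reminder of why TCR matters (a textbook relation, not taken from NEC's disclosure): in an uncooled bolometer pixel the signal is the resistance change produced by the small temperature rise from absorbed infrared power, so for a fixed bias current the output voltage scales directly with the TCR,

\[
\frac{\Delta R}{R} = \alpha_{\mathrm{TCR}}\,\Delta T
\quad\Rightarrow\quad
\Delta V \approx I_{\mathrm{bias}}\, R\, \alpha_{\mathrm{TCR}}\,\Delta T .
\]

All else being equal, a detection layer with roughly three times the TCR of vanadium oxide or amorphous silicon therefore yields roughly three times the responsivity.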

The new device structure was achieved by combining the thermal separation structure used in uncooled infrared image sensors, the Micro Electro Mechanical Systems (MEMS) device technology used to realize this structure, and the CNT printing and manufacturing technology cultivated over many years for printed transistors, etc. As a result, NEC has succeeded in operating a high-definition uncooled infrared image sensor of 640 x 480 pixels by arraying the components of the structure.

Part of this work was done in collaboration with Japan's National Institute of Advanced Industrial Science and Technology (AIST). In addition, a part of this achievement was supported by JPJ004596, a security technology research promotion program conducted by Japan's Acquisition, Technology & Logistics Agency (ATLA).

Going forward, NEC will continue its research and development to further advance infrared image sensor technologies and to realize products and services that can contribute to various fields and areas of society.


Go to the original article...

Webinar on Latest Trends in High-speed Imaging & Introduction to BSI Sensors

Image Sensors World        Go to the original article...

Webinar on the latest trends in high-speed cameras, introducing the BSI camera sensor.

Join this free tech talk by expert speakers from Vision Research (Phantom High-Speed Cameras), in which we explore the latest trends in high-speed cameras, focusing on Backside Illuminated (BSI) sensor cameras, the associated benefits of improved processing speed and fill factor, and the challenges in such high-speed designs.

Webinar registration [link]

Date: 22nd June 2023
Time: 2:30pm IST / 2:00am Pacific / 5:00am Eastern

Topics to be covered:
  • Introducing the BSI sensor camera
  • Introducing FORZA & Sensor Insights
  • Introducing the MIRO C camera
  • Demo & display of the High-Speed Camera & its accessories



Go to the original article...

A lens-less and sensor-less camera

Image Sensors World        Go to the original article...

An interesting combination of tech+art: https://bjoernkarmann.dk/project/paragraphica 

Paragraphica is a context-to-image camera that uses location data and artificial intelligence to visualize a "photo" of a specific place and moment. The camera exists both as a physical prototype and a virtual camera that you can try.




Will this put the camera and image sensor industry out of business? :)



Go to the original article...

Videos du jour — Sony, onsemi, realme/Samsung [June 16, 2023]

Image Sensors World        Go to the original article...


Stacked CMOS Image Sensor Technology with 2-Layer Transistor Pixel | Sony Official

Sony Semiconductor Solutions Corporation (“SSS”) has succeeded in developing the world’s first* stacked CMOS image sensor technology with 2-Layer Transistor Pixel.
This new technology will prevent underexposure and overexposure in settings with a combination of bright and dim illumination (e.g., backlit settings) and enable high-quality, low-noise images even in low-light (e.g., indoor, nighttime) settings.
LYTIA image sensors are designed to enable smartphone users to express and share their emotions more freely and to bring a creative experience far beyond your imagination. SSS continues to create a future where everyone can enjoy a life full of creativity with LYTIA.
*: As of announcement on December 16, 2021.



New onsemi Hyperlux Image Sensor Family Leads the Way in Next-Generation ADAS to Make Cars Safer
onsemi's new Hyperlux™ image sensors are steering the future of autonomous driving!
Armed with 150 dB ultra-high dynamic range to capture high-quality images in the most extreme lighting conditions, our Hyperlux™ sensors use up to 30% less power with a footprint that's up to 28% smaller than competing devices.
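
As a side note (simple unit conversion, not taken from onsemi's material), a dynamic range quoted in dB maps to a linear brightness ratio via 20*log10; the snippet below shows what 150 dB corresponds to.

def db_to_ratio(db):
    """Convert an image-sensor dynamic range in dB to a linear signal ratio (20*log10 convention)."""
    return 10 ** (db / 20)

print(f"150 dB ~ {db_to_ratio(150):.1e} : 1")  # roughly 3.2e7 : 1, brightest to darkest resolvable signal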
 


When realme11Pro+ gets booted with ISOCELL HP3 Super Zoom, a 200MP Image Sensor | realme
The realme 11 Pro+ is equipped with the ISOCELL HP3 SuperZoom, a 200MP image sensor, combined with realme’s advanced camera technology. What will you capture with this innovation?





Go to the original article...

Sony’s World-first two-layer image sensor: TechInsights preliminary analysis and results

Image Sensors World        Go to the original article...

By TechInsights Image Sensor Experts: Eric Rutulis, John Scott-Thomas, PhD

We first heard about it at IEDM 2021, and Sony provided more details at the 2022 IEEE Symposium on VLSI Technology and Circuits conference. Now it’s on the market and TechInsights has had our first look at the “world’s first” two-layer image sensor and we present our preliminary results here. The device was found in a Sony Xperia 1V smartphone main camera having a 48 MP, 1.12 µm pixel pitch and we can confirm it has dual photodiodes (a Left and Right photodiode in each pixel for full array PDAF). The die size measures 11.37 x 7.69 mm edge-to-edge.
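
As a quick consistency check on those numbers (our own arithmetic, not part of the TechInsights analysis): a 48 MP array at a 1.12 µm pitch implies an active pixel area of roughly 9.0 x 6.7 mm, which fits comfortably inside the measured 11.37 x 7.69 mm die once readout and pad periphery are accounted for. A minimal sketch, assuming a 4:3 (8000 x 6000) pixel layout:

# Rough pixel-array footprint from the reported resolution and pitch (4:3 layout assumed).
pixels_h, pixels_v = 8000, 6000        # ~48 MP, assumed aspect ratio
pitch_um = 1.12                        # reported pixel pitch

width_mm = pixels_h * pitch_um / 1000
height_mm = pixels_v * pitch_um / 1000
print(f"Active array ~ {width_mm:.2f} x {height_mm:.2f} mm (die measured at 11.37 x 7.69 mm)")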

In fact, the sensor actually has three layers of active silicon, with an Image Signal Processor (ISP) stacked in a conventional arrangement using a Direct Bond Interface (DBI) to the “second layer” (we will use Sony’s nomenclature when possible) of the CMOS Image Sensor (CIS). Figure 1 shows a SEM cross-section through the array. Light enters from the bottom of the image, through the microlenses and color filters. Each pixel is separated by an aperture grid (with compound layers) to increase the quantum efficiency. Front Deep Trench Isolation is used between each photodiode, and it appears that Sony is using silicon dioxide in the deep trench to improve Full Well Capacity and Quantum Efficiency (this will be confirmed with further analysis). This layer also has the planar Transfer Gate used to transfer photocharge from the diode to the floating diffusion. Above the first layer is the “second layer” of silicon that contains three transistors for each pixel: the Reset, Amp (Source-Follower) and Select transistors. These transistors sit above the second layer silicon, and connection to the first layer is achieved using “Deep contacts” which pass through the second layer, essentially forming Through Silicon Vias (TSVs). Finally, the ISP sits above the metallization of the second layer, connected using Hybrid (Direct) Bonding. The copper of the ISP used for connection to the CIS DBI Cu is not visible in this image.

Figure 1: SEM Cross-section through the sensor indicating the three active silicon layers.

Key to this structure is a process flow that can withstand the thermal cycling needed to create the thermal oxide and activate the implants on the second layer. Sony has described the process flow in some detail (IEDM 2021, “3D Sequential Process Integration for CMOS Image Sensor”).

Figure 2 is an image from this paper showing the process flow. The first layer photodiodes and Transfer Gate are formed, and the second layer is wafer bonded and thinned. Only then are the second layer gate oxides formed and the implants are activated. Finally, the deep contacts are formed, etching through the second layer, and contacting the first layer devices.

Figure 2: Process flow for two-layer CIS described in “3D Sequential Process Integration for CMOS Image Sensor”, IEDM 2021.


The interface between the first and second layer is shown in more detail in Figure 3. The Transfer Gate (TG in the image) is connected to the first metal layer of the second layer. Slightly longer deep contacts lie below the sample surface and are partially visible in the image. These connect the floating diffusion node between the first and second layer. A sublocal connection (below the sample surface) is used to interconnect four photodiodes just above the first layer to the source of the Reset FET and gate of the AMP (Source-Follower) FET.

 
                    Figure 3: SEM cross-section detail of the first and second layer interface.

The sublocal connection is explored more in Figure 4. This is a planar SEM image of the first layer at the substrate level. Yellow boxes outline the pixel, with PDL and PDR indicating the left and right photodiodes. One microlens covers each pixel. Sublocal connections are indicated and are used to interconnect the Floating Diffusion for two pixels and ground for four pixels. The sublocal connection appears to be polysilicon; this is currently being confirmed with further analysis.


Figure 4: SEM planar view of the pixel first layer at the substrate level.


The motivations for the two-layer structure are multiple. The photodiode full well capacity can be maintained even with the reduced pixel pitch. The use of sublocal contacts reduces the capacitance of the floating diffusion, increasing the conversion gain of the pixels. The increased area available on the second layer allows the AMP (Source-Follower) transistor area to be increased, reducing the noise (flicker and telegraph) created in the channel of this device.
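
The conversion-gain point can be quantified with the standard relation CG = q / C_FD; the capacitance values below are purely illustrative, not measured figures for this device.

# Illustrative conversion-gain calculation: CG = q / C_FD.
q = 1.602e-19                       # electron charge, coulombs
for c_fd_fF in (1.0, 0.8, 0.5):     # hypothetical floating-diffusion capacitances
    cg_uV_per_e = q / (c_fd_fF * 1e-15) * 1e6
    print(f"C_FD = {c_fd_fF:.1f} fF -> conversion gain ~ {cg_uV_per_e:.0f} uV/e-")

Halving the floating-diffusion capacitance doubles the conversion gain, which is why routing the floating diffusion through short sublocal connections rather than long metal wiring pays off in pixel sensitivity.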

It's worth taking a moment to appreciate Sony’s achievement here. The new process flow and deep contact technology allow two layers of active devices to be interconnected with an impressive 0.46 µm (center-to-center) spacing of the deep contacts (or Through Silicon Vias). Even the hybrid bonding to the ISP is just 1.12 µm; the smallest pitch TechInsights has seen to date. At the recent International Image Sensors Workshop, Sony described an upcoming generation that will use “buried” sublocal connections embedded in the first layer and pixel FinFets in the second layer (to be published). Perhaps we are seeing the first stages of truly three-dimensional circuitry, with active devices on multiple layers of silicon, all interconnected. Congratulations, Sony!

TechInsights' first Device Essentials analysis on this device will be published shortly with more analyses underway.

Access the TechInsights Platform for more content and reports on image sensors.



Go to the original article...

inVISION Days Conference presentations

Image Sensors World        Go to the original article...

inVISION Days Conference presentations are now available online.

The first day of the inVISION Days Conference will give an overview of current developments in cameras and lenses, such as new image sensors for applications outside the visible range, high-speed interfaces... The panel discussion will explore what to expect next in image sensors.

All webinars are available for free (create a login account first):
https://openwebinarworld.com/en/webinar/invision-days-day-1-cameras/#video_library
 

Session 1: Machine Vision Cameras
Session 2: Optics & Lenses
Session 3: High-Speed Vision

 

At the first inVISION Day Metrology, current applications and new technologies will be presented in four sessions: 3D Scanner, Inline Metrology, Surface Metrology, and CT & X-Ray. The free online conference will be complemented by a keynote speech, the panel discussion 'Metrology in the Digital Age' and the EMVA Pitches, where four start-up companies will present their innovations. You can find more information at invdays.com/metrology.

https://openwebinarworld.com/en/webinar/invision-day-metrology/
 
Session 1: 3D Scanner
Session 2: Inline Metrology
Session 3: Surface Metrology
Session 4: CT & X-ray

Go to the original article...

PetaPixel article on an 18K (316MP) HDR sensor

Image Sensors World        Go to the original article...

Link: https://petapixel.com/2023/06/12/sphere-studios-big-sky-cinema-camera-features-an-insane-18k-sensor/

Sphere Studios’ Big Sky Cinema Camera Features an Insane 18K Sensor

Sphere Studios has developed a brand new type of cinema camera called The Big Sky. It features a single 316-megapixel HDR image sensor that the company says is a 40x resolution increase over existing 4K cameras and PetaPixel was given an exclusive look at the incredible technology.

 


 

Those who have visited Las Vegas in the last few years may have noticed the construction of a giant sphere building near the Venetian Hotel. Set to open in the fall of 2023, the Sphere Entertainment Co has boasted that this new facility will provide “immersive experiences at an unparalleled scale” featuring a 580,000 square-foot LED display and the largest LED screen on Earth.

As PetaPixel covered last fall, the venue will house the world’s highest resolution LED screen: a 160,000 square-foot display plane that will wrap up, over, and behind the audience at a resolution over 80 times that of a high-definition television. The venue will have approximately 17,500 seats and a scalable capacity of up to 20,000 guests. While the facility for viewing these immersive experiences sounds impressive on its own, it leaves one wondering what kind of cameras and equipment are needed to capture the content that gets played there.

The company has described it as “an innovative new camera system developed internally that sets a new bar for image fidelity, eclipsing all current cinematic cameras with unparalleled edge-to-edge sharpness” — a very bold claim. While on paper it doesn’t seem much different from any other camera manufacturer’s claims about its next-gen system, spending time with the new system in person and seeing what it is capable of paints an entirely different picture that honestly has to be seen to be believed.

“Sphere Studios is not only creating content, but also technology that is truly transformative,” says David Dibble, Chief Executive Officer of MSG Ventures, a division of Sphere Entertainment focused on developing advanced technologies for live entertainment.

“Sphere in Las Vegas is an experiential medium featuring an LED display, sound system and 4D technologies that require a completely new and innovative approach to filmmaking. We created Big Sky – the most advanced camera system in the world – not only because we could, but out of innovative necessity. This was the only way we could bring to life the vision of our filmmakers, artists, and collaborators for Sphere.”

According to the company, the new Big Sky camera system “is a groundbreaking ultra-high-resolution camera system and custom content creation tool that was developed in-house at Sphere Studios to capture stunning video for the world’s highest resolution screen at Sphere. Every aspect of Big Sky represents a significant advancement on current state-of-the-art cinema camera systems, including the largest single sensor in commercial use capable of capturing incredibly detailed, large-format images.”

The Big Sky features an “18K by 18K” (or 18K Square Format) custom image sensor which absolutely dwarfs current full frame and large format systems. When paired with the Big Sky’s single-lens system, which the company boasts is the world’s sharpest cinematic lens, it can achieve the extreme optical requirements necessary to match Sphere’s 16K by 16K immersive display plane from edge to edge.

Currently the camera has two primary lens designs: a 150-degree field of view which is true to the view of the sphere where the content will be projected, and a 165-degree field of view which is designed for “overshoot and stabilization”, particularly useful in filming situations where the camera is in rapid motion or on an aircraft with a lot of vibration (e.g., a helicopter).

The Big Sky features a single 316-megapixel, 3-inch by 3-inch HDR image sensor that the company says is a 40x resolution increase over existing 4K cameras and 160x over HD cameras. In addition to its massive sensor size, the camera is capable of capturing 10-bit footage at 120 frames per second (FPS) in the 18K square format as well as 60 FPS at 12-bit.

“With underwater and other lenses currently in development, as well as the ability to use existing medium format lenses, Sphere Studios is giving immersive content creators all the tools necessary to create extraordinary content for Sphere,” the company says.

Since the media captured by the Big Sky camera is massive, it requires some substantial processing power as well as some objectively obscene amounts of storage solutions. As such, just like the lenses, housings (including underwater and aerial gimbals), and camera, the entire media recorder infrastructure was designed and built entirely in-house to precisely meet the company’s needs.

According to the engineering team at Sphere, “the Big Sky camera creates a 500 gigabit per second pipe off the camera with 400 gigabit of fiber between the camera head and the media recorder. The media recorder itself is currently capable of recording 30 gigabytes of data per second (sustained), with each media magazine containing 32 terabytes and holding approximately 17 minutes of footage.”
The company says the media recorder is capable of handling 600 gigabits per second of network connectivity, as well as built-in media duplication, to accelerate and simplify on-set and post-production workflows. This allows their creative team to swap out drives and continue shooting for as long as they need.
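
Those figures hang together on a back-of-envelope basis. The sketch below (our own arithmetic, using only the sensor and recorder numbers quoted above) estimates the raw pixel data rate and how long a 32 TB magazine lasts at the stated 30 GB/s sustained write speed.

# Back-of-envelope check of the quoted Big Sky data rates (inputs taken from the article).
pixels = 316e6          # 316-megapixel sensor
bits_per_pixel = 10     # 10-bit capture mode
fps = 120               # frames per second in the 18K square format

raw_gbps = pixels * bits_per_pixel * fps / 1e9
print(f"Raw pixel stream ~ {raw_gbps:.0f} Gbit/s (camera pipe quoted at 500 Gbit/s including overhead)")

magazine_bytes = 32e12       # 32 TB media magazine
write_bytes_per_s = 30e9     # 30 GB/s sustained recorder throughput
minutes = magazine_bytes / write_bytes_per_s / 60
print(f"One magazine lasts ~ {minutes:.1f} minutes at the sustained rate (article quotes ~17 minutes)")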

Basically, as long as they have power and extra media magazines, they can run the camera pretty much all day without any issues. I did ask the team about overheating and heat dissipation of the massive system, and they went into great detail about how the entire system has been designed with a sort of internal “chimney” that maintains airflow through the camera, ensuring it does not overheat and can keep running even in some of the craziest weather scenarios, ranging from being completely underwater to being surrounded by dust storms, without incident.

What’s even more impressive is that the camera can run completely separate from this recording technology as long as it is connected through its cable system, reportedly at distances of up to a mile away.

Since the entire system was built in-house, the team at Sphere Studios had to build their own image processing software specifically for Big Sky that utilizes GPU-accelerated RAW processing to make the workflows of capturing and delivering the content to the Sphere screen practical and efficient. Through the use of proxy editing, a standard laptop can be used, connected to the custom media decks to view and edit the footage with practically zero lag.

Why Is This A Big Deal?
While the specs on paper are unarguably mind-boggling, it’s practically impossible to express just how impressive the footage and the experience are until you see it captured and presented on the Sphere screens it was meant for.

The good news is that PetaPixel was invited to the Los Angeles division for a private tour and demonstration of the groundbreaking technology so we could see it all firsthand and not just go off of the press release. I wasn’t able to take photos or video myself — the images and video in this write-up were provided by the Sphere Studios team — but I can confirm that this technology is wildly impressive and will definitely change the filmmaking industry in the coming years.

When showing me the initial concepts and design mock-ups, the team didn’t think of the content they deliver as simply footage, but rather “experiential storytelling” and after having experienced it for myself, I wholeheartedly agree.

During my tour of the facility, I got to see the camera first hand, look at live footage and rendering in real-time, and see some test images and video footage, including some scenes that may make it into “Postcard from Earth”, the first experience being revealed at the Sphere in Las Vegas this fall. It features footage captured from all over the planet and should give viewers a truly unique perspective of what the planet, and this new camera system, have to offer.

On top of the absolutely massive camera, the system they have developed to “experience” the footage includes haptic seating, true personal-headphone level sound without the headphones from any seat, as well as a revolutionary “environmental” system that can help viewers truly feel the environment they are watching with changing temperatures, familiar scents, and even a cool breeze.

Something worth noting is that all of this came to life in effectively just a few short years. The camera started out as an “array” of existing 8K cameras mounted in a massive custom housing. This created an entirely new series of challenges when processing and rendering the massive visuals, which led to the development of the Big Sky single-lens camera itself, which is currently in its version 2.5 stage of development.

Each generation has also made the system more compact and efficient. The original system was over 100 pounds, the current version (v2) weighs a little over 60 pounds, and the next-generation lens now in development will bring the system under 30 pounds.

Equally impressive was the amount of noise the camera made, which is to say it was practically silent in operation. Even with the cooling system running it was as quiet or even quieter than most existing 8K systems in the cinematic world — comparing it to an IMAX wouldn’t even be fair… to the IMAX.

The Big Sky cameras are not up for sale (yet), but Sphere Studios is meeting with film companies and filmmakers to find ways to bring the technology to the home-entertainment world. A discussion we had on-site revolved around gimbals mounted on helicopters, airplanes, and automobiles, and how those systems, even “the best”, still experience some jitter/vibration that is often stabilized in post, which causes the footage to be cropped in.


The technology built for Big Sky helps eliminate a massive percentage of this vibration, and even without it, the sheer amount of resolution the camera offers can provide a ton of space for post-production stabilization. This alone could be a game changer for Hollywood when capturing aerial and “chase scene” footage from vehicles allowing for even more detail than ever before.

Big Sky’s premiere experience at Sphere in Las Vegas is set to open on September 29 with the first of 25 concerts by U2, as well as many other film and live event projects that will be announced soon.

Go to the original article...

Sony Business Segment meeting discusses ambitious expansion plan

Image Sensors World        Go to the original article...

Sony held its 2023 Business Segment meeting on May 24, 2023.
https://www.sony.com/en/SonyInfo/IR/library/presen/business_segment_meeting/
 

Slides from its image sensors division are below. Sony has quite ambitious plans to capture 85% of the automotive vision sensing market (slide 10).
https://www.sony.com/en/SonyInfo/IR/library/presen/business_segment_meeting/pdf/2023/ISS_E.pdf

































Go to the original article...

VoxelSensors announces Switching Pixels technology for AR/VR applications

Image Sensors World        Go to the original article...

GlobalNewswire: https://www.globenewswire.com/news-release/2023/05/29/2677822/0/en/VoxelSensors-Debuts-the-Global-Premiere-of-Revolutionary-Switching-Pixels-Active-Event-Sensor-Evaluation-Kit-for-3D-Perception-to-Seamlessly-Blend-the-Physical-and-Digital-Worlds.html

VoxelSensors Debuts the Global Premiere of Revolutionary Switching Pixels® Active Event Sensor Evaluation Kit for 3D Perception to Seamlessly Blend the Physical and Digital Worlds

BRUSSELS, Belgium, May 29, 2023 (GLOBE NEWSWIRE) -- VoxelSensors is to reveal its innovative 3D Perception technology, the Switching Pixels® Active Event Sensor (SPAES), and globally premiere the related Andromeda Evaluation Kit at AWE USA 2023. Experience this breakthrough technology from May 31 to June 2 at AWE booth #914 in Santa Clara (California, USA).

VoxelSensors’ Switching Pixels® Active Event Sensor is a novel category of ultra-low power and ultra-low latency 3D perception sensors for Extended Reality (XR) to seamlessly blend the physical and digital worlds.

Extended Reality device manufacturers require low power consumption and low latency 3D Perception technology to flawlessly blend the physical and digital worlds and unlock the true potential of immersive experiences. VoxelSensors’ patented Switching Pixels® Active Event Sensor technology has uniquely resolved these significant challenges and is the world’s first solution that has achieved a threshold of less than 10 milliwatts in terms of power consumption, combined with less than 5 milliseconds of latency. Furthermore, this is possible while being resistant to indoor and outdoor lighting at distances over 5 meters and being immune to crosstalk.

This breakthrough technology offers an alternative to traditional 3D sensors, eliminating the need for slow frames. It sends 3D data points serially to the device and application in real time, at nanosecond refresh rates. Designed for efficiency, SPAES delivers the lowest latency for perception applications at minimal power consumption, addressing previously unmet needs such as precise segmentation, spatial mapping, anchoring, and natural interaction.
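
To make the contrast with frame-based sensing concrete, here is a purely hypothetical sketch; the record layout, field names and units are our own illustration, not VoxelSensors' actual data format. It consumes a serial stream of individual timestamped 3D points as they arrive, instead of waiting for a complete depth frame.

from dataclasses import dataclass
from typing import Iterable

@dataclass
class VoxelEvent:            # hypothetical event record: one timestamped 3D point
    t_ns: int                # nanosecond timestamp
    x_mm: float
    y_mm: float
    z_mm: float

def process_stream(events: Iterable[VoxelEvent]) -> None:
    """Handle each 3D point as it arrives, rather than accumulating a full frame first."""
    for ev in events:
        if ev.z_mm < 500.0:  # example policy: react immediately to close objects
            print(f"t={ev.t_ns} ns: object at {ev.z_mm:.0f} mm")

# Example with a couple of synthetic events.
process_stream([VoxelEvent(0, 10.0, -5.0, 1200.0), VoxelEvent(800, 12.0, -4.0, 450.0)])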

“SPAES disrupts the standard in 3D Perception,” says Christian Mourad, co-founder and VP of Engineering at VoxelSensors. “The Andromeda Evaluation Kit, available for the selected OEMs and integrators in the summer of 2023, demonstrates our commitment to advancing XR/AR/MR and VR applications. This innovation, however, isn’t limited to Extended Reality and expands into robotics, the automotive industry, drones, and medical applications.”

VoxelSensors was founded in 2020 by a team of seasoned experts in the field of 3D sensing and perception, with over 50 years of collective experience. The team’s success includes co-inventing an efficient 3D Time-of-Flight sensor and camera technology, which leading tech company Sony acquired in 2015.

In May 2023, VoxelSensors announced a €5M investment led by Belgian venture capital firms Capricorn Partners and Qbic, with contributions from the investment firm finance&invest.brussels, along with existing investors and the team. The funding will bolster VoxelSensors' roadmap and talent acquisition, and enhance customer relations in the U.S. and Asia.

“At VoxelSensors, we aim to fuse the physical and digital realms until they're indistinguishable,” says Johannes Peeters, co-founder and CEO of VoxelSensors. “With Extended Reality gaining momentum it is our duty to discover, create, work, and play across sectors like gaming, healthcare, and manufacturing. Our Switching Pixels® Active Event Sensor technology stands ready to pioneer transformative user experiences!”

For information related to an Andromeda Evaluation Kit or a possible purchase contact: sales@voxelsensors.com.

Go to the original article...

Videos du jour — onsemi, CEA-Leti, Teledyne e2v [June 7, 2023]

Image Sensors World        Go to the original article...


 

Overcoming Challenging Lighting Conditions with eHDR: onsemi’s AR0822 is an innovative image sensor that produces high-quality 4K video at 60 frames-per-second.


Discover Wafer-to-wafer process: Discover CEA-Leti's expertise in hybrid bonding, covering the different stages of the wafer-to-wafer process in the CEA-Leti cleanroom, starting with Chemical Mechanical Planarization (CMP), through wafer-to-wafer bonding, alignment measurement, characterization of bonding quality, grinding and results analysis.

 

Webinar - Pulsed Time-of-Flight: a complex technology for a simpler and more versatile system: Hosted by Vision Systems Design and presented by Yoann Lochardet, 3D Marketing Manager at Teledyne e2v in June 2022, this webinar discusses how, at first glance, Pulsed Time-of-Flight (ToF) can be seen as a very complex technology that is difficult to understand and use. That is true in the sense that this technology is state-of-the-art and requires the latest technical advancements. However, it is a very flexible technology, with features and capabilities that reduce the complexity of the whole system, allowing for a simpler and more versatile system.


Go to the original article...

IISW Summary from TechInsights

Image Sensors World        Go to the original article...

The International Image Sensor Workshop 2023 offered an excellent overview of sensors past, present and future

John-Scott Thomas PhD, TechInsights (Image Sensor Subject Matter Expert)

After a long hiatus courtesy of COVID, the International Image Sensor Workshop (IISW) 2023 was held in-person at the charming Crieff Hydro Hotel in the highlands of Scotland from May 21-25. With over two hundred attendees by my count, the workshop presented a lively and informative forum for image sensor devices past, present and future. TechInsights was honored to open the meeting with a presentation on the state-of-the-art in small pixel (mobile) devices. With fifteen minutes available only the briefest overview was possible, and we focused on the technologies that enable the transition to the 0.56 micron pixel pitch (Samsung and OmniVision) and 0.70 micron (Sony) pixel pitch. You can read the TechInsights paper here.

Sony (presented by Masatak Sugimoto) then described the structure of a two-layer image sensor where the photodiode and transfer gate of the pixel is placed on one semiconductor layer and the reset, source-follower, and select transistors are placed on a lower layer. This structure allows optimization of the two layers with different processes for each and pushes the current limits of hybrid bonding. This was all the more interesting as TechInsights located a Sony sensor using 2-layer transistor pixels (in the Xperia 1V smartphone) as the workshop began. We’ll have plenty more analysis in our channels for this world-first device. Samsung (Sungsoo Choi) and OmniVision (Chung Yung Ai) then presented further technical details of the 0.56 micron pixels the two companies are producing. The first session was rounded out with another Samsung (Minho Kwon) presentation on a switchable resolution sensor and an onsemi (Vladi Korobov) surveillance sensor optimized for low light and Near Infra-red (NIR).
Following sessions discussed noise and pixel design. The Automotive session focused on High Dynamic Range, and a presentation by Manuel Innocent (onsemi) shared an impressive video clip showing an automotive camera emerging from a dark tunnel to bright sunlight with excellent image quality using a 150 dB sensor. Automotive cameras will be a high growth segment and are particularly suited to sensing outside the visible spectrum. More exotic applications included X-ray sensors, Ultraviolet and Short Wavelength Infrared sensors, discussed later in the conference. The final two sessions covered Time of Flight and SPAD sensors; already used in mobile applications, these are promising technologies in surveillance and automotive devices.

Of particular note were the discussions about digital image processing, artificial intelligence, and cybersecurity. There was general agreement that future devices will have much more digital processing included in the stacked Image Signal Processor, although many attendees felt most of the image processing should be performed on the applications processor when possible since this device uses a more advanced process node. The younger attendees showed a significant interest in digital image processing through their presentations, posters, and questions; a sign of things to come no doubt. This was highlighted by the two invited speakers. Charles Bouman (Purdue University) provided an overview of the abilities of computational imaging and emphasized the need for more dialogue between the image sensing community and the digital processing community. Jerome Chossat (STMicroelectronics) presented trends analysis clearly showing there will be plenty of computational power available in future stacked image sensors.

A banquet concluded the workshop – complete with a starlit (electric, of course) hall, bagpipes and kilts. Neil Dutton (STMicroelectronics) opened the evening and in general provided excellent management of the sessions. Boyd Fowler (OmniVision) presented awards to the best papers and posters, and finally three awards to seasoned veterans of the image sensor world. John Tower was recognized for his contributions to Image Sensor publications, Takeharu Goji Etoh for his sustained contributions to High Speed Cameras and Edoardo Charbon for imaging using SPAD arrays. Edoardo showcased an amazing video clip of a light pulse travelling through air and bouncing from mirrors. If you haven’t seen this before, you really should check it out.

Much of the value at a workshop happens with the conversations that take place out of session and at the many social events happening beyond formalities. This event reminded me of the importance of in-person meetings. TechInsights will continue to participate and watch this exciting field for further innovation. The International Image Sensor Society intends to provide all of the workshop papers on their website in the next few weeks.

You can also read the TechInsights paper here.

Go to the original article...

Compressive diffuse correlation spectroscopy with SPADs

Image Sensors World        Go to the original article...

Optics.org news article https://optics.org/news/14/5/9 about recently published work from U. Edinburgh. https://doi.org/10.1117/1.JBO.28.5.057001

University of Edinburgh improves diffuse imaging of blood flow

10 May 2023
New data processing approach could relieve bottleneck for speckle techniques in clinics.

Diffuse correlation spectroscopy (DCS) can assess blood flow non-invasively, by analyzing diffused light returning from illuminated areas of tissue and detecting the speckled spectral signals of blood cells in motion.

The potential impact of DCS was recognized in a 2022 SPIE report, which concluded that "an exciting era of technology transfer is emerging as research groups have spun-out well-established, early-stage startup ventures intending to commercialize DCS for clinical use."

The SPIE report identified the increasing availability of advanced single-photon avalanche diode (SPAD) detectors as a key factor in the current rise of DCS techniques. However, those same detectors have introduced a potential new hurdle, caused by the increased data handling requirements of diffuse spectroscopic methods.

The extremely high data rates of modern SPAD cameras can exceed the maximum data transfer rates of commonly used communication protocols, a bottleneck that has limited the scalability of SPAD cameras to higher pixel resolutions and hindered the development of better multispeckle DCS techniques.

A project based at the University of Edinburgh and funded by Meta Platforms has now demonstrated a new data compression scheme that could improve the sensitivity and usability of multispeckle DCS instruments.

The study, published in Journal of Biomedical Optics, describes a novel data compression scheme in which most calculations involving SPAD data are performed directly on a commercial programmable circuit called a field-programmable gate array (FPGA). This alleviates the previous need for high computational power and extremely fast data transfer rates between the DCS system and the host system upon which the data is visualized, according to the project.

Clearer views of the brain
If the key part of the computational analysis, a per-pixel calculation termed the autocorrelation function, takes place locally on the FPGA, then a higher imaging frame rate can be maintained than is possible with existing hardware autocorrelators.

To test this approach, the Edinburgh project constructed a large array SPAD camera in which 128 linear autocorrelators were embedded in an FPGA integrated circuit. Packaged into a camera module christened Quanticam, this was able to calculate 12,288 channels of data and compute the ensemble autocorrelation function from 192 x 64 pixels of DCS data in real time.
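
The per-pixel quantity being moved onto the FPGA is the intensity autocorrelation of each pixel's photon-count trace. As a software reference for what a linear autocorrelator computes, here is a simplified NumPy sketch of the normalized g2(tau); it illustrates the calculation only and is not the Edinburgh group's FPGA design.

import numpy as np

def g2(counts, max_lag):
    """Normalized intensity autocorrelation g2(tau) of a photon-count time series."""
    counts = np.asarray(counts, dtype=float)
    mean_sq = counts.mean() ** 2
    lags = np.arange(1, max_lag + 1)
    return np.array([np.mean(counts[:-lag] * counts[lag:]) / mean_sq for lag in lags])

# Example: simulated photon counts for one SPAD pixel; real DCS traces decorrelate faster for faster blood flow.
rng = np.random.default_rng(0)
trace = rng.poisson(lam=5, size=10_000)
print(g2(trace, max_lag=5))   # ~1.0 at every lag for uncorrelated counts

Computing and averaging this function across thousands of pixels on the FPGA means only the short g2 curves, rather than raw photon streams, need to cross the camera-to-host link.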

"Our proposed system achieved a significant gain in the signal-to-noise ratio, which is 110 times higher than that possible on a single-speckle DSC implementation and 3 times higher than other state-of-the-art multispeckle DSC systems," commented Robert Henderson from the University of Edinburgh.

If FPGA-based designs can help researchers adopt SPAD arrays with high pixel resolution but without the data processing load currently involved, then SPAD cameras could become more widely adopted in the biomedical research community. This would expand the horizons of multispeckle DCS to more areas of biomedical research, including the imaging of cerebral blood dynamics.

"Intense research effort in SPAD camera development is currently ongoing to improve camera capabilities toward even larger pixel count, shorter exposure time and higher detection probability," said the project in its paper. "Soon we should expect high-performance SPAD cameras with FPGA-embedded or even on-chip computing that could surpass the multispeckle DCS requirements for noninvasive detection of local brain activation."

Go to the original article...

Course on semiconductor radiation detectors in Barcelona July 3-7, 2023

Image Sensors World        Go to the original article...

The Barcelona Techno Weeks are a series of events focused on a specific technological topic of interest to both academia and industry. Each edition includes keynote presentations by world experts, networking activities, and a comprehensive course on solid state radiation detection. CERN and ICCUB have organized three previous editions devoted to semiconductor radiation detectors, in 2016, 2018, and 2021.

Detailed schedule is available here: https://indico.icc.ub.edu/event/176/timetable/#all.detailed

Course on semiconductor detectors
The core of the 7th Techno Week is a comprehensive in-person course on solid state radiation detection. It covers the physics of the interaction of radiation with matter, signal formation in detectors, the different solid state radiation and photon detection technologies, detector analog and digital pulse processing readout circuits, detector packaging and advanced interconnect technologies, and the use of radiation and photon detectors in scientific and industrial applications.
 
The next edition will take place in person from 3 to 7 July 2023. The course is divided into four sections: Sensors and Interconnects, Microelectronics, Detector Technologies, and Applications.

Objectives

  •  Explain the fundamentals of the interaction of radiation with matter and of signal formation.
  •  Understand the different solid state radiation and photon detection technologies (including monolithic sensors, CMOS imagers, SPAD sensors, etc.).
  •  Review detector analog and digital pulse processing readout circuits (with emphasis on microelectronics and ASIC design).
  •  Provide insight into packaging and advanced interconnect technologies (hybrid sensors, 3D integration, etc.).
  •  Survey the use of radiation and photon detectors in industrial applications.
  •  Present new trends in radiation and photon detection.

In addition to the lectures from experts, the event includes a participant poster session and presentations from industry professionals combined with a series of laboratories and social events.
 
Who it is aimed at
The event is aimed at researchers, postdocs, PhD students, and industry professionals working in fields such as particle detectors, astronomy, space, medical imaging, scientific instrumentation, material analysis, neutron imaging, process monitoring and control. It offers a good opportunity for young researchers to meet with senior experts from academia and industry.

Lecturers
Rafael Ballabriga (CERN)
Massimo Caccia (U. Degli Studi Dell'Insubria)
Michael Campbell (CERN)
Ricardo Carmona Galán (IMSE-CNM/CSIC-US)
Edoardo Charbon (EPFL)
Perceval Coudrain (CEA)
David Gascón (ICCUB)
Alberto Gola (FBK)
Daniel Hynds (U. Oxford)
Frank Koppens (ICFO)
Angelo Rivetti (INFN)
Ángel Rodríguez Vázquez (US)
Antonio Rubio (UPC)
Dennis Schaart (TU Delft)
Francesc Serra-Graells (IMB-CNM/CSIC)
Renato Turchetta (IMASENIC)
 
Organization Team
Joan Mauricio (ICCUB)
Sergio Gómez (Serra Hunter - UPC)
Eduardo Picatoste (ICCUB)
Andreu Sanuy (ICCUB)
Rafael Ballabriga (CERN)
David Gascón (ICCUB)
Daniel Guberman (ICCUB)
Esther Pallarés (ICCUB)
Anna Argudo (ICCUB)


Some interesting talks on the schedule:

Contribution: Introduction to Semiconductor detectors
Date: Jul 3, 2023
Presenter: Daniel Hynds

Contribution: Introduction to CMOS
Date: Jul 3, 2023
Presenter: Francesc Serra-Graells

Contribution: Hybrid pixels and FE electronics
Date: Jul 4, 2023
Presenter: Rafael Ballabriga

Contribution: Signal conditioning, digitization and Time pick-off
Date: Jul 4, 2023
Presenter: Angelo Rivetti

Contribution: Sensor integration and packaging
Date: Jul 4, 2023
Presenter: Perceval Coudrain

Contribution: Monolithic pixel detector + CMOS
Date: Jul 5, 2023
Presenter: Renato Turchetta

Contribution: SPAD + Cryogenic
Date: Jul 5, 2023
Presenter: Edoardo Charbon

Contribution: Embedded in-sensor intelligence for analog-to-information
Date: Jul 5, 2023
Presenters: Ricardo Carmona Galán and Ángel Rodríguez-Vázquez

Contribution: SiPMs
Date: Jul 6, 2023
Presenter: Alberto Gola

Contribution: Electronics for Fast Detectors
Date: Jul 6, 2023
Presenter: David Gascón Fora

Contribution: Introduction to fast timing applications in medical physics
Date: Jul 7, 2023
Presenter: Dennis R. Schaart

Contribution: Quantum applications of detectors
Date: Jul 7, 2023
Presenter: Massimo Caccia

Contribution: Graphene
Date: Jul 7, 2023
Presenter: Frank Koppens

Contribution: Electronics beyond CMOS (such as Carbon Nanotubes)
Date: Jul 7, 2023
Presenter: Antonio Rubio

Go to the original article...

VoxelSensors Raises €5M in Seed Funding for blending the physical and digital worlds through 3D perception

Image Sensors World        Go to the original article...

Press release:
https://voxelsensors.com/wp-content/uploads/2023/05/VoxelSensors_Announces_Seed_Round_Closing_May-17-2023-_-RC_FINAL.pdf

Brussels (Belgium), May 17, 2023
- VoxelSensors today announces an investment of €5M led by Belgian venture capital firms Capricorn Partners and Qbic, with participation from the investment firm finance&invest.brussels, existing investors and the team. VoxelSensors’ Switching Pixels® Active Event Sensor (SPAES) is a novel category of ultra-low power and ultra-low latency 3D perception sensors for Extended Reality (XR) to blend the physical and digital worlds. The funding will be used to further develop VoxelSensors’ roadmap, hire key employees, and strengthen business engagements with customers in the U.S. and Asia. Furthermore, VoxelSensors remains committed to raising funds to back its ambitious growth plans.

Extended Reality device manufacturers require low-power, low-latency 3D perception technology to seamlessly blend the physical and digital worlds and unlock the true potential of immersive experiences. VoxelSensors’ patented Switching Pixels® Active Event Sensor technology resolves these 3D perception challenges: it is the world’s first solution to combine less than 10 milliwatts of power consumption with less than 5 milliseconds of latency, while remaining robust to outdoor lighting at distances over 5 meters and immune to crosstalk interference.

The founders of VoxelSensors boast a combined experience of more than 50 years in the development of cutting-edge 3D sensor technologies, systems and software. Their track record of success includes co-inventing an efficient 3D Time of Flight sensor and camera technology, which was acquired by a leading tech company.

“Our goal at VoxelSensors is to seamlessly integrate the physical and digital worlds to the point where they become indistinguishable,” said Johannes Peeters, co-founder and CEO of VoxelSensors. “Extended Reality has rapidly gained traction in recent years, with diverse applications across sectors such as gaming, entertainment, education, healthcare, manufacturing, and more. With our Switching Pixels® Active Event Sensor technology we are poised to deliver unparalleled opportunities for groundbreaking user experiences. We are excited by the opportunity to contribute to the growth of our industry and honored by the trust of these investors as we expand the company and accelerate market penetration.”

“We are excited to invest with the Capricorn Digital Growth Fund in VoxelSensors. We appreciate the broad experience in the team, the flexibility of the 3D perception solution towards different applications and the solid intellectual property base, essential for the success of a deep tech start-up. The team has a proven track record to build a scalable business model within a Europe-based semiconductor value chain. We also highly value the support of the Brussels region via Innoviris,” explained Marc Lambrechts, Investment Director at Capricorn Partners.

“As an inter-university fund, Qbic is delighted to support VoxelSensors in this phase of its journey. It’s a pleasure to see the team that led one of Vrije Universiteit Brussels’ (VUB) most prominent spinoffs to successful exit, start another initiative in this space. They will leverage again the expertise VUB has in this domain, through an extensive research collaboration,” said Steven Leuridan, Partner at Qbic III Fund. “We truly believe VoxelSensors is a shining example of a European fabless semiconductor company that holds potential to lead its market.”

Marc Lambrechts from Capricorn Partners and Steven Leuridan from Qbic are appointed to VoxelSensors’ Board of Directors, effective immediately.

“With Switching Pixels® Active Event Sensing (SPAES) we challenge the status quo in 3D perception,” concludes Ward van der Tempel, co-founder and CTO of VoxelSensors. “This groundbreaking technology unlocks new possibilities in Extended Reality by addressing previously unmet needs such as precise segmentation, spatial mapping, anchoring and natural interaction. Moreover, this breakthrough innovation extends beyond Extended Reality and has exciting potential in various industries, including robotics, automotive, drones, and medical applications.”

VoxelSensors will showcase its breakthrough technology at the Augmented World Expo (AWE) USA 2023 from May 31 to June 2, 2023, in Santa Clara (California, USA). Evaluation kits of the SPAES technology are available for purchase through sales@voxelsensors.com.

Go to the original article...
