Archives for July 2023

Ubicept raises $8M for SPAD-based computer vision

Image Sensors World        Go to the original article...

From Businesswire: https://www.businesswire.com/news/home/20230725606397/en/Ubicept-Raises-8M-to-Unlock-Computer-Vision-in-All-Lighting-Conditions-by-Counting-Individual-Photons

Ubicept Raises $8M to Unlock Computer Vision in All Lighting Conditions by Counting Individual Photons

Company plans to use capital to attract new talent and expand into several new industries including 3D scanning and industrial automation 

BOSTON--(BUSINESS WIRE)--Ubicept, the revolutionary computer vision technology company, today announced it has secured $8M in funding. The oversubscribed seed investment round was led by Ubiquity Ventures and E14 Fund, with participation from Wisconsin Alumni Research Foundation, Phoenix Venture Partners (PVP), and several other investors and angel contributors.

Born out of the world-class labs of MIT and University of Wisconsin-Madison, Ubicept is redefining boundaries in the field of computer vision. Traditional computer vision relies on a dated "still frame" approach, whereas Ubicept bypasses this old logic and directly leverages single-photon sensors to turn the individual photons that hit an imaging sensor into a reliable computer vision output. The resulting perception system can operate in extreme lighting conditions, capture sharp images of high-speed motion, and even "see" around corners. Ubicept targets a price point similar to conventional camera systems.
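
To make the photon-counting idea concrete, here is a minimal sketch of generic quanta imaging (not Ubicept's algorithm; the flux value is an assumed input). Each binary SPAD frame records whether at least one photon arrived, so the detection probability is p = 1 - exp(-lambda), and the per-pixel flux can be recovered as lambda = -ln(1 - p_hat):

```python
import numpy as np

# Hedged illustration: estimate per-pixel photon flux from a stack of
# binary SPAD frames. Each frame records 1 if at least one photon was
# detected during the exposure, so P(detect) = 1 - exp(-lambda) and the
# maximum-likelihood flux estimate is lambda = -ln(1 - p_hat).
# This is a generic quanta-imaging sketch, not Ubicept's product code.

rng = np.random.default_rng(0)
true_flux = 0.5  # mean photons per pixel per frame (assumed value)
frames = rng.random((1000, 64, 64)) < (1 - np.exp(-true_flux))  # binary frames

p_hat = frames.mean(axis=0)                      # detection rate per pixel
p_hat = np.clip(p_hat, 1e-6, 1 - 1e-6)           # avoid log(0)
flux_estimate = -np.log(1.0 - p_hat)             # MLE of photons per frame

print(flux_estimate.mean())  # should land close to the true 0.5
```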

“We are excited about this major milestone. This funding will allow us to accelerate our efforts to transform the way computers 'see' and understand the world, especially in challenging environments," said Sebastian Bauer, co-founder and CEO.

"Ubicept is the first company in the world with this "count individual photons" approach to computer vision. I see tremendous demand right now for this next generation of perception and the use cases it unlocks," said Sunil Nagaraj of Ubiquity Venture. Mr. Nagaraj has also joined the Ubicept Board of Directors.

Habib Haddad, Managing Partner of E14 Fund, adds: “The development in the market for single-photon sensors has picked up dramatically in the last few years, with smartphone manufacturers adding them to their devices for depth sensing. The processing Ubicept adds to this sensor type will enable its widespread use for general-purpose imaging and a wide array of computer vision applications. The output quality is so much better than what conventional sensors provide.” The new capital will be used to expand the Ubicept team, secure further intellectual property rights, and bring its product to more customers across several industries. This investment will strengthen Ubicept's position as the leader in single-photon computer vision. https://www.ubicept.com/

About Ubicept
Ubicept is a computer vision startup spun out of the labs of MIT and UW-Madison. The company is developing advanced computer vision and image processing algorithms using single-photon sensitive image sensors that can function in extreme lighting conditions, swiftly capture motion, and even see around corners.

About Ubiquity Ventures
Ubiquity Ventures is a seed-stage institutional venture capital firm that invests in "software beyond the screen" startups and has over $150 million under management. Ubiquity's portfolio includes B2B technology companies that utilize smart hardware or machine learning to solve business problems outside the reach of computers and smartphones. By transforming real-world physical problems into the domain of software, Ubiquity startups tap into large greenfield markets and offer more effective solutions. See more details on the Ubiquity Ventures website at http://www.ubiquity.vc.

About E14 Fund
E14 Fund is an MIT-affiliated, early-stage venture capital firm. E14 Fund invests in MIT deep tech startups that are transforming traditional industries across a broad array of market-ready, scalable innovations in AI/ML, robotics, climate, biomanufacturing, life sciences, material science, sensing and more, supporting its portfolio and community with resources from across the MIT ecosystem. For more information, visit www.e14fund.com.

Go to the original article...

Sony still leads CIS market: Yole report

Image Sensors World        Go to the original article...

Yole press release July 21, 2023: CIS: Sony is still leading the market

  • A 5.1% CAGR between 2022 and 2028 is announced in Yole Intelligence’s yearly analysis. The market should reach $28.8 billion at the end of the period.
  • CIS player market shares: there are no changes in the Top 5 compared to the year before. Sony is still leading the market with a 42% market share.
  • The CIS industry is pushed by technological innovations linked to performance, integrability, and new sensing capabilities.


2022 was a transition year for the CIS industry. In its Status of the CMOS Image Sensor Industry report, Yole Intelligence (part of Yole Group) sees revenues similar to the year before and a slight decline in overall volumes. However, a significant transformation is underway in the market structure, as evident in the growth of the automotive segment and the increase in the CIS average selling price.


Florian Domengie, Senior Technology & Market Analyst, Imaging, at Yole Intelligence, says:
"There is a trend for custom CMOS image sensor products for mid- and low-volume differentiated markets, including niche markets, that do not face the same performance and cost pressure as higher-volume markets such as mobile, automotive, and consumer. Currently, numerous companies are adopting this approach."

Sony is again extending its commanding position, while Omnivision has retreated to close to its pre-COVID-19 market share. Samsung also reduced its footprint, apparently to the benefit of SK hynix. Onsemi saw an exceptional 2022, boosted by the automotive and industrial markets. GalaxyCore and SmartSens have retreated, apparently due to the deflation of the low-end mobile and security camera markets.

The economic conflict between the U.S. and China has left its mark on the geographically competitive CIS landscape. Initially, the U.S. sanctions on Huawei mainly hit Sony while boosting the Chinese CIS players. The latter then prospered thanks to domestic market opportunities in consumer, automotive, and security. However, in 2022 the bubble burst in the security market, while the U.S. efforts against Chinese semiconductor firms also extended to CIS suppliers.

With the slowdown in the mobile and computing markets and the recent temporary drop in the security CIS market, Chinese CIS suppliers aim to decrease their exposure to these markets and gain share in those that are thriving and deliver higher value and ASPs: automotive and industrial. In addition, the domestic market ensures high demand for these applications.

Overall, there are ongoing investments to either secure capacity, including for logic wafer production, or develop in-house technologies as a strategic vision to get further market share.

From a market perspective, Yole Intelligence announces a return to steady growth. CIS revenues stagnated in 2022 at US$21.3 billion, continuing the soft landing that followed the strongly inflated growth of previous years. The general inflation in 2022 translated into a significant slowdown in consumer product sales, such as smartphones: Yole Intelligence’s imaging analysts estimate a 10% decrease.

However, higher-end CIS products and new sensing opportunities will sustain the mobile CIS market in the coming years. In addition, automotive cameras are experiencing good growth enabled by in-cabin, viewing, and ADAS applications, promoted further by safety regulations.
In parallel, the share of the mobile CIS market should continue to decrease compared to the growing share of automotive, security, and industrial CIS, with the resulting product mix maintaining the overall ASP beyond US$3.

“We have adjusted downward our long-term CIS forecast, with a 5.1% revenue CAGR from 2022 – 2028, and the resulting CIS revenues should reach US$29 billion by 2028”, explains Domengie.
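
As a quick sanity check on the forecast arithmetic (my own illustration, not Yole's model): compounding the 2022 baseline of US$21.3 billion at a 5.1% CAGR for six years lands close to the cited US$29 billion.

```python
# Hedged sanity check of the forecast arithmetic reported above:
# US$21.3B in 2022 compounded at a 5.1% CAGR through 2028.
base_revenue_busd = 21.3   # 2022 CIS revenue, US$ billion (from the article)
cagr = 0.051               # 5.1% CAGR (from the article)
years = 2028 - 2022

projected = base_revenue_busd * (1 + cagr) ** years
print(f"Projected 2028 CIS revenue: ${projected:.1f}B")  # ~$28.7B, close to the ~$29B cited
```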

Go to the original article...

Videos of the day: DXOMARK, ams OSRAM

Image Sensors World        Go to the original article...

DXOMARK published a webinar on automotive sensor characterization. Image Science Director Laurent Chanas and Product Marketing Specialist Fabien Montagné present the IEEE-P2020 full testing suite dedicated to the automotive industry and answer some questions from the audience.

ams OSRAM Mira050 0.5MP image sensor demo: Demonstrating how the ams OSRAM Mira050 0.5MP image sensor provides an industry-leading solution for augmented and virtual reality glasses.

Go to the original article...

Canon’s activities lead to the removal of 739 listings from Amazon in Germany, Italy, Spain, the United Kingdom, France, the Netherlands, Sweden, Poland and Belgium

Newsroom | Canon Global        Go to the original article...

Go to the original article...

214 listings removed from Amazon in Canada, Mexico and the United States of America after Canon files infringement reports

Newsroom | Canon Global        Go to the original article...

Go to the original article...

Photonis acquires El-Mul

Image Sensors World        Go to the original article...

Photonis announces the acquisition of El-Mul, leader in ion and electron detection solutions

Mérignac, France and Rehovot, Israel – July 19, 2023
 
Photonis, a global leader in electro-optical detection and imaging technologies for defense and industrial markets, held by HLD since 2021, is pleased to announce the acquisition of Israeli company El-Mul, a specialist developer and manufacturer of advanced charged particle detectors and devices.

By welcoming El-Mul along with Xenics, Telops and Proxivision acquired in the last eight months, Photonis Group pursues its diversification and establishes itself as the sole sizeable European technology platform providing differentiated detection and imaging solutions across the electromagnetic spectrum to a variety of high-growth end markets worldwide.

“With the acquisition of El-Mul, Photonis Group will gain access to the Electron Microscopy and Semiconductor inspection markets from a strong leading position, will reinforce its technology leadership in the Mass Spectrometry market, and will accelerate its growth into industrial and commercial markets,” said Jérôme Cerisier, CEO of Photonis Group.

El-Mul, based in Israel with 50 employees, is a well-established technology leader in the field of detection systems for Scanning Electron Microscopes for both the Analytical and Semiconductor industries as well as the field of electron and ion optics for Mass Spectrometry, having a strong position in the worldwide high-end markets.

“El-Mul has emerged as an innovative leader in electron and ion detection with the continued support of its founders and shareholders, the Cheifez family, since 1992. Joining Photonis Group is a real opportunity to accelerate our growth. We will benefit from the group’s expertise, technological and commercial base, and international reach. There are also very promising synergies between our companies in terms of market, product range, and R&D. In particular, new R&D co-developments should bring significant added value to our customers,” said Sasha Kadyshevitch, CEO of El-Mul.

The transaction is finalized. Terms of the transaction are not being disclosed.
 
 
ABOUT PHOTONIS:
 
Accompanied by HLD since 2021, Photonis is a high-tech company with more than 85 years of experience in the innovation, development, manufacture, and sale of technologies in the field of photodetection and imaging. Today, it offers its customers detectors and detection solutions: its power tubes, digital cameras, neutron & gamma detectors, scientific detectors, and intensifier tubes allow Photonis to respond to complex issues in extremely demanding environments by offering tailor-made solutions. Thanks to its sustained and permanent investment, Photonis is internationally recognized as a major innovator in optoelectronics, with production and R&D carried out at 8 sites in Europe and the USA, and over 1,200 employees.

For more information: photonis.com
 
ABOUT EL-MUL
 
Since its founding in 1992, El-Mul Technologies has established itself as a leading supplier of advanced, high-performance particle detectors that meet the most challenging needs of its customers. El-Mul excels in tailor-designing solutions that match customers’ requirements. Complex detection solutions incorporating mechanical, optical, and electronic components are taken from initial concept through full development, prototyping, and serial manufacturing. El-Mul’s products range from traditional detection modules to state-of-the-art systems. An emphasis on innovation, confidentiality, and personal service drives its business philosophy. A key strategic business goal for El-Mul is to build long-term, fruitful relationships with its customers – delivering performance, high confidence, and clear value.

For more information: el-mul.com

Go to the original article...

Paper on "Charge-sweep" CIS Pixel

Image Sensors World        Go to the original article...

In a recent paper titled "Design and Characterization of a Burst Mode 20 Mfps Low Noise CMOS Image Sensor" (https://www.mdpi.com/1424-8220/23/14/6356) Xin Yue and Eric Fossum write:

This paper presents a novel ultra-high speed, high conversion-gain, low noise CMOS image sensor (CIS) based on charge-sweep transfer gates implemented in a standard 180 nm CIS process. Through the optimization of the photodiode geometry and the utilization of charge-sweep transfer gates, the proposed pixels achieve a charge transfer time of less than 10 ns without requiring any process modifications. Moreover, the gate structure significantly reduces the floating diffusion capacitance, resulting in an increased conversion gain of 183 µV/e−. This advancement enables the image sensor to achieve the lowest reported noise of 5.1 e− rms. To demonstrate the effectiveness of both optimizations, a proof-of-concept CMOS image sensor is designed, taped-out and characterized.
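
As a back-of-the-envelope illustration of what these figures imply (my own arithmetic, not from the paper): with a conversion gain of 183 µV/e−, an input-referred noise of 5.1 e− rms corresponds to roughly 0.93 mV rms of output-referred voltage noise, since output noise (V) equals conversion gain times noise (e−).

```python
# Hedged illustration of the conversion-gain arithmetic from the abstract.
# Input-referred noise (e- rms) relates to output voltage noise via the
# conversion gain: v_noise = CG * n_e. Values below are from the abstract;
# the calculation itself is my own back-of-the-envelope example.
conversion_gain_uv_per_e = 183.0   # uV per electron (reported)
read_noise_e_rms = 5.1             # input-referred noise, e- rms (reported)

output_noise_uv = conversion_gain_uv_per_e * read_noise_e_rms
print(f"Output-referred noise: {output_noise_uv:.0f} uV rms")  # ~933 uV rms
```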

Go to the original article...

Canon requests removal of toner cartridges from Amazon.com, including LEMERO UEXPECT brand cartridges sold by Color Office Tech

Newsroom | Canon Global        Go to the original article...

Go to the original article...

Edgehog Glass: Flare-Free Imaging with Next-Generation Anti-Reflection

Image Sensors World        Go to the original article...

Edgehog is a Montreal-based startup that has developed a solution to the stray light problem in the cover glass of camera and LiDAR sensors.

Edgehog glass, a next-generation anti-reflection technology, removes image artifacts through the innovative process of glass nanotexturing by creating a gradient of refractive index on filters and image sensor covers. This enables uncompromised visuals from cameras and flare-free imaging with CMOS image sensors even in challenging lighting conditions. The result is a cleaner raw signal from the hardware without expensive image processing, laying the foundation for superior computer vision applications. The advanced nanotextured Edgehog glass enables camera optics designers to achieve unparalleled image clarity for a wide viewing angle.
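
For context on why a graded refractive index suppresses reflections (a generic optics illustration, not Edgehog's specification): an abrupt air-to-glass interface reflects about 4% of normally incident light according to the Fresnel formula R = ((n1 − n2)/(n1 + n2))², and replacing that single step with many small index steps shrinks the reflected fraction. The sketch below ignores thin-film interference and counts only single bounces.

```python
# Generic Fresnel-reflectance illustration (not Edgehog data): reflectance
# at normal incidence for an abrupt interface between media n1 and n2.
def fresnel_reflectance(n1: float, n2: float) -> float:
    return ((n1 - n2) / (n1 + n2)) ** 2

# Abrupt air-to-glass step: ~4% reflected, the source of flare/ghosting.
print(f"air->glass: {fresnel_reflectance(1.0, 1.5):.3f}")

# A graded index replaces one big step with many tiny ones; each tiny step
# reflects almost nothing, which is the principle behind gradient-index AR.
steps = [1.0 + 0.5 * i / 10 for i in range(11)]  # index ramp 1.0 -> 1.5
total = sum(fresnel_reflectance(a, b) for a, b in zip(steps, steps[1:]))
print(f"10 graded steps, summed single-bounce reflectance: {total:.5f}")
```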

Email: info@edgehogtech.com
Phone: +1 (438) 230 0101
Web: http://www.edgehogtech.com

Go to the original article...

onsemi Analyst Day 2023

Image Sensors World        Go to the original article...

onsemi held its annual Analyst Day on May 16, 2023. A video recording is below.

PDF slides are also available here: https://www.onsemi.com/site/pdf/2023_Analyst_Day_Presentation.pdf

Image sensor-related slides start around slide #63.

Go to the original article...

12 ps resolution Vernier time-to-digital converter

Image Sensors World        Go to the original article...

Huang et al. from the Shanghai Advanced Research Institute recently published a paper titled "A 13-Bit, 12-ps Resolution Vernier Time-to-Digital Converter Based on Dual Delay-Rings for SPAD Image Sensor" in the journal Sensors.

Link: https://www.mdpi.com/1424-8220/21/3/743

Abstract:
A three-dimensional (3D) image sensor based on Single-Photon Avalanche Diode (SPAD) requires a time-to-digital converter (TDC) with a wide dynamic range and fine resolution for precise depth calculation. In this paper, we propose a novel high-performance TDC for a SPAD image sensor. In our design, we first present a pulse-width self-restricted (PWSR) delay element that is capable of providing a steady delay to improve the time precision. Meanwhile, we employ the proposed PWSR delay element to construct a pair of 16-stages vernier delay-rings to effectively enlarge the dynamic range. Moreover, we propose a compact and fast arbiter using a fully symmetric topology to enhance the robustness of the TDC. To validate the performance of the proposed TDC, a prototype 13-bit TDC has been fabricated in the standard 0.18-µm complementary metal–oxide–semiconductor (CMOS) process. The core area is about 200 µm × 180 µm and the total power consumption is nearly 1.6 mW. The proposed TDC achieves a dynamic range of 92.1 ns and a time precision of 11.25 ps. The measured worst integral nonlinearity (INL) and differential nonlinearity (DNL) are respectively 0.65 least-significant-bit (LSB) and 0.38 LSB, and both of them are less than 1 LSB. The experimental results indicate that the proposed TDC is suitable for SPAD-based 3D imaging applications.
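
To make the Vernier principle concrete (a generic numerical sketch, not the authors' circuit): the effective LSB is the difference between the slow-ring and fast-ring stage delays, and a 13-bit code at the reported 11.25 ps LSB spans roughly the 92 ns dynamic range quoted above. The individual stage delays below are assumed values chosen only to produce that LSB.

```python
# Generic Vernier-TDC sketch (not the authors' implementation): START
# propagates through a slow delay ring and STOP through a faster one, so
# STOP gains (t_slow - t_fast) on START each stage. The stage count at
# which STOP catches START digitizes the interval with LSB = t_slow - t_fast.
import math

T_SLOW = 60.00e-12   # slow-ring stage delay (assumed value for illustration)
T_FAST = 48.75e-12   # fast-ring stage delay; difference gives the 11.25 ps LSB
LSB = T_SLOW - T_FAST

def vernier_code(interval_s: float) -> int:
    """Number of stages until STOP catches START, i.e. the output code."""
    return math.ceil(interval_s / LSB)

BITS = 13
print(f"LSB: {LSB * 1e12:.2f} ps")
print(f"Dynamic range: {(2 ** BITS) * LSB * 1e9:.1f} ns")  # ~92.2 ns, matching the paper
print(f"Code for a 5 ns interval: {vernier_code(5e-9)}")
```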

Figure captions from the paper:

Structure and operation of a typical Single-Photon Avalanche Diode (SPAD)-based direct time-of-flight (D-ToF) system.

Principle block diagram of the proposed vernier time-to-digital converter (TDC).

The architecture of the TDC core implemented by the 16-stages dual delay-rings.

The timing diagram of the TDC core.

Schematic of the proposed pulse-width self-restricted (PWSR) delay element.

The simulated results of the proposed PWSR delay element: (a) dependence of the delay time on the controlled voltage VNL/VNS and (b) dependence of the delay time on temperature.

Block diagram of the 3D image sensor based on our proposed TDC (right) and its pixel circuit schematic (left).

Go to the original article...

Sony FE 70-200mm f4 G OSS II review

Cameralabs        Go to the original article...

The Sony FE 70-200mm f4 G OSS II is a short telephoto zoom designed for Alpha mirrorless cameras. It comes almost ten years after the original version, so let's see how they compare in my full review!…

Go to the original article...

Sony A6700 review

Cameralabs        Go to the original article...

The Sony A6700 is a mid-range mirrorless camera with a 26 Megapixel APSC sensor, 4k 120 video and IBIS. It's Sony’s first new hybrid APSC camera in four years, so find out how it performs in my in-depth review!…

Go to the original article...

Canon develops instant, high-accuracy measurement and sorting method of conventionally challenging black plastic pieces, thus promoting recycling and supporting a circular economy

Newsroom | Canon Global        Go to the original article...

Go to the original article...

IDQuantique provides QRNG capabilities to Samsung Galaxy phones

Image Sensors World        Go to the original article...

From IDQuantique: https://www.idquantique.com/sk-telecom-and-samsung-unveil-the-galaxy-quantum-4/

SK Telecom and Samsung unveil the Galaxy Quantum 4, providing more safety and performance with IDQ’s QRNG Chip

Geneva, June 12th 2023

ID Quantique (IDQ), the world leader in quantum-safe security solutions, SK Telecom and Samsung Electronics, have worked together to release the ‘Galaxy Quantum 4’, the fourth Samsung smartphone equipped with quantum technology, designed to protect customers’ information.

With features matching those of Samsung’s flagship S23-series smartphones – i.e., a waterdrop camera with optical image stabilization (OIS) and nightography (night/low-light shooting), rear glass design, and a large-capacity battery – along with strengthened quantum-safe technology, the Galaxy Quantum 4 will be a new choice for customers who value both high performance and security.

Like its predecessor, the Galaxy Quantum 4 is equipped with the world’s smallest (2.5 mm × 2.5 mm) Quantum Random Number Generator (QRNG) chipset, designed by ID Quantique, enabling trusted authentication and encryption of information. It allows smartphone holders to use an even wider range of applications and services in a safer and more secure manner by generating unpredictable true random numbers.

IDQ’s QRNG chip enhances the security of a very large number of services provided by the operator. QRNG protects processes such as log-in, authentication, payment, unlock, and OTP generation in service apps ranging from financial apps to social media and games, offering a much higher level of trust to users.

As an example, when an application provides authentication services, sensitive data such as fingerprints and facial images must be protected. Our QRNG, embedded in this new smartphone, can therefore be leveraged to generate encryption keys and, in conjunction with the keystore of the terminal, provide quantum enhanced security every time a user logs in to the app. The QRNG is also used to encrypt data stored in the external memory card.

Like its predecessors, the Galaxy Quantum 4 offers a differentiated security experience by providing a ‘quantum indicator’ in the status bar so that customers can see that they are using a quantum security service. Its price point is comparable to previous versions, but with increased performance and security.

“Protecting one’s private data is a priority for users. The Galaxy Quantum 4 is the latest in the Quantum series, which offers strong quantum security and premium performance. As a leading player in this area, we will continue to expand the use of quantum cryptography technology to provide users with greater security and safety,” said Moon Kab-in, Vice President and Head of Smart Device Center at SKT.

“Mobile phone users don’t want to get their data stolen. The Galaxy Quantum 4 includes top performances and more quantum-secured applications than ever before, bringing applications and services to a new level of security in the mobile phone industry” said Grégoire Ribordy, CEO and co-founder of ID Quantique.

Go to the original article...

Article on Machine Vision + AI Opportunities

Image Sensors World        Go to the original article...

From Semiconductor Engineering https://semiengineering.com/machine-vision-plus-ai-ml-opens-huge-opportunities/

Machine Vision Plus AI/ML Adds Vast New Opportunities

Traditional technology companies and startups are racing to combine machine vision with AI/ML, enabling it to “see” far more than just pixel data from sensors, and opening up new opportunities across a wide swath of applications.

In recent years, startups have been able to raise billions of dollars as new MV ideas come to light in markets ranging from transportation and manufacturing to health care and retail. But to fully realize its potential, the technology needs to address challenges on a number of fronts, including improved performance and security, and design flexibility.

Fundamentally, a machine vision system is a combination of software and hardware that can capture and process information in the form of digital pixels. These systems can analyze an image, and take certain actions based on how it is programmed and trained. A typical vision system consists of an image sensor (camera and lens), image and vision processing components (vision algorithm) and SoCs, and the network/communication components.

Both still and video digital cameras contain image sensors. So do automotive sensors such as lidar, radar, and ultrasound, which deliver an image in digital pixel form, although not with the same resolution. While most people are familiar with these types of images, a machine also can “see” heat and audio signal data, and it can analyze that data to create a multi-dimensional image.

“CMOS image sensors have seen drastic improvement over the last few years,” said Ron Lowman, strategic marketing manager at Synopsys. “Sensor bandwidth is not being optimized for human sight anymore, but rather for the value AI can provide. For instance, MIPI CSI, the dominant vision sensor interface, is not only increasing bandwidths, but also adding AI features such as Smart Region of Interest (SROI) and higher color depth. Although these color depth increases can’t be detected by the human eye, for machine vision it can improve the value of a service dramatically.”

Machine vision is a subset of the broader computer vision. “While both disciplines rely on looking at primarily image data to deduce information, machine vision implies ‘inspection type’ applications in an industry or factory setting,” said Amol Borkar, director of product management, marketing and business development, Tensilica Vision and AI DSPs at Cadence. “Machine vision relies heavily on using cameras for sensing. However, ‘cameras’ is a loaded term because we are typically familiar with an image sensor that produces RGB images and operates in the visible light spectrum. Depending on the application, this sensor could operate in infrared, which could be short wave, medium wave, long wave IR, or thermal imaging, to name a few variants. Event cameras, which are very hyper-sensitive to motion, were recently introduced. On an assembly line, line scan cameras are a slightly different variation from typical shutter-based cameras. Most current applications in automotive, surveillance, and medical rely on one or more of these sensors, which are often combined to do some form of sensor fusion to produce a result better than a single camera or sensor.”

Benefits
Generally speaking, MV can see better than people. The MV used in manufacturing can improve productivity and quality, lowering production costs. Paired with ADAS for autonomous driving, MV can take over some driving functions. Together with AI, MV can help analyze medical images.

The benefits of using machine vision include higher reliability and consistency, along with greater precision and accuracy (depending on camera resolution). And unlike humans, machines do not get tired, provided they receive routine maintenance. Vision system data can be stored locally or in the cloud, then analyzed in real-time when needed. Additionally, MV reduces production costs by detecting and screening out defective parts, and increases inventory control efficiency with OCR and bar-code reading, resulting in lower overall manufacturing costs.

Today, machine vision usually is deployed in combination with AI, which greatly enhances the power of data analysis. In modern factories, automation equipment, including robots, is combined with machine vision and AI to increase productivity.

How AI/ML and MV interact
With AI/ML, MV can self-learn and improve after capturing digital pixel data from sensors.
“Machine vision (MV) and artificial intelligence (AI) are closely related fields, and they often interact in various ways,” said Andy Nightingale, vice president of product marketing at Arteris IP. “Machine vision involves using cameras, sensors, and other devices to capture images or additional data, which is then processed and analyzed to extract useful information. Conversely, AI involves using algorithms and statistical models to recognize patterns and make predictions based on large amounts of data.”

This also can include deep learning techniques. “Deep learning is a subset of AI that involves training complex neural networks on large datasets to recognize patterns and make predictions,” Nightingale explained. “Machine vision systems can use deep learning algorithms to improve their ability to detect and classify objects in images or videos. Another way that machine vision and AI interact is through the use of computer vision algorithms. Computer vision is a superset of machine vision that uses algorithms and techniques to extract information from images and videos. AI algorithms can analyze this information and predict what is happening in the scene. For example, a computer vision system might use AI algorithms to analyze traffic patterns and predict when a particular intersection will likely become congested. Machine vision and AI can also interact in the context of autonomous systems, such as self-driving cars or drones. In these applications, machine vision systems are used to capture and process data from sensors. In contrast, AI algorithms interpret this data and make decisions about navigating the environment.”

AI/ML, MV in autonomous driving
AI has an increasing number of roles in modern vehicles, but the two major roles are in perception and decision making.

“Perception is the process of understanding one’s surroundings through onboard and external sensor arrays,” said David Fritz, vice president of hybrid and virtual systems at Siemens Digital Industries Software. “Decision-making first takes the understanding of the surrounding state and a goal such as moving toward the destination. Next, the AI decides the safest, most effective way to get there by controlling the onboard actuators for steering, braking, accelerating, etc. These two critical roles address very different problems. From a camera or other sensor, the AI algorithms will use raw data from the sensors to perform object detection. Once an object is detected, the perception stack will classify the object, for example, whether the object is a car, a person, or an animal. The training process is lengthy and requires many training sets presenting objects from many different angles. After training, the AI network can be loaded into the digital twin or physical vehicle. Once objects are detected and classified decisions can be made by another trained AI network to control steering, braking, and acceleration. Using a high-fidelity digital twin to validate the process virtually has been shown to result in safer, more effective vehicles faster than simply using open road testing.”
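
The two-stage split Fritz describes can be summarized in a schematic sketch (my own illustration; every type and function here is a hypothetical placeholder, not any vendor's API): a perception stage turns raw sensor data into classified objects, and a separate decision stage maps that scene state to actuator commands.

```python
# Schematic sketch of the perception/decision split described above.
# All names are hypothetical placeholders for illustration only.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str          # e.g. "car", "person", "animal"
    distance_m: float

@dataclass
class Actuation:
    steering: float     # -1.0 (full left) .. 1.0 (full right)
    brake: float        # 0.0 .. 1.0
    throttle: float     # 0.0 .. 1.0

def perceive(raw_frame: bytes) -> list[DetectedObject]:
    """Perception stack: detect and classify objects (stub)."""
    return [DetectedObject("person", 12.0)]  # placeholder result

def decide(objects: list[DetectedObject]) -> Actuation:
    """Decision stack: map scene state to control outputs (stub policy)."""
    if any(o.label == "person" and o.distance_m < 15.0 for o in objects):
        return Actuation(steering=0.0, brake=1.0, throttle=0.0)
    return Actuation(steering=0.0, brake=0.0, throttle=0.3)

print(decide(perceive(b"")))  # -> full braking for the nearby pedestrian
```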

How much AI/ML is needed is a question frequently asked by developers. In the case of modern factories, MV can be used to simply detect and pick out defective parts in an assembly line or employed to assemble automobiles. Doing the latter requires advanced intelligence and a more sophisticated design to ensure timing, precision, and calculation of motion and distance in the assembly process.

“Automation using robotics and machine vision has increased productivity in modern factories,” observed Geoff Tate, CEO of Flex Logix. “Many of these applications use AI. A simple application — for instance, detecting if a label is applied correctly — does not require a great deal of intelligence. On the other hand, a sophisticated, precision robot arm performing 3D motion requires much more GPU power. In the first application, one tile of AI IP will be sufficient, while the second application may need multiple tiles. Having flexible and scalable AI IPs would make designing robotics and machine vision much easier.”

Applications
Machine vision applications are limited only by one’s imagination. MV can be used in almost any industrial and commercial segment, so long as it requires vision and processing. Here is a partial list:
  •  Transportation (autonomous driving, in-cabin monitoring, traffic flow analysis, moving violation and accident detection);
  •  Manufacturing and automation (productivity analysis, quality management);
  •  Surveillance (detection of motion and intrusion monitor);
  •  Health care (imaging, cancer and tumor detection, cell classification);
  •  Agriculture (farm automation, plant disease and insect detection);
  •  Retail (customer tracking, empty shelf detection, theft detection), and
  •  Insurance (accident scene analysis from images).

There are many other applications. Consider drinking water or soft drink bottling. A machine vision system can be used to inspect fill levels, which typically is done by highly efficient robots. But robots occasionally make mistakes. MV can ensure the fill level is consistent and the labels are applied correctly.

Detecting any machine parts that deviate from measurement specification limits is another job for MV. Once the MV is trained on the specification, it can detect the parts that are outside the specification limits.
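
A minimal sketch of such a specification gate (my own illustration of the idea, with hypothetical names and tolerance values): compare each measured dimension against its limits and flag out-of-spec parts.

```python
# Hedged sketch of a measurement spec-limit gate: flag parts whose measured
# dimensions fall outside tolerance. Illustrative only; a real MV system
# would obtain these measurements from its vision pipeline.
from dataclasses import dataclass

@dataclass
class SpecLimit:
    nominal: float   # target dimension, mm
    tol: float       # symmetric tolerance, mm

    def in_spec(self, measured: float) -> bool:
        return abs(measured - self.nominal) <= self.tol

diameter_spec = SpecLimit(nominal=12.00, tol=0.05)   # hypothetical part spec

for measured in (11.97, 12.04, 12.08):
    status = "PASS" if diameter_spec.in_spec(measured) else "REJECT"
    print(f"diameter {measured:.2f} mm -> {status}")
```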

MV can detect uniform shapes such as squares or circles as well as odd-shaped parts, so it can be used to identify, detect, measure, count, and (with robots) pick and place.

Finally, combined with AI, MV can perform tire assembly with precision and efficiency. Nowadays, OEMs automate vehicle assembly with robots. One of the processes is to install the four wheels on a new vehicle. Using MV, a robotic arm can detect the correct distance and apply just the right amount of pressure to prevent any damage.

Types of MV
MV technologies can be divided into one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D).

1D systems analyze data one line at a time, comparing variations among groups. Usually it is used in production of items such as plastics and paper on a continual basis. 2D systems, in contrast, use a camera to scan line by line to form an area or a 2D image. In some cases, the whole area is scanned and the object image can then be unwrapped for detailed inspection. 

3D systems consist of multiple cameras or laser sensors to capture the 3D view of an object. During the training process, the object or the cameras need to be moved to capture the entire product. Recent technology can produce accuracy within micrometers. 3D systems produce higher resolution but are also more expensive.

Emerging MV startups and new innovations
Tech giants, including IBM, Intel, Qualcomm, and NVIDIA, have publicly discussed investments in MV. In addition, many startups are developing new MV solutions, such as Airobotics, Arcturus Networks, Deep Vision AI, Hawk-Eye Innovations, Instrumental, Landing AI, Kinara, Mech-Mind, Megvii, NAUTO, SenseTime, Tractable, ViSenze, Viso, and others. Some of these companies have been able to raise funding in excess of $1 billion.

In transportation, insurance companies can use MV to scan photographs and videos of scenes of accidents and disasters for financial damage analysis. Additionally, AI-based MV can power safety platforms to analyze driver behavior.

In software, computer vision platforms can be created without knowledge of coding. Other startups have developed the idea of MV authentication software. And in the field of sports, AI, vision, and data analysis could provide coaches with the ability to understand how decisions are made by players during a game. Also, one startup devised a cost-reduction idea for surveillance by combining AI and MV in unmanned aerial drone design.

Both MV and AI are changing quickly, and will continue to increase in performance, including precision and accuracy, while high GPU and ML power will come down in cost, propelling new MV applications.

Arteris’ Nightingale noted there will be further improvements in accuracy and speed. “Machine vision systems will likely become more accurate and faster. This will be achieved through advancements in hardware, such as sensors, cameras, and processors, as well as improvements in algorithms and machine learning models,” he said, pointing to an increased use of deep learning, as well. “Deep learning has been a significant driver of progress in machine vision technology in recent years, and it is likely to play an even more substantial role in the future. Deep learning algorithms can automatically learn data features and patterns, leading to better accuracy and performance. There will be an enhanced ability to process and analyze large amounts of data, as machine vision technology can process and analyze large amounts of data quickly and accurately. We may also see advancements in machine vision systems that can process significantly larger datasets, leading to more sophisticated and intelligent applications.”

Further, MV and AI are expected to integrate with other technologies to provide additional high-performance, real-time applications.


“Machine vision technology is already integrated with other technologies, such as robotics and automation,” he said. “This trend will likely continue, and we may see more machine vision applications in health care, transportation, and security. As well, there will be more real-time applications. Machine vision technology is already used for real-time applications, such as facial recognition and object tracking. In the future, we may see more applications that require real-time processing, such as self-driving cars and drones.”

MV design challenges
Still, there are challenges in training an MV system. Its accuracy and performance depend on how well the MV is trained. Inspection can encompass parameters such as orientation, variation of the surfaces, contamination, and accuracy tolerances such as diameter, thickness, and gaps. 3D systems can perform better than 1D or 2D systems when detecting cosmetic and surface variation effects. In other cases, when seeing an unusual situation, human beings can draw on knowledge from a different discipline, while MV and AI may not have that ability.

“Some of today’s key challenges include data flow management and control – especially with real-time latency requirements such as those in automotive applications – while keeping bandwidth to a minimum,” said Alexander Zyazin, senior product manager in Arm’s Automotive Line of Business. “In camera-based systems, image quality (IQ) remains critical. It requires a hardware design to support ultra-wide dynamic range and local tone mapping. But it also requires IQ tuning, where traditionally subjective evaluation by human experts was necessary, making the development process lengthy and costly. The new challenge for MV is that this expertise might not result in the best system performance, as perception engines might prefer to see images differently to humans and to one another, depending on the task.”

In general, machines can do a better job when doing mundane tasks over and over again, or when recognizing an image with more patterns than humans can typically process. “As an example, a machine may do a better job recognizing an anomaly in a medical scan than a human, simply because the doctor may make a mistake, be distracted or tired,” said Thomas Andersen, vice president for AI and machine learning at Synopsys. “When inspecting high-precision circuits, a machine can do a much better job analyzing millions of patterns and recognizing errors, a task a human could not do, simply due to the size of the problem. On the other hand, machines have not yet reached the human skill of recognizing the complex scenes that can occur while driving a car. It may seem easy for a human to recognize and anticipate certain reactions, while the machine may be better in ‘simple’ situations that a human easily could deal with, but did not due to a distraction, inattention or incapacitation – for example auto stop safety systems to avoid an imminent collision. A machine can always react faster than a human, assuming it interprets the situation correctly.”

Another challenge is making sure MV is secure. With cyberattacks increasing constantly, it will be important to ensure no production disruption or interference from threat actors.

“Security is critical to ensuring the output of MV technology isn’t compromised,” said Arm’s Zyazin. “Automotive applications are a good example of the importance of security in both hardware and software. For instance, the information processed and extracted from the machine is what dictates decisions such as braking or lane-keep assist, which can pose a risk to those inside the vehicle if done incorrectly.”

Conclusion
MV designs include a mixture of chips (processors, memories, security), IPs, modules, firmware, hardware and software. The rollout of chiplets and multi-chip packaging will allow those systems to be combined in novel ways more easily and more quickly, adding new features and functions and improving the overall efficiency and capabilities of these systems.

“Known good die (KGD) solutions can provide cost- and space-efficient alternatives to packaged products with limited bonding pads and wires,” said Tetsu Ho, DRAM manager at Winbond. “That helps improve design efficiency, provides enhanced hardware security performance, and especially time-to-market for product launch. These die go through 100% burn-in and are tested to the same extent as discrete parts. KGD 2.0 is needed to assure end-of-line yield in 2.5D/3D assembly and 2.5D/3D multichip devices to realize improvements in PPA – meaning bandwidth performance, power efficiency, and area/miniaturization – driven by the explosion of technologies such as edge-computing AI.”

This will open new options for MV in new and existing markets. It will be used to support humans in autonomous driving, help robots perform with precision and efficiency in manufacturing, and perform surveillance with unmanned drones. In addition, MV will be able to explore places considered dangerous for humans, and provide data input and analysis for many fields, including insurance, sports, transportation, defense, medicine, and more.

Go to the original article...

Article on Machine Vision + AI Opportunities

Image Sensors World        Go to the original article...

From Semiconductor Engineering https://semiengineering.com/machine-vision-plus-ai-ml-opens-huge-opportunities/

Machine Vision Plus AI/ML Adds Vast New Opportunities

Traditional technology companies and startups are racing to combine machine vision with AI/ML, enabling it to “see” far more than just pixel data from sensors, and opening up new opportunities across a wide swath of applications.

In recent years, startups have been able to raise billions of dollars as new MV ideas come to light in markets ranging from transportation and manufacturing to health care and retail. But to fully realize its potential, the technology needs to address challenges on a number of fronts, including improved performance and security, and design flexibility.

Fundamentally, a machine vision system is a combination of software and hardware that can capture and process information in the form of digital pixels. These systems can analyze an image, and take certain actions based on how it is programmed and trained. A typical vision system consists of an image sensor (camera and lens), image and vision processing components (vision algorithm) and SoCs, and the network/communication components.

Both still and video digital cameras contain image sensors. So do automotive sensors such as lidar, radar, ultrasound, which deliver an image in digital pixel form, although not with the same resolution. While most people are familiar these types of images, a machine also can “see” can heat and audio signals data, and they can analyze that data to create a multi-dimensional image.
“CMOS image sensors have seen drastic improvement over the last few years,” said Ron Lowman, strategic marketing manager at Synopsys. “Sensor bandwidth is not being optimized for human sight anymore, but rather for the value AI it can provide. For instance, MIPI CSI, the dominant vision sensor interface, is not only increasing bandwidths, but also adding AI features such as Smart Region of Interest (SROI) and higher color depth. Although these color depth increases can’t be detected by the human eye, for machine vision it can improve the value of a service dramatically.”

Machine vision is a subset of the broader computer vision. “While both disciplines rely on looking at primarily image data to deduce information, machine vision implies ‘inspection type’ applications in an industry or factory setting,” said Amol Borkar, director of product management, marketing and business development, Tensilica Vision and AI DSPs at Cadence. “Machine vision relies heavily on using cameras for sensing. However, ‘cameras’ is a loaded term because we are typically familiar with an image sensor that produces RGB images and operates in the visible light spectrum. Depending on the application, this sensor could operate in infrared, which could be short wave, medium wave, long wave IR, or thermal imaging, to name a few variants. Event cameras, which are very hyper-sensitive to motion, were recently introduced. On an assembly line, line scan cameras are a slightly different variation from typical shutter-based cameras. Most current applications in automotive, surveillance, and medical rely on one or more of these sensors, which are often combined to do some form of sensor fusion to produce a result better than a single camera or sensor.”

Benefits
Generally speaking, MV can see better than people. The MV used in manufacturing can improve productivity and quality, lowering production costs. Paired with ADAS for autonomous driving, MV can take over some driving functions. Together with AI, MV can help analyze medical images.
The benefits of using machine vision include higher reliability and consistency, along with greater precision and accuracy (depending on camera resolution). And unlike humans, machines do not get tired, provided they receive routine maintenance. Vision system data can be stored locally or in the cloud, then analyzed in real-time when needed. Additionally, MV reduces production costs by detecting and screening out defective parts, and increases inventory control efficiency with OCR and bar-code reading, resulting in lower overall manufacturing costs.

Today, machine vision usually is deployed in combination with AI, which greatly enhances the power of data analysis. In modern factories, automation equipment, including robots, is combined with machine vision and AI to increase productivity.

How AI/ML and MV interact
With AI/ML, MV can self-learn and improve after capturing digital pixel data from sensors.
“Machine vision (MV) and artificial intelligence (AI) are closely related fields, and they often interact in various ways,” said Andy Nightingale, vice president of product marketing at Arteris IP. “Machine vision involves using cameras, sensors, and other devices to capture images or additional data, which is then processed and analyzed to extract useful information. Conversely, AI involves using algorithms and statistical models to recognize patterns and make predictions based on large amounts of data.”
This also can include deep learning techniques. “Deep learning is a subset of AI that involves training complex neural networks on large datasets to recognize patterns and make predictions,” Nightingale explained. ” Machine vision systems can use deep learning algorithms to improve their ability to detect and classify objects in images or videos. Another way that machine vision and AI interact is through the use of computer vision algorithms. Computer vision is a superset of machine vision that uses algorithms and techniques to extract information from images and videos. AI algorithms can analyze this information and predict what is happening in the scene. For example, a computer vision system might use AI algorithms to analyze traffic patterns and predict when a particular intersection will likely become congested. Machine vision and AI can also interact in the context of autonomous systems, such as self-driving cars or drones. In these applications, machine vision systems are used to capture and process data from sensors. In contrast, AI algorithms interpret this data and make decisions about navigating the environment.”

AI/ML, MV in autonomous driving
AI has an increasing number of roles in modern vehicles, but the two major roles are in perception and decision making.

“Perception is the process of understanding one’s surroundings through onboard and external sensor arrays,” said David Fritz, vice president of hybrid and virtual systems at Siemens Digital Industries Software. “Decision-making first takes the understanding of the surrounding state and a goal such as moving toward the destination. Next, the AI decides the safest, most effective way to get there by controlling the onboard actuators for steering, braking, accelerating, etc. These two critical roles address very different problems. From a camera or other sensor, the AI algorithms will use raw data from the sensors to perform object detection. Once an object is detected, the perception stack will classify the object, for example, whether the object is a car, a person, or an animal. The training process is lengthy and requires many training sets presenting objects from many different angles. After training, the AI network can be loaded into the digital twin or physical vehicle. Once objects are detected and classified decisions can be made by another trained AI network to control steering, braking, and acceleration. Using a high-fidelity digital twin to validate the process virtually has been shown to result in safer, more effective vehicles faster than simply using open road testing.”

How much AI/ML is needed is a question frequently asked by developers. In the case of modern factories, MV can be used to simply detect and pick out defective parts in an assembly line or employed to assemble automobiles. Doing the latter requires advanced intelligence and a more sophisticated design to ensure timing, precision, and calculation of motion and distance in the assembly process.
“Automation using robotics and machine vision has increased productivity in modern factories,” observed Geoff Tate, CEO of Flex Logix. “Many of these applications use AI. A simple application — for instance, detecting if a label is applied correctly — does not require a great deal of intelligence. On the other hand, a sophisticated, precision robot arm performing 3D motion requires much more GPU power. In the first application, one tile of AI IP will be sufficient, while the second application may need multiple tiles. Having flexible and scalable AI IPs would make designing robotics and machine vision much easier.”

Applications
Machine vision applications are limited only by one’s imagination. MV can be used in almost any industrial and commercial segment, so long as it requires vision and processing. Here is a partial list:
 Transportation (autonomous driving, in-cabin monitoring, traffic flow analysis, moving violation and accident detection);

  •  Manufacturing and automation (productivity analysis, quality management);
  •  Surveillance (detection of motion and intrusion monitor);
  •  Health care (imaging, cancer and tumor detection, cell classification);
  •  Agriculture (farm automation, plant disease and insect detection);
  •  Retail (customer tracking, empty shelf detection, theft detection), and
  •  Insurance (accident scene analysis from images).

There are many other applications. Consider drinking water or soft drink bottling. A machine vision system can be used to inspect fill levels, which typically is done by highly efficient robots. But robots occasionally make mistakes. MV can ensure the fill level is consistent and the labels are applied correctly.

Detecting any machine parts that deviate from measurement specification limits is another job for MV. Once the MV is trained on the specification, it can detect the parts that are outside the specification limits.

MV can detect uniform shapes such as squares or circles as well as odd-shaped parts, so it can be used to identify, detect, measure, count, and (with robots), pick and place.
Finally, combining AI, MV can perform tire assembly with precision and efficiency. Nowadays, OEMs automate vehicle assembly with robots. One of the processes is to install the four wheels to a new vehicle. Using MV, a robotic arm can detect the correct distance and apply just the right amount of pressure to prevent any damage.

Types of MV
MV technologies can be divided into one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D).

1D systems analyze data one line at a time, comparing variations among groups. Usually it is used in production of items such as plastics and paper on a continual basis. 2D systems, in contrast, use a camera to scan line by line to form an area or a 2D image. In some cases, the whole area is scanned and the object image can then be unwrapped for detailed inspection. 

3D systems consist of multiple cameras or laser sensors to capture the 3D view of an object. During the training process, the object or the cameras need to be moved to capture the entire product. Recent technology can produce accuracy within micrometers. 3D systems produce higher resolution but are also more expensive.

Emerging MV startups and new innovations
Tech giants, including IBM, Intel, Qualcomm, and NVIDIA, have publicly discussed investments in MV. In addition, many startups are developing new MV solutions such as Airobotics , Arcturus Networks, Deep Vision AI , Hawk-Eye Innovations, Instrumental, lending AI, kinara, Mech-Mind, Megvii, NAUTO, SenseTime, Tractable, ViSenze, Viso, and others. Some of these companies have been able to raise funding in excess of $1 billion.

In transportation, insurance companies can use MV to scan photographs and videos of scenes of accidents and disasters for financial damage analysis. Additionally, AI-based MV can power safety platforms to analyze driver behavior.

In software, computer vision platforms can be created without the knowledge of coding. Other startups have developed the idea for MV authentication software. And in the field of sports, AI, vision, and data analysis could provide coaches the ability to understand how decisions are made by players during a game. Also, one startup devised a cost reduction idea for surveillance by combining AI and MV in unmanned, aerial drone design.

Both MV and AI are changing quickly, and will continue to increase in performance, including precision and accuracy, while high GPU and ML power will come down in cost, propelling new MV applications.

Arteris’ Nightingale noted there will be further improvements in accuracy and speed. “Machine vision systems will likely become more accurate and faster. This will be achieved through advancements in hardware, such as sensors, cameras, and processors, as well as improvements in algorithms and machine learning models,” he said, pointing to an increased use of deep learning, as well. “Deep learning has been a significant driver of progress in machine vision technology in recent years, and it is likely to play an even more substantial role in the future. Deep learning algorithms can automatically learn data features and patterns, leading to better accuracy and performance. There will be an enhanced ability to process and analyze large amounts of data, as machine vision technology can process and analyze large amounts of data quickly and accurately. We may also see advancements in machine vision systems that can process significantly larger datasets, leading to more sophisticated and intelligent applications.”

Further, MV and AI are expected to integrate with other technologies to provide additional high-performance, real-time applications.


“Machine vision technology is already integrated with other technologies, such as robotics and automation,” he said. “This trend will likely continue, and we may see more machine vision applications in health care, transportation, and security. As well, there will be more real-time applications. Machine vision technology is already used for real-time applications, such as facial recognition and object tracking. In the future, we may see more applications that require real-time processing, such as self-driving cars and drones.”

MV design challenges
Still, there are challenges in training an MV system. Its accuracy and performance depend on how well the MV is trained. Inspection can encompass parameters such as orientation, variation of the surfaces, contamination, and accuracy tolerances such as diameter, thickness, and gaps. 3D systems can perform better than 1D or 2D systems when detecting cosmetic and service variation effects. In other cases, when seeing an unusual situation, human beings can draw on knowledge from a different discipline, while MV and AI may not have that ability.

“Some of today’s key challenges include data flow management and control – especially with real-time latency requirements such as those in automotive applications — while keeping bandwidth to a minimum,” said Alexander Zyazin, senior product manager in Arm‘s Automotive Line of Business. “In camera-based systems, image quality (IQ) remains critical. It requires a hardware design to support ultra-wide dynamic range and local tone mapping. But it also requires IQ tuning, where traditionally subjective evaluation by human experts was necessary, making the development process lengthy and costly. The new challenge for MV is that this expertise might not result in the best system performance, as perception engines might prefer to see images differently to humans and to one another, depending on the task.”

In general, machines can do a better job when doing mundane tasks over and over again, or when recognizing an image with more patterns than humans can typically process. “As an example, a machine may do a better job recognizing an anomaly in a medical scan than a human, simply because the doctor may make a mistake, be distracted or tired,” said Thomas Andersen, vice president for AI and machine learning at Synopsys. “When inspecting high-precision circuits, a machine can do a much better job analyzing millions of patterns and recognizing errors, a task a human could not do, simply due to the size of the problem. On the other hand, machines have not yet reached the human skill of recognizing the complex scenes that can occur while driving a car. It may seem easy for a human to recognize and anticipate certain reactions, while the machine may be better in ‘simple’ situations that a human easily could deal with, but did not due to a distraction, inattention or incapacitation – for example auto stop safety systems to avoid an imminent collision. A machine can always react faster than a human, assuming it interprets the situation correctly.”

Another challenge is making sure MV is secure. With cyberattacks constantly increasing, it will be important to ensure threat actors cannot disrupt production or interfere with results.

“Security is critical to ensuring the output of MV technology isn’t compromised,” said Arm’s Zyazin. “Automotive applications are a good example of the importance of security in both hardware and software. For instance, the information processed and extracted from the machine is what dictates decisions such as braking or lane-keep assist, which can pose a risk to those inside the vehicle if done incorrectly.”

Conclusion
MV designs include a mixture of chips (processors, memories, security), IPs, modules, firmware, hardware and software. The rollout of chiplets and multi-chip packaging will allow those systems to be combined in novel ways more easily and more quickly, adding new features and functions and improving the overall efficiency and capabilities of these systems.

“Known good die (KGD) solutions can provide cost- and space-efficient alternatives to packaged products with limited bonding pads and wires,” said Tetsu Ho, DRAM manager at Winbond. “That helps improve design efficiency, enhances hardware security, and especially shortens time-to-market for product launches. These die go through 100% burn-in and are tested to the same extent as discrete parts. KGD 2.0 is needed to assure end-of-line yield in 2.5D/3D assembly and 2.5D/3D multi-chip devices to realize improvements in PPA (bandwidth performance, power efficiency, and area) as miniaturization is driven by the explosion of technologies such as edge-computing AI.”

This will open new options for MV in new and existing markets. It will be used to support humans in autonomous driving, help robots perform with precision and efficiency in manufacturing, and perform surveillance with unmanned drones. In addition, MV will be able to explore places considered dangerous for humans, and provide data input and analysis for many fields, including insurance, sports, transportation, defense, medicine, and more.

Go to the original article...

GPixel (Changguang Chenxin) files for an IPO

Image Sensors World        Go to the original article...

Original article (in Chinese): https://finance.eastmoney.com/a/202307032768592891.html

English translation using Google Translate:

In this IPO, Changguang Chenxin intends to raise 1.557 billion yuan to fund R&D and industrialization projects for serialized CMOS image sensors in the fields of machine vision, scientific instruments, professional imaging, and medical imaging, along with the construction of a high-end CMOS image sensor R&D center and supplementary working capital.

According to the prospectus, Changguang Chenxin focuses on the research and development, design, testing and sales of high-performance CMOS image sensors, as well as related customized services.

The company's customers include overseas manufacturers such as Customer D, Teledyne, Vieworks, and Adimec; domestic manufacturers such as Hikvision Robotics, Huarui Technology, Xintu Optoelectronics, and Eco Optoelectronics; and research institutes of the Chinese Academy of Sciences, including the Changchun Institute of Optics, Fine Mechanics and Physics, the Shanghai Institute of Technical Physics, the Xi'an Institute of Optics and Precision Mechanics, and the National Astronomical Observatories.

In terms of performance, from 2020 to 2022 the company's operating income was 198 million yuan, 411 million yuan, and 604 million yuan, respectively, while net profit attributable to the parent over the same period was 59.39 million yuan, -33.17 million yuan, and -83.15 million yuan.

It is worth noting that Changguang Chenxin faces overseas business risks. With the integrated circuit supply chain globalized, overseas procurement and overseas sales are an important part of the company's business activities. During the reporting period, overseas procurement accounted for more than 80% of the total, and overseas sales for more than 30%.

In addition, the company carries a high proportion of inventory and the associated risk of falling prices. At the end of each reporting period, the book value of inventories was 80.17 million yuan, 224 million yuan, and 304 million yuan, respectively, accounting for 23.59%, 40.94%, and 29.05% of total assets, a relatively high level overall.

Go to the original article...

Report predicts large growth in CCD image sensor market

Image Sensors World        Go to the original article...

[(July 5, 2023): There is a strong suspicion that this is a machine-generated article, so its veracity is questionable.]

Experts Predict Stunning Growth for Global Image Sensor Market, Reaching USD 55.8 Billion by 2032

market.us recently published a research report, "Global Image Sensor Market By Technology, By Type, By Application, By Region and Companies - Industry Segment Outlook, Market Assessment, Competition Scenario, Trends, and Forecast 2023-2032". According to the report, the global image sensor market was valued at USD 26.1 billion in 2022 and is projected to reach USD 55.8 billion by 2032, growing at a CAGR of 8.1% from 2023 to 2032.
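As a quick sanity check on those headline figures, the implied growth rate can be computed directly; note the report quotes its 8.1% CAGR over 2023-2032, so taking 2022 as the base year gives a slightly lower number:

```python
# Back-of-envelope check: CAGR implied by growth from USD 26.1B (2022)
# to USD 55.8B (2032) over ten years.
start, end, years = 26.1, 55.8, 10
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR from a 2022 base: {cagr:.1%}")  # about 7.9%
```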

Rising security spending in public places worldwide, combined with technology designed to bolster anti-terror equipment and prevent security breaches, is expected to drive this industry forward.

The global image sensors market is poised for significant growth as technological advancements and expanding applications continue to fuel demand. With an increasing need for high-quality imaging solutions across industries such as automotive, consumer electronics, healthcare, and security, the image sensors market is expected to reach new milestones in the coming years.

Key Takeaways:

  • In 2022, the 2D segment emerged as the top revenue generator in the Global Image Sensors Market.
  • The Automotive Sector segment is dominating the market in terms of application and is expected to grow significantly from 2023 to 2032.
  • The Asia-Pacific Region held the largest revenue share of 41% in 2022, establishing its dominance in the market.
  • Europe secured the second position in revenue share in 2022 and is projected to experience substantial growth from 2023 to 2032.

Sensors have quickly become an indispensable element of modern vehicles, Advanced Driver Assistance Systems (ADAS), medical devices, and automated production technologies. They have also become more affordable, robust, precise, and communicative, benefits that make them attractive options for deployment in future smart infrastructure systems. Thanks to its superior image quality and sensitivity, Charge-Coupled Device (CCD) technology was previously the dominant solution.

Due to technological developments, CMOS image sensors have outsold CCD image sensors by shipment volume since 2004. CCD sensors rely on high-voltage analog circuitry, while CMOS sensors use less power and have smaller dimensions; even so, CCD continues to generate meaningful revenue for the image sensor market.

Firstly, the continuous evolution of camera technologies and the proliferation of smartphones have revolutionized the consumer electronics sector. The demand for high-resolution imaging capabilities, augmented reality (AR) applications, and enhanced camera features in smartphones has been a major driving force behind the growth of the image sensors market.

Additionally, the automotive industry has witnessed a rapid integration of advanced driver-assistance systems (ADAS) and autonomous driving technologies. Image sensors play a crucial role in enabling these systems by providing accurate and real-time information for object detection, lane departure warnings, and adaptive cruise control. The increasing adoption of electric vehicles and the rising trend of in-car entertainment systems further contribute to the demand for image sensors in the automotive sector.

Furthermore, the healthcare industry has embraced the use of image sensors in medical devices such as endoscopes, surgical cameras, and X-ray machines. These sensors facilitate precise imaging, aiding medical professionals in diagnostics, minimally invasive surgeries, and patient monitoring. The growing emphasis on telemedicine and remote patient monitoring is also expected to drive the demand for image sensors in the healthcare sector.

In the realm of security and surveillance, image sensors have become indispensable components in surveillance cameras, facial recognition systems, and biometric scanners. The need for enhanced security measures across residential, commercial, and public sectors, coupled with the increasing adoption of smart city initiatives, is propelling the image sensors market forward.

To cater to the evolving market demands, leading companies in the image sensors industry are heavily investing in research and development activities to develop advanced sensor technologies. Innovations such as backside-illuminated (BSI) sensors, stacked CMOS sensors, and time-of-flight (ToF) sensors are gaining prominence, enabling improved image quality, higher resolutions, and faster data processing.

Top Trends in Global Image Sensors Market

Many vendors are now adopting CMOS image sensor technology, signalling its rapid advance into low-cost camera designs. Although CCD sensors remain competitive on image quality at similar price points, CMOS sensors have grown increasingly popular thanks to their on-chip functionality in low-cost markets such as consumer electronics, automotive, security, and surveillance.

Consumer electronics has seen explosive demand for smartphones equipped with both rear- and front-facing cameras, while automotive demand is driven largely by vehicles equipped with Advanced Driver Assistance Systems (ADAS) that enhance driver safety. Furthermore, because CMOS image sensors can operate even under low or dim lighting conditions, their use in security applications has skyrocketed as security becomes ever more critical to business operations.

Sony Corporation of Japan holds an unparalleled position in the CMOS sensor market and pioneered the commercialization of automotive cameras equipped with its sensors. To increase production capacity for stacked image sensors for automotive cameras, Sony invested USD 895 million (105 billion JPY).

SmartSens is an industry leader in CMOS image sensors. It recently unveiled the SC550XS, an ultra-high-resolution 50MP image sensor with 1.0 micrometer pixels that combines SmartSens' proprietary SmartClarity-2, SFCPixel, and PixGainHDR technologies with a 22nm HKMG stacked process for outstanding imaging performance.

Metalenz, an international start-up that develops meta-optic lens technology, recently unveiled an innovation that embeds polarization-sensing capabilities directly into mobile and consumer devices, with the potential to improve health-monitoring features.

Competitive Landscape

The following companies lead the image sensor market by share:

  • Sony Corporation
  • Samsung Electronics Co. Ltd.
  • ON Semiconductor Corporation
  • STMicroelectronics

Image sensor manufacturers are continuously innovating to offer more robust, accurate sensing at lower cost, and Time-of-Flight (ToF) technology has emerged as a game-changer here.
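For context, direct ToF ranging rests on simple arithmetic: a sensor measures the round-trip time of a light pulse, and distance is half that time multiplied by the speed of light. A minimal illustration (the timing value is arbitrary):

```python
# Direct time-of-flight principle: d = c * t / 2, where t is the
# measured round-trip time of a light pulse. Values are illustrative.
C = 299_792_458.0                  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2.0  # halved: the pulse travels out and back

# A ~6.67 ns round trip corresponds to roughly 1 m of range:
print(f"{tof_distance_m(6.67e-9):.3f} m")
```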

Recent Trends of Image Sensor Market

In February 2022, Realme announced the availability in Europe of its 9 Pro series smartphones equipped with Sony's IMX766 image sensor. The sensor measures 1/1.56", offering large pixels for photography with optical image stabilization (OIS), along with an f/1.88 aperture that facilitates clear photos even at longer distances.

Sony Interactive Entertainment LLC (SIE) purchased Bungie Inc. in January 2022. Bungie, an independent videogame developer behind iconic titles such as Halo and Destiny, had long collaborated with SIE; the acquisition gives SIE access to Bungie's technical expertise and world-class live games, extending SIE's potential reach to billions of gamers around the globe.

Go to the original article...
