Image Sensors: The Eyes Of AI And Intelligent Systems
03 May
The advent of the smartphone has given us one of the most powerful tools mankind has ever seen. However, one of the less understood and less discussed phenomena accompanying the smartphone is the rise of the image sensor. By integrating progressively more powerful image sensors into smartphones, we have taken photography and the information age to the next level. Today I wanted to share some thoughts on the image sensor: what it has enabled, and where it might be heading in the future.
Bringing vision to AI
Without this technology, we wouldn’t have Facebook Live or the countless videos that give people access to what’s really happening on the ground. The smartphone has essentially turned everyone into a photojournalist, allowing the world to see, almost immediately, what is happening in people’s everyday lives. There are now billions of camera-equipped smartphones around the world, and image sensors aren’t limited to smartphones: even some of the cheapest ‘feature’ phones now have cameras. The ubiquity of these cameras translates into a very large potential dataset that can be harnessed for applications like big data, machine learning and, ultimately, AI.
Right now, we are seeing rapid gains in the speed and accuracy of artificial intelligence. The ‘smart’ devices we are equipping with machine intelligence and artificial intelligence (drones, robots, self-driving cars and even phones) need to be outfitted with image sensors to understand what’s going on around them and operate safely in the real world. Sure, they have access to location data and audio via microphones, but neither is as powerful as a real-time camera. This is why camera sensors are only becoming more important: they are essential for gathering the image data needed to train machine learning and artificial intelligence systems. Even once devices are trained, they still need camera sensors to provide context for what they are looking at, so they can properly act on what they have learned.
My computer has a webcam, my phones have multiple image sensors, my drone has multiple image sensors, my vacuum has an image sensor, and my car has image sensors. While there are plenty of cameras in the things we use today, and many of them are powered by intelligent computer vision algorithms, the next step in the evolution of these devices is to add machine learning and artificial intelligence and to improve those algorithms dynamically over time. Essentially, where we are now is this: existing image sensors in everyday devices need to be imbued with machine learning, and existing ‘smart’ devices need more, and higher-quality, image sensors. While this marriage is already underway, it is only going to ramp up in the coming years.
As this happens, I foresee increased demand for image sensors around the world at varying levels of quality. Obviously, the higher-end image sensors will be used for video and photography, but there are going to be plenty of ML and AI algorithms that can identify objects from relatively low-resolution sensor data. Even for these applications, I believe there will be a drive to continually improve camera sensor capabilities in order to improve the quality of the data being fed into these algorithms. Better-quality data helps drive more accurate results, which raises confidence in the decisions these devices make. There may be a point where image quality is good enough, but I’m not sure we’re quite there yet. Long story short, don’t underestimate the importance of the image sensor: it is only becoming more essential to the world of tomorrow.
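To make the low-resolution point concrete, here is a minimal, purely illustrative Python sketch (the function names and the toy frame are my own, not from any real vision library): even after aggressive block-averaging of a frame, a trivially simple detector can still extract a usable signal.

```python
# Illustrative sketch only: shows that heavily downsampled sensor data
# can still feed a simple classifier. Real systems would use trained
# models, but the principle is the same.

def downsample(frame, factor):
    """Average-pool a 2D grayscale frame (list of lists) by `factor`."""
    h, w = len(frame), len(frame[0])
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [frame[y][x]
                     for y in range(i, min(i + factor, h))
                     for x in range(j, min(j + factor, w))]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def bright_object_present(frame, threshold=128):
    """Toy detector: is any pooled region brighter than the threshold?"""
    return any(p > threshold for row in frame for p in row)

# A 4x4 'high-res' frame with a bright object in the top-left quadrant.
frame = [
    [255, 255, 0, 0],
    [255, 255, 0, 0],
    [0,   0,   0, 0],
    [0,   0,   0, 0],
]
low_res = downsample(frame, 2)  # pooled down to a 2x2 frame
print(low_res)                  # [[255.0, 0.0], [0.0, 0.0]]
print(bright_object_present(low_res))  # True: the object survives pooling
```

Even at a quarter of the pixel count, the bright region is still detectable, which is why modest sensors remain useful for many recognition tasks.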