Intelligent cameras are a step closer, following research from a team of academics at the Universities of Manchester and Bristol.
They point out that current systems have a problem: they combine sensors for recording images with computing hardware originally designed to accelerate graphics for video games.
This means that the AI systems have to read irrelevant data, such as the detail of leaves on roadside trees. This clogs up the system, consumes power and takes up processing time.
The researchers' papers show how sensing and learning can be combined to create novel cameras for AI systems.
“To create efficient perceptual systems we need to push the boundaries beyond the ways we have been following so far,” said Walterio Mayol-Cuevas, Professor in Robotics, Computer Vision and Mobile Systems at the University of Bristol and principal investigator (PI).
“We can borrow inspiration from the way natural systems process the visual world – we do not perceive everything – our eyes and our brains work together to make sense of the world and in some cases, the eyes themselves do processing to help the brain reduce what is not relevant.”
By using Convolutional Neural Networks (CNNs), a form of AI algorithm that enables visual understanding directly on the image plane, they can classify frames thousands of times per second, without ever having to record the images or send them on for processing.
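The idea of outputting a label instead of pixels can be illustrated with a toy sketch. This is not the researchers' actual network: the hand-rolled convolution, made-up edge kernel, threshold, and synthetic frame below are all illustrative assumptions, showing only how a convolution plus pooling can reduce a whole frame to one high-level result.

```python
import numpy as np

def conv2d(frame, kernel):
    """Valid 2-D cross-correlation (no padding) -- illustrative, not optimised."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

def classify_frame(frame):
    """Return a high-level label rather than the raw pixels."""
    # Hypothetical hand-picked vertical-edge kernel (real CNNs learn theirs).
    edge_kernel = np.array([[-1, 0, 1],
                            [-1, 0, 1],
                            [-1, 0, 1]])
    feature = np.maximum(conv2d(frame, edge_kernel), 0)  # ReLU
    score = feature.mean()                               # global average pool
    return "edge-rich" if score > 0.5 else "flat"

# Synthetic 8x8 frame: dark left half, bright right half -> a strong edge.
frame = np.zeros((8, 8))
frame[:, 4:] = 1.0
print(classify_frame(frame))  # the camera would transmit only this label
```

The point of the sketch is the output size: a single short label leaves the "camera", not the 64 pixel values, which is the data reduction the researchers describe.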
They believe that a camera in the future may only send high-level information to the rest of the system – such as the type of object or event taking place in front of the camera.
The work is possible due to the SCAMP architecture developed by Professor Piotr Dudek, Professor of Circuits and Systems and PI at the University of Manchester. The SCAMP is a camera-processor chip, described as a Pixel Processor Array, which has a processor embedded in each and every pixel. These can communicate with each other in parallel.
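The flavour of pixel-parallel computation can be sketched in a few lines. This toy simulation is an assumption-laden stand-in, not SCAMP's real instruction set: it models each "pixel processor" holding one value and exchanging it with its four neighbours in lockstep, so one array-wide operation updates every pixel at once.

```python
import numpy as np

def neighbour_average(pixels):
    """One parallel step: every pixel averages itself with its 4 neighbours.

    np.roll stands in for the neighbour-to-neighbour communication a
    pixel-processor array performs in hardware; edges wrap here for brevity.
    """
    up    = np.roll(pixels,  1, axis=0)
    down  = np.roll(pixels, -1, axis=0)
    left  = np.roll(pixels,  1, axis=1)
    right = np.roll(pixels, -1, axis=1)
    return (pixels + up + down + left + right) / 5.0

# A single bright pixel diffuses outward in one step, all pixels updating
# simultaneously -- the cost of the operation does not grow per pixel.
img = np.zeros((5, 5))
img[2, 2] = 25.0
smoothed = neighbour_average(img)
```

In hardware the whole update is one instruction broadcast to every pixel processor, which is what makes the low-latency, low-power figures Professor Dudek describes plausible.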
“Integration of sensing, processing and memory at the pixel level is not only enabling high-performance, low-latency systems, but also promises low-power, highly efficient hardware,” he explained.
“SCAMP devices can be implemented with footprints similar to current camera sensors, but with the ability to have a general-purpose massively parallel processor right at the point of image capture.”
Dr Tom Richardson, Senior Lecturer in Flight Mechanics, at the University of Bristol and a member of the project has been integrating the SCAMP architecture with lightweight drones.
“What is so exciting about these cameras is not only the newly emerging machine learning capability, but the speed at which they run and the lightweight configuration.
“They are absolutely ideal for high speed, highly agile aerial platforms that can literally learn on the fly!”
The research has been funded by the Engineering and Physical Sciences Research Council (EPSRC).