Imagine an artificial intelligence (AI) system that can watch and understand video the way a human brain does. Scientists at Scripps Research have made this possible with MovieNet, an advanced AI that processes videos much as our brains interpret real-life scenes as they unfold over time.
This brain-inspired AI model, detailed in a study published in the Proceedings of the National Academy of Sciences on November 19, 2024, understands moving scenes by mimicking how neurons (brain cells) make sense of the world in real time. While traditional AI excels at recognizing static images, MovieNet offers a new way for AI to understand complex, changing scenes, a major step forward for fields like medical diagnostics and autonomous driving, where noticing subtle changes over time is crucial. MovieNet is also both more accurate and more environmentally friendly than traditional AI.
“The brain doesn’t just see still frames; it creates an ongoing visual story,” explains senior author Hollis Cline, PhD, director of the Dorris Neuroscience Center and the Hahn Professor of Neuroscience at Scripps Research. “Recognizing static images has advanced a lot, but the brain’s ability to process moving scenes—like watching a movie—requires a more advanced form of pattern recognition. By studying how neurons capture these sequences, we’ve applied similar principles to AI.”
To develop MovieNet, Cline and lead author Masaki Hiramoto, a staff scientist at Scripps Research, studied how the brain processes real-world scenes in short sequences, similar to movie clips. In particular, they looked at how tadpole neurons respond to visual stimuli.
“Tadpoles have an excellent visual system, and we know they can detect and respond to moving stimuli efficiently,” Hiramoto explains.
He and Cline identified neurons that respond to movie-like features, such as changes in brightness and image rotation, and can recognize objects as they move and change. These neurons, located in the brain’s visual processing region known as the optic tectum, piece together parts of a moving image into a coherent sequence.
Think of this process like a lenticular puzzle: each piece on its own might not make sense, but together they form a complete, moving image. Different neurons process various “puzzle pieces” of a real-life moving image, which the brain then integrates into a continuous scene.
The researchers also discovered that the tadpoles’ optic tectum neurons could identify subtle changes in visual stimuli over time, capturing information in dynamic clips of roughly 100 to 600 milliseconds instead of still frames. These neurons are highly sensitive to patterns of light and shadow, and each neuron’s response to a specific part of the visual field helps create a detailed map of a scene to form a “movie clip.”
Cline and Hiramoto trained MovieNet to mimic this brain-like processing, encoding video clips as a series of small, recognizable visual cues. This approach allowed the AI model to detect subtle differences among dynamic scenes.
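To make the clip-based encoding idea concrete, here is a minimal Python sketch of temporal-window encoding in general, not the authors’ actual MovieNet architecture. The function name `encode_clip_windows`, the window and stride lengths, and the simple brightness-and-motion features are illustrative assumptions, chosen only to echo the 100-to-600-millisecond dynamic clips described above.

```python
# A hypothetical sketch of temporal-window video encoding (not MovieNet's
# actual method): split a video into short, overlapping windows on the
# order of the 100-600 ms clips the tadpole neurons respond to, and
# reduce each window to a compact feature vector.
import numpy as np

def encode_clip_windows(frames, fps=30, window_ms=300, stride_ms=100):
    """Encode a grayscale video (frames: T x H x W array) as a sequence
    of small per-window feature vectors."""
    win = max(1, round(window_ms * fps / 1000))   # frames per window
    step = max(1, round(stride_ms * fps / 1000))  # frames per stride
    features = []
    for start in range(0, len(frames) - win + 1, step):
        clip = frames[start:start + win].astype(np.float32)
        # Frame-to-frame differences capture change over time, loosely
        # analogous to neurons that respond to moving stimuli rather
        # than to still frames.
        motion = np.abs(np.diff(clip, axis=0))
        features.append([
            clip.mean(),    # overall brightness
            clip.std(),     # light/shadow contrast
            motion.mean(),  # average motion energy
            motion.max(),   # peak change within the window
        ])
    return np.array(features)

# Example: two seconds of synthetic 64x64 video at 30 fps.
rng = np.random.default_rng(0)
video = rng.random((60, 64, 64))
print(encode_clip_windows(video).shape)  # (18, 4): 18 windows, 4 features
```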
To test MovieNet, the researchers showed it video clips of tadpoles swimming under various conditions. MovieNet achieved 82.3 percent accuracy in distinguishing normal from abnormal swimming behaviors, surpassing trained human observers by about 18 percent. It even outperformed existing AI models like Google’s GoogLeNet, which managed only 72 percent accuracy despite its extensive training and processing resources.
“This is where we saw real potential,” notes Cline.
The team found that MovieNet was not only more effective than current AI models at understanding changing scenes, but it also used less data and processing time. MovieNet’s ability to simplify data without losing accuracy sets it apart from conventional AI. By breaking down visual information into essential sequences, MovieNet effectively compresses data, like a zipped file that retains critical details.
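The “zipped file” analogy can be illustrated with a toy sketch that keeps only the frames differing meaningfully from the last frame retained, so redundant stretches of video are dropped while informative changes survive. The function `compress_frames` and its threshold are hypothetical and stand in for whatever reduction MovieNet actually performs.

```python
# A toy illustration of change-based frame reduction (not MovieNet's
# actual compression scheme): keep a frame only if it differs enough
# from the last frame we kept.
import numpy as np

def compress_frames(frames, threshold=0.05):
    """Return indices of frames to keep: the first frame, plus any frame
    whose mean absolute difference from the last kept frame exceeds
    `threshold` (frames assumed normalized to [0, 1])."""
    kept = [0]
    for i in range(1, len(frames)):
        change = np.abs(frames[i] - frames[kept[-1]]).mean()
        if change > threshold:
            kept.append(i)
    return kept

# A video that is static for 40 frames, then changes rapidly for 10.
rng = np.random.default_rng(1)
static = np.repeat(rng.random((1, 32, 32)), 40, axis=0)
moving = rng.random((10, 32, 32))
video = np.concatenate([static, moving])
kept = compress_frames(video)
print(f"kept {len(kept)} of {len(video)} frames")  # kept 11 of 50 frames
```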
Beyond its high accuracy, MovieNet is an eco-friendly AI model. Traditional AI processing consumes a lot of energy, leaving a significant environmental footprint. MovieNet’s reduced data requirements offer a greener alternative that conserves energy while maintaining high performance.
“By mimicking the brain, we’ve managed to make our AI far less demanding, paving the way for models that aren’t just powerful but sustainable,” says Cline. “This efficiency also makes it possible to scale up AI in fields where conventional methods are costly.”
Additionally, MovieNet has the potential to revolutionize medicine. As the technology progresses, it could become an essential tool for detecting subtle changes in early-stage conditions, such as identifying irregular heart rhythms or spotting initial signs of neurodegenerative diseases like Parkinson’s. For example, small motor changes related to Parkinson’s, often difficult for human eyes to detect, could be flagged early on by the AI, giving clinicians valuable time to intervene.
Furthermore, MovieNet’s ability to perceive changes in tadpoles’ swimming patterns when they are exposed to chemicals could lead to more precise drug screening techniques, enabling scientists to study dynamic cellular responses instead of relying on static snapshots.
“Current methods miss critical changes because they can only analyze images captured at intervals,” states Hiramoto. “Observing cells over time allows MovieNet to track the subtlest changes during drug testing.”
Looking to the future, Cline and Hiramoto plan to continue refining MovieNet’s ability to adapt to different environments, enhancing its versatility and potential applications.
“Taking inspiration from biology will continue to be a fertile area for advancing AI,” says Cline. “By designing models that think like living organisms, we can achieve levels of efficiency that simply aren’t possible with conventional approaches.”
Research for the study “Identification of movie encoding neurons enables movie recognition AI” was supported by funding from the National Institutes of Health (R01EY011261, R01EY027437, and R01EY031597), the Hahn Family Foundation, and the Harold L. Dorris Neurosciences Center Endowment Fund.