
How AI Agents Are Helping Us Understand the Evolution of Human Vision

For years, scientists have wondered why we humans have the kind of eyes we do. The process by which evolution shaped our visual system remains largely a mystery. Researchers at MIT, however, have taken a step forward in this journey of discovery by developing a novel computational framework with the help of artificial intelligence. It's quite an ingenious process – think of it as digitally recreating evolution.

Recreating Evolution Digitally, AI in Action

This state-of-the-art computational framework acts as a scientific playground: AI agents are placed in virtual environments and tasked with evolving eyes over many generations. By assigning tasks, such as identifying objects or making sense of the terrain, the researchers steer how each agent's visual system evolves. They can also tweak the environmental conditions and task specifics, allowing them to study how different eyes evolve under various circumstances.

What's incredible is that when these AI agents were given different tasks, their evolution took different paths. When tasked with navigation, for instance, the agents developed compound eyes like those found in insects and crustaceans, which serve spatial awareness well. When tasked with distinguishing between objects, however, the AI evolved camera-type eyes, featuring irises and retinas, akin to human eyes.

In its construction, the computational simulator is inspired by the basic components of a camera: sensors, lenses, apertures, and processors were turned into variables the AI could adapt. The AI agents started with a single photoreceptor and a neural network to process the visual input. Over time, they evolved their visual systems through a reward system based on task completion – a method mimicking natural selection, in which favorable traits are reinforced and passed down.
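The selection loop described above can be sketched in a few lines. This is a minimal illustration, not the study's actual method: the design parameters, the stand-in fitness function, and all names here are assumptions chosen only to show the keep-the-best-and-mutate pattern.

```python
import random

random.seed(0)  # make the sketch reproducible

def task_reward(design):
    # Stand-in fitness: rewards designs whose photoreceptor/aperture
    # product approaches an arbitrary optimum. A real framework would
    # score the agent's actual task performance instead.
    return -abs(design["photoreceptors"] * design["aperture"] - 50.0)

def mutate(design):
    # Small random changes play the role of genetic mutation.
    child = dict(design)
    child["photoreceptors"] = max(1, child["photoreceptors"] + random.choice([-1, 0, 1]))
    child["aperture"] = max(0.1, child["aperture"] + random.gauss(0, 0.1))
    return child

def evolve(generations=100, pop_size=20):
    # Every agent starts from the same simple design (one photoreceptor).
    population = [{"photoreceptors": 1, "aperture": 1.0} for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=task_reward, reverse=True)
        survivors = population[: pop_size // 2]  # selection: keep the top half
        # Reproduction: survivors stay, plus mutated copies of random survivors.
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=task_reward)

best = evolve()
```

Because the fittest designs survive each generation unchanged, the best reward can only improve over time, which is the essence of the selection pressure the researchers apply digitally.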

Simulating Evolution, Designing Futuristic Eyes

One of the ground-breaking capabilities of this framework is its ability to mimic the physical limitations found in nature through constraints such as the number of pixels available. The AI agents then have to make trade-offs, just as evolution does in nature. And the result? A rich variety of eye designs, each suited to a specific environmental demand and task.

A genetic encoding system in the framework simulates natural evolution: morphological genes determine the placement and form of the eyes; optical genes define how the eyes interact with light, including the number of photoreceptors; and neural genes influence learning capabilities. This digital evolution echoes nature, showing how refined visual systems can develop from simple beginnings.
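One way to picture such an encoding is as a small genome record with the three gene groups the paragraph names. The field names and mutation rules below are illustrative assumptions, not the study's actual parameterization.

```python
import random
from dataclasses import dataclass

random.seed(0)  # reproducible sketch

@dataclass
class EyeGenome:
    # Morphological genes: where the eye sits and its overall form
    position: tuple = (0.0, 0.0)
    eye_type: str = "compound"   # hypothetical label, e.g. "compound" or "camera"
    # Optical genes: how the eye interacts with light
    photoreceptors: int = 1
    aperture: float = 1.0
    # Neural genes: capacity of the processing network
    hidden_units: int = 8

def mutate(genome: EyeGenome) -> EyeGenome:
    """Return a copy with small random changes – the digital 'mutation' step."""
    return EyeGenome(
        position=(genome.position[0] + random.gauss(0, 0.05),
                  genome.position[1] + random.gauss(0, 0.05)),
        eye_type=genome.eye_type,
        photoreceptors=max(1, genome.photoreceptors + random.choice([-1, 0, 1])),
        aperture=max(0.1, genome.aperture + random.gauss(0, 0.05)),
        hidden_units=max(1, genome.hidden_units + random.choice([-1, 0, 1])),
    )

child = mutate(EyeGenome())
```

Keeping the three gene groups separate is what lets the researchers study them independently, for example by constraining the optical genes (a pixel budget) while letting the neural genes evolve freely.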

Unveiling the Future, Practical Takeaways

But there's more to this study than feeding scientific curiosity. The research could equip engineers with tools to design cameras and sensors for task-specific applications in robots, drones, or wearable devices, optimizing performance while balancing constraints like energy consumption or manufacturing cost. Future explorations could see large language models integrated into the system to answer complex "what-if" scenarios.

“Even if we can’t reverse-engineer the process of evolution completely, we’ve created an environment that can allow us to recreate it in all these different ways,” says Kushagra Tiwary, a graduate student at the MIT Media Lab and one of the lead authors of the study. The study, a collaborative project including scientists from MIT, Stony Brook University, Rice University, and Lund University, was published in Science Advances.

For more about this fascinating study, take a look at the original post on MIT News.
