Can artificial intelligence move beyond prediction to genuine understanding?
From Kepler to Newton: Evaluating AI’s Depth of Understanding
We’ve come a long way since the 17th century, when Johannes Kepler revolutionized astronomy with mathematical laws that accurately predicted planetary motion. Yet these models, impressive as they were, could not explain why the planets moved as they did. A deeper understanding arrived only with Isaac Newton and his formulation of the law of universal gravitation. Today, we find ourselves drawing parallels between that story and the current state of artificial intelligence.
AI systems have become proficient at making predictions in areas such as language, image recognition, and even scientific modeling, an achievement that vividly recalls Kepler’s contributions. But do these systems truly understand the world, or are they merely imitating patterns, much as Kepler’s models fell short of Newton’s deeper insight? This unsettling question is driving a rising wave of curiosity among scientists.
Unmasking AI’s World Models
A team of researchers from MIT’s Laboratory for Information and Decision Systems (LIDS) and Harvard University set out to probe this intriguing question, exploring AI’s depth of understanding. Their objective? To determine whether AI can craft internal models of the world, so-called “world models,” that allow its predictions to generalize. Keyon Vafa, a Harvard postdoc and lead author of the study, noted that the challenge was to ascertain whether AI has been able to leap from generating accurate predictions to constructing world models the way humans do.
Sendhil Mullainathan, an MIT professor and a senior author on the study, pointed out the major obstacle in their path: defining “understanding” in the context of AI. They knew how to measure an algorithm’s predictive accuracy, but they needed a sound method to evaluate its capacity to understand. To overcome this, the team devised a metric called “inductive bias,” a measure of how well an AI system’s inferences reflect real-world circumstances.
The team probed the depth of AI’s understanding across scenarios of increasing complexity. For a simple one-dimensional lattice model, such as a frog hopping between lily pads, AI performed excellently based on auditory cues. But as the complexity escalated, for instance when moving to two- or three-dimensional lattices, AI began to struggle. Peter G. Chang, an MIT graduate student, stated that their model showed a strong inductive bias in lower-complexity systems, but that this bias started to break down as the complexity grew.
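The article doesn’t spell out the researchers’ exact setup, but the one-dimensional lattice world can be pictured concretely. The sketch below, a hypothetical illustration rather than the study’s actual code, simulates a frog hopping between lily pads, estimates transition probabilities from the observed sequence, and then checks the property a good world model should recover: hops only ever occur between neighboring pads.

```python
import random
from collections import defaultdict

def simulate_frog(n_pads=5, n_steps=10_000, seed=0):
    """Simulate a frog hopping left/right on a 1-D lattice of lily pads."""
    rng = random.Random(seed)
    pos, path = 0, [0]
    for _ in range(n_steps):
        step = rng.choice([-1, 1])
        pos = min(max(pos + step, 0), n_pads - 1)  # reflect at the edges
        path.append(pos)
    return path

def empirical_transitions(path):
    """Estimate transition probabilities from an observed state sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(path, path[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nbrs.values()) for b, c in nbrs.items()}
            for a, nbrs in counts.items()}

path = simulate_frog()
trans = empirical_transitions(path)
# A system with an accurate world model of this lattice should only infer
# transitions between adjacent pads; here we verify that locality property.
local = all(abs(a - b) <= 1 for a, nbrs in trans.items() for b in nbrs)
print(local)
```

In this toy version, comparing a model’s inferred transition structure against the true lattice plays the role the article ascribes to the inductive-bias metric: checking whether the system’s inferences line up with the real dynamics, not just whether its next-step predictions are accurate.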
‘Understanding’: The Next Leap for AI
The implications of this study are substantial, given AI’s growing role in scientific discovery. Tasks such as predicting the properties of compounds or understanding protein folding demand more than pattern recognition; they require a grasp of underlying principles. This reminded Keyon Vafa of a sobering reality: there’s still a long way to go, even with something as basic as mechanics.
“Foundation models,” large AI systems trained on vast datasets across domains, are stirring up a great deal of excitement. These models are expected to amass domain-specific knowledge that can be applied to new problems. But are we ready for this? Our study prompts us to reconsider, Chang believes. Still, the research provides a way to test whether AI is building accurate world models, a tool invaluable to both developers and scientists.
In a world increasingly shaped by AI, the jump from prediction to understanding could be the next monumental leap, akin to the one from Kepler to Newton. As Chang aptly sums up, once we have a metric, we can optimize it effectively, pointing to a promising path forward for AI.
Check out the original article at MIT News.