Physics is the most basic of the sciences: it seeks to explain nature in the simplest possible terms and to predict how physical systems behave, and by how much a given change will affect them. Over the centuries, many proposed rules have come and gone, but the fundamental laws have endured, and they let us understand and predict the behavior of a vast range of physical systems.
Recently, artificial intelligence (AI), especially Large Language Models (LLMs), has burst onto the scene. While many view these models as groundbreaking tools that mirror human reasoning, experts warn that they lack any true understanding of complex topics such as physics. Conversations dubbed “vibe physics” showcase this misconception: people mistakenly believe that chatting with these models can lead to genuine discoveries in physics, overlooking the limitations of AI in the process.
A 2025 study highlights this misunderstanding. Its authors, Keyon Vafa and colleagues, set out to test AI’s ability to infer fundamental laws of nature: they created small synthetic data sets and challenged LLMs to find underlying laws that might explain the observed patterns. Their findings were revealing.
One key issue is that while LLMs can recognize patterns in familiar data, they struggle when faced with new situations. An AI trained only on English text, for example, may fail on other languages; similarly, LLMs trained on data about certain celestial phenomena falter when asked about entirely different systems. This limitation became clear when the researchers asked neural networks to derive Newton’s law of gravity: despite being trained on data governed by that law, the models failed to produce accurate predictions.
One significant takeaway from the research is that understanding must go deeper than pattern recognition: an AI needs a foundational model that applies across scenarios. The models in the study successfully predicted behaviors consistent with Newton’s laws in limited contexts, but they couldn’t extend those rules to new, broader situations. This showcases the gap between predicting data and deriving fundamental truths.
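The gap between fitting data and learning the underlying law can be sketched with a toy example. The code below is an illustration of the general idea, not a reproduction of the study’s setup: a polynomial fit (a pure pattern-matcher) is trained on accelerations from an assumed inverse-square gravity law over a narrow range of distances. It matches the training data closely, yet falls apart when extrapolated, while the actual law keeps working. All constants and ranges here are arbitrary choices for the sketch.

```python
import numpy as np

# Illustrative sketch: a "pattern" model vs. the underlying law.
# All names and numbers are assumptions, not taken from the study.

G_M = 1.0  # gravitational parameter, arbitrary units


def true_accel(r):
    """The underlying law: inverse-square gravity, a = GM / r^2."""
    return G_M / r**2


# "Train" only on a narrow range of radii, the way a model
# only ever sees a limited slice of the world.
r_train = np.linspace(1.0, 2.0, 50)
a_train = true_accel(r_train)

# Fit a cubic polynomial: a pattern-matcher with no physics inside.
coeffs = np.polyfit(r_train, a_train, deg=3)
pattern_model = np.poly1d(coeffs)

# In-distribution, the fit looks excellent.
in_err = np.max(np.abs(pattern_model(r_train) - a_train))

# Out-of-distribution, extrapolating to r = 10, the fit falls apart
# while the inverse-square law still applies.
r_far = 10.0
out_err = abs(pattern_model(r_far) - true_accel(r_far))

print(f"max in-range error: {in_err:.2e}")
print(f"error at r = {r_far}: {out_err:.2e}")
```

The polynomial is an excellent predictor on the interval it was fitted to and a wildly wrong one outside it, which is the sense in which prediction alone does not amount to having found the law.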
Historically, this issue mirrors the path of scientific discovery itself. Newton’s laws expanded on Kepler’s earlier work: Kepler’s laws described planetary motion well, but they didn’t cover the wider range of gravitational phenomena Newton addressed, such as the motion of rockets and projectiles.
Most importantly, trusting AI to provide accurate scientific insights can be dangerous. The “vibe physics” trend highlights a concerning habit: confidence in unscientific answers based solely on how they sound. That is risky for anyone trying to grasp the truth about physics; formulating a genuine theory requires rigorous calculation and empirical evidence.
In conclusion, while AI can offer valuable insights when harnessed correctly, it shouldn’t be seen as a replacement for human expertise. True understanding requires more than just conversation—it needs a foundation built on facts and scientific rigor. Engaging with AI can be fun, but one must approach it critically, ensuring that reality isn’t sacrificed for appealing yet inaccurate ideas.

