As the echoes of AI advancements reverberated throughout 2023, with ChatGPT leading the charge, Rodney Brooks, a renowned roboticist and former director of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), is cautioning that 2024 may not be the golden age of artificial intelligence (AI) that many anticipate. Brooks, a seasoned expert in the field, has become known for his candid technological predictions, and his latest forecast suggests a potential return to an “AI winter.”
In his annual scorecard, Brooks acknowledges the extraordinary hype surrounding AI in recent years, fueled in particular by the accessibility and utility of technologies like ChatGPT. However, he points to the cyclical nature of AI hype, asserting that we may be heading toward another period of stagnation.
“Get your thick coats now. There may be yet another AI winter, and perhaps even a full-scale tech winter, just around the corner. And it is going to be cold,” warns Brooks.
Despite his cautionary outlook, Brooks is far from a pessimistic observer. With a career in AI and robotics stretching back to the 1970s and a track record of sober technological forecasts, his skepticism is rooted in experience. He emphasizes that the current AI landscape, dominated by large language models (LLMs) and chatbot systems, is following a familiar pattern seen throughout AI's 60-plus years of history.
In particular, Brooks argues that while LLMs like ChatGPT showcase impressive capabilities, they are not on a path to Artificial General Intelligence (AGI). In his view, these systems are proficient at certain tasks but possess no true imagination or substantive understanding.
“[I encourage] people to do good things with LLMs but to not believe the conceit that their existence means we are on the verge of Artificial General Intelligence,” cautions Brooks.
In a detailed interview with IEEE Spectrum, Brooks delves deeper into his critique, pointing out that even advanced LLMs often make mistakes when tasked with relatively simple coding challenges. He emphasizes that their prowess lies in generating plausible-sounding answers rather than in any genuine understanding of the world.
“It doesn’t have any underlying model of the world. It doesn’t have any connection to the world. It is correlation between language,” explains Brooks. “What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.”
Brooks challenges the prevailing belief that these language models represent a significant step toward AGI, asserting that they are sophisticated wordsmiths rather than truly intelligent beings. If his insights prove accurate, the anticipated developments in models like GPT-5 and beyond may not bring us closer to the coveted realm of Artificial General Intelligence.
As the AI community navigates the coming year, Brooks’s warnings serve as a sobering reminder to temper expectations and critically assess the trajectory of AI advancements, emphasizing that there is “much more to life than LLMs.”