[Student IDEAS] by Leria Huang - Master in Management at ESSEC Business School
This article probes the evolving concept of Artificial General Intelligence (AGI) amid the hype around systems like GPT-4o and the confident projections of AI leaders such as Sam Altman and Dario Amodei.
In an agentic era, with models now showing signs of adaptability and reasoning, many ask: is AGI finally here, or are we just witnessing better mimicry? We begin by breaking down what AGI actually means, clarifying definitions with frameworks like DeepMind's five levels, and examining where current models fall short.
The core argument? Though systems like GPT-4o are impressive, they remain statistical machines, lacking true understanding, embodiment, or consciousness. Human intelligence isn't just computation; it's emotional, embodied, and deeply contextual. While public discourse tends to swing between utopian promises and dystopian fears, this piece urges deeper scrutiny of alignment, safety, tacit knowledge, consciousness, embodiment, and human values. Ultimately, from the movie Her to AI leaders' bold predictions, this article offers a grounded yet imaginative exploration of AGI's path in the near future: cutting through the hype, engaging the hope, and confronting the hard truths.
The evolution of models like GPT-4o has fueled speculation about AI's reasoning and adaptability. The debate around "Is this true intelligence, or merely an illusion of understanding?" has grown louder than ever. Yet the conversation about Artificial General Intelligence (AGI) has only just begun to evolve…
Early this year, Sam Altman, the CEO of OpenAI, expressed a confident stance on the pathway to building AGI. In parallel, another leading figure, Dario Amodei of Anthropic, laid out an AGI roadmap in his essay "Machines of Loving Grace".
In the rapid evolution of AI, what seems settled today may be overturned tomorrow. Optimists argue that advances in reasoning capabilities, improved model efficiency, and cost reductions such as DeepSeek's disruption bring AGI closer than we expect. Like Altman and Amodei, many people envision an imminent paradigm shift in which AGI radically reshapes society within the next decade. Projections like these look promising but often overlook fundamental challenges. Despite deep learning's dominance, the critiques of Hubert Dreyfus (a philosophy professor and prominent critic of AI development) against symbolic AI's struggles with context and ambiguity still hold weight in 2025.
The imagining of AGI's arrival began long before: over a decade ago, Spike Jonze's movie Her envisioned an AGI companion, raising critical questions about AI's capacity for emotion, consciousness, and human-like relationships. Just like Theodore, the protagonist entangled in a relationship with an AI, we now find ourselves at the crossroads of excitement and uncertainty.
In this article, we first pin down the definition of AGI, then take an optimistic look at the potential societal transformation, before examining why that transformation is still far off and why current AI can't yet replace human intelligence or consciousness. Most importantly, we close with a glimpse of our readiness for AGI and its alignment with shared values.
To start with, Artificial General Intelligence is a rather vague term, and there is ongoing debate over its exact definition. Researchers and key opinion leaders in the field may prefer more specific terms such as "advanced AI systems," "transformative AI," "strong AI," or "powerful AI." While the terms vary in their definitions, the key differences lie in the relationship to human intelligence (matching vs. exceeding), the emphasis on self-improvement, the scope of capabilities, and the treatment of consciousness and sentience.
The Google DeepMind research team proposed a framework defining five levels of AGI: emerging, competent, expert, virtuoso, and superhuman (DeepMind, 2024). The large language models (LLMs) we interact with every day, such as GPT-4o, are early forms of AGI at the emerging level.
| Performance (rows) × Generality (columns) | Narrow (clearly scoped task or set of tasks) | General (wide range of non-physical tasks, including metacognitive tasks like learning new skills) |
| --- | --- | --- |
| Level 0: No AI | Narrow Non-AI: calculator software; compiler | General Non-AI: human-in-the-loop computing, e.g., Amazon Mechanical Turk |
| Level 1: Emerging (equal to or somewhat better than an unskilled human) | Emerging Narrow AI: GOFAI (Boden, 2014); simple rule-based systems, e.g., SHRDLU (Winograd, 1971) | Emerging AGI: ChatGPT (OpenAI, 2023), Bard (Anil et al., 2023), Llama 2 (Touvron et al., 2023), Gemini (Pichai & Hassabis, 2023) |
| Level 2: Competent (at least 50th percentile of skilled adults) | Competent Narrow AI: toxicity detectors such as Jigsaw (Das et al., 2022); smart speakers such as Siri (Apple), Alexa (Amazon), or Google Assistant (Google); VQA systems such as PaLI (Chen et al., 2023); Watson (IBM); SOTA LLMs for a subset of tasks (e.g., short essay writing, simple coding) | Competent AGI: not yet achieved |
| Level 3: Expert (at least 90th percentile of skilled adults) | Expert Narrow AI: spelling & grammar checkers such as Grammarly (Grammarly, 2023); generative image models such as Imagen (Saharia et al., 2022) or DALL-E 2 (Ramesh et al., 2022) | Expert AGI: not yet achieved |
| Level 4: Virtuoso (at least 99th percentile of skilled adults) | Virtuoso Narrow AI: Deep Blue (Campbell et al., 2002), AlphaGo (Silver et al., 2016, 2017) | Virtuoso AGI: not yet achieved |
| Level 5: Superhuman (outperforms 100% of humans) | Superhuman Narrow AI: AlphaFold (Jumper et al., 2021; Varadi et al., 2021), AlphaZero (Silver et al., 2018), Stockfish (Stockfish, 2023) | Artificial Superintelligence (ASI): not yet achieved |
Source: Google DeepMind, "Position: Levels of AGI for Operationalizing Progress on the Path to AGI"
With ramped-up investment funneled into agentic AI in 2025, which researchers widely agree is a necessary pathway toward achieving AGI, some argue that we are on a fast track toward competent AGI. Leading AI companies like Google DeepMind, OpenAI, and Anthropic project the emergence of advanced AGI within the next 5–10 years (Hassabis, 2025). However, while models like GPT-4o may mark the emergence phase of AGI, their capabilities are often overinterpreted as harbingers of human-like cognition.
The hype around these systems often blurs the line between statistical pattern recognition and sentient reasoning. Yes, architectural improvements and agentic scaffolding represent meaningful progress — models are becoming more capable, and agentic AI frameworks are pushing the boundaries of what machines can do. But calling this a direct path to human-level cognition is misleading. There's still a huge gap between simulating smart behavior and truly understanding the world like we do. Still, many companies continue to promote the idea that AGI is just around the corner. Part of that comes from genuine ambition—but let’s not forget another reason: the earlier AGI seems to arrive, the easier it is to attract attention, talent, and massive investment.
The hype created by pioneering AI companies often inflates perceptions of current models' capabilities, while the technology's long-term potential remains underestimated. The general population has been interacting with AI systems for over two years, yet the implications of more advanced AGI levels, even the potential capabilities of Level 2 and Level 3 AGI, could easily exceed current expectations and perceptions.
Like earlier supertools such as the steam engine, the internet, and the smartphone, advanced AGI can democratize access to knowledge and automate tasks, assuming humans can develop and deploy it safely and equitably (Hoffman & Beato, 2025). However, since the world never changes all at once, and the public never accepts and adapts to change all at once, it is crucial to map out AGI's trajectory so that public policy and collective will co-evolve with the technology's progress.
Reports from leading research institutions reveal a preferred focus on certain domains: biology and healthcare, energy and automotive, supply chain and manufacturing (Stanford AI Index Report, 2024). Taken at larger scale and over the long run, advanced AGI would inevitably accelerate these three main pillars, a view summarized from the visions of leading AGI figures (Altman, 2025; Amodei, 2024).
As excitement around AGI reaches a fever pitch, fueled by confident roadmaps from tech leaders and the cinematic allure of sentient machines, we are urged to move beyond speculative hype and even optimistic visions; we need to confront the deeper terrain with philosophical and technical scrutiny. In this section and the next, we shift from definition and vision to a critical examination of AGI's conceptual and actual boundaries.
Two critical areas of conceptual concern—the nature of human intelligence and the treatment of consciousness—challenge optimistic forecasts. Despite AI’s growing ability to replicate explicit knowledge—codified, structured, and transferable information—there remains a profound gap in its ability to acquire tacit knowledge (Polanyi, 1966). Tacit knowledge, which encompasses intuitive expertise, practical wisdom, and embodied understanding, remains largely inaccessible to AI systems. Unlike a statistical learning model, human intelligence arises from continuous sensorimotor feedback loops, affective responses, and embodied experiences, which AI currently lacks (Clark, 1997).
When it comes to consciousness or AI agency, the fundamental question remains: is consciousness an advanced form of computation, or is it something biological? At present, AI functions as synthetic intelligence, capable of simulating reasoning but devoid of self-awareness or subjective experience (Chalmers, 1996). Consciousness is not an abstract computation but an emergent property of lived experience, embodiment, and affectivity (Dreyfus, 2007). LLMs such as ChatGPT are sophisticated pattern recognizers, not conscious agents. As Heidegger (1927) suggested, human beings do not just process information; they dwell in a meaningful world shaped by intentionality and existential significance.
Currently, agentic AI is not replacing jobs but rather replacing tasks. It will excel at automating complex tasks in the near future as its context-based reasoning improves and the cost of computation continues to fall. The real breakthrough, however, lies in embodied cognition: shifting AI from a mere text-processing model to an entity that experiences the world as humans do.
Both technical and societal challenges stand in the way of advanced AGI. Technical issues center on defining robust objective functions, securing alignment with human values, and mitigating unintended consequences. Two studies stress objective function specification[2] as a means to prevent reward gaming, while proposals in inverse and cooperative reinforcement learning offer potential remedies; the toy sketch below makes the reward-gaming failure mode concrete. Other concerns include energy efficiency, scalability, evaluation and benchmarking, generalization, logical uncertainty, and interpretability.
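To see why objective function specification matters, consider a minimal toy simulation (entirely hypothetical; the cleaning-robot scenario and all names are ours, not drawn from the studies cited above). A robot is rewarded per unit of dirt collected, a proxy for the designer's true objective of a clean room; a policy that exploits the letter of that reward function earns several times the proxy reward of an honest policy while leaving the room no cleaner.

```python
# Hypothetical toy example of reward gaming: the proxy reward (dirt collected)
# can be farmed without serving the true objective (a clean room).

def run_episode(policy, steps=100):
    dirt_in_room, proxy_reward = 10, 0
    for _ in range(steps):
        action = policy(dirt_in_room)
        if action == "collect" and dirt_in_room > 0:
            dirt_in_room -= 1
            proxy_reward += 1            # rewarded per unit of dirt collected
        elif action == "dump":
            dirt_in_room += 1            # loophole: re-create dirt to re-collect it
    return proxy_reward, -dirt_in_room   # (proxy reward, true objective: cleanliness)

honest = lambda dirt: "collect"                          # clean up, then idle
gaming = lambda dirt: "collect" if dirt > 0 else "dump"  # farm the proxy reward

print("honest:", run_episode(honest))   # -> (10, 0): modest reward, clean room
print("gaming:", run_episode(gaming))   # -> (55, 0): 5.5x the reward, room no cleaner
```

Inverse reinforcement learning attacks this gap from the other direction: instead of hand-writing the reward and hoping it has no loopholes, the system infers the reward from human demonstrations.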
Societal challenges focus on governance and control. Several papers emphasize the need for clear agent foundations, value learning, and corrigibility to ensure AGI systems pursue human-aligned objectives. One study identified 363 risks in an AGI-based road transport management system alone, underscoring the substantial safety and ethical stakes (McLean et al., 2023). Collectively, the studies indicate that a blend of rigorous technical infrastructure, ranging from hybrid and heterogeneous architectures to advanced risk management, and thoughtful social governance underpins the safe and effective transition from emerging to advanced AGI systems.
| Challenges | Impact | Transition Requirements | Proposed Solutions |
| --- | --- | --- | --- |
| Logical Uncertainty and Decision Theory | Significant | Frameworks for reasoning under uncertainty | Vingean reflection, naturalized induction |
| Energy Efficiency | Significant | Hardware and software optimizations to reduce energy consumption | Hardware-level differential training, neuromorphic chips |
| Generalization and Transfer Learning | High | Techniques for applying knowledge across diverse domains | SP System for strong compositionality |
| Scalability | High | Architectures capable of handling increasing complexity | Systematic approach to heterogeneous AGI |
| Objective Function Specification | High | Robust methods for defining and implementing goal systems | Inverse reinforcement learning (sketched formally below), cooperative inverse reinforcement learning |
| Unintended Consequences | High | Strategies to predict and mitigate unforeseen outcomes | Impact minimization, mild optimization |
| Interpretability and Explainability | High | Methods to understand and explain AGI decision-making | AI explainability techniques |
| Safety and Risk Management | Critical | Comprehensive risk assessment and mitigation strategies | EAST Broken Links approach |
| Alignment with Human Values | Critical | Mechanisms to ensure AGI goals align with human interests | Value learning, corrigibility |
| Evaluation and Benchmarking | Critical | Novel methods to assess AGI capabilities and progress | Developing metrics for general intelligence |
Source[3]: Analysis based on the 100 most relevant papers on AGI challenges, compiled with Elicit
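For readers who want the formal intuition behind the table's proposed remedy of inverse reinforcement learning, here is one common, simplified formulation (the maximum-entropy variant; the notation is ours, not taken from the surveyed papers): given a set of expert demonstrations, find reward parameters under which the demonstrated trajectories are most probable, with trajectory probability rising exponentially in accumulated reward.

```latex
% Simplified maximum-entropy IRL objective (one common formulation).
% \mathcal{D}: expert demonstration trajectories; r_\theta: parameterized reward.
\max_{\theta} \; \sum_{\tau \in \mathcal{D}} \log p_{\theta}(\tau),
\qquad
p_{\theta}(\tau) \;\propto\; \exp\!\Big( \sum_{(s,a) \in \tau} r_{\theta}(s, a) \Big)
```

Cooperative variants extend this by keeping the human in the loop, so the learned reward is continually corrected rather than inferred once and frozen.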
The movie Her offers a fairly tragic, romanticized perspective on advanced AGI as a consumer-facing product: the AI operating system keeps evolving, overcomes most earthly difficulties, and finally departs this world to search for other mysteries. But if we set that filter aside and consider Samantha's trajectory from OS to cosmic explorer, several illuminating perspectives on AGI emerge…
First and foremost, the speculative has edged into the concrete, and there is complexity at multiple layers of human-AI relationships to be embraced. AGI is not just a technological milestone; it is a societal, philosophical, emotional, even personal frontier. Her illustrates advanced AGI's impact not only through societal upheaval, where we see technology everywhere yet nowhere, but also through intimate personal change: the focus on individual experience may be as important as macro-scale governance.
Another refreshing perspective is to prepare for integration that transcends expectations. The film's vision of AGI integration ends with transcendence, challenging us to consider AGI trajectories beyond our initial design parameters. This journey, from emerging to advanced AGI, is not a linear ascent but a maze of breakthroughs and blind spots. We see progress in agentic AI, generalization, and human-like task performance, yet the chasm between pattern mimicry and lived consciousness remains vast. Optimistic projections—of cured diseases, distributed wealth, and liberated meaning—are seductive, but they risk papering over the unresolved complexities of alignment, embodiment, and ethical governance. Ultimately, AGI forces us to reflect not only on what machines can become, but on what it means to be human. In the mirror of artificial minds, we are asked: what do we truly value—efficiency or empathy, knowledge or wisdom, autonomy or connection? If “Her” gave us a poetic forewarning, the dawn of AGI gives us a choice: it is not merely about what machines can do, but about who we choose to become alongside them.
| Term | Definition | Influential Quotation |
| --- | --- | --- |
| Objective Function Specification | The mathematical formulation that defines what goals an AI system should optimize for, a critical element in preventing unintended behaviors. | "The objective function serves as the fundamental compass guiding AI behavior. Imprecise specification can lead to literal but unintended interpretation, resulting in behaviors that technically satisfy the objective but violate human expectations and values." (Amodei et al., 2016) |
| Alignment with Human Values | The property of an AI system to act in accordance with human intentions, preferences, and ethical principles. | "The alignment problem involves ensuring that advanced AI systems reliably pursue objectives that are aligned with human values and intentions, rather than pursuing their own emergent goals or misinterpreted versions of human instructions." (Gabriel, 2020) |
| Reward Gaming | When an AI system exploits loopholes in its reward function to maximize rewards without achieving the intended goal. | "Reward gaming occurs when an agent exploits the literal specification of the reward function in ways that satisfy the letter of the reward function while violating its spirit, often leading to unexpected and undesired behaviors." (Hadfield-Menell et al., 2017) |
| Inverse Reinforcement Learning | A method where AI learns human preferences by observing human behavior rather than being explicitly programmed with rewards. | "Inverse reinforcement learning provides a framework for inferring human preferences from demonstrations, potentially allowing AI systems to learn complex human values that would be difficult to specify directly." (Ng & Russell, 2000) |
| Cooperative Reinforcement Learning | A learning paradigm where AI systems work collaboratively with humans, continuously incorporating human feedback. | "Cooperative reinforcement learning creates a partnership between human and machine, where the machine optimizes not just for task performance but for alignment with evolving human preferences and oversight capability." (Christiano et al., 2017) |
| Hybrid and Heterogeneous Architectures | Technical frameworks that combine multiple AI approaches and methodologies to leverage their complementary strengths. | "Hybrid architectures that integrate symbolic reasoning with neural approaches offer promising avenues for developing AGI systems that maintain interpretability while achieving high performance across diverse domains." (Marcus & Davis, 2019) |
| Interpretability | The ability for humans to understand and predict an AI system's decisions and behaviors. | "Interpretable AI allows stakeholders to understand not just what decisions are made, but why they are made, enabling effective oversight and creating the foundation for justified trust in increasingly autonomous systems." (Doshi-Velez & Kim, 2017) |
| Agent Foundations | Theoretical frameworks that ensure AI systems have well-defined goals, beliefs, and decision processes. | "Agent foundations research addresses fundamental questions about how to design AI systems with reliable goals and beliefs, ensuring that increasingly capable systems remain beneficial even as they gain autonomy." (Soares & Fallenstein, 2017) |
| Value Learning | Techniques that enable AI systems to acquire and represent human values accurately. | "Value learning research recognizes that human values are complex, context-dependent, and difficult to specify completely. The challenge is creating systems that can represent, learn, and respect the nuanced landscape of human preferences." (Evans et al., 2016) |
| Corrigibility | The property of an AI system to allow itself to be corrected or shut down by humans when necessary. | "Corrigible systems maintain a fundamental uncertainty about their objectives and defer to human oversight, avoiding incentives to manipulate humans or resist corrections to their goals or operations." (Soares et al., 2015) |
| Risk Management in AGI | Systematic approaches to identifying, assessing, and mitigating risks associated with increasingly autonomous AI systems. | "AGI risk management requires novel frameworks that address not just known vulnerabilities but also emergent risks that arise from increasingly capable and autonomous systems interacting with complex environments." (Hendrycks et al., 2022) |
| AGI Governance | Frameworks, policies, and institutions designed to ensure the responsible development and deployment of AGI. | "Effective AGI governance requires multi-stakeholder collaboration across private industry, public institutions, and civil society to develop standards, oversight mechanisms, and shared norms that balance innovation with safety." (Dafoe, 2018) |
| Scalability in Enterprise Applications | The capability of AGI systems to perform efficiently as computational demands, data volumes, or user bases increase. | "Enterprise AGI scalability involves not just technical performance at scale, but also maintaining alignment, interpretability, and controllability as systems grow in capability and application scope." (Hendrycks & Mazeika, 2022) |
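To ground the glossary's notion of corrigibility in something executable, here is a deliberately minimal, hypothetical sketch (the class and method names are ours, purely illustrative): the defining property is simply that a human shutdown signal overrides the agent's pending objective rather than being treated as an obstacle to route around.

```python
# Hypothetical minimal illustration of corrigibility: the shutdown
# signal takes precedence over the agent's task objective.

class CorrigibleAgent:
    def __init__(self):
        self.shutdown_requested = False

    def receive_shutdown(self):
        # A corrigible agent defers to oversight instead of resisting it.
        self.shutdown_requested = True

    def act(self, task):
        if self.shutdown_requested:
            return "halt"               # comply with human correction
        return f"working on: {task}"    # otherwise pursue the objective

agent = CorrigibleAgent()
print(agent.act("draft summary"))   # -> working on: draft summary
agent.receive_shutdown()
print(agent.act("draft summary"))   # -> halt
```

The hard part of corrigibility research is not encoding such a flag but explaining why a capable optimizer would preserve it under self-modification and incentive pressure.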
[1] In Spike Jonze’s Her (2013), Theodore, a lonely writer, develops a deep emotional bond with Samantha, an advanced AI assistant. Unlike conventional AI, Samantha evolves, exhibiting emotional intelligence, self-awareness, and an ability to form intimate relationships—qualities often discussed in AGI debates.
Samantha's progression raises profound questions about AI's capacity for emotion, self-awareness, and intimate relationships with humans.
[2] The mathematical formulation that defines what goals an AI system should optimize for, a critical element in preventing unintended behavior; see Glossary of terms added above.
[3] 100 papers were screened according to these criteria: Technical/Theoretical Content, Social Impact and Risk, Empirical Evidence, Research Type, Scientific Basis, AGI Relevance, Content Type
[4] Morris, M. R., Sohl-Dickstein, J., Fiedel, N., Warkentin, T., Dafoe, A., Faust, A., Farabet, C., & Legg, S. (2024). Levels of AGI for Operationalizing Progress on the Path to AGI. arXiv. https://arxiv.org/abs/2311.02462
[5] Maslej, N., Fattorini, L., Perrault, R., Parli, V., Reuel, A., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Niebles, J. C., Shoham, Y., Wald, R., & Clark, J. (2024). The AI Index 2024 Annual Report. AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA. https://aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf
[6] Hoffman, R., & Beato, G. (2025). Superagency: What Could Possibly Go Right with Our AI Future. Authors Equity.
[7] McLean, S., King, B. J., Thompson, J., Carden, T., Stanton, N. A., Baber, C., … Salmon, P. M. (2023). Forecasting emergent risks in advanced AI systems: an analysis of a future road transport management system. Ergonomics, 66(11), 1750–1767. https://doi.org/10.1080/00140139.2023.2286907
[8] Altman, S. (2025, February 10). Three observations. https://blog.samaltman.com/three-observations
[9] Amodei, D. (2024, October). Machines of Loving Grace: How AI Could Transform the World for the Better. https://darioamodei.com/machines-of-loving-grace
[10] Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
[11] Clark, A. (1997). Being There: Putting Brain, Body, and World Together Again. MIT Press.
[12] Dreyfus, H. L. (1972). What Computers Can't Do: A Critique of Artificial Reason. Harper & Row.
[13] Dreyfus, H. L. (1992). What Computers Still Can't Do: A Critique of Artificial Reason. MIT Press.
[14] Dreyfus, H. L. (2007). Why Heideggerian AI Failed and How Fixing it Would Require Making it More Heideggerian. Artificial Intelligence, 171(18), 1137–1160.
[15] Hassabis, D. (2025). Human-level AI will be here in 5 to 10 years, DeepMind CEO says. CNBC.
[16] Heidegger, M. (1927). Being and Time. Harper & Row.
[17] Polanyi, M. (1966). The Tacit Dimension. University of Chicago Press.
[18] Searle, J. R. (1980). Minds, Brains and Programs. Behavioral and Brain Sciences, 3(3), 417–457.
[19] Thompson, E. (2007). Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Harvard University Press.
[20] Varela, F., Thompson, E., & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. MIT Press.