## Tags
- Part of: [[Artificial Intelligence]], [[Intelligence]]
- Related:
- Includes:
- Additional:
## Technical summaries
- Artificial general intelligence (AGI) is a theoretical type of [[artificial intelligence]] (AI) that falls within the lower and upper limits of human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial [[superintelligence]] (ASI) refers to types of intelligence ranging from marginally exceeding the upper limits of human intelligence to surpassing human cognitive capabilities by orders of magnitude. AGI is considered one definition of strong AI.
- An AGI's intelligence may be comparable to, match, differ from, or even appear alien-like relative to [[biological intelligence|human intelligence]], encompassing a spectrum of possible cognitive architectures and capabilities that includes the spectrum of human-level [[intelligence]].
## Main resources
- [Artificial general intelligence - Wikipedia](https://en.wikipedia.org/wiki/Artificial_general_intelligence)
<iframe src="https://en.wikipedia.org/wiki/Artificial_general_intelligence" allow="fullscreen" allowfullscreen="" style="height:100%;width:100%; aspect-ratio: 16 / 5; "></iframe>
- [Alan’s conservative countdown to AGI – Dr Alan D. Thompson – LifeArchitect.ai](https://lifearchitect.ai/agi/)
## Definitions
- [\[2311.02462\] Levels of AGI for Operationalizing Progress on the Path to AGI](https://arxiv.org/abs/2311.02462)
1. AGI according to OpenAI's charter: "highly autonomous systems that outperform humans at most economically valuable work". In practice, most people currently deviate from this definition and take it to mean only digital work [- Karpathy on X](https://x.com/karpathy/status/1834641096905048165)
2. AGI (general definition): "Artificial General Intelligence (AGI) is an important and sometimes controversial concept in computing research, used to describe an AI system that is at least as capable as a human at most tasks."
3. AGI according to Legg and Goertzel: "a machine that is able to do the cognitive tasks that people can typically do"
4. AGI according to Shanahan: "artificial intelligence that is not specialized to carry out specific tasks, but can learn to perform as broad a range of tasks as a human"
5. AGI according to Marcus: "shorthand for any intelligence (there might be many) that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence"
6. AGI according to Aguera y Arcas and Norvig: They suggest that state-of-the-art language models already are AGIs, arguing that generality is the key property of AGI, and that because language models can discuss a wide range of topics, execute a wide range of tasks, handle multimodal inputs and outputs, operate in multiple languages, and "learn" from zero-shot or few-shot examples, they have achieved sufficient generality.
7. Competent AGI (Level 2 in the paper's taxonomy): A system that has "at least 50th percentile of skilled adults" performance on a wide range of non-physical tasks, including metacognitive tasks.
8. ASI (Artificial Superintelligence): Defined in the paper's taxonomy as "Level 5: Superhuman" - a system that "outperforms 100% of humans" on a wide range of non-physical tasks, including metacognitive tasks.
[[Images/6ec7f6eb8791ef53417805032132fc4f_MD5.jpeg|Open: Pasted image 20240919014547.png]] (each system is a discrete point on these spectra)
![[Images/6ec7f6eb8791ef53417805032132fc4f_MD5.jpeg]]
- [OpenAI Sets Levels to Track Progress Toward Superintelligent AI - Bloomberg](https://www.bloomberg.com/news/articles/2024-07-11/openai-sets-levels-to-track-progress-toward-superintelligent-ai?srnd=technology-vp)
[[Images/238d28e026fa58e7787b1e630bb3732e_MD5.jpeg|Open: Pasted image 20240919015646.png]]
![[Images/238d28e026fa58e7787b1e630bb3732e_MD5.jpeg]]
- [9 definitions of Artificial General Intelligence (AGI) and why they are flawed by Carlos E. Perez](https://x.com/IntuitMachine/status/1721845203030470956)
1. The Turing Test
Flaw: LLMs have already passed Turing tests, but only after prompt engineering that made them appear less capable: less knowledgeable, less verbose, more casual, with broken grammar, etc., so they would pass as ordinary people rather than experts.
Flaw: Focuses on fooling humans rather than intelligence, easy to game by producing human-like text without intelligence.
2. Strong AI - Consciousness
Limitation: No agreement on measuring machine consciousness. Focus on vague concepts rather than capabilities.
3. Human Brain Analogy
Limitation: While loosely inspired by the brain, successful AI need not strictly mimic biology. Overly constrains mechanisms.
4. Human Cognitive Task Performance
Limitation: What tasks? Which people? Lacks specificity and measurement.
5. Ability to Learn Tasks
Strength: Identifies learning as important AGI ability.
Limitation: Still lacks concrete measurement.
6. Economically Valuable Work
Limitation: Misses non-economic values of intelligence like creativity. Requires deployment.
7. Flexible & General - Coffee Test
Strength: Concrete example tasks.
Limitation: Proposed tasks may not fully define AGI.
8. Artificial Capable Intelligence
Strength: Emphasizes complex, multi-step real-world tasks.
Limitation: Focuses narrowly on profitability.
9. LLMs as Generalists
Limitation: Lacks performance criteria - generality alone insufficient.
An AGI definition based on 6 principles
1. Focus on capabilities, not processes
Avoid requiring things like human-like thinking or consciousness which are vague, controversial concepts. Focus just on demonstrated abilities.
Example: An AI that passes the Turing Test by generating human-like text may not actually "think" like a human.
2. Focus on generality and performance
True intelligence requires both breadth of abilities (generality) and level of skill (performance).
Example: An AI that achieves human-level performance playing chess has high performance on only a narrow task.
3. Focus on cognitive and metacognitive tasks
Physical capabilities like robotics seem less central to intelligence than mental capabilities. But learning is important.
Example: An AI that can learn to carry out new tasks demonstrates an important cognitive ability.
4. Focus on potential, not deployment
Don't require real-world use, just demonstrate capabilities under testing conditions. This avoids non-technical hurdles.
Example: Waymo AI drives cars autonomously but isn't widely deployed due to legal issues. The capability still exists.
5. Focus on ecological validity
Choose benchmark tasks that actually represent skills humans value, not just easy to measure skills.
Example: Holding a natural conversation reflects general linguistic intelligence better than an optimized dialogue benchmark does.
6. Focus on the path to AGI, not a single endpoint
Motivation: A leveled taxonomy allows more nuanced discussion of progress and risks vs treating AGI as a threshold.
- [\[2406.04268\] Open-Endedness is Essential for Artificial Superhuman Intelligence](https://arxiv.org/abs/2406.04268): Open-Endedness: Continuous self-improvement, creativity, and the generation of novel solutions beyond human imagination or ability.
## Idealizations
- [[Intelligence#Idealizations]]
- ![[Intelligence#Idealizations]]
## Other definitions of intelligence
- [[Intelligence#Definitions]]
- ![[Intelligence#Definitions]]
## Future
- [[Computronium]]
- From [The Singularity Is Nearer - Wikipedia](https://en.wikipedia.org/wiki/The_Singularity_Is_Nearer) by [[Ray Kurzweil]]:
[[Images/4ee554bf075eb3a5879c61c1d14e1e51_MD5.jpeg|Open: Pasted image 20240919001041.png]]
![[Images/4ee554bf075eb3a5879c61c1d14e1e51_MD5.jpeg]]
## Brainstorming
Artificial general intelligence, AGI. Most of the mainstream sees it as AI that has human-like cognitive abilities. I prefer to see it as AI that is able to generalize better, regardless of how a person generalizes and whatever other cognitive abilities a human has, which I think makes more sense given the name. I would rather call the first one artificial human intelligence. And instead of "artificial" I would use machine/digital/silicon intelligence, because in my opinion it is not an intelligence that is "artificial", but one that runs on a different substrate with different and variously similar mechanisms.
"
I have a lot of issues with the term "AGI". I would redefine it.
People say that we're heading towards artificial general intelligence (AGI), but by that most people usually mean machine human-level intelligence (MHI) instead: a machine that performs human digital and/or physical tasks as well as humans. And by artificial superintelligence (ASI), people mean machine superhuman intelligence (MSHI), which is even better than humans at human tasks.
I think a lot of research goes towards very specialized machine narrow intelligences (MNI), which are often superhuman at very specific tasks, such as playing games (AlphaZero) or protein folding (AlphaFold), and a lot of research also goes towards machine general intelligence (MGI), which will be much more general than human intelligence (HI), because humans are IMO very specialized biological systems within our evolutionary niche, in our everyday tasks and mathematical abilities, and other organisms are differently specialized, even though we still share a lot. Plus there is just some overlap between biological and machine intelligence.
And I wonder whether the emerging reasoning systems like o3 are actually becoming more similar to humans, or more alien compared to humans, as they might better adapt to novelty and be more general than previous AI systems, which might bring them closer to humans, but in slightly different ways than humans. They may be able to do self-correcting chain-of-thought search endlessly, which is better for a lot of tasks, and I think a big part of this is also a big part of human cognition, but humans still work differently.
I think that the generality of an intelligent system is a spectrum, and each system is differently general over different families of tasks than other systems, which we can see with all the current machine and biological intelligences, all of which are differently general over different families of tasks. That's why "AGI" feels much more continuous than discrete to me, and I think it also matters over which families of tasks you generalize.
Chollet's definition of intelligence as the efficiency with which you operationalize past information in order to deal with the future, which can be interpreted as a conversion ratio, is really good I think, and his ARC-AGI benchmark tries to test for some degree of generality: the ability to abstract over and recombine some atomic core-knowledge priors, so that naive pattern memorization and retrieval cannot succeed.
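The conversion-ratio idea can be sketched informally like this (a simplification of my own, not Chollet's full formalism, which weights tasks by generalization difficulty and measures information in algorithmic-complexity terms):

```latex
\text{Intelligence} \;\approx\; \frac{\text{skill acquired over a scope of tasks}}{\text{priors} + \text{experience}}
```

Higher intelligence then means reaching the same skill from fewer priors and less experience.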
And I really wonder if scoring well on ARC-AGI actually generalizes outside the ARC domain to all sorts of tasks where humans are superior, or where humans are terrible but machines are superior, or where other biological systems are superior, or where everyone is terrible for now. I would suspect so, but maybe not? In software engineering, o1 seems to be better just sometimes? What's happening there? I want more benchmarks!
Pre-o1 LLMs are technically super-surface-level knowledge generalists: lacking technical depth, but having a bigger overview of the whole internet than any human, knowing the high-level correlations of the whole internet, even though their representations are more brittle than the human brain's. But we're much better in agency, in some cases in generality, we can still do more abstract math, etc.; we're better in our evolutionary niche. But for example AlphaZero destroyed us in chess. And when I look at ARC-AGI scores, I see o3 as a system that can adapt to novelty better than previous models, though we can still do much better.
Also, according to some old definitions of AGI, existing AI systems have been AGI for a long time, because they can have a general discussion about basically almost anything (except lacking narrow niche field-specific knowledge and skills, lacking agency, not adapting to novelty like humans, etc.).
Or if we take the AIXI definition of AGI, then a fully general AGI is impossible in practice, as it is not computable and you can only approximate it, since AIXI considers all possible explanations (programs) for its observations and past actions and chooses actions that maximize expected future rewards across all these explanations, weighted by their simplicity (shortness) (Occam's razor) (Kolmogorov complexity).
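Schematically, in Hutter's notation (with history up to cycle $k$, horizon $m$, $U$ a universal Turing machine, and $\ell(q)$ the length of program $q$), AIXI's action selection is:

```latex
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\big( r_k + \dots + r_m \big)
\sum_{q \,:\, U(q,\, a_1 \dots a_m) \,=\, o_1 r_1 \dots o_m r_m} 2^{-\ell(q)}
```

The inner sum over programs is what makes AIXI incomputable: it enumerates every program consistent with the interaction history, weighted by $2^{-\ell(q)}$.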
And AIXI proponents argue that humans and AI systems approximate AIXI in their narrower domains, taking all sorts of cognitive shortcuts to be practical and not take infinite time and resources to decide.
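A crude, computable caricature of this Occam-weighted prediction (my own toy sketch, not from the AIXI literature: "programs" here are just repeating bit patterns, and pattern length stands in for Kolmogorov complexity):

```python
from itertools import product

def consistent(pattern, obs):
    # A "program" here is a repeating bit pattern; it explains obs
    # if obs is a prefix of the pattern repeated indefinitely.
    return all(obs[i] == pattern[i % len(pattern)] for i in range(len(obs)))

def predict_next(obs, max_len=4):
    # Solomonoff-style weighting: every consistent program gets weight
    # 2^-length (shorter explanations count exponentially more), then
    # the next symbol is chosen by weighted vote.
    weights = {0: 0.0, 1: 0.0}
    for length in range(1, max_len + 1):
        for pattern in product([0, 1], repeat=length):
            if consistent(pattern, obs):
                weights[pattern[len(obs) % length]] += 2.0 ** -length
    return max(weights, key=weights.get)

print(predict_next([0, 1, 0, 1, 0]))  # shortest consistent pattern (0, 1) dominates -> 1
```

Even this toy has to cap program length to stay finite, which is exactly the kind of shortcut the approximation argument is about.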
And soon we might create some machine-biology hybrids as well. Then we should maybe start calling them carbon-based intelligence (CI), silicon-based intelligence (SI), and carbon-and-silicon-based intelligence (CSI).
I also guess it depends on how you define the original words, such as generality. Let's say you are comparing the generality of AlphaZero, Claude, o1/o3, and humans. How would you compare them? Do all of them have zero generality if we take the AIXI definition of AGI, for example, which is not computable?
AIXI definition of AGI would also imply that there is no AGI in our current universe and there can never be.
“
## Written by AI (may include factually incorrect information)
Here is a comprehensive map of various definitions and perspectives on artificial general intelligence (AGI):
## Core Concepts
- Human-level AI capable of performing any intellectual task
- AI with general problem-solving abilities across domains
- AI that can learn and adapt to new situations like humans
- AI with self-awareness, consciousness, and common sense reasoning
## Key Capabilities
- Abstract thinking and reasoning
- Transfer learning across domains
- Creativity and innovation
- Natural language understanding
- Sensory perception and motor skills
- Emotional intelligence and social cognition
- Autonomous learning and self-improvement
## Specific Definitions
**OpenAI:** "Highly autonomous systems that outperform humans at most economically valuable work"[3]
**TechTarget:** "Representation of generalized human cognitive abilities in software so that, faced with an unfamiliar task, the AGI system could find a solution"
**Investopedia:** "AI with a human level of cognitive function, including the ability to self-teach"[6]
**arXiv paper:** "Adaptation to open environments according to certain principles using limited resources"[3]
**Wikipedia:** "Intelligence comparable to, matching, differing from, or appearing alien-like relative to human intelligence"[2]
**AWS:** "Software with human-like intelligence and the ability to self-teach"[5]
**Coursera:** "Hypothetical form of artificial intelligence where a machine learns and thinks like a human does"[4]
## Theoretical Approaches
- Symbolic AI (logic and knowledge representation)
- Connectionist AI (neural networks)
- Hybrid approaches combining symbolic and connectionist methods
- Whole brain emulation
- Cognitive architectures
- Universal AI (theoretical foundations)
## Key Characteristics
- Domain-general capabilities
- Autonomous goal-setting and pursuit
- Adaptability to new environments
- Ability to handle uncertainty and ambiguity
- Generalization from limited data
- Causal reasoning and inference
- Meta-learning and introspection
## Evaluation Criteria
- Turing test and variations
- Cognitive decathlon tests
- Real-world problem-solving benchmarks
- Measures of creativity and innovation
- Adaptability to novel situations
- Social and emotional intelligence tests
## Philosophical Perspectives
- Strong AI vs Weak AI distinction
- Chinese Room argument
- Consciousness and qualia debates
- Ethics and rights of AGI systems
- Potential risks and existential threats
## Research Challenges
- Scalable reasoning and knowledge representation
- Unsupervised and few-shot learning
- Commonsense reasoning
- Causal inference
- Robustness and safety
- Interpretability and transparency
- Value alignment and control
## Timelines and Predictions
- Near-term (5-10 years)
- Mid-term (10-50 years)
- Long-term (50+ years)
- Never achievable
## Related Concepts
- Artificial Superintelligence (ASI)
- Transformative AI
- Narrow/Weak AI
- Artificial Consciousness
- Whole Brain Emulation
This map covers a wide range of perspectives on AGI, from specific technical capabilities to broader philosophical questions. The exact definition and path to AGI remain subjects of ongoing research and debate in the AI community.
Citations:
[1] https://www.techtarget.com/searchenterpriseai/definition/artificial-general-intelligence-AGI
[2] https://en.wikipedia.org/wiki/Artificial_general_intelligence
[3] https://arxiv.org/html/2404.10731v1
[4] https://www.coursera.org/articles/what-is-artificial-general-intelligence
[5] https://aws.amazon.com/what-is/artificial-general-intelligence/
[6] https://www.investopedia.com/artificial-general-intelligence-7563858
[7] https://www.scientificamerican.com/article/what-does-artificial-general-intelligence-actually-mean/