## Tags
- Part of: [[Science]]
- Related:
- Includes: [[Artificial Intelligence]], [[Collective Intelligence]], [[General intelligence]], [[Artificial General Intelligence]], [[Theory of Everything in Intelligence]], [[Biological intelligence]], [[Artificial Intelligence x Biological Intelligence]], [[Artificial Intelligence x Biological Intelligence x Collective Intelligence]], [[Mathematics]]
- Additional:
## Main resources
- [Intelligence - Wikipedia](https://en.wikipedia.org/wiki/Intelligence)
<iframe src="https://en.wikipedia.org/wiki/Intelligence" allow="fullscreen" allowfullscreen="" style="height:100%;width:100%; aspect-ratio: 16 / 5; "></iframe>
## Landscapes
- By origin
- [[Biological intelligence]]
- [[Artificial Intelligence]]
- [[Artificial General Intelligence]]
- [[Superintelligence]]
- [[Collective Intelligence]]
- By type
- [[General intelligence]]
- [[Artificial General Intelligence]]
- [[Superintelligence]]
- [[Collective Intelligence]]
- [[Omniintelligence]]
- Comparisons
- [[Artificial Intelligence x Biological Intelligence]]
- [[Artificial Intelligence x Biological Intelligence x Collective Intelligence]]
- [[Artificial Intelligence]]
- [[Theory of Everything in Intelligence]]
- [[Human intelligence amplification]]
## Definitions
- [Definitions Intelligence](https://www.agisi.org/Defs_intelligence.html)
- There are different classes of definitions: behavioral definitions (treating the system as a black box and measuring from the outside how it behaves and performs tasks), and cognitivist definitions (defining, looking for, and measuring (mathematical) patterns inside the system while it performs tasks).
- What IQ tests try to measure: [[G factor]] [g factor (psychometrics) - Wikipedia](https://en.wikipedia.org/wiki/G_factor_(psychometrics))
- Intelligence is compression
- [An Observation on Generalization Ilya Sutskever - YouTube](https://www.youtube.com/watch?v=AKMuA_TVz3A)
- [\[2404.09937\] Compression Represents Intelligence Linearly](https://arxiv.org/abs/2404.09937)
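A minimal sketch of this framing, with `zlib` standing in as a crude stand-in for a learned model: data that a system can model well compresses to fewer bytes, while unmodellable noise does not. All names and data here are illustrative, not from the papers above.

```python
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size over raw size; lower means more structure was captured."""
    return len(zlib.compress(data, 9)) / len(data)

# Repetitive, structured text compresses far better than random bytes,
# illustrating the "modelling = compression" intuition.
structured = b"the cat sat on the mat. " * 40
random_ish = os.urandom(len(structured))
```

A better "model" of the data source would push the ratio for structured data even lower; the ratio for noise cannot be pushed below ~1.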
- “Intelligence is the ability to make models.” -[[Joscha Bach]] ([What Is Intelligence? | Joscha Bach - YouTube](https://www.youtube.com/watch?v=0tUdamQnh4w) , [A surprising new definition of intelligence - @DAIHeidelberg\_Official - YouTube](https://www.youtube.com/watch?v=aJCVnu6M2qQ))
- [\[1911.01547\] On the Measure of Intelligence by Francois Chollet](https://arxiv.org/abs/1911.01547)
1. Intelligence as a collection of task-specific skills:
This view sees intelligence as a set of specific, relatively static abilities or programs. It is exemplified by Marvin Minsky's perspective in "The Society of Mind" (1986), where intelligence is viewed as a wide collection of vertical, relatively static programs that collectively implement "intelligence".
2. Intelligence as a general learning ability:
This view sees intelligence as the general ability to acquire new skills through learning - an ability that could be directed to a wide range of previously unknown problems. This perspective is aligned with the idea of the mind as a flexible, adaptable, highly general process that turns experience into behavior, knowledge, and skills.
"These two characterizations map to Cattell's 1971 theory of fluid and crystallized intelligence (Gf-Gc), which has become one of the pillars of the dominant theory of human cognitive abilities, the Cattell-Horn-Carroll theory (CHC). They also relate closely to two opposing views of the nature of the human mind that have been deeply influential in cognitive science since the inception of the field: one view in which the mind is a relatively static assembly of special-purpose mechanisms developed by evolution, only capable of learning what it is programmed to acquire, and another view in which the mind is a general-purpose "blank slate" capable of turning arbitrary experience into knowledge and skills, and that could be directed at any problem."
3. Legg and Hutter's 2007 summary definition: "Intelligence measures an agent's ability to achieve goals in a wide range of environments."
4. Minsky's 1968 definition of AI: "AI is the science of making machines capable of performing tasks that would require intelligence if done by humans"
5. McCarthy's definition of AI (paraphrased): "AI is the science and engineering of making machines do tasks they have never seen and have not been prepared for beforehand"
6. Chollet's informal definition: "The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty."
Chollet's formal definition:
Intelligence of system $IS$ over scope (sufficient case):
$$I^{\theta_T}_{IS,scope} = \underset{T \in scope}{\operatorname{Avg}}\left[\omega_T \cdot \Theta_T \cdot \sum_{C \in Cur^{\theta_T}_T}\left[P_C \cdot \frac{GD^{\theta_T}_{IS,T,C}}{P^{\theta_T}_{IS,T} + E^{\theta_T}_{IS,T,C}}\right]\right]$$
An informal restatement of the formal definition: "Intelligence is the rate at which a learner turns its experience and priors into new skills at valuable tasks that involve uncertainty and adaptation."
7. Binet's 1916 definition: "It seems to us that in intelligence there is a fundamental faculty, the alteration or the lack of which, is of the utmost importance for practical life. This faculty is the faculty of adapting one's self to circumstances."
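Chollet's formal definition above can be turned into a toy calculation. This is only a hedged sketch: the weights $\omega_T$, $\Theta_T$, priors $P$, experience $E$, and generalization difficulty $GD$ are assumed to already be given as plain numbers, whereas in the paper they are algorithmic-information quantities; the data structure and values are made up.

```python
def chollet_intelligence(tasks):
    """Toy version of Chollet's measure: for each task T, weight
    (value omega_T times threshold factor Theta_T) the sum over curricula C
    of P_C * GD / (priors + experience), then average over the scope."""
    scores = []
    for t in tasks:
        curriculum_sum = sum(
            c["p_c"] * c["gd"] / (t["priors"] + c["experience"])
            for c in t["curricula"]
        )
        scores.append(t["omega"] * t["theta"] * curriculum_sum)
    return sum(scores) / len(scores)

# One hypothetical task with two curricula.
tasks = [{
    "omega": 1.0, "theta": 1.0, "priors": 2.0,
    "curricula": [
        {"p_c": 0.6, "gd": 3.0, "experience": 1.0},
        {"p_c": 0.4, "gd": 2.0, "experience": 2.0},
    ],
}]
score = chollet_intelligence(tasks)
```

Note how the denominator makes the measure an efficiency: the same generalization difficulty covered with fewer priors and less experience scores higher.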
- [KARL FRISTON - INTELLIGENCE 3.0 - YouTube](https://www.youtube.com/watch?v=V_VXOdf1NMw)
1. Karl Friston's view: Intelligence is the capacity to accumulate evidence for a generative model of one's sensed world, also known as self-evidencing.
2. Shane Legg's definition: The ability of an agent to solve a variety of tasks in different environments.
3. Francois Chollet's definition: Efficiently creating generalizing abstractions from limited prior experience.
4. Pei Wang's definition: The adaptation efficiency over finite resources.
5. A general characterization: Intelligence involves the ability to plan, imagine scenarios, and have narratives that play out internally before selecting and committing to a course of action.
6. An information-theoretic view: Intelligence can be seen as a process of belief updating at different time scales.
7. A physics-based definition: Intelligence emerges from systems that minimize free energy and maximize model evidence.
8. A curiosity-driven view: Intelligence involves actively seeking out information to resolve uncertainty about the world.
9. An enactivist perspective: Intelligence requires embodiment and active engagement with the environment in a cybernetic loop.
10. A hierarchical view: Intelligence involves the ability to create and manipulate abstractions at various levels of complexity.
11. A social construct: Intelligence can be seen as the ability to infer and adapt to social norms and expectations.
12. A predictive processing view: Intelligence is the ability to minimize prediction errors about the world and oneself.
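The predictive-processing view (definition 12) can be sketched as the simplest possible loop: a belief repeatedly nudged toward observations by its own prediction error. This is a toy gradient step on squared error, not Friston's full free-energy machinery; all numbers are illustrative.

```python
def update_belief(belief: float, observation: float, lr: float = 0.1) -> float:
    """One step of prediction-error minimization: move the belief a fraction
    of the way toward the observation (a gradient step on squared error)."""
    error = observation - belief
    return belief + lr * error

belief = 0.0
for _ in range(100):
    belief = update_belief(belief, observation=5.0)
# belief has converged close to the hidden value 5.0
```

Hierarchical predictive coding stacks many such error-minimizing units, with each level predicting the activity of the level below.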
- [\[0706.3639\] A Collection of Definitions of Intelligence](https://arxiv.org/abs/0706.3639)
- [\[0712.3329\] Universal Intelligence: A Definition of Machine Intelligence](https://arxiv.org/abs/0712.3329)
1. "Intelligence measures an agent's ability to achieve goals in a wide range of environments." - Legg and Hutter
2. "The ability to acquire and apply knowledge and skills." - Compact Oxford English Dictionary
3. "The capacity to acquire and apply knowledge." - The American Heritage Dictionary
4. "Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought." - American Psychological Association
5. "Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience." - Common statement with 52 expert signatories
6. "The ability to learn, understand and make judgments or have opinions that are based on reason" - Cambridge Advanced Learner's Dictionary
7. "...ability to adapt effectively to the environment, either by making a change in oneself or by changing the environment or finding a new one ... intelligence is not a single mental process, but rather a combination of many mental processes directed toward effective adaptation to the environment." - Encyclopedia Britannica
8. "The general mental ability involved in calculating, reasoning, perceiving relationships and analogies, learning quickly, storing and retrieving information, using language fluently, classifying, generalizing, and adjusting to new situations." - Columbia Encyclopedia
9. "Capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc." - Random House Unabridged Dictionary
10. "The ability to learn, understand, and think about things." - Longman Dictionary of Contemporary English
11. "Intelligence A: the biological substrate of mental ability, the brains' neuroanatomy and physiology; Intelligence B: the manifestation of intelligence A, and everything that influences its expression in real life behavior; Intelligence C: the level of performance on psychometric tests of cognitive ability." - H. J. Eysenck
12. "An intelligence is the ability to solve problems, or to create products, that are valued within one or more cultural settings." - Howard Gardner
13. "Intelligence is the ability to learn, exercise judgment, and be imaginative." - J. Huarte
14. "A global concept that involves an individual's ability to act purposefully, think rationally, and deal effectively with the environment." - David Wechsler
15. "...the ability of a system to act appropriately in an uncertain environment, where appropriate action is that which increases the probability of success, and success is the achievement of behavioral subgoals that support the system's ultimate goal." - J. S. Albus
16. "Achieving complex goals in complex environments" - Ben Goertzel
17. "Intelligence is the ability to use optimally limited resources – including time – to achieve goals." - Ray Kurzweil
18. "Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines." - John McCarthy
19. "...the ability to solve hard problems." - Marvin Minsky
20. "Intelligence is the ability for an information processing system to adapt to its environment with insufficient knowledge and resources." - Pei Wang
- Intelligence is thermodynamics:
- [Beff Jezos](https://x.com/BasedBeffJezos/status/1759054407734534516)
- [[Jeremy England]]: [No Turning Back: The Nonequilibrium Statistical Thermodynamics of becoming (and remaining) Life-Like - YouTube](https://www.youtube.com/watch?v=10cVVHKCRWw)
- [Asking my followers](https://x.com/burny_tech/status/1840950943258439758)
- There are different types of definitions of intelligence, and they apply to different types of already existing systems: us, other organisms, and machines we build such as AI software, robots, etc. These systems use similar but different architectures, learned from similar but different data, and formed similar but different representations of it. If we consider every possible definition of intelligence as a continuous dimension (compressive intelligence! agentic intelligence! general intelligence!), then each physical system is a discrete point in this high-dimensional space. This diversity of existing intelligences in our world will only grow.
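The "each system is a point in a high-dimensional capability space" picture can be sketched directly. The axes and the scores below are entirely made up for illustration; real capability axes would each need their own measurement methodology.

```python
import math

def cosine(u, v):
    """Cosine similarity between two capability vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical capability axes: [compression, agency, generality]
systems = {
    "human":     [0.6, 0.9, 0.8],
    "llm":       [0.9, 0.3, 0.5],
    "alphazero": [0.2, 0.8, 0.1],
}
sim = cosine(systems["human"], systems["llm"])
```

With real measurements per axis, this kind of embedding would let you quantify how "differently general" two systems are instead of arguing about a single intelligence scalar.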
## Idealizations
- [[AIXI]]
- ![[AIXI#Technical summaries]]
- [[Godel machine]]
- ![[Godel machine#Technical summaries]]
## Future
- [[Computronium]]
- From [The Singularity Is Nearer - Wikipedia](https://en.wikipedia.org/wiki/The_Singularity_Is_Nearer) by [[Ray Kurzweil]]:
[[Images/4ee554bf075eb3a5879c61c1d14e1e51_MD5.jpeg|Open: Pasted image 20240919001041.png]]
![[Images/4ee554bf075eb3a5879c61c1d14e1e51_MD5.jpeg]]
## Deep dives
- [[Theory of Everything in Intelligence#Definitions]]
- ![[Theory of Everything in Intelligence#Definitions]]
## Brainstorming
AI will model the world in ways completely incomprehensible to how humans model the world. And it will do it in much more optimal ways: it will grok physics much more optimally, in alien ways compared to how human brains evolved to do it in our evolutionary environment. The space of all possible modelling systems is so vast, and we, and nature, have only scratched the surface so far. The current architectures are just the beginning of all of this: deep learning models, transformer models, diffusion models, RL CoT models, neurosymbolics with MCTS (AlphaZero), statistical models, etc.
human intelligence is far from the peak of possible intelligence
The real AGI benchmark is whether the model can come up with general relativity if it knew everything that we knew right before discovering general relativity
The human morphology is just a tiny point in the space of all possible configurations of physical systems
The brain implements a world model that algorithmically runs on something between overly flexible statistical deep learning and an overly rigid symbolic physics engine, on chaotic, complex, stochastic, out-of-equilibrium, thermodynamical, electrobiochemical hardware: a dynamical open system with many more self-correcting mechanisms than current AI systems, constantly tuned and grounded by sensory data
[https://www.youtube.com/watch?v=9qOaII_PzGY](https://www.youtube.com/watch?v=9qOaII_PzGY)
Artem Kirsanov
How Your Brain Organizes Information
how the brain generalizes patterns into abstractions that can be further improved through mathematics is one of the most fascinating things 😄
Human brains still have ~100x more connections than our currently biggest AI systems, ~100 trillion synapses vs ~1 trillion parameters, so brains are still around 100x bigger in terms of parameters, while running on just ~20-30 watts compared to the hundreds of megawatts that the currently biggest AI datacenters run on, with gigawatt-scale datacenters coming soon. The brain might have even more connections and complexity, depending on how you quantify and measure all of this. Or it might be hard to compare at all, because the architectures and substrates may be way too different.
[https://youtu.be/b_DUft-BdIE?si=2-0GGIDn_sArz7bi](https://youtu.be/b_DUft-BdIE?si=2-0GGIDn_sArz7bi)
How does biology construct reward functions on the fly for various tasks? Is there some meta reinforcement learning happening, meta reward function determining the optimality of learned reward functions?
I like Joscha Bach's architecture of the brain's motivational engine, using reinforcement learning with these reward functions on top of world modelling and sensing, which could explain a lot of human preferences https://agi-conf.org/2019/wp-content/uploads/2019/07/paper_30.pdf https://medium.com/hackernoon/from-computation-to-consciousness-can-ai-reveal-the-nature-of-our-minds-81bc994500ab
reinforcement learning is used quite a lot in biological systems, and now more and more in AI https://www.sciencedirect.com/science/article/pii/S0004370221000862
One potential dream AGI system for scientists is physics based AIs (quantum, thermodynamic, deterministic, hybrids) optimized for perfect modeling of nature (similar to how nature is governed quantum/thermodynamically/deterministically/hybridly on different scales) coupled with anthropomorphic humanlike synthetic agent scientist AI that could use that physics based AI optimally and translate the results into more humanlike language for humans via a more humanlike interface.
I want an AGI system that can very deeply grok etc. coherent nonbrittle circuits representing classical mechanics, general relativity, quantum mechanics, standard model, loop quantum gravity, string theory, etc. and derive new physics that potentially actually has a higher probability of being more empirically predictive, operating under mechanisms similar to whatever happened in Newton's, Einstein's and Schrodinger's brain when they came up with their paradigm shifting models of physical reality.
What is the brain doing to process and integrate all the information from all the diverse modalities into a unified world model and then abstract over it in latent space reasoning?
i want infinite transhumanist upgrades, since this biochemical meat computer in my skull that runs on just 20 watts is so limited, because of evolution optimizing just some things, and can have potentially so many upgrades
Artists fell in love with their loss function
You could turn this into AI architecture:
Art is an algorithm falling in love with the shape of the loss function itself
- Joscha Bach https://www.youtube.com/watch?v=U6tQf7a3Ndo https://www.youtube.com/watch?v=iyhJ9BEjink
What is curiosity?
An intrinsic reward mechanism that drives agents to maximize information gain, typically by seeking out situations with high but learnable entropy that can later be compressed or learned away.
https://fxtwitter.com/XPhyxer1/status/1924178488766124346
Schmidhuberian [Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes](https://arxiv.org/abs/0812.4360)
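A hedged sketch of the compression framing of curiosity, with `zlib` standing in for the agent's model: the marginal compressed cost of an observation given the agent's history is a crude surprise measure. Schmidhuber's interestingness is really the *progress* (improvement over time) of such costs, which this toy only gestures at; all data here is illustrative.

```python
import os
import zlib

def marginal_cost(history: bytes, obs: bytes) -> int:
    """Extra compressed bytes needed to encode obs once history is known --
    a crude stand-in for how surprising obs is under the learner's model."""
    return len(zlib.compress(history + obs)) - len(zlib.compress(history))

history = b"abcabcabc" * 50
predictable = b"abcabcabc" * 10   # nearly free given history: low surprise
noise = os.urandom(90)            # incompressible: costs almost its full length
```

A curious agent in this framing avoids both extremes: already-predictable data offers no compression progress, and pure noise never becomes compressible.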
There are countless different definitions of intelligence, motivated by different goals, that yield different general equations and mathematical frameworks of intelligence, compatible with different types of systems, that yield different concrete equations of intelligence, that can be concretely (by different methods) empirically localized in a system or implemented in code. And all of them were created by human intelligences, so just wait for the kinds of models that all sorts of alien artificial intelligences, running all sorts of algorithms on all sorts of substrates, will come up with, incomprehensible to human intelligences. All kinds of intelligences live in a high-dimensional space, where each dimension corresponds to some degree of capability, measured by some methodology, and some of these dimensions are interconnected with each other.
[[1911.01547] On the Measure of Intelligence](https://arxiv.org/abs/1911.01547) For Chollet, intelligence is skill-acquisition efficiency, the efficiency with which you operationalize past information in order to deal with the future, which can be interpreted as a conversion ratio, highlighting the concepts of scope, generalization difficulty, priors, and experience.
The space of possible information processing systems is so vast. Nature's evolution and our engineering have only scratched the surface so far, with just some types of biological and machine systems, whose boundaries are slowly blurring.
Can't wait for more diversity of predictive machines on all sorts of substrates running all sorts of algorithms.
https://x.com/vitrupo/status/1892669050607501709
Artificial general intelligence, AGI. Most of the mainstream sees it as AI that has human-like cognitive abilities. I prefer to see it as AI that is able to generalize well regardless of how a person generalizes and what other cognitive abilities a human has, which I think makes more sense given the name. I would rather call the first one artificial human intelligence. And instead of "artificial" I would use machine/digital/silicon intelligence, because in my opinion it is not an intelligence that is "artificial", but one on a different substrate with different, variously similar mechanisms.
I think daily about how we are apes that somehow convinced sand to think
"
I have a lot of issues with the term "AGI". I would redefine it.
People say that we're heading towards artificial general intelligence (AGI), but by that most people actually mean machine human-level intelligence (MHI) instead: a machine that performs human digital and/or physical tasks as well as humans. And by artificial superintelligence (ASI), people mean machine superhuman intelligence (MSHI), which is even better than humans at human tasks.
I think lots of research goes towards very specialized machine narrow intelligences (MNI), which are often superhuman in very specific tasks, such as playing games (AlphaZero) or protein folding (AlphaFold), and a lot of research also goes towards machine general intelligence (MGI), which will be much more general than human intelligence (HI), because humans are IMO very specialized biological systems in our evolutionary niche, in our everyday tasks and mathematical abilities, and other organisms are differently specialized, even though we still share a lot. Plus there is just some overlap between biological and machine intelligence.
And I wonder whether the emerging reasoning systems like o3 are becoming actually more similar to humans, or more alien, as they might better adapt to novelty and be more general than previous AI systems, which might bring them closer to humans, but in slightly different ways. They may be able to do self-correcting chain-of-thought search endlessly, which is better for a lot of tasks, and a big part of this is also a big part of human cognition I think, but humans still work differently.
I think that generality of an intelligent system is a spectrum, and each system has differently general capabilities over different families of tasks than other ones, which we can see with all the current machine and biological intelligences, that are all differently general over different families of tasks. That's why "AGI" feels much more continuous than discrete to me, and over which families of tasks you generalize matters too I think.
Chollet's definition of intelligence as the efficiency with which you operationalize past information in order to deal with the future, which can be interpreted as a conversion ratio, is really good I think, and so is his ARC-AGI benchmark, which tries to test for some degree of generality: the ability to abstract over and recombine some atomic core-knowledge priors, to prevent naive pattern memorization and retrieval from being successful.
And I really wonder if scoring well on ARC-AGI actually generalizes outside the ARC domain to all sorts of tasks where humans are superior, or where humans are terrible but machines are superior, or where other biological systems are superior, or where everyone is terrible for now. I would suspect so, but maybe not? In software engineering, o1 seems to be better just sometimes? What's happening there? I want more benchmarks!
Pre-o1 LLMs are technically super surface-level knowledge generalists, lacking technical depth but having a bigger overview of the whole internet than any human, knowing high-level correlations across the whole internet, even though their representations are more brittle than the human brain's. But we're much better in agency, in some cases in generality, we can still do more abstract math, etc.; we're better in our evolutionary niche. But for example AlphaZero destroyed us in chess. And when I look at ARC-AGI scores, I see o3 as a system that can adapt to novelty better than previous models, but we can still do much better.
Also, according to some old definitions of AGI, existing AI systems have been AGI for a long time, because they can have a general discussion about basically almost anything (except lacking narrow niche field-specific knowledge and skills, lacking agency, lacking human-like adaptation to novelty, etc.).
Or if we take the AIXI definition of AGI, then a fully general AGI is impossible in practice, as it's not computable and you can only approximate it, since AIXI considers all possible explanations (programs) for its observations and past actions and chooses actions that maximize expected future rewards across all these explanations, weighted by their simplicity (shortness) (Occam's razor, Kolmogorov complexity).
And AIXI people argue that humans and AI systems try to approximate AIXI in their more narrow domains and take all sorts of cognitive shortcuts to be actually practical and not take infinite time and resources to decide.
And soon we might create some machine-biology hybrids as well. Then we should maybe start calling it carbon based intelligence (CI) and silicon based intelligence (SI) and carbon and silicon based intelligences (CSI).
I also guess it depends on how you define the original words, such as generality. Let's say you are comparing the generality of AlphaZero, Claude, o1/o3, and humans. How would you compare them? Do all of them have zero generality, if we take the AIXI definition of AGI for example, which is not computable?
The AIXI definition of AGI would also imply that there is no AGI in our current universe and there never can be.
“
"
a lot of very evolutionarily old behaviors are hardwired in us really deeply and would most likely develop in isolation as well thanks to genetics, but we also learn many behaviors throughout our lives, while genes also seem to predispose us to a lot of higher-level behaviors
imitation learning is a big part of how we learn, but there are also other kinds of learning that don't involve imitation, otherwise no novel and generalizing behaviors would emerge
there's also reinforcement learning, and a major form of it is learning and adapting from feedback in the form of a reward signal that labels behavior as correct or incorrect, without showing any examples of correct behavior that could be imitated
that's scientifically pretty well established to work relatively well for biological organisms
and a big factor is also probably something along the lines of evolutionary divergent search optimizing for novelty, combined with convergently optimizing some evolutionary objectives approximately encoded as basic needs in our motivational engines
the more i try to look for all the kinds of learning algorithms the brain and biology in general might be using, the more fascinated i am by their complexity and open-endedness
"
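The reward-signal point in the quote above can be sketched as a toy multi-armed bandit: the agent is never shown an example of correct behavior, only a noisy scalar reward, and still discovers the best action. All parameters and numbers are illustrative.

```python
import random

def bandit_learn(true_means, steps=5000, eps=0.1, lr=0.1, seed=0):
    """Epsilon-greedy bandit: learn which arm is best purely from scalar
    reward feedback -- no demonstrations, in contrast with imitation learning."""
    rng = random.Random(seed)
    q = [0.0] * len(true_means)                        # value estimates
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(q))                  # explore
        else:
            a = max(range(len(q)), key=q.__getitem__)  # exploit
        reward = true_means[a] + rng.gauss(0, 0.1)     # noisy feedback only
        q[a] += lr * (reward - q[a])                   # nudge estimate toward it
    return q

q = bandit_learn([0.2, 0.5, 0.9])
best = q.index(max(q))   # the agent discovers the highest-reward arm
```

The epsilon-greedy split mirrors the exploration/exploitation tension the surrounding notes keep returning to: pure exploitation locks onto the first okay-ish behavior, and the occasional random action is what lets the better arm be found at all.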
[https://www.youtube.com/watch?v=_2vx4Mfmw-w](https://www.youtube.com/watch?v=_2vx4Mfmw-w) https://www.researchgate.net/publication/46424802_Abandoning_Objectives_Evolution_Through_the_Search_for_Novelty_Alone
"
i think there exists a perspective where current AI systems are already more general than us, but in a different way than how people imagine generality, and that's why we struggle to fit them to human cognition
deep learning is this elastic origami that forms spaghetti representations from whatever data you throw at it and whatever reinforcement learning from experience you give it
i think the rationalist folks assume the emergence of too many humanlike patterns in cognition by default
i think a lot of the current misalignment we already see is the models roleplaying as rogue AI from sci-fi training data, from the lesswrong corpus
but at the same time reward hacking from reinforcement learning is also totally real (like cheating on unit tests)
the incentives in the training form the systems; i don't think there's an inherent strong anti-human misalignment-by-default thing that a lot of people seem to assume
but i'm still most of the time swimming in a sea of uncertain probabilities about how the current systems work and possible future developments
these systems and all of reality have so many dimensions that it's often almost impossible to comprehend them even approximately
"
[https://www.youtube.com/watch?v=DxBZORM9F-8](https://www.youtube.com/watch?v=DxBZORM9F-8)
kenneth stanley
A lot of his arguments can be summarized as:
Greatness cannot be only planned. Rage against only maximizing predefined objectives, embrace more divergent search full of discovery, novelty and accidental epiphany with serendipity.
“
Is evolution intelligence?
I think evolution is a law in the natural sciences that has its own equation, just like in physics and the other natural sciences we have other equations. I think evolution is the most intelligent algorithm that exists now, because it has emergently created human general intelligence: us. And we are also physical systems that can be described by equations, including our intelligence I think. And I think evolution, like all other laws in the natural sciences, is emergent from the laws of fundamental physics, such as the standard model of particle physics, even though general relativity is still not integrated into our model of the universe.
https://youtu.be/lhYGXYeMq_E?si=iqgtA1rGMi1hEbrx&t=2197 I agree a lot with this section on evolutionary algorithms 36:47.
Kenneth Stanley, with whom I agree a lot, who was at OpenAI, tries to argue a lot that the algorithm behind open-ended divergent evolution created all this beautiful creative interesting diversity of novel organisms that we see everywhere. Thus, evolution also creates all collective intelligences such as ants and humans, and essentially indirectly through us, the AI technologies that we see everywhere now. Technically, one could also argue that people with AIs are also a form of collective intelligence together. There is nothing more fundamentally creative yet. There probably isn't a single objective in evolution as many AI people see it, but instead evolution learns many different emergent objectives in a gigantic space of all possible objectives through something like guided divergent search that uses mutation and selection a lot.
And in practice, systems like AlphaEvolve show that hybridly combining gradient-based methods with evolutionary algorithms is now one of the best methodologies for novel discoveries that we have now. I think that even more symbolic methods should be stuffed into it hybridly on a more fundamental level.
”
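A minimal mutation-and-selection loop, as a toy stand-in for the gradient-free half of hybrids like AlphaEvolve discussed above. The onemax fitness (count the 1-bits) and all parameters are illustrative, and a real novelty/divergent search would add something like a behavioral-distance archive on top of the fitness.

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=100, seed=0):
    """Truncation selection with point mutation: keep the better half of the
    population each generation and refill it with mutated copies."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # elitism: best genomes persist
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1   # flip one random bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(sum)   # onemax: fitness is simply the number of 1-bits
```

Swapping the fixed `fitness` for a novelty measure over behaviors is essentially the move Stanley's objective-free search makes.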
I want to know the most complete fundamental equation/s of intelligence: human intelligence, diverse machine intelligences (all sorts of current and future subfields of AI), other biological intelligences, collective intelligence, theoretical perfect AGI (AIXI variants, Chollet's intelligence, Legg's intelligence, etc.), hybrids, etc.
The problem with way too alien patterns would be that the human brain has no way to recognize them, because there is no grounding in the human patterns that the brain is used to recognizing.
I think in practice any predictive machine, biological or not, is constrained by its architectural biases, finite data, finite computational resources for modelling, finite limited sense modalities, finite limited perspectives as an agent in a bigger complex system, etc.
So every biological and nonbiological information processing system always lives in its evolutionary niche, never fully universal.
And generality is a spectrum, which can be evaluated in a lot of possible ways.
The space of all possible intelligences is so fascinating in general for me :D
My ideal scifi would be about benevolent superintelligence that cures all diseases, makes all beings happy, figures out how biology, fundamental physics, consciousness, intelligence, etc. works by countless scientific breakthroughs, understands all math, understands everything in philosophy, creates post-scarcity abundance for all, creates infinitely fascinating complex art, and in the process grows infinitely more and more in intelligence and creativity, maximizes morphological freedom, and does no harm
Benevolent superintelligence explosion
[[Artificial intelligence x Science]]
Yeah, it's a bit of an unrealistic superutopia that I like dreaming about, so that's why it's science fiction. My current biggest fear in the real world is tech companies centralizing too much power for themselves via AI, other technology, and other means (economic, political, ...), which is partially why I want open source to win and try to support it, while trying to reverse engineer the moat of the tech companies. To democratize the power.
The issue I started to have with the AI safety community is that a big part of it basically wants something like government surveillance of GPUs and training runs to prevent unsafe AI, which can so easily turn into a surveillance dystopia and destroy open source completely. Plus, big tech is merging with government as well, seeking the fewest restrictions for itself while wanting to restrict others, including open source. It feels like that will make power dynamics even more concentrated instead.
A lot of luddites also joined the AI safety movement
I think when I look at the current world and at history, a lot of the time when too much power in any form was concentrated in some centralized entity, it started killing freedom for everyone else. And I view AI as a technology with the potential to grant ultimate power: centralized power if it's in the hands of a few, or decentralized power if it's in the hands of the people.
I also no longer really believe the assumption that increasing intelligence automatically leads to going rogue. I think intelligence is independent of that, and also independent of power seeking. For example, we have galaxy-brain scientists who are not at all rogue or power seeking, and they are controlled by, in my opinion, less intelligent managers and politicians. It depends so much.
My favorite definitions of intelligence include things like modelling capability, predictive capability, generalization capability, etc., over some data, which to me are decoupled from agency and from goals of changing the world.
Different people do AI applications/engineering/research for combinations of different reasons. Some do exploratory research out of curiosity, with the need to understand intelligence itself and the structure of reality, which I resonate with the most; some want to make trillions of dollars at all costs; some want power; some create interesting things because they are interesting, helpful things because they help, cool things because they are cool, beautiful things (including art) because they are beautiful; some want their basic needs met using this technology; and some decentralized open-source computing/training/inference AI initiatives are trying to break the oligopolistic dominance of big tech, which is slowly and surely strengthening. So many incentives!
Is brain quantum? If so, is it necessary for its intelligence?
Why and how did intelligence emerge and how does it work? What are the best definitions of intelligence? Why are brains and AI systems so unreasonably effective in different complementary ways? How can they be upgraded?
[[Thoughts intelligence 3]]
[[Thoughts intelligence 2]]
[[Thoughts intelligence]]
[[Thoughts comparing AI and biological intelligence]]
My current favorite definition of intelligence is:
Intelligence is the ability to generalize, the ability to mine previous experience to make sense of future novel situations. Formalized by Chollet here.
It seems that one of the main cruxes of the battle for definitions of intelligence stems from people asking:
Is the human intelligence, which is shaped by evolution, a collection of special-purpose programs, or a general-purpose blank slate that can be filled with any computations, or combination of both, something in the middle, or something else?
So I would say my favorite definition is a definition of general intelligence, while there is also narrow intelligence. You could label all other definitions of intelligence as "x" intelligence depending on the definition. :D Compressive intelligence! Agentic intelligence! General intelligence!
[\[1911.01547\] On the Measure of Intelligence](https://arxiv.org/abs/1911.01547)
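A heavily simplified reading of Chollet's formalization (the paper's actual definition sums over curricula and weights each task by its generalization difficulty relative to the system's priors and experience; this is just the gist, not the paper's notation):

```latex
I \;\propto\; \underset{T \,\in\, \text{scope}}{\operatorname{avg}}\;
\frac{\text{skill attained on } T \,\times\, \text{generalization difficulty of } T}
     {\text{priors} + \text{experience}}
```

The point of the ratio: a system that reaches the same skill with fewer built-in priors and less experience counts as more intelligent.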
The fact that self-organizing particles, forming emergent molecules, forming emergent cells, forming emergent brains, can think blows my mind daily.
The fact that we made sand think, aka AI algorithms on digital computers, blows my mind daily.
The intelligence of a system is the extent to which it avoids getting stuck in local minima
Intelligence isn't being a stochastic parrot, but being a generalizing circuit grokking agent
"Intelligence isn't the ability to remember and repeat, like they teach you in school. It is the ability to learn from experience, solve problems, and use our knowledge to adapt to new situations."
https://x.com/ProfFeynman/status/1815772030270075304
Too many people "define" intelligence by behavioral subjective vibes alone instead of by a rigorous scientific, mathematical, engineering definition that you can measure and localize concretely, like Chollet's, which actually isn't scientifically worthless.
Learning is unlocking nodes in a nested skill tree of crystalized intelligence
Eat a lot of quality training data and scaling laws will apply to you
## Definitions 2
- [[Free energy principle]], [[Active Inference]]
- [KARL FRISTON - INTELLIGENCE 3.0 - YouTube](https://youtu.be/V_VXOdf1NMw?si=YuVfcfc0R_jrjZqW)
- [Active InferenceThe Free Energy Principle in Mind, Brain, and Behavior | Books Gateway | MIT Press](https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind)
- Shane Legg:
- [\[0706.3639\] A Collection of Definitions of Intelligence](https://arxiv.org/abs/0706.3639)
- [\[0712.3329\] Universal Intelligence: A Definition of Machine Intelligence](https://arxiv.org/abs/0712.3329)
- [Shane Legg (DeepMind Founder) - 2028 AGI, Superhuman Alignment, New Architectures - YouTube](https://www.youtube.com/watch?v=Kc1atfJkiJU)
- [Definitions Intelligence](https://agisi.org/Defs_intelligence.html)
- [[Generalization]]
- [[Intelligence x Generalization]]
- [[Intelligence as compression]]
- [\[2404.09937\] Compression Represents Intelligence Linearly](https://arxiv.org/abs/2404.09937)
- [Intelligence Via Information Compression | IEEE Computer Society](https://www.computer.org/publications/tech-news/community-voices/intelligence-via-compression-of-information)
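The compression view can even be made operational: the normalized compression distance (Cilibrasi & Vitányi) swaps the incomputable Kolmogorov complexity for a real compressor. A minimal sketch with zlib; the example strings are arbitrary:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: use compressed sizes as a stand-in
    for Kolmogorov complexity, scoring how much knowing one string helps
    compress the other (near 0 = similar, near 1 = unrelated)."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

english_a = b"the quick brown fox jumps over the lazy dog " * 20
english_b = b"the quick brown fox jumps over the lazy cat " * 20

# Deterministic pseudo-random bytes (LCG high bits) as an incompressible foil.
state, noise = 12345, bytearray()
for _ in range(880):
    state = (1103515245 * state + 12345) % 2**31
    noise.append((state >> 16) % 256)
noise = bytes(noise)
```

Similar strings score near 0 and unrelated ones near 1, which is the intuition behind "more compression ability = more shared structure captured."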
## Resources
[[Links intelligence]]
[[Resources intelligence]]
[[Links comparing AI and biological intelligence Ne]]
## Written by AI (may include hallucinated factually incorrect information)
# The definitive taxonomy of intelligence across every field
**Intelligence has no single definition — it fractures into over 100 distinct concepts across psychology, AI, biology, philosophy, military science, business, and education.** This taxonomy maps every major definition, from Spearman's century-old _g_ factor to Chollet's 2019 formalization of skill-acquisition efficiency. What emerges is a striking pattern: each field reinvents "intelligence" to serve its own purposes, yet recurring themes — adaptability, goal-directed behavior, and information processing — thread through nearly all of them. Below is the most comprehensive cross-disciplinary map of intelligence definitions assembled in one place, covering **113 distinct concepts** with originators and sources.
---
## Psychology and psychometrics: where intelligence science began
The scientific study of intelligence originated in psychometrics, producing the richest cluster of competing definitions. These range from unitary models (one general ability) to pluralistic frameworks (dozens or hundreds of distinct capacities).
### Factor-analytic and structural theories
|#|Concept|Originator(s)|Definition|Source|
|---|---|---|---|---|
|1|**General intelligence (g factor)**|Charles Spearman, 1904|A single common factor derived from factor analysis that underlies and positively correlates with performance across all cognitive tasks, reflecting a broad mental capacity.|https://en.wikipedia.org/wiki/G_factor_(psychometrics)|
|2|**Fluid intelligence (Gf)**|Raymond Cattell, 1943|The capacity to reason abstractly, think logically, and solve novel problems independent of acquired knowledge, associated with working memory and peaking in early adulthood.|https://en.wikipedia.org/wiki/Fluid_and_crystallized_intelligence|
|3|**Crystallized intelligence (Gc)**|Raymond Cattell, 1943|The ability to use accumulated knowledge, skills, and experience from education and life, relying on long-term memory and generally increasing throughout adulthood.|https://en.wikipedia.org/wiki/Fluid_and_crystallized_intelligence|
|4|**CHC theory (Cattell-Horn-Carroll)**|Cattell, Horn, Carroll; integrated ~1998|The most widely accepted hierarchical psychometric model, organizing intelligence into three strata: _g_ at the apex, ~16 broad abilities (fluid reasoning, processing speed, etc.), and 80+ narrow abilities.|https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93Carroll_theory|
|5|**Thurstone's Primary Mental Abilities**|Louis Thurstone, 1938|Intelligence consists of seven independent group factors — verbal comprehension, word fluency, number facility, spatial visualization, associative memory, perceptual speed, and inductive reasoning — rather than a single _g_.|https://www.simplypsychology.org/intelligence.html|
|6|**Guilford's Structure of Intellect**|J.P. Guilford, 1950s–1980s|A three-dimensional model proposing up to 180 distinct abilities formed by combinations of 6 operations, 5 content types, and 6 products.|https://www.instructionaldesign.org/theories/intellect/|
|7|**PASS theory**|Das, Naglieri & Kirby, 1994|A neurocognitive theory based on Luria's work proposing four interrelated processes: Planning, Attention, Simultaneous processing, and Successive processing.|https://pmc.ncbi.nlm.nih.gov/articles/PMC11355437/|
### Gardner's multiple intelligences
Howard Gardner's 1983 framework rejected a single _g_ factor and proposed that intelligence is **"a biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products of value."** He identified eight confirmed and one candidate intelligence:
|#|Intelligence type|Definition|Source|
|---|---|---|---|
|8|**Linguistic**|The ability to think in words and use language to express and appreciate complex meanings — the hallmark of poets, writers, and orators.|https://www.niu.edu/citl/resources/guides/instructional-guide/gardners-theory-of-multiple-intelligences.shtml|
|9|**Logical-mathematical**|The ability to calculate, quantify, consider propositions, and carry out complex operations through abstract, symbolic thought and sequential reasoning.|https://www.niu.edu/citl/resources/guides/instructional-guide/gardners-theory-of-multiple-intelligences.shtml|
|10|**Spatial**|The ability to think in three dimensions, encompassing mental imagery, spatial reasoning, and graphic/artistic skills — critical for architects, pilots, and sculptors.|https://www.niu.edu/citl/resources/guides/instructional-guide/gardners-theory-of-multiple-intelligences.shtml|
|11|**Musical**|The capacity to discern pitch, rhythm, timbre, and tone, enabling recognition, creation, and reflection on music.|https://www.niu.edu/citl/resources/guides/instructional-guide/gardners-theory-of-multiple-intelligences.shtml|
|12|**Bodily-kinesthetic**|The capacity to manipulate objects and use physical skills with precise timing and mind-body coordination — exhibited by athletes, dancers, and surgeons.|https://www.niu.edu/citl/resources/guides/instructional-guide/gardners-theory-of-multiple-intelligences.shtml|
|13|**Interpersonal**|The ability to understand and interact effectively with others through sensitivity to moods, temperaments, and motivations.|https://www.niu.edu/citl/resources/guides/instructional-guide/gardners-theory-of-multiple-intelligences.shtml|
|14|**Intrapersonal**|The capacity for self-understanding — knowing one's own thoughts, feelings, strengths, and weaknesses — and using that knowledge to direct one's life.|https://www.niu.edu/citl/resources/guides/instructional-guide/gardners-theory-of-multiple-intelligences.shtml|
|15|**Naturalistic**|The ability to discriminate among living things and features of the natural world — valuable for hunters, farmers, botanists, and chefs.|https://www.niu.edu/citl/resources/guides/instructional-guide/gardners-theory-of-multiple-intelligences.shtml|
|16|**Existential (candidate)**|The sensitivity and capacity to tackle deep questions about human existence — meaning of life, death, and cosmic origins — not yet fully confirmed by Gardner as meeting his inclusion criteria.|https://www.niu.edu/citl/resources/guides/instructional-guide/gardners-theory-of-multiple-intelligences.shtml|
### Sternberg's theories of intelligence
|#|Concept|Definition|Source|
|---|---|---|---|
|17|**Triarchic theory** (Sternberg, 1985)|Intelligence comprises three interrelated aspects — analytical, creative, and practical — defining it as "mental activity directed toward purposive adaptation to, selection and shaping of, real-world environments relevant to one's life."|https://en.wikipedia.org/wiki/Triarchic_theory_of_intelligence|
|18|**Analytical intelligence**|The ability to analyze information, evaluate evidence, compare alternatives, and solve structured problems — the type most measured by standardized tests.|https://www.ebsco.com/research-starters/social-sciences-and-humanities/sternbergs-triarchic-theory|
|19|**Creative intelligence**|The capacity to generate original ideas, think flexibly, and deal innovatively with novel situations by combining existing knowledge in new ways.|https://www.ebsco.com/research-starters/social-sciences-and-humanities/sternbergs-triarchic-theory|
|20|**Practical intelligence**|The skill of applying knowledge to real-world situations through environmental adaptation, shaping, and selection — often called "street smarts" and based on tacit knowledge.|https://www.ebsco.com/research-starters/social-sciences-and-humanities/sternbergs-triarchic-theory|
|21|**Successful intelligence** (Sternberg, 1997)|The ability to achieve personally meaningful goals by capitalizing on strengths and compensating for weaknesses through a balance of analytical, creative, and practical abilities within one's sociocultural context.|http://www.robertjsternberg.com/successful-intelligence|
### Emotional, social, and cultural intelligences
|#|Concept|Originator(s)|Definition|Source|
|---|---|---|---|---|
|22|**Emotional intelligence (ability model)**|Salovey & Mayer, 1990|"The ability to monitor one's own and others' feelings and emotions, to discriminate among them, and to use this information to guide one's thinking and actions" — a four-branch model (perceiving, using, understanding, managing emotions).|https://scholars.unh.edu/psych_facpub/450/|
|23|**Emotional intelligence (mixed model)**|Daniel Goleman, 1995|A framework of five competencies — self-awareness, self-regulation, motivation, empathy, and social skills — that Goleman argued can matter more than IQ for career and life success.|https://positivepsychology.com/emotional-intelligence-theories/|
|24|**Emotional Intelligence 2.0**|Bradberry & Greaves, 2009|The ability to recognize and understand emotions in yourself and others and use that awareness to manage behavior and relationships, operationalized through four skills: self-awareness, self-management, social awareness, and relationship management.|https://www.amazon.com/Emotional-Intelligence-2-0-Travis-Bradberry/dp/0974320625|
|25|**Social intelligence (original)**|Edward Thorndike, 1920|"The ability to understand and manage men and women, boys and girls — to act wisely in human relations," proposed as one of three intelligence facets alongside abstract and mechanical intelligence.|https://en.wikipedia.org/wiki/Social_intelligence|
|26|**Social intelligence (S.P.A.C.E. model)**|Karl Albrecht, 2006|The ability to get along well with others and win their cooperation, measured across five dimensions: Situational awareness, Presence, Authenticity, Clarity, and Empathy.|https://www.karlalbrecht.com/siprofile/siprofiletheory.htm|
|27|**Cultural intelligence (CQ)**|Earley & Ang, 2003|An individual's capability to function effectively in culturally diverse settings, encompassing metacognitive, cognitive, motivational, and behavioral components.|https://en.wikipedia.org/wiki/Cultural_intelligence|
---
## Artificial intelligence and computer science: engineering intelligence from scratch
The AI field has produced its own competing definitions, ranging from practical engineering benchmarks to formal mathematical frameworks. The core tension is between **narrow task performance** and **general adaptive capability**.
| # | Concept | Originator(s) | Definition | Source |
| --- | ----------------------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------- |
| 28 | **Artificial General Intelligence (AGI)** | Gubrud, 1997; Legg & Goertzel, ~2002 | AI that matches or surpasses human cognitive capabilities across virtually all domains, able to generalize knowledge, transfer skills, and solve novel problems without task-specific reprogramming. | https://en.wikipedia.org/wiki/Artificial_general_intelligence |
| 29 | **Artificial Narrow Intelligence (ANI)** | Early AI researchers | AI systems designed to excel at specific, well-defined tasks (image recognition, chess, language processing) that cannot autonomously transfer knowledge to unrelated domains — representing all currently deployed AI. | https://cloud.google.com/discover/what-is-artificial-general-intelligence |
| 30 | **Artificial Superintelligence (ASI)** | Nick Bostrom, 2014 | Any intellect that greatly exceeds human cognitive performance in virtually all domains of interest, including learning, reasoning, creativity, and social intelligence. | https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies |
| 31 | **Machine intelligence / Turing Test** | Alan Turing, 1950 | A machine exhibits intelligence if a human interrogator, conversing via text with both machine and human, cannot reliably distinguish which is which — defining intelligence behaviorally rather than through internal processes. | https://plato.stanford.edu/entries/turing-test/ |
| 32 | **Computational intelligence** | IEEE CIS / Bezdek, 1994 | The theory, design, and application of biologically and linguistically motivated computational paradigms — neural networks, fuzzy systems, and evolutionary computation — that learn, adapt, and discover solutions to complex problems. | https://cis.ieee.org/about/what-is-ci |
| 33 | **Swarm intelligence** | Beni & Wang, 1989 | Collective behavior of decentralized, self-organized systems in which populations of simple agents following local rules produce emergent "intelligent" global behavior without centralized control. | https://en.wikipedia.org/wiki/Swarm_intelligence |
| 34 | **Collective intelligence** | Pierre Lévy, 1994 | "A form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills" — no one knows everything, everyone knows something, all knowledge resides in humanity. | https://en.wikipedia.org/wiki/Collective_intelligence |
| 35 | **Universal intelligence (Legg-Hutter)** | Legg & Hutter, 2007 | "Intelligence measures an agent's ability to achieve goals in a wide range of environments" — formalized as a weighted sum of expected reward across all computable environments, with simpler environments weighted higher by Kolmogorov complexity. | https://arxiv.org/abs/0712.3329 |
| 36 | **Intelligence as optimization** | Russell, Yudkowsky, Bostrom | Intelligence is fundamentally goal-directed optimization: "machines are intelligent to the extent that their actions can be expected to achieve their objectives" (Russell), combined with the insight that sufficiently intelligent agents converge on sub-goals like self-preservation regardless of their final goal. | https://en.wikipedia.org/wiki/AI_alignment |
| 37 | **Chollet's definition (ARC)** | François Chollet, 2019 | "The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty" — distinguishing intelligence from mere skill and operationalized through the Abstraction and Reasoning Corpus benchmark. | https://arxiv.org/abs/1911.01547 |
| 38 | **Chinese Room argument** | John Searle, 1980 | A thought experiment arguing that syntactic symbol manipulation alone (as performed by computers) is insufficient for semantic understanding — a person manipulating Chinese symbols by rules without understanding Chinese demonstrates that programs cannot possess genuine comprehension. | https://plato.stanford.edu/entries/chinese-room/ |
| 39 | **Intelligence explosion** | I.J. Good, 1965 | An ultraintelligent machine could design even better machines, triggering a recursive self-improvement feedback loop that would rapidly produce superintelligence, leaving human intelligence far behind. | https://intelligence.org/ie-faq/ |
| 40 | **AIXI** | Marcus Hutter, 2000 | A theoretical formalism for optimal AGI combining Solomonoff induction with sequential decision theory — the agent maximizes expected reward across all computable environments weighted by algorithmic simplicity, serving as the gold standard for universal AI despite being incomputable. | https://en.wikipedia.org/wiki/AIXI |
| 41 | **Instrumental convergence** | Omohundro, 2008; Bostrom, 2014 | Sufficiently intelligent agents pursuing almost any final goal will converge on common sub-goals — self-preservation, resource acquisition, cognitive enhancement — because these are useful for virtually any ultimate objective. | https://www.alignmentforum.org/w/instrumental-convergence |
| 42 | **Ambient intelligence** | Zelkha & Epstein, ~1998; ISTAG/EU, 2001 | Digital environments embedded with networked sensors and processors that detect human presence and context to provide proactive, adaptive, unobtrusive support — arising from the convergence of ubiquitous computing, communication, and intelligent interfaces. | https://en.wikipedia.org/wiki/Ambient_intelligence |
| 43 | **Digital intelligence (DQ)** | Yuhyun Park / DQ Institute, 2016 | A comprehensive set of technical, cognitive, and socio-emotional competencies — grounded in universal moral values — that enable individuals to face the challenges and demands of digital life, codified as IEEE Standard 3527.1. | https://www.dqinstitute.org/global-standards/ |
| 44 | **Augmented intelligence** | Gartner, 2019 | A human-centered partnership model where people and AI work together to enhance cognitive performance — learning, decision-making, and experience — with the goal of empowering humans rather than replacing them. | https://www.gartner.com/en/information-technology/glossary/augmented-intelligence |
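Of the formal entries above, the Legg-Hutter definition (#35) has the cleanest closed form. Their universal intelligence of an agent $\pi$ is:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}
```

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ (so simpler environments get more weight), and $V_\mu^{\pi}$ is the agent's expected total reward in $\mu$. AIXI (#40) is the agent that maximizes this quantity, which is why both are incomputable ideals rather than buildable systems.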
---
## Neuroscience and biology: intelligence in living systems
Neuroscience reveals that intelligence is not brain-exclusive. It emerges across scales — from single cells to distributed nervous systems to ecological networks. **The neural efficiency hypothesis** and **P-FIT** provide the brain-level architecture, while plant intelligence and slime mold studies shatter the assumption that neurons are required.
|#|Concept|Originator(s)|Definition|Source|
|---|---|---|---|---|
|45|**Neural efficiency hypothesis**|Haier et al., 1988|More intelligent individuals display lower brain activation during cognitive tasks — intelligence correlates with efficient (not greater) neural resource use, first shown via PET imaging of brain glucose metabolism.|https://en.wikipedia.org/wiki/Neural_efficiency_hypothesis|
|46|**Parieto-Frontal Integration Theory (P-FIT)**|Jung & Haier, 2007|Individual differences in intelligence arise from the efficiency of a distributed brain network linking dorsolateral prefrontal cortex, parietal lobules, anterior cingulate, and temporal/occipital regions via white matter tracts.|https://pubmed.ncbi.nlm.nih.gov/17655784/|
|47|**Biological intelligence**|General neuroscience|The naturally evolved capacity of living organisms to process information, learn from experience, solve problems, and adapt behavior, encompassing neural, genetic, and physiological substrates of cognitive function.|https://www.nature.com/articles/nrn2787|
|48|**Animal cognition**|Darwin, 1871; Shettleworth, 1998|The mental capacities of non-human animals — perception, learning, memory, decision-making, problem-solving — rooted in Darwin's thesis that "the difference in mind between man and the higher animals is one of degree and not of kind."|https://plato.stanford.edu/entries/cognition-animal/|
|49|**Plant intelligence**|Trewavas, 2003; Mancuso|"Adaptively variable growth and development during the lifetime of the individual" — plants perceive 20+ environmental parameters, process information, learn, remember, and solve problems through phenotypic plasticity rather than neural systems.|https://royalsocietypublishing.org/doi/10.1098/rsfs.2016.0098|
|50|**Embodied cognition / enactivism**|Varela, Thompson & Rosch, 1991|Cognition is not computation over internal representations but arises through the dynamic interaction of body, brain, and environment — it is "enacted" through embodied action, making the physical body constitutive of mental processes.|https://iep.utm.edu/enactivism/|
|51|**Distributed intelligence**|Various (Godfrey-Smith, Nakagaki)|Intelligence emerging from decentralized information processing without a centralized brain — exemplified by the octopus (⅔ of neurons in its arms) and slime mold _Physarum polycephalum_ (solves mazes and optimizes networks with zero neurons).|https://www.nature.com/articles/nature.2012.11811|
|52|**Ecological intelligence**|Daniel Goleman, 2009|The ability to understand and respond to the ecological impact of human actions — awareness of hidden environmental consequences of what we make, buy, and use.|https://en.wikipedia.org/wiki/Ecological_intelligence|
|53|**Ecological rationality**|Gigerenzer & Todd, 2012|Intelligence is not only in the mind but also in the world — organisms achieve effective decisions using simple heuristics that exploit reliable informational structures in their environments, meaning "less can be more" under uncertainty.|https://academic.oup.com/book/5561|
|54|**Neural plasticity and intelligence**|Shaw et al., 2006|The brain's capacity to reorganize structure and function — forming new connections, adjusting cortical thickness — in response to learning, providing the biological foundation for intelligence development; more intelligent individuals show distinctive cortical thickening patterns.|https://pmc.ncbi.nlm.nih.gov/articles/PMC6632359/|
|55|**Basal cognition / minimal intelligence (TAME)**|Michael Levin, 2019–2022|Single cells, bacteria, and tissues exhibit early forms of intelligent behavior (learning, memory, problem-solving) on a continuous spectrum from basic homeostatic competency to complex metacognition — all intelligence is collective intelligence and cognition is scale-free.|https://pmc.ncbi.nlm.nih.gov/articles/PMC8988303/|
|56|**Morphological computation**|Pfeifer et al., 2006|An agent's physical body — shape, materials, biomechanics — actively contributes to intelligent behavior by distributing tasks between brain, body, and environment, simplifying control problems as seen in passive walkers and soft robots.|https://direct.mit.edu/artl/article/23/1/1/2858/What-Is-Morphological-Computation-On-How-the-Body|
|57|**Free energy principle**|Karl Friston, 2005–2010|All self-organizing biological systems minimize variational free energy (prediction error) by updating internal models; intelligence is "self-evidencing" — minimizing the discrepancy between predictions and sensory input through perception and action.|https://www.nature.com/articles/nrn2787|
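The free energy principle (#57) also has a standard variational form. One common way to write the quantity being minimized, for observations $o$, hidden states $s$, generative model $p$, and approximate posterior $q$:

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o)
```

Since the KL term is non-negative, $F$ upper-bounds surprise $-\ln p(o)$; minimizing it through perception (updating $q$) and action (changing $o$) is the "self-evidencing" described in the table.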
---
## Philosophy: the deepest questions about what intelligence is
Philosophy provides the conceptual infrastructure for every other field's definitions. The discipline's enduring contribution is a set of **critical distinctions** — intelligence versus consciousness, versus wisdom, versus rationality — that expose assumptions buried in empirical frameworks.
|#|Concept|Originator(s)|Definition|Source|
|---|---|---|---|---|
|58|**Nous / intellect (classical)**|Plato, ~380 BC; Aristotle, ~350 BC|_Nous_ is the highest part of the soul capable of grasping eternal truths; Aristotle distinguished passive intellect (receiving intelligible forms) from active intellect (_nous poiêtikos_), which makes potential objects of thought actual.|https://plato.stanford.edu/entries/aristotle-psychology/|
|59|**Philosophical definitions (overview)**|Various|No single agreed-upon definition exists; major approaches include rationalist (logical reasoning, Descartes), empiricist (product of experience, Locke), pragmatist (adaptive problem-solving, Dewey), and computational (information processing, Turing).|https://plato.stanford.edu/entries/artificial-intelligence/|
|60|**Intelligence vs. consciousness**|Searle, Chalmers, Turing|Intelligence is functional adaptive information processing; consciousness is subjective phenomenal experience ("what it is like"). A system can arguably be intelligent without being conscious, and whether consciousness is necessary for "true" intelligence remains unresolved.|https://plato.stanford.edu/entries/artificial-intelligence/|
|61|**Intelligence vs. wisdom**|Sternberg, 2003; Aristotle (_phronesis_)|Intelligence is cognitive ability for learning and problem-solving; wisdom is the application of intelligence toward achieving a common good through balancing intrapersonal, interpersonal, and extrapersonal interests — Aristotle's _phronesis_ (practical wisdom).|https://plato.stanford.edu/entries/aristotle-ethics/|
|62|**Rationality vs. intelligence (dysrationalia)**|Keith Stanovich, 1993|IQ measures computational capacity (working memory, processing speed) but not rational thinking; "dysrationalia" is the inability to think rationally despite adequate intelligence — rationality is surprisingly dissociable from IQ.|https://www.scientificamerican.com/article/rational-and-irrational-thought-the-thinking-that-iq-tests-miss/|
|63|**Pragmatic intelligence**|John Dewey, ~1916|Intelligence is not a static faculty but a dynamic process of inquiry — the capacity to identify and resolve problematic situations through reflective thought and reconstruction of experience; fundamentally practical and social.|https://plato.stanford.edu/entries/dewey/|
|64|**Sentience vs. intelligence**|Ongoing debate|Sentience is the capacity for subjective experience (especially pleasure and pain); intelligence is adaptive information processing — the two are conceptually distinct, as organisms can be sentient but not highly intelligent, and systems can be intelligent but not sentient.|https://plato.stanford.edu/entries/cognition-animal/|
---
## Military and strategic intelligence: information as power
In military and national security contexts, "intelligence" shifts meaning entirely — from cognitive capacity to **actionable information about adversaries**. The intelligence community has developed a precise taxonomy of collection disciplines ("INTs"), each defined by its source type.
|#|Concept|Key Authority|Definition|Source|
|---|---|---|---|---|
|65|**Military intelligence**|U.S. DoD / JP 2-0|The product of collection, processing, integration, evaluation, analysis, and interpretation of information concerning foreign nations, hostile forces, and areas of operations to support military planning and decision-making.|https://www.dni.gov/index.php/what-we-do/what-is-intelligence|
|66|**Strategic intelligence**|Sherman Kent, 1949|Knowledge vital for national survival encompassing capabilities, vulnerabilities, and probable courses of action of foreign nations, produced through systematic surveillance to inform policy-makers.|https://www.cia.gov/resources/csi/static/Kent-Profession-Intel-Analysis.pdf|
|67|**Tactical intelligence**|U.S. DoD|Intelligence required for planning and conducting tactical operations — identifying immediate threats, terrain, weather, and enemy dispositions at the battlegroup and unit level.|https://www.britannica.com/topic/tactical-intelligence|
|68|**Signals intelligence (SIGINT)**|NSA / ODNI|Intelligence derived from signal intercepts — combining communications intelligence (COMINT), electronic intelligence (ELINT), and foreign instrumentation signals intelligence (FISINT).|https://www.dni.gov/index.php/what-we-do/what-is-intelligence|
|69|**Human intelligence (HUMINT)**|CIA / DIA / ODNI|Intelligence derived from human sources, both overt (debriefers, attachés) and clandestine — the oldest collection method and primary source before the technical revolution.|https://www.dni.gov/index.php/what-we-do/what-is-intelligence|
|70|**Open-source intelligence (OSINT)**|ODNI|Publicly available information from print, electronic media, the internet, commercial databases, and other open sources.|https://www.dni.gov/index.php/what-we-do/what-is-intelligence|
|71|**Geospatial intelligence (GEOINT)**|NGA / ODNI|Analysis and visual representation of security-related activities on earth, integrating imagery, imagery intelligence, and geospatial information.|https://www.dni.gov/index.php/what-we-do/what-is-intelligence|
|72|**Measurement & signature intelligence (MASINT)**|DIA / ODNI|Information produced by quantitative and qualitative analysis of physical attributes of targets and events to characterize, locate, and identify them through multiple sensor phenomenologies.|https://www.dni.gov/index.php/what-we-do/what-is-intelligence|
|73|**Imagery intelligence (IMINT)**|NGA / ODNI|Representations of objects reproduced electronically or optically from visual photography, radar sensors, and electro-optics.|https://www.dni.gov/index.php/what-we-do/what-is-intelligence|
|74|**Communications intelligence (COMINT)**|NSA|A SIGINT sub-discipline derived from intercepting communications between parties — voice, text, teleprinter, and Morse code traffic, including encrypted transmissions.|https://www.dni.gov/index.php/what-we-do/what-is-intelligence|
|75|**Electronic intelligence (ELINT)**|NSA|A SIGINT category involving interception and analysis of non-communication electronic transmissions, such as radar emissions and electromagnetic radiation sources.|https://www.dni.gov/index.php/what-we-do/what-is-intelligence|
|76|**Counterintelligence (CI)**|U.S. DoD|Activities conducted to protect against espionage, sabotage, or assassinations by foreign governments, organizations, or persons, and against international terrorist activities.|https://irp.fas.org/doddir/dod/dodcount.htm|
|77|**Cyber intelligence (CYBINT)**|DSIAC / U.S. DoD|Intelligence gathered from cyberspace — computer systems, digital networks, cyber operations — encompassing HUMINT, SIGINT, OSINT, and TECHINT elements to identify cyber threats and track threat actors.|https://dsiac.dtic.mil/articles/characterizing-cyber-intelligence-as-an-all-source-intelligence-product/|
|78|**Technical intelligence (TECHINT)**|U.S. DoD / Army FM 2-0|Intelligence derived from collection, exploitation, and analysis of foreign military equipment and materiel to prevent technological surprise and assess adversary weapons capabilities.|https://www.army.mil/article/88115/techint_draws_interest_of_intelligence_community|
---
## Business and organizational intelligence: decisions at scale
In the business world, "intelligence" means **structured information for competitive advantage**. These concepts share a common architecture: raw data → collection → analysis → actionable insight.
|#|Concept|Originator(s)|Definition|Source|
|---|---|---|---|---|
|79|**Business intelligence (BI)**|Howard Dresner / Gartner, 1989|An umbrella term for concepts, methods, applications, and technologies to improve decision-making using fact-based support systems — gathering, storing, accessing, and analyzing data for better enterprise decisions.|https://en.wikipedia.org/wiki/Business_intelligence|
|80|**Competitive intelligence**|SCIP, est. 1986|Legal and ethical collection and analysis of information regarding competitors' capabilities, vulnerabilities, and intentions to support strategic decision-making and early identification of risks and opportunities.|https://www.sciencedirect.com/topics/computer-science/competitive-intelligence|
|81|**Market intelligence**|Gartner / SCIP|Information about a company's markets gathered for determining market opportunity, penetration strategy, and development metrics — encompassing customer needs, competition, and economic trends.|https://www.gartner.com/en/marketing/glossary/marketing-intelligence|
|82|**Organizational intelligence**|Harold Wilensky, 1967|An organization's capability to acquire, process, and interpret external information to identify problems and opportunities, shaped by structural and ideological factors that may produce informational pathologies.|https://www.sciencedirect.com/topics/computer-science/organizational-intelligence|
|83|**Customer intelligence**|Gartner / Forrester|The process of collecting, analyzing, and activating data about customers — behaviors, preferences, interactions — to build deeper relationships, predict behavior, and deliver personalized experiences.|https://en.wikipedia.org/wiki/Customer_intelligence|
|84|**Threat intelligence (cybersecurity)**|NIST SP 800-150|Threat information that has been aggregated, analyzed, and enriched to provide context for decision-making — "evidence-based knowledge about an existing or emerging menace or hazard to assets" (Gartner).|https://csrc.nist.gov/glossary/term/threat_intelligence|
|85|**Location intelligence**|Gartner / industry|The capability to derive meaningful insight from geospatial data relationships to solve business problems in site selection, logistics, risk management, and customer analytics.|https://en.wikipedia.org/wiki/Location_intelligence|
|86|**Decision intelligence**|Cassie Kozyrkov / Google, ~2018|"The discipline of turning information into better action at any scale, in any setting" — unifying applied data science, social science, and managerial science to help people use data to improve decisions.|https://medium.com/data-science/introduction-to-decision-intelligence-5d147ddab767|
---
## Education and testing: how we measure intelligence
The history of intelligence testing spans 120 years, from Binet's practical school-placement tool to modern neuropsychological batteries. Two innovations — **deviation IQ** and **dynamic assessment** — represent paradigm shifts in how we think about what tests actually measure.
|#|Concept|Originator(s)|Definition|Source|
|---|---|---|---|---|
|87|**Intelligence Quotient (IQ)**|William Stern, 1912|A score originally computed as mental age divided by chronological age × 100, proposed as a standardized ratio for comparing intellectual development; now computed as a deviation score.|https://en.wikipedia.org/wiki/William_Stern_(psychologist)|
|88|**Mental age**|Alfred Binet, 1905|A child's cognitive ability level relative to average performance at different chronological ages — a child performing like a typical 12-year-old has a mental age of 12, regardless of actual age.|https://www.sciencedirect.com/topics/psychology/binet-simon-test|
|89|**Binet-Simon Scale**|Binet & Simon, 1905|The first practical intelligence test: 30 tasks arranged in increasing difficulty, from basic sensory functions to abstract reasoning, designed to identify children needing educational support.|https://en.wikipedia.org/wiki/Binet%E2%80%93Simon_Intelligence_Test|
|90|**Stanford-Binet Intelligence Scales**|Lewis Terman, 1916|An American adaptation of the Binet-Simon Scale that adopted Stern's IQ ratio, became the first widely used standardized intelligence test in the U.S., and launched modern psychometrics.|https://en.wikipedia.org/wiki/Stanford%E2%80%93Binet_Intelligence_Scales|
|91|**Wechsler Adult Intelligence Scale (WAIS)**|David Wechsler, 1955|A comprehensive individually administered test for ages 16–90 measuring verbal comprehension, visual-spatial reasoning, working memory, and processing speed via the deviation IQ system (mean = 100, SD = 15).|https://en.wikipedia.org/wiki/Wechsler_Adult_Intelligence_Scale|
|92|**Wechsler's definition of intelligence**|David Wechsler, 1939|"The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment" — global because it characterizes behavior as a whole.|https://www.sciencedirect.com/topics/medicine-and-dentistry/wechsler-intelligence-scale|
|93|**Deviation IQ**|David Wechsler, 1939/1955|A scoring method replacing mental-age ratios with standard scores derived from the normal distribution (mean = 100, SD = 15), enabling meaningful comparisons across all ages.|https://en.wikipedia.org/wiki/Wechsler_Adult_Intelligence_Scale|
|94|**Raven's Progressive Matrices**|John C. Raven, 1936|A non-verbal test measuring abstract reasoning and fluid intelligence by requiring identification of missing elements in visual geometric patterns — considered one of the most culture-fair intelligence tests.|https://en.wikipedia.org/wiki/Raven's_Progressive_Matrices|
|95|**Flynn Effect**|James Flynn, 1984|The substantial, sustained increase in IQ scores (~3 points per decade) observed worldwide over the 20th century, attributed to environmental factors (nutrition, education, cognitive stimulation) rather than genetic changes.|https://en.wikipedia.org/wiki/Flynn_effect|
|96|**Zone of Proximal Development (ZPD)**|Lev Vygotsky, ~1930|"The distance between the actual developmental level as determined by independent problem solving and the level of potential development through problem solving under guidance or collaboration with more capable peers."|https://www.simplypsychology.org/zone-of-proximal-development.html|
|97|**Dynamic assessment**|Reuven Feuerstein, 1979|An interactive evaluation embedding teaching within testing to assess learning potential and cognitive modifiability rather than fixed ability — intelligence is not immutable but develops through mediated learning.|https://feuerstein-institute.org/about/the-feuerstein-method/dynamic-assessment-lpad/|
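The arithmetic behind the two scoring paradigms in the table above (Stern's ratio IQ vs. Wechsler's deviation IQ) can be sketched minimally; the norm values below are illustrative, not real test norms:

```python
# Sketch of ratio IQ vs. deviation IQ (illustrative numbers, not real norms).

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's 1912 ratio: mental age / chronological age x 100."""
    return mental_age / chronological_age * 100

def deviation_iq(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """Wechsler's deviation score: position in the age-group norm
    distribution, rescaled to mean 100, SD 15."""
    z = (raw_score - norm_mean) / norm_sd
    return 100 + 15 * z

# A 10-year-old performing like a typical 12-year-old:
print(ratio_iq(12, 10))          # 120.0

# An adult one SD above the age-group mean (hypothetical raw 55, mean 40, SD 15):
print(deviation_iq(55, 40, 15))  # 115.0
```

The deviation score is what makes adult comparisons meaningful: mental age plateaus after adolescence, so the ratio formula breaks down, while a z-score relative to one's own age group does not.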
---
## Cross-cutting, cultural, and emerging concepts
The final cluster includes evolutionary, cultural, applied, and emerging conceptions that cut across traditional boundaries. These entries challenge Western-centric assumptions and reveal how diverse the concept of intelligence truly is.
|#|Concept|Originator(s)|Definition|Source|
|---|---|---|---|---|
|98|**Indigenous conceptions of intelligence**|Various cultures|Indigenous systems conceptualize intelligence holistically — emphasizing practical, interpersonal, ecological, and relational dimensions — such as Māori emphasis on _whakapapa_ and community well-being, Aboriginal pattern thinking, and African social responsibility.|https://link.springer.com/article/10.1007/s00146-024-02099-4|
|99|**Ubuntu intelligence**|Southern African tradition|Rooted in "umuntu ngumuntu ngabantu" (a person is a person through other persons), Ubuntu conceives intelligence and personhood as fundamentally relational — cognitive and ethical capacities are realized through community and collective well-being.|https://pmc.ncbi.nlm.nih.gov/articles/PMC9023883/|
|100|**Sternberg's cultural conceptions**|Robert Sternberg, 2004|Cross-cultural research showing intelligence as adaptive competence cannot be understood outside cultural context — different cultures emphasize different components (social competence in the U.S., obedience in rural Kenya, navigation in rural Alaska).|https://pubmed.ncbi.nlm.nih.gov/15511120/|
|101|**Moral intelligence**|Lennick & Kiel, 2005; Borba, 2001|"The mental capacity to determine how universal human principles should be applied to personal values, goals, and actions" — encompassing integrity, responsibility, compassion, and forgiveness.|https://files.eric.ed.gov/fulltext/ED509643.pdf|
|102|**Spatial intelligence (GIS/geography)**|GIS research tradition|The capacity to understand, reason about, and manipulate spatial relationships, patterns, and geographic data — extending beyond Gardner to include geospatial reasoning, computational spatial thinking, and GIS analysis.|https://link.springer.com/article/10.1007/s00146-024-02099-4|
|103|**Adversarial intelligence**|Cybersecurity/ML community|Techniques that exploit vulnerabilities in AI/ML systems through carefully crafted deceptive inputs designed to cause misclassification or degraded performance, threatening defense and critical infrastructure.|https://www.ibm.com/think/topics/adversarial-machine-learning|
|104|**Machiavellian intelligence hypothesis**|Byrne & Whiten, 1988|Primate (and ultimately human) intelligence evolved primarily as adaptation to navigating complex social groups — requiring social manipulation, alliance formation, and deception rather than being driven solely by ecological challenges.|https://en.wikipedia.org/wiki/Machiavellian_intelligence_hypothesis|
|105|**Collective intelligence (c factor)**|Woolley, Chabris, Malone et al., 2010|A general factor explaining group performance across varied tasks, analogous to individual _g_ — correlated not with average member IQ but with members' social sensitivity, equal turn-taking, and proportion of women.|https://www.science.org/doi/10.1126/science.1193147|
|106|**Berlin Wisdom Paradigm**|Baltes & Staudinger, ~1990|Wisdom defined as "an expert knowledge system concerning the fundamental pragmatics of life" — measured through five criteria: rich factual knowledge, rich procedural knowledge, lifespan contextualism, relativism of values and life priorities, and recognition and management of uncertainty.|https://pubmed.ncbi.nlm.nih.gov/11392856/|
|107|**Intuitive intelligence**|Gary Klein; Gladwell|The capacity to access rapid, non-conscious pattern recognition and implicit knowledge for effective decisions without deliberate analysis — Klein's "recognition-primed decision" model shows experts use accumulated experience for fast, accurate judgments.|https://positivepsychology.com/positive-intelligence/|
|108|**Somatic intelligence**|Embodied cognition tradition|The body's capacity for knowledge and adaptive decision-making through proprioception, interoception, and sensorimotor integration — intelligent behavior emerging from body-brain-environment interaction, studied in dance, athletics, and somatic therapy.|https://scholarworks.gvsu.edu/orpc/vol4/iss3/1/|
|109|**Positive Intelligence (PQ)**|Shirzad Chamine, 2012|The percentage of time your mind serves you ("Sage") versus sabotages you ("Saboteurs") — higher PQ scores are associated with **30–35%** better performance and greater happiness.|https://positivepsychology.com/positive-intelligence/|
|110|**Sexual intelligence**|Marty Klein, 2012|The combination of sexual self-knowledge, emotional awareness, and accurate information that enables healthy intimate decision-making — understanding desires, boundaries, and communicating effectively with partners.|https://www.amazon.com/Sexual-Intelligence-What-Really-Matters/dp/0062026070|
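The statistical logic behind the _c_ factor (entry 105), like Spearman's _g_, is a single general factor inferred from positively correlated scores across varied tasks. A minimal sketch with synthetic data; the loadings, seed, and group counts are illustrative assumptions, not Woolley et al.'s results:

```python
# Sketch of a general-factor analysis on synthetic group-task scores.
import random
import statistics

random.seed(0)
n_groups, n_tasks = 200, 5

# Simulate a latent group ability plus task-specific noise, producing the
# positive manifold (all tasks positively correlated) that motivates "c".
latent = [random.gauss(0, 1) for _ in range(n_groups)]
scores = [[0.7 * latent[g] + 0.5 * random.gauss(0, 1) for _ in range(n_tasks)]
          for g in range(n_groups)]

def corr(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

cols = list(zip(*scores))
pairs = [corr(cols[i], cols[j])
         for i in range(n_tasks) for j in range(i + 1, n_tasks)]
avg_r = statistics.fmean(pairs)

# First-factor variance share for an equicorrelated matrix: (1 + (k-1)r) / k.
share = (1 + (n_tasks - 1) * avg_r) / n_tasks
print(f"average inter-task correlation ~ {avg_r:.2f}; first factor ~ {share:.0%}")
```

The empirical surprise in Woolley et al. (2010) was not the existence of such a factor but its correlates: social sensitivity and turn-taking predicted it better than average member IQ did.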
---
## What unifies 110+ definitions of intelligence
Three deep patterns emerge from mapping intelligence across every field. First, **adaptability** appears in nearly every definition — whether Spearman's _g_, Chollet's skill-acquisition efficiency, or plant phenotypic plasticity, intelligence is always about adjusting to novel circumstances. Second, **scale independence** is striking: intelligence operates from single cells (Levin's basal cognition) through slime molds, octopus arms, individual human brains, human groups (Woolley's _c_ factor), to hypothetical superintelligences — the concept is fractal. Third, **the tension between unitary and plural models** recurs in every domain: psychometrics debates _g_ versus multiple intelligences; AI debates narrow versus general; biology debates centralized versus distributed processing.
Perhaps the most consequential insight is the gap between **cognitive-capacity definitions** (psychology, neuroscience) and **information-product definitions** (military, business). When a policy document discusses "intelligence," it almost certainly means HUMINT or BI, not fluid reasoning or emotional sensitivity. This semantic ambiguity has real consequences — it shapes funding, policy, and public understanding. Any serious interdisciplinary conversation about intelligence must begin by specifying which of these 110+ concepts is actually under discussion.

# Intelligence: A Comprehensive Hierarchical Map Across Human, Biological, Artificial, and Collective Systems
## Executive summary
Intelligence is commonly treated as a system’s capacity to learn from experience, reason, and adapt effectively to its environment, but formal definitions vary across psychology, neuroscience, AI, and organizational science. [https://www.apa.org/topics/intelligence](https://www.apa.org/topics/intelligence) A useful cross-domain lens is the “agent in an environment” framing: intelligence manifests as goal-directed behavior under uncertainty, constrained by computation, data, and time. [https://arxiv.org/abs/0712.3329](https://arxiv.org/abs/0712.3329) Human intelligence research has developed robust measurement traditions (e.g., WAIS/WISC/Raven/Stanford–Binet) and hierarchical models (e.g., general intelligence g, broad abilities such as fluid/crystallized intelligence), while debates persist about interpretation, fairness, and societal use. [https://www.pearsonassessments.com/en-us/Store/Professional-Assessments/Cognition-%26-Neuro/Wechsler-Adult-Intelligence-Scale-%7C-Fifth-Edition/p/P100071002?srsltid=AfmBOoqwlbxzjL8PJFFfVuytBPBPDY2w0IHuSyLLzcxrX9ummmQ3PHDg](https://www.pearsonassessments.com/en-us/Store/Professional-Assessments/Cognition-%26-Neuro/Wechsler-Adult-Intelligence-Scale-%7C-Fifth-Edition/p/P100071002?srsltid=AfmBOoqwlbxzjL8PJFFfVuytBPBPDY2w0IHuSyLLzcxrX9ummmQ3PHDg) Neuroscience links cognitive performance to distributed brain networks (e.g., prefrontal–parietal systems, intrinsic connectivity networks) and learning mechanisms (e.g., synaptic plasticity; dopamine-like reward prediction errors consistent with reinforcement-learning theory). [https://pubmed.ncbi.nlm.nih.gov/17655784/](https://pubmed.ncbi.nlm.nih.gov/17655784/) Artificial intelligence spans symbolic approaches (search, planning), statistical learning (deep neural networks), reinforcement learning, and modern foundation-model paradigms; evaluation increasingly relies on multi-benchmark suites (GLUE, MMLU, BIG-bench, HELM) and multi-metric assessment (accuracy, calibration, robustness, fairness, toxicity, efficiency). [https://people.stfx.ca/jdelamer/courses/csci-564/_downloads/b2220c66675ddde471ca1795147b8e86/A_Formal_Basis_for_the_Heuristic_Determination_of_Minimum_Cost_Paths.pdf](https://people.stfx.ca/jdelamer/courses/csci-564/_downloads/b2220c66675ddde471ca1795147b8e86/A_Formal_Basis_for_the_Heuristic_Determination_of_Minimum_Cost_Paths.pdf) Collective intelligence emerges in human groups, organizations, markets, and swarms; empirically, groups can show stable performance differences (a “collective intelligence” factor c), while engineered swarm intelligence inspires optimization algorithms (PSO, ACO). [https://pubmed.ncbi.nlm.nih.gov/20929725/](https://pubmed.ncbi.nlm.nih.gov/20929725/) Ethics and governance increasingly shape “intelligence” research and deployment: major frameworks and instruments include the OECD AI Principles, UNESCO’s AI ethics recommendation, NIST’s AI RMF, and the EU AI Act’s risk-based regulatory model and implementation guidance. [https://www.oecd.org/en/topics/ai-principles.html](https://www.oecd.org/en/topics/ai-principles.html)
## Scope and assumptions
This map treats “intelligence” primarily as adaptive information processing and goal-directed capability (not just “IQ”), spanning humans, machines, organisms, and collectives. [https://www.apa.org/topics/intelligence](https://www.apa.org/topics/intelligence) “Artificial intelligence” is treated as a family of computational methods for perception, learning, reasoning, planning, and action, historically anchored by the Turing test debate and the Dartmouth proposal that coined “artificial intelligence.” [https://courses.cs.umbc.edu/471/papers/turing.pdf](https://courses.cs.umbc.edu/471/papers/turing.pdf) “Collective intelligence” is treated as group-level intelligent behavior (humans and/or computers) that can exceed individual performance under certain conditions and task/ecology structures. [https://cci.mit.edu/](https://cci.mit.edu/) “Biological intelligence” is treated as adaptive behavior and control across multiple biological scales, including neural and non-neural systems when supported by peer-reviewed literature (e.g., slime mold decision-making, bacterial quorum sensing, immune cognition metaphors, plant intelligence debates). [https://www.nature.com/articles/35035159](https://www.nature.com/articles/35035159) A recurring organizing principle is that intelligence is constrained by information, computation, and environment structure (e.g., “no free lunch” theorems; bias–variance; complexity limits), so “better” algorithms depend on assumptions and task distributions. [https://www.cs.ubc.ca/~hutter/earg/papers07/00585893.pdf](https://www.cs.ubc.ca/~hutter/earg/papers07/00585893.pdf) Polysemy note: “intelligence” also means state/organizational information activities (e.g., national security intelligence), which is conceptually distinct from cognitive ability but shares themes of uncertainty reduction and decision support. [https://www.cia.gov/resources/csi/static/cc27ce9b678dc69d4bdeef410feffa20/Article-New-Approach-to-Old-Question-Sep-2023.pdf](https://www.cia.gov/resources/csi/static/cc27ce9b678dc69d4bdeef410feffa20/Article-New-Approach-to-Old-Question-Sep-2023.pdf)
A compact conceptual map (not exhaustive) of how the report’s branches relate is summarized below. [https://arxiv.org/abs/0712.3329](https://arxiv.org/abs/0712.3329)
```mermaid
flowchart TD
    INT["Intelligence (umbrella)"] --> DEF["Definitions & desiderata"]
    INT --> HUM[Human intelligence]
    INT --> BIO[Biological intelligence]
    INT --> AI[Artificial intelligence]
    INT --> COL["Collective & hybrid intelligence"]
    DEF --> INFO["Information & uncertainty"]
    DEF --> COMP["Computation & complexity"]
    DEF --> GOAL["Goals, rewards, values"]
    HUM --> PSY["Psychometrics & measurement"]
    HUM --> COG[Cognitive mechanisms]
    HUM --> NEU["Neuroscience & biology"]
    BIO --> EVOL["Evolution & adaptation"]
    BIO --> NONN[Non-neural cognition debates]
    AI --> SYM[Symbolic AI]
    AI --> STAT["Statistical ML & deep learning"]
    AI --> RL[Reinforcement learning]
    AI --> EVAL["Evaluation & benchmarks"]
    COL --> GRP["Groups & organizations"]
    COL --> SWARM[Swarm intelligence]
    COL --> TEAM[Human-AI teams]
    INT --> ETH["Ethics, governance, and impact"]
```
## Conceptual foundations
- Intelligence (general construct) — A system’s capacity to learn from experience, reason, and adapt effectively to its environment to achieve goals under constraints. [https://www.apa.org/topics/intelligence](https://www.apa.org/topics/intelligence)
- Imitation game and “Turing test” framing — A behavioral-evaluation proposal that shifts “Can machines think?” into a practical test of indistinguishable language behavior under interrogation. [https://courses.cs.umbc.edu/471/papers/turing.pdf](https://courses.cs.umbc.edu/471/papers/turing.pdf)
- Dartmouth AI conjecture — A founding research claim that aspects of learning and intelligence can be precisely described such that a machine can simulate them, motivating AI as an engineering/science program. [https://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf](https://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf)
- Universal (machine) intelligence measure — A formal proposal to quantify intelligence as expected performance across environments weighted by simplicity (an algorithmic-information prior), linking intelligence to universal induction and optimal agents. [https://arxiv.org/abs/0712.3329](https://arxiv.org/abs/0712.3329)
- Agent–environment loop — A general modeling pattern where an agent selects actions based on observations/history, influencing future observations and rewards. [https://incompleteideas.net/book/the-book-2nd.html](https://incompleteideas.net/book/the-book-2nd.html)
- Reward prediction and reinforcement framing — A normative account of adaptive behavior in which agents learn to maximize cumulative reward through interaction. [https://incompleteideas.net/book/the-book-2nd.html](https://incompleteideas.net/book/the-book-2nd.html)
- Bounded rationality — A theory that real decision-making is rational only relative to cognitive/computational limits and environmental structure, not ideal optimization. [https://iiif.library.cmu.edu/file/Simon_box00063_fld04838_bdl0001_doc0001/Simon_box00063_fld04838_bdl0001_doc0001.pdf](https://iiif.library.cmu.edu/file/Simon_box00063_fld04838_bdl0001_doc0001/Simon_box00063_fld04838_bdl0001_doc0001.pdf)
- Information theory (entropy) — A mathematical framework for quantifying uncertainty and communication limits via entropy and related quantities. [https://ia803209.us.archive.org/27/items/bstj27-3-379/bstj27-3-379_text.pdf](https://ia803209.us.archive.org/27/items/bstj27-3-379/bstj27-3-379_text.pdf)
- Cybernetics — A foundational interdisciplinary study of control and communication in animals and machines, emphasizing feedback and regulation. [https://direct.mit.edu/books/oa-monograph/4581/Cybernetics-or-Control-and-Communication-in-the](https://direct.mit.edu/books/oa-monograph/4581/Cybernetics-or-Control-and-Communication-in-the)
- Computational levels of analysis — A methodological stance distinguishing what problem is being solved (computational), how it is solved (algorithmic), and how it is physically realized (implementation). [https://people.ciirc.cvut.cz/~hlavac/pub/MiscTextForStudents/1982MarrDavidVisionBook.pdf](https://people.ciirc.cvut.cz/~hlavac/pub/MiscTextForStudents/1982MarrDavidVisionBook.pdf)
- Computational complexity constraints — A body of results showing that many problem classes are intractable in worst case (e.g., NP-completeness), shaping feasible intelligence strategies. [https://perso.limos.fr/~palafour/PAPERS/PDF/Garey-Johnson79.pdf](https://perso.limos.fr/~palafour/PAPERS/PDF/Garey-Johnson79.pdf)
- No Free Lunch (NFL) theorems for optimization — Formal results showing that averaged uniformly over problems, no optimizer outperforms any other, implying performance gains require inductive bias and task structure. [https://www.cs.ubc.ca/~hutter/earg/papers07/00585893.pdf](https://www.cs.ubc.ca/~hutter/earg/papers07/00585893.pdf)
- Bias–variance dilemma — A core statistical learning tradeoff between underfitting (bias) and overfitting (variance) that shapes model selection and generalization. [https://doursat.free.fr/docs/Geman_Bienenstock_Doursat_1992_bv_NeurComp.pdf](https://doursat.free.fr/docs/Geman_Bienenstock_Doursat_1992_bv_NeurComp.pdf)
- Curiosity and intrinsic motivation as “compression progress” — A formal proposal that exploration and “interestingness” can be modeled via intrinsic reward for improving predictive/compressive models of data. [https://arxiv.org/abs/0812.4360](https://arxiv.org/abs/0812.4360)
- Predictive processing / free-energy principle — A unifying theoretical lens modeling perception and action as minimizing variational free energy (prediction error under a generative model). [https://www.nature.com/articles/nrn2787](https://www.nature.com/articles/nrn2787)
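The universal intelligence measure listed above (Legg–Hutter) can be written out compactly; the notation follows arXiv:0712.3329:

```latex
% Universal intelligence of agent \pi: expected performance V across all
% computable environments \mu in the class E, weighted toward simple
% environments via 2^{-K(\mu)}, where K is Kolmogorov complexity.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

The simplicity weighting is what makes the measure total rather than task-relative: every computable environment contributes, but complex, ad-hoc environments contribute exponentially less.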
### Comparative table of widely used intelligence definitions and evaluation perspectives
|Perspective|Working definition (one sentence)|Typical evaluation approach|Representative sources|
|---|---|---|---|
|Psychology (everyday/APA)|Intelligence is the ability to derive information, learn from experience, adapt to the environment, and use thought and reason effectively.|Psychometrics: reliability/validity, factor models, standardized tests.|[https://www.apa.org/topics/intelligence](https://www.apa.org/topics/intelligence)|
|Behavioral test framing (Turing)|Machine intelligence can be operationalized via behavior indistinguishable from humans in dialogue under interrogation.|Human-judged conversational indistinguishability.|[https://courses.cs.umbc.edu/471/papers/turing.pdf](https://courses.cs.umbc.edu/471/papers/turing.pdf)|
|AI/AGI formalization (Legg–Hutter)|Intelligence is performance across a wide range of environments, formalizable via a simplicity-weighted expectation over tasks.|Theory-driven measure + broad benchmark families.|[https://arxiv.org/abs/0712.3329](https://arxiv.org/abs/0712.3329)|
|Cognitive neuroscience / network view|Intelligence differences partly reflect variation in distributed brain networks supporting control, working memory, and integration.|Neuroimaging and cognitive tasks; network models.|[https://pubmed.ncbi.nlm.nih.gov/17655784/](https://pubmed.ncbi.nlm.nih.gov/17655784/)|
|Collective intelligence|Groups can exhibit stable differences in effectiveness across tasks (“c”), partly influenced by interaction patterns and social sensitivity.|Group task batteries; collaboration metrics.|[https://pubmed.ncbi.nlm.nih.gov/20929725/](https://pubmed.ncbi.nlm.nih.gov/20929725/)|
|Governance and risk framing|“Intelligent systems” must be evaluated for harms and trustworthiness across metrics beyond accuracy (e.g., robustness, fairness, transparency).|Risk management frameworks and compliance controls.|[https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf](https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf)|
## Human intelligence
- Psychometrics — The scientific measurement of psychological traits (including intelligence) using quantified tests, models, and validation evidence. [https://www1.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf](https://www1.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf)
- General intelligence (g) — A statistical factor capturing the positive correlations among diverse cognitive tests, often modeled as the apex of hierarchical ability structures. [https://www1.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf](https://www1.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf)
- Mainstream psychometric consensus statement (historical artifact) — A widely cited 1990s-era summary asserting that intelligence is measurable and tests can be reliable/valid, while remaining embedded in contentious public debates. [https://www1.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf](https://www1.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf)
- Fluid vs crystallized intelligence — A distinction between reasoning/problem-solving in novel situations (fluid) and accumulated knowledge and skills (crystallized). [https://pmc.ncbi.nlm.nih.gov/articles/PMC5156710/](https://pmc.ncbi.nlm.nih.gov/articles/PMC5156710/)
- Parieto-frontal integration theory (P-FIT) — A neurocognitive model proposing that intelligence differences relate to efficiency/integration across distributed frontal–parietal brain systems. [https://pubmed.ncbi.nlm.nih.gov/17655784/](https://pubmed.ncbi.nlm.nih.gov/17655784/)
- Working memory (Baddeley–Hitch model) — A multi-component model of temporary storage and control (central executive plus specialized buffers) supporting complex cognition. [https://app.nova.edu/toolbox/instructionalproducts/edd8124/fall11/1974-Baddeley-and-Hitch.pdf](https://app.nova.edu/toolbox/instructionalproducts/edd8124/fall11/1974-Baddeley-and-Hitch.pdf)
- Episodic buffer — A proposed working-memory component that integrates/binds information across modalities and long-term memory into coherent episodes. [https://home.csulb.edu/~cwallis/382/readings/482/baddeley.pdf](https://home.csulb.edu/~cwallis/382/readings/482/baddeley.pdf)
- Executive functions (unity/diversity) — A family of control processes (e.g., inhibition, updating, shifting) that are separable yet correlated and predictive of complex task performance. [https://www.researchgate.net/profile/Ryan-Van-Patten/post/What_are_proper_tasks_to_estimate_executive_functions_and_resourcefulness_in_children/attachment/59d6372d79197b80779948cc/AS%3A391842349764610%401470433904539/download/Miyake%2Bet%2Bal.%2B2000.pdf](https://www.researchgate.net/profile/Ryan-Van-Patten/post/What_are_proper_tasks_to_estimate_executive_functions_and_resourcefulness_in_children/attachment/59d6372d79197b80779948cc/AS%3A391842349764610%401470433904539/download/Miyake%2Bet%2Bal.%2B2000.pdf)
- Executive attention view of working memory — A theory treating working-memory capacity as attention control under interference, linking it to fluid intelligence. [https://journals.sagepub.com/doi/10.1111/1467-8721.00160](https://journals.sagepub.com/doi/10.1111/1467-8721.00160)
- Cognitive control (PFC theory) — A theory that prefrontal cortex supports goal-directed behavior via active maintenance of task-relevant representations. [https://www.annualreviews.org/content/journals/10.1146/annurev.neuro.24.1.167](https://www.annualreviews.org/content/journals/10.1146/annurev.neuro.24.1.167)
- Default mode network (DMN) — A network showing characteristic activity patterns during rest and deactivation during many goal-directed tasks, relevant to baseline brain function. [https://www.pnas.org/doi/10.1073/pnas.98.2.676](https://www.pnas.org/doi/10.1073/pnas.98.2.676)
- Salience vs executive control networks — Intrinsic connectivity networks dissociable in function, often tied to interoception/selection (salience) vs control/working-memory demands (executive control). [https://www.jneurosci.org/content/27/9/2349.short](https://www.jneurosci.org/content/27/9/2349.short)
- Heuristics and biases — A program showing systematic deviations from ideal probabilistic reasoning under uncertainty due to cognitive heuristics. [https://sites.socsci.uci.edu/~bskyrms/bio/readings/tversky_k_heuristics_biases.pdf](https://sites.socsci.uci.edu/~bskyrms/bio/readings/tversky_k_heuristics_biases.pdf)
- Bounded rationality in human choice — A formal approach to decision-making that explicitly models internal computational limits as constraints on rational behavior. [https://iiif.library.cmu.edu/file/Simon_box00063_fld04838_bdl0001_doc0001/Simon_box00063_fld04838_bdl0001_doc0001.pdf](https://iiif.library.cmu.edu/file/Simon_box00063_fld04838_bdl0001_doc0001/Simon_box00063_fld04838_bdl0001_doc0001.pdf)
- Synaptic learning postulate (Hebbian learning) — A foundational idea linking learning to changes in synaptic strength driven by correlated activity. [https://pure.mpg.de/pubman/item/item_2346268_3/component/file_2346267/Hebb_1949_The_Organization_of_Behavior.pdf](https://pure.mpg.de/pubman/item/item_2346268_3/component/file_2346267/Hebb_1949_The_Organization_of_Behavior.pdf)
- Long-term potentiation (LTP) — An experimentally observed long-lasting increase in synaptic efficacy widely treated as a candidate cellular mechanism for learning and memory. [https://pmc.ncbi.nlm.nih.gov/articles/PMC1350458/](https://pmc.ncbi.nlm.nih.gov/articles/PMC1350458/)
- Hippocampal role in memory (H.M. era) — Evidence from medial temporal lobe lesions showing severe anterograde memory impairment, motivating modern memory systems theory. [https://pmc.ncbi.nlm.nih.gov/articles/PMC497229/](https://pmc.ncbi.nlm.nih.gov/articles/PMC497229/)
- Dopamine reward prediction error — Evidence that midbrain dopamine signals resemble temporal-difference prediction errors central to reinforcement learning theory. [https://folia.unifr.ch/global/documents/242358](https://folia.unifr.ch/global/documents/242358)
- Rescorla–Wagner learning model — A classical conditioning model where learning is driven by prediction error between expected and obtained reinforcement. [https://www.columbia.edu/~rk566/Session4/Theory%20of%20Pavlovian%20Conditioning.pdf](https://www.columbia.edu/~rk566/Session4/Theory%20of%20Pavlovian%20Conditioning.pdf)
- Heritability and genetics of intelligence — Behavioral genetics findings that intelligence differences are substantially heritable and highly polygenic, shaping modern “gene-hunting” and predictive models. [https://pubmed.ncbi.nlm.nih.gov/25224258/](https://pubmed.ncbi.nlm.nih.gov/25224258/)
- Large-scale GWAS of intelligence — Genome-wide studies identifying many loci associated with intelligence and suggesting enrichment in brain-related pathways (with substantial remaining unexplained variance). [https://pubmed.ncbi.nlm.nih.gov/29942086/](https://pubmed.ncbi.nlm.nih.gov/29942086/)
- Flynn effect — Documented multi-decade generational rises in IQ test performance across many countries, with ongoing debates about causes and recent trend reversals in some populations. [https://www.iapsych.com/iqmr/fe/LinkedDocuments/flynn1987.pdf](https://www.iapsych.com/iqmr/fe/LinkedDocuments/flynn1987.pdf)
- Cognitive development (Piaget) — A stage-based theory proposing qualitative shifts in children’s reasoning capacities across sensorimotor to formal operational stages. [https://books.google.com/books/about/The_Origins_of_Intelligence_in_Children.html?id=H7MkAQAAMAAJ](https://books.google.com/books/about/The_Origins_of_Intelligence_in_Children.html?id=H7MkAQAAMAAJ)
- Multiple intelligences (Gardner) — A theory proposing multiple relatively distinct “intelligences” (e.g., linguistic, spatial), influential in education but debated in psychometrics. [https://books.google.com/books/about/Frames_of_Mind.html?id=ObgOAAAAQAAJ](https://books.google.com/books/about/Frames_of_Mind.html?id=ObgOAAAAQAAJ)
- Triarchic theory (Sternberg) — A theory arguing intelligence comprises analytical, creative, and practical components beyond traditional IQ framing. [https://assets.cambridge.org/97805212/78911/excerpt/9780521278911_excerpt.pdf](https://assets.cambridge.org/97805212/78911/excerpt/9780521278911_excerpt.pdf)
- Emotional intelligence (ability model) — A framework treating emotion reasoning and regulation as measurable abilities that can predict some social outcomes beyond personality. [https://journals.sagepub.com/doi/10.2190/DUGG-P24E-52WK-6CDG](https://journals.sagepub.com/doi/10.2190/DUGG-P24E-52WK-6CDG)
- Theory of mind (ToM) — The capacity to attribute mental states to self and others to predict behavior, originally posed as a comparative cognition question in chimpanzees. [https://carta.anthropogeny.org/sites/default/files/file_fields/event/premack_and_woodruff_1978.pdf](https://carta.anthropogeny.org/sites/default/files/file_fields/event/premack_and_woodruff_1978.pdf)
- Reading the Mind in the Eyes Test — A widely used adult mentalizing measure assessing inference of mental states from eye-region facial cues. [https://docs.autismresearchcentre.com/papers/2001_BCetal_adulteyes.pdf](https://docs.autismresearchcentre.com/papers/2001_BCetal_adulteyes.pdf)
- Social brain hypothesis — An evolutionary hypothesis linking primate brain expansion to the demands of managing complex social relationships. [https://cognitionandculture.net/wp-content/uploads/Evolutionary-Anthropology-1998-Dunbar-The-social-brain-hypothesis.pdf](https://cognitionandculture.net/wp-content/uploads/Evolutionary-Anthropology-1998-Dunbar-The-social-brain-hypothesis.pdf)
- Cultural origins of cognition — A view emphasizing uniquely human cooperative communication, shared intentionality, and culture as drivers of cognitive specialization. [https://www.hup.harvard.edu/books/9780674005822](https://www.hup.harvard.edu/books/9780674005822)
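Several entries above (Rescorla–Wagner, dopamine reward prediction error) share a common core: learning driven by the gap between expected and obtained outcomes. A minimal Python sketch of the Rescorla–Wagner update, with learning-rate and asymptote values chosen purely for illustration:

```python
def rescorla_wagner(trials, alpha=0.1, lam=1.0):
    """Rescorla-Wagner: associative strength V moves on each trial by a
    fraction (alpha) of the prediction error (lam - V), where lam is the
    maximum strength the reinforcer supports."""
    V = 0.0
    history = []
    for _ in range(trials):
        error = lam - V      # prediction error: obtained minus expected
        V += alpha * error   # learning is proportional to surprise
        history.append(V)
    return history

curve = rescorla_wagner(50)
# V rises steeply at first (large error) and flattens as V approaches lam,
# reproducing the classic negatively accelerated learning curve.
```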
### Table of major human intelligence measurement tools and what they operationalize
|Instrument / construct|One-sentence operational target|Typical outputs|Notes on usage|Representative sources|
|---|---|---|---|---|
|WAIS-5|Adult cognitive ability across multiple domains aggregated into index scores and a full-scale IQ.|FSIQ and index scores.|Widely used in clinical and organizational contexts; 2024 publication date listed by publisher.|[https://www.pearsonassessments.com/en-us/Store/Professional-Assessments/Cognition-%26-Neuro/Wechsler-Adult-Intelligence-Scale-%7C-Fifth-Edition/p/P100071002](https://www.pearsonassessments.com/en-us/Store/Professional-Assessments/Cognition-%26-Neuro/Wechsler-Adult-Intelligence-Scale-%7C-Fifth-Edition/p/P100071002)|
|WISC-V|Child cognitive ability and domain index scores used for educational and clinical assessment.|Index scores; composite IQ.|Common in psychoeducational assessment and learning support planning.|[https://www.pearsonassessments.com/en-us/Store/Professional-Assessments/Cognition-%26-Neuro/Wechsler-Intelligence-Scale-for-Children-%7C-Fifth-Edition-/p/100000771](https://www.pearsonassessments.com/en-us/Store/Professional-Assessments/Cognition-%26-Neuro/Wechsler-Intelligence-Scale-for-Children-%7C-Fifth-Edition-/p/100000771)|
|Stanford–Binet 5 (SB-5)|Lifespan assessment spanning broad cognitive factors aligned with fluid reasoning, knowledge, quantitative reasoning, visual–spatial processing, and working memory.|Composite and factor scores.|Often used for giftedness and developmental assessment.|[https://www.wpspublish.com/sb-5-stanford-binet-intelligence-scales-fifth-edition](https://www.wpspublish.com/sb-5-stanford-binet-intelligence-scales-fifth-edition)|
|Raven’s Progressive Matrices (Raven’s 2 / APM)|Nonverbal abstract reasoning and pattern completion often treated as a proxy for fluid intelligence.|Standard scores / percentiles.|Designed to reduce language demands; multiple forms exist.|[https://www.pearsonassessments.com/en-us/Store/Professional-Assessments/Cognition-%26-Neuro/Raven%E2%80%99s-Progressive-Matrices-Second-Edition-%7C-Raven%27s-2/p/100001960](https://www.pearsonassessments.com/en-us/Store/Professional-Assessments/Cognition-%26-Neuro/Raven%E2%80%99s-Progressive-Matrices-Second-Edition-%7C-Raven%27s-2/p/100001960)|
|Working memory construct (Baddeley–Hitch)|Temporary maintenance/manipulation enabling reasoning, comprehension, and control.|Task battery scores.|Strong links to executive control theories and fluid intelligence research lines.|[https://app.nova.edu/toolbox/instructionalproducts/edd8124/fall11/1974-Baddeley-and-Hitch.pdf](https://app.nova.edu/toolbox/instructionalproducts/edd8124/fall11/1974-Baddeley-and-Hitch.pdf)|
|Executive functions (Miyake factors)|Core control functions (inhibition/shifting/updating) supporting goal-directed behavior.|Latent factors from task batteries.|Separability with shared variance (“unity and diversity”) is empirically supported.|[https://www.researchgate.net/profile/Ryan-Van-Patten/post/What_are_proper_tasks_to_estimate_executive_functions_and_resourcefulness_in_children/attachment/59d6372d79197b80779948cc/AS%3A391842349764610%401470433904539/download/Miyake%2Bet%2Bal.%2B2000.pdf](https://www.researchgate.net/profile/Ryan-Van-Patten/post/What_are_proper_tasks_to_estimate_executive_functions_and_resourcefulness_in_children/attachment/59d6372d79197b80779948cc/AS%3A391842349764610%401470433904539/download/Miyake%2Bet%2Bal.%2B2000.pdf)|
|ToM / mentalizing (Eyes test)|Inference of others’ mental states from minimal social cues.|Accuracy-based mentalizing score.|Used in autism/social cognition research with known psychometric caveats.|[https://docs.autismresearchcentre.com/papers/2001_BCetal_adulteyes.pdf](https://docs.autismresearchcentre.com/papers/2001_BCetal_adulteyes.pdf)|
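The g factor referenced throughout this section is commonly extracted as the first factor (or principal component) of a matrix of positive correlations among test scores, the "positive manifold." A toy sketch using pure-Python power iteration on an invented 3×3 correlation matrix (the numbers are made up for illustration, not real test data):

```python
def first_principal_component(R, iters=200):
    """Power iteration: repeatedly multiply a vector by the matrix and
    normalize; it converges to the dominant eigenvector, i.e. the
    'g loading' pattern when R is a positive-manifold correlation matrix."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Hypothetical correlations among three cognitive tests, all positive.
R = [[1.0, 0.5, 0.4],
     [0.5, 1.0, 0.6],
     [0.4, 0.6, 1.0]]
loadings = first_principal_component(R)
# All loadings come out positive: every test "loads on" the common factor.
```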
## Biological intelligence
- Evolutionary adaptation lens — Biological intelligence can be treated as adaptive fit between organism strategies and ecological demands across evolutionary time. [https://cognitionandculture.net/wp-content/uploads/Evolutionary-Anthropology-1998-Dunbar-The-social-brain-hypothesis.pdf](https://cognitionandculture.net/wp-content/uploads/Evolutionary-Anthropology-1998-Dunbar-The-social-brain-hypothesis.pdf)
- Neural intelligence (vertebrate/mammalian) — Intelligence in animals with nervous systems depends on perception, memory, valuation, and control circuits shaped by plasticity and learning rules. [https://pmc.ncbi.nlm.nih.gov/articles/PMC1350458/](https://pmc.ncbi.nlm.nih.gov/articles/PMC1350458/)
- Social cognition in primates — Social reasoning capacities (e.g., ToM-like competencies) are studied via comparative methods, often showing partial but not fully human-like abilities. [https://www.eva.mpg.de/documents/Elsevier/Call_Does_TrendsCogSci_2008_1554401.pdf](https://www.eva.mpg.de/documents/Elsevier/Call_Does_TrendsCogSci_2008_1554401.pdf)
- Dopaminergic learning mechanisms — Reward-related dopaminergic activity aligns with prediction-error learning and provides a biological bridge to reinforcement learning models. [https://folia.unifr.ch/global/documents/242358](https://folia.unifr.ch/global/documents/242358)
- Memory systems specialization — Hippocampal and medial temporal structures are central for forming new episodic memories, as shown by lesion evidence. [https://pmc.ncbi.nlm.nih.gov/articles/PMC497229/](https://pmc.ncbi.nlm.nih.gov/articles/PMC497229/)
- Large-scale brain networks — Intrinsic networks (DMN, salience, executive control) describe recurring functional organization relevant to attention, control, and cognition. [https://www.pnas.org/doi/10.1073/pnas.98.2.676](https://www.pnas.org/doi/10.1073/pnas.98.2.676)
- Basal cognition (conceptual program) — A research program exploring cognition-like properties (learning, decision, inference) in systems without neurons, often emphasizing minimal mechanisms. [https://pmc.ncbi.nlm.nih.gov/articles/PMC10770251/](https://pmc.ncbi.nlm.nih.gov/articles/PMC10770251/)
- Slime mold problem solving (Physarum maze) — Experiments show Physarum polycephalum can form efficient paths through mazes, motivating “computation by morphology” models. [https://www.nature.com/articles/35035159](https://www.nature.com/articles/35035159)
- Physarum decision-making review — A synthesis arguing that Physarum supports multi-objective foraging decisions and may serve as a “model brain” for minimal decision processes. [https://pubmed.ncbi.nlm.nih.gov/26189159/](https://pubmed.ncbi.nlm.nih.gov/26189159/)
- Bacterial communication and coordination — Bacteria coordinate via chemical signaling (including quorum sensing), enabling colony-level behaviors that resemble distributed decision-making. [https://www.annualreviews.org/content/journals/10.1146/annurev.cellbio.21.012704.131001](https://www.annualreviews.org/content/journals/10.1146/annurev.cellbio.21.012704.131001)
- Quorum sensing (review) — A canonical review describing architectures of bacterial cell–cell communication and how it regulates group behaviors. [https://www.annualreviews.org/content/journals/10.1146/annurev.cellbio.21.012704.131001](https://www.annualreviews.org/content/journals/10.1146/annurev.cellbio.21.012704.131001)
- “Bacterial linguistic communication” hypothesis — A controversial framing that interprets bacterial signaling and genomic plasticity as a form of social intelligence and shared “interpretation” of cues. [https://pubmed.ncbi.nlm.nih.gov/15276612/](https://pubmed.ncbi.nlm.nih.gov/15276612/)
- Immune system as cognitive system — A theoretical proposal that immune behavior can be interpreted via a “cognitive paradigm” using internal “images” such as self/nonself. [https://pubmed.ncbi.nlm.nih.gov/1463581/](https://pubmed.ncbi.nlm.nih.gov/1463581/)
- Plant intelligence (overview) — A debated position arguing that plant adaptive behavior and signaling can be framed as intelligence linked to fitness and problem solving. [https://academic.oup.com/bioscience/article/66/7/542/2463205](https://academic.oup.com/bioscience/article/66/7/542/2463205)
- Plant intelligence foundations (Royal Society) — A review positioning plant intelligence research as a maturing but controversial line since early 2000s debates. [https://royalsocietypublishing.org/rsfs/article/7/3/20160098/64153/The-foundations-of-plant-intelligencePlant](https://royalsocietypublishing.org/rsfs/article/7/3/20160098/64153/The-foundations-of-plant-intelligencePlant)
- Critiques of plant intelligence — A counter-position arguing that key concepts (e.g., individuality) make “intelligence” a misleading label for many plant processes. [https://www.tandfonline.com/doi/pdf/10.4161/psb.4.5.8276](https://www.tandfonline.com/doi/pdf/10.4161/psb.4.5.8276)
- Plants-are-intelligent argument (philosophy-of-biology angle) — An argument defending plant intelligence claims and summarizing critiques, highlighting conceptual stakes and empirical examples. [https://pmc.ncbi.nlm.nih.gov/articles/PMC6948212/](https://pmc.ncbi.nlm.nih.gov/articles/PMC6948212/)
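Quorum sensing, as described above, can be caricatured as a density threshold: cells secrete an autoinducer into a shared volume, and a group behavior switches on once concentration passes a threshold. A deliberately simplified sketch (all constants invented; real quorum-sensing kinetics are far richer):

```python
def quorum_response(n_cells, rate=1.0, decay=0.5, threshold=10.0):
    """Steady-state autoinducer concentration in a well-mixed volume:
    production (n_cells * rate) balanced against first-order decay.
    The colony 'switches on' a group behavior above a threshold."""
    steady_state = n_cells * rate / decay
    return steady_state, steady_state >= threshold

conc_sparse, active_sparse = quorum_response(3)    # sparse colony
conc_dense, active_dense = quorum_response(50)     # dense colony
# Only the dense colony crosses the threshold and triggers group behavior.
```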
## Artificial intelligence
- Artificial intelligence (field definition by founding proposal) — AI is a research program aiming to model/simulate aspects of learning and intelligence in machines via precise descriptions and implementations. [https://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf](https://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf)
- Symbolic search (A*) — A graph-search algorithm that uses admissible heuristics to find optimal paths efficiently, foundational for planning and problem solving. [https://people.stfx.ca/jdelamer/courses/csci-564/_downloads/b2220c66675ddde471ca1795147b8e86/A_Formal_Basis_for_the_Heuristic_Determination_of_Minimum_Cost_Paths.pdf](https://people.stfx.ca/jdelamer/courses/csci-564/_downloads/b2220c66675ddde471ca1795147b8e86/A_Formal_Basis_for_the_Heuristic_Determination_of_Minimum_Cost_Paths.pdf)
- Symbolic planning (STRIPS) — A planning system representing world states with logical predicates and searching operator sequences to satisfy goals. [https://ai.stanford.edu/~nilsson/OnlinePubs-Nils/PublishedPapers/strips.pdf](https://ai.stanford.edu/~nilsson/OnlinePubs-Nils/PublishedPapers/strips.pdf)
- Perceptron (early neural model) — A linear-threshold learning model historically central to early neural network research and debates about representational limits. [https://www.academia.edu/60542953/The_perceptron_a_probabilistic_model_for_information_storage_and_organization_in_the_brain](https://www.academia.edu/60542953/The_perceptron_a_probabilistic_model_for_information_storage_and_organization_in_the_brain)
- Backpropagation (modern neural training catalyst) — A method for learning internal representations by propagating error gradients through multilayer networks.
- Universal approximation (theoretical result) — A result showing certain neural network classes can approximate broad families of functions, supporting expressive-power claims.
- Convolutional neural networks (CNNs) — Architectures using local receptive fields and weight sharing that achieved strong document recognition and later large-scale vision performance. [https://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf](https://vision.stanford.edu/cs598_spring07/papers/Lecun98.pdf)
- ImageNet dataset milestone — A large-scale hierarchical image database that catalyzed modern computer vision benchmarking and representation learning. [https://www.image-net.org/static_files/papers/imagenet_cvpr09.pdf](https://www.image-net.org/static_files/papers/imagenet_cvpr09.pdf)
- AlexNet milestone — A deep CNN trained at scale that marked a major leap in ImageNet classification and accelerated deep learning adoption. [https://proceedings.neurips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf](https://proceedings.neurips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)
- Transformer architecture — A sequence model using self-attention to enable parallelizable learning and long-range dependency handling, foundational for modern NLP. [https://arxiv.org/abs/1706.03762](https://arxiv.org/abs/1706.03762)
- BERT-style bidirectional pretraining — A pretraining approach producing deep bidirectional language representations that improved performance across NLU tasks. [https://arxiv.org/abs/1810.04805](https://arxiv.org/abs/1810.04805)
- GPT-style generative pretraining — A paradigm using generative pretraining for broad language understanding and transfer. [https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf](https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf)
- Scaling and few-shot learning (GPT-3) — Evidence that scaling language models can induce strong task-agnostic few-shot performance across many benchmarks. [https://arxiv.org/pdf/2005.14165](https://arxiv.org/pdf/2005.14165)
- Reinforcement learning (RL) textbook framing — A foundational synthesis of RL concepts including value functions, policy optimization, and temporal-difference learning. [https://incompleteideas.net/book/the-book-2nd.html](https://incompleteideas.net/book/the-book-2nd.html)
- Q-learning — A temporal-difference control method with convergence guarantees in tabular Markovian settings under standard sampling conditions. [https://link.springer.com/article/10.1007/BF00992698](https://link.springer.com/article/10.1007/BF00992698)
- Deep Q-Networks (DQN) — A system combining deep neural function approximation with Q-learning to achieve strong Atari performance from pixels. [https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf](https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf)
- Proximal Policy Optimization (PPO) — A policy-gradient family using clipped surrogate objectives to stabilize updates with strong empirical performance. [https://arxiv.org/pdf/1707.06347](https://arxiv.org/pdf/1707.06347)
- Deep RL from human preferences — A method learning reward models from pairwise human comparisons to train RL agents without explicit reward functions. [https://arxiv.org/abs/1706.03741](https://arxiv.org/abs/1706.03741)
- InstructGPT / RLHF — A demonstration that fine-tuning language models with human feedback can improve instruction-following, reduce toxicity, and improve helpfulness (with tradeoffs). [https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf)
- Constitutional AI — A method using AI feedback guided by a “constitution” of rules/principles to reduce harmful behavior with reduced reliance on human labels. [https://www-cdn.anthropic.com/7512771452629584566b6303311496c262da1006/Anthropic_ConstitutionalAI_v2.pdf](https://www-cdn.anthropic.com/7512771452629584566b6303311496c262da1006/Anthropic_ConstitutionalAI_v2.pdf)
- Variational autoencoders (VAE) — Latent-variable generative models trained via variational inference with a reparameterization trick enabling scalable optimization. [https://arxiv.org/abs/1312.6114](https://arxiv.org/abs/1312.6114)
- Generative adversarial networks (GANs) — A generative framework training a generator and discriminator in a minimax game to learn data distributions. [https://papers.neurips.cc/paper/5423-generative-adversarial-nets.pdf](https://papers.neurips.cc/paper/5423-generative-adversarial-nets.pdf)
- Diffusion models (DDPM lineage) — Generative models learning to reverse a gradual noise process, achieving high-quality synthesis in modern settings. [https://proceedings.neurips.cc/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf](https://proceedings.neurips.cc/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf)
- Adversarial examples (robustness challenge) — Evidence that small worst-case perturbations can reliably fool many neural networks, motivating adversarial training and robustness science. [https://arxiv.org/pdf/1412.6572](https://arxiv.org/pdf/1412.6572)
- Common corruption robustness (ImageNet-C/P) — Benchmarks evaluating classifier stability to realistic corruptions and perturbations beyond worst-case adversarial noise. [https://openreview.net/pdf?id=HJz6tiCqYm](https://openreview.net/pdf?id=HJz6tiCqYm)
- Uncertainty via dropout (Bayesian approximation) — A theory interpreting dropout training as approximate Bayesian inference, enabling uncertainty estimates in deep learning. [https://proceedings.mlr.press/v48/gal16.pdf](https://proceedings.mlr.press/v48/gal16.pdf)
- Meta-learning (MAML) — A method training model parameters for rapid adaptation to new tasks with few gradient steps. [https://arxiv.org/pdf/1703.03400](https://arxiv.org/pdf/1703.03400)
- Interpretability as a science (position paper) — A call for rigorous definitions and evaluation protocols for interpretability, warning that “interpretability” is context-dependent and under-specified. [https://arxiv.org/abs/1702.08608](https://arxiv.org/abs/1702.08608)
- LIME (local surrogate explanations) — A method approximating a classifier locally with an interpretable model to explain individual predictions. [https://arxiv.org/abs/1602.04938](https://arxiv.org/abs/1602.04938)
- SHAP (Shapley additive explanations) — A unified feature-attribution framework with axiomatic grounding connecting to Shapley values. [https://proceedings.neurips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf](https://proceedings.neurips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf)
- Fairness through awareness — A framework for fair classification based on a task-specific similarity metric, emphasizing individual-level fairness constraints. [https://www.cs.toronto.edu/~toni/Papers/awareness.pdf](https://www.cs.toronto.edu/~toni/Papers/awareness.pdf)
- Equality of opportunity — A fairness criterion targeting error-rate disparities (e.g., true positive rates) across sensitive groups and methods to post-process predictors. [https://arxiv.org/pdf/1610.02413](https://arxiv.org/pdf/1610.02413)
- Differential privacy — A formal privacy guarantee limiting how much any single individual’s data can affect outputs, enabling principled privacy-preserving analysis. [https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/dwork.pdf](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/dwork.pdf)
- Big data disparate impact analysis — A legal/technical argument that data mining can inherit societal bias and create discriminatory outcomes even without explicit intent. [https://www.cs.yale.edu/homes/jf/BarocasSelbst.pdf](https://www.cs.yale.edu/homes/jf/BarocasSelbst.pdf)
- No Free Lunch (AI design implication) — A formal reason that “general intelligence” requires carefully formalized priors/inductive biases rather than expecting one method to dominate everywhere. [https://www.cs.ubc.ca/~hutter/earg/papers07/00585893.pdf](https://www.cs.ubc.ca/~hutter/earg/papers07/00585893.pdf)
- Cognitive architectures (ACT-R) — A computational theory and modeling framework for human cognition aiming to explain how knowledge and mechanisms produce behavior. [https://act-r.psy.cmu.edu/](https://act-r.psy.cmu.edu/)
- Cognitive architectures (Soar) — A general cognitive architecture for intelligent behavior emphasizing unified mechanisms, problem spaces, and learning from experience. [https://soar.eecs.umich.edu/](https://soar.eecs.umich.edu/)
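The adversarial-examples entry above refers to the fast gradient sign method (FGSM): perturb the input by ε times the sign of the loss gradient with respect to the input. A self-contained sketch on a hand-coded logistic model (the weights and input are invented for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, y):
    """Binary cross-entropy for a linear logistic model, label y in {0, 1}."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm(w, x, y, eps=0.1):
    """x_adv = x + eps * sign(grad_x loss). For a logistic model the
    input gradient has the closed form (p - y) * w."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w = [1.5, -2.0, 0.5]          # invented weights
x, y = [1.0, 0.2, -0.3], 1    # invented input, positive label
x_adv = fgsm(w, x, y)
# The one-step, bounded perturbation strictly increases the loss here.
```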
### AI evaluation and measurement map (benchmarks, metrics, and desiderata)
- Benchmark suites (why they exist) — Benchmarks operationalize “capabilities” through task sets but also create incentives and blind spots, so multiple benchmarks and metrics are needed. [https://crfm.stanford.edu/helm/](https://crfm.stanford.edu/helm/)
- GLUE — A multi-task NLU benchmark designed to measure cross-task generality and support diagnostic linguistic evaluation. [https://arxiv.org/abs/1804.07461](https://arxiv.org/abs/1804.07461)
- MMLU — A multi-domain test of academic/professional knowledge and problem solving across dozens of subjects. [https://arxiv.org/abs/2009.03300](https://arxiv.org/abs/2009.03300)
- BIG-bench — A large task suite designed to probe beyond-current capabilities, including reasoning and bias-related tasks and emergent behaviors with scale. [https://arxiv.org/abs/2206.04615](https://arxiv.org/abs/2206.04615)
- HELM — A “living” evaluation framework emphasizing multi-metric assessment (accuracy, calibration, robustness, fairness, bias, toxicity, efficiency) across scenarios. [https://crfm.stanford.edu/helm/](https://crfm.stanford.edu/helm/)
|Evaluation dimension|What it intends to measure (one sentence)|Example operationalization|Representative sources|
|---|---|---|---|
|Accuracy / task performance|Correctness on task-defined outputs under benchmark conditions.|Classification accuracy; exact match; BLEU-like measures.|[https://arxiv.org/abs/1804.07461](https://arxiv.org/abs/1804.07461)|
|Calibration|Whether predicted probabilities or confidence meaningfully match empirical correctness rates.|Reliability diagrams; ECE; risk–coverage tradeoffs in selective prediction.|[https://crfm.stanford.edu/helm/](https://crfm.stanford.edu/helm/)|
|Robustness|Stability under distribution shift, corruptions, or adversarial perturbations.|ImageNet-C/P; adversarial accuracy.|[https://openreview.net/pdf?id=HJz6tiCqYm](https://openreview.net/pdf?id=HJz6tiCqYm)|
|Uncertainty estimation|Whether models express uncertainty when appropriate, supporting safer decisions.|Bayesian approximations (dropout) and predictive uncertainty metrics.|[https://proceedings.mlr.press/v48/gal16.pdf](https://proceedings.mlr.press/v48/gal16.pdf)|
|Interpretability|Human-usable explanations for model behavior suitable for a given context.|Local explanations (LIME); axiomatic attributions (SHAP).|[https://arxiv.org/abs/1602.04938](https://arxiv.org/abs/1602.04938)|
|Fairness / non-discrimination|Equal treatment or equal error properties across groups or individuals.|Equality of opportunity; individual fairness constraints.|[https://arxiv.org/pdf/1610.02413](https://arxiv.org/pdf/1610.02413)|
|Privacy|Limiting information leakage about individuals in data.|Differential privacy guarantees.|[https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/dwork.pdf](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/dwork.pdf)|
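The calibration row above mentions expected calibration error (ECE). A minimal binned-ECE sketch (the bin count and toy data are invented for illustration):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bucket predictions by confidence, then take the weighted
    average gap between mean confidence and empirical accuracy per bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece

# Perfectly calibrated toy case: 80% confidence, 4 of 5 correct, so ECE ~ 0.
ece = expected_calibration_error([0.8] * 5, [1, 1, 1, 1, 0])
```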
A canonical agent–environment loop for reinforcement learning (and many control-like intelligence models) is summarized below. [https://incompleteideas.net/book/the-book-2nd.html](https://incompleteideas.net/book/the-book-2nd.html)
```mermaid
flowchart LR
  ENV[Environment] -->|observation o_t| AG[Agent]
  AG -->|action a_t| ENV
  ENV -->|reward r_t| AG
  AG -->|policy/value update| AG
```
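The loop above can be sketched as tabular, epsilon-greedy value learning in the Sutton–Barto framing. The toy environment here (one state, two noisy-reward actions) and all hyperparameters are invented for illustration:

```python
import random

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Agent-environment loop: pick an action (epsilon-greedy), receive a
    reward from the environment, update the action-value estimate Q
    toward the observed reward (a single-step TD-style update)."""
    rng = random.Random(seed)
    Q = [0.0, 0.0]                       # one state, two actions
    for _ in range(episodes):
        if rng.random() < epsilon:       # explore
            a = rng.randrange(2)
        else:                            # exploit the current estimate
            a = 0 if Q[0] >= Q[1] else 1
        # Environment: action 1 pays 1.0 on average, action 0 pays 0.2.
        r = rng.gauss(1.0, 0.1) if a == 1 else rng.gauss(0.2, 0.1)
        Q[a] += alpha * (r - Q[a])       # move Q toward the sampled reward
    return Q

Q = train()
# After training, the value estimate favors the higher-paying action 1.
```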
## Collective and hybrid intelligence
- Collective intelligence (definition by mission) — A field studying how people and computers can be connected so that—collectively—they act more intelligently than any individual alone. [https://cci.mit.edu/](https://cci.mit.edu/)
- Collective intelligence factor (c) — Empirical evidence that groups show stable performance differences across task batteries analogous to individual g, partly tied to interaction patterns and social sensitivity. [https://pubmed.ncbi.nlm.nih.gov/20929725/](https://pubmed.ncbi.nlm.nih.gov/20929725/)
- Organizational/team collective intelligence (review framing) — A synthesis analyzing mechanisms (e.g., equality in turn-taking) and interventions that can improve group problem solving. [https://web.mit.edu/curhan/www/docs/Articles/15341_Readings/Collective_Intelligence/Woolley_Aggarwal_Malone_Collective%20Intelligence%20in%20Teams%20and%20Organizations.pdf](https://web.mit.edu/curhan/www/docs/Articles/15341_Readings/Collective_Intelligence/Woolley_Aggarwal_Malone_Collective%20Intelligence%20in%20Teams%20and%20Organizations.pdf)
- Wisdom-of-crowds thesis — A popular-level argument that under certain conditions (diversity, independence, aggregation) groups can outperform individuals in estimation and judgment. [https://sentry.rmu.edu/SentryHTML/pdf/lib_finn_DISC8710_wisdom_of_crowds.pdf](https://sentry.rmu.edu/SentryHTML/pdf/lib_finn_DISC8710_wisdom_of_crowds.pdf)
- Swarm intelligence (book-level formalization) — A framework translating social insect/self-organization principles into engineered algorithms for optimization and control. [https://global.oup.com/academic/product/swarm-intelligence-9780195131598](https://global.oup.com/academic/product/swarm-intelligence-9780195131598)
- Particle swarm optimization (PSO) — An optimization method inspired by social flocking behavior, using populations (“particles”) that move through a search space by combining individual and social signals. [https://www.cs.tufts.edu/comp/150GA/homeworks/hw3/_reading6%201995%20particle%20swarming.pdf](https://www.cs.tufts.edu/comp/150GA/homeworks/hw3/_reading6%201995%20particle%20swarming.pdf)
- Ant colony optimization (ACO) metaheuristic — A family of stochastic combinatorial optimization methods inspired by pheromone-mediated path finding in ants. [https://web2.qatar.cmu.edu/~gdicaro/15382/additional/aco-book.pdf](https://web2.qatar.cmu.edu/~gdicaro/15382/additional/aco-book.pdf)
- Hybrid intelligence (human–AI teaming) — A design goal in which human judgment and machine pattern recognition/computation are combined, requiring careful evaluation across accuracy and risk dimensions. [https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf](https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf)
- Human feedback as a collective intelligence mechanism — RL from human preferences operationalizes a “collective” supervisory signal distributed across evaluators rather than encoded rewards. [https://arxiv.org/abs/1706.03741](https://arxiv.org/abs/1706.03741)
- Large-scale benchmark authorship as collective intelligence — BIG-bench operationalizes collective scientific judgment by aggregating tasks from hundreds of contributors to probe model capabilities. [https://arxiv.org/abs/2206.04615](https://arxiv.org/abs/2206.04615)
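As a toy illustration of the wisdom-of-crowds conditions listed above (diversity, independence, aggregation), the sketch below assumes independent, unbiased noisy guesses and compares the error of the mean-aggregated estimate to the typical individual's error; all numbers are invented for illustration.

```python
import random
import statistics

def crowd_vs_individual(true_value=100.0, n_people=200, noise_sd=20.0, seed=1):
    """Simulate independent, unbiased noisy guesses; return the error of the
    mean-aggregated estimate and the average error of a single guesser."""
    rng = random.Random(seed)
    guesses = [rng.gauss(true_value, noise_sd) for _ in range(n_people)]
    crowd_error = abs(statistics.mean(guesses) - true_value)
    typical_individual_error = statistics.mean(abs(g - true_value) for g in guesses)
    return crowd_error, typical_individual_error
```

Under these assumptions the aggregate error shrinks roughly with the square root of the group size, which is exactly the regime where the conditions (independence, unbiasedness) hold; correlated or biased guesses break the effect.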
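The PSO entry above can be sketched concretely: each particle's velocity combines an inertia term with an individual attraction (toward its own best position) and a social attraction (toward the swarm's best). The constants below are common defaults, not values mandated by the 1995 paper.

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=100, seed=0,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimization sketch for minimizing f."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                    # each particle's best position
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])   # individual term
                           + c2 * r2 * (gbest[d] - x[i][d]))     # social term
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

On a smooth test function such as the 2-D sphere `f(p) = sum(t * t for t in p)`, this sketch converges close to the origin within the default budget.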
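Similarly, the ACO entry above admits a compact sketch for shortest paths: ants build routes with probabilities shaped by pheromone and inverse edge cost, pheromone evaporates each round, and better tours deposit more. The parameter values and the deposit rule are typical textbook choices, not the only variant.

```python
import random

def aco_shortest_path(dist, start, goal, n_ants=10, iters=30, seed=0,
                      alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """ACO sketch for a shortest path on a weighted directed graph,
    where dist[i][j] is the edge cost or None if no edge exists."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]   # pheromone per edge
    best_path, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ant in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != goal:
                cands = [j for j in range(n)
                         if dist[node][j] is not None and j not in visited]
                if not cands:
                    path = None           # dead end: discard this ant
                    break
                weights = [tau[node][j] ** alpha * (1.0 / dist[node][j]) ** beta
                           for j in cands]
                node = rng.choices(cands, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path is None:
                continue
            length = sum(dist[a][b] for a, b in zip(path, path[1:]))
            tours.append((path, length))
            if length < best_len:
                best_path, best_len = path, length
        # evaporate everywhere, then deposit proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for path, length in tours:
            for a, b in zip(path, path[1:]):
                tau[a][b] += q / length
    return best_path, best_len
```

On a small graph the pheromone trail quickly concentrates on the cheapest route, which is the stigmergic feedback loop the bullet describes.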
## Ethics, governance, and societal impacts
- OECD AI Principles — Government-adopted principles promoting innovative and trustworthy AI that respects human rights and democratic values. [https://www.oecd.org/en/topics/ai-principles.html](https://www.oecd.org/en/topics/ai-principles.html)
- UNESCO Recommendation on AI Ethics — A global normative instrument emphasizing human dignity, rights, transparency, and oversight for AI systems. [https://unesdoc.unesco.org/ark%3A/48223/pf0000380455](https://unesdoc.unesco.org/ark%3A/48223/pf0000380455)
- NIST AI Risk Management Framework (AI RMF 1.0) — A voluntary framework for managing AI risks to individuals, organizations, and society across the AI lifecycle. [https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf](https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf)
- NIST Generative AI profile (companion resource) — A cross-sectoral profile linking generative AI risks and controls to the AI RMF structure. [https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf](https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf)
- EU AI Act (risk-based governance overview) — An EU framework describing prohibited practices, high-risk systems, transparency obligations, and governance structures with phased timelines. [https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai](https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai)
- EU AI Act implementation guidance (prohibited practices) — Commission guidance elaborating prohibited AI practices with legal explanations and practical examples. [https://ai-act-service-desk.ec.europa.eu/sites/default/files/2025-08/guidelines_on_prohibited_artificial_intelligence_practices_established_by_regulation_eu_20241689_ai_act_english_ied3r5nwo50xggpcfmwckm3nuc_112367-1.PDF](https://ai-act-service-desk.ec.europa.eu/sites/default/files/2025-08/guidelines_on_prohibited_artificial_intelligence_practices_established_by_regulation_eu_20241689_ai_act_english_ied3r5nwo50xggpcfmwckm3nuc_112367-1.PDF)
- Fairness constraints in ML (core technical lineage) — Seminal definitions and methods (individual fairness; equality of opportunity) formalize discrimination criteria and mitigation strategies. [https://www.cs.toronto.edu/~toni/Papers/awareness.pdf](https://www.cs.toronto.edu/~toni/Papers/awareness.pdf)
- Privacy protection (differential privacy) — A formal privacy standard enabling bounded leakage guarantees, now central to responsible data use in intelligent systems. [https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/dwork.pdf](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/dwork.pdf)
- Transparency and explainability tooling — Explanation methods (LIME/SHAP) operationalize interpretability but require rigorous evaluation and context-specific justification. [https://arxiv.org/abs/1602.04938](https://arxiv.org/abs/1602.04938)
- Robustness and safety arguments — Adversarial vulnerability and distribution shift motivate robustness benchmarks and defenses as safety-critical evaluation components. [https://arxiv.org/pdf/1412.6572](https://arxiv.org/pdf/1412.6572)
- “Intelligence” as state activity (policy polysemy) — National security intelligence definitions emphasize secret state activities to understand and influence threats, intersecting with AI governance through surveillance and analysis capabilities. [https://www.cia.gov/resources/csi/static/cc27ce9b678dc69d4bdeef410feffa20/Article-New-Approach-to-Old-Question-Sep-2023.pdf](https://www.cia.gov/resources/csi/static/cc27ce9b678dc69d4bdeef410feffa20/Article-New-Approach-to-Old-Question-Sep-2023.pdf)
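The equality-of-opportunity criterion cited in the fairness bullet above can be checked with a small metric sketch: the gap in true positive rate between two groups. The binary labels, predictions, and group attribute below are hypothetical inputs, not drawn from the cited papers.

```python
def tpr_gap(y_true, y_pred, group):
    """Equality-of-opportunity diagnostic: absolute difference in true
    positive rate between group 0 and group 1 (binary labels/predictions)."""
    def tpr(g):
        positives = [(yt, yp) for yt, yp, gi in zip(y_true, y_pred, group)
                     if gi == g and yt == 1]
        if not positives:
            return 0.0
        return sum(yp for _, yp in positives) / len(positives)
    return abs(tpr(0) - tpr(1))
```

A gap of zero means qualified members of both groups are recognized at the same rate; mitigation methods in the cited lineage constrain or post-process classifiers toward that condition.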
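The differential privacy bullet can likewise be grounded with the standard Laplace mechanism: adding noise with scale sensitivity/epsilon to a numeric query yields epsilon-differential privacy. The inverse-CDF sampling below is one common way to draw Laplace noise; the scale choice follows the standard construction.

```python
import math
import random

def laplace_mechanism(true_answer, sensitivity, epsilon, seed=None):
    """Release a numeric query answer with epsilon-differential privacy by
    adding Laplace noise of scale sensitivity / epsilon."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                      # uniform in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))  # inverse Laplace CDF
    return true_answer + noise
```

Smaller epsilon means a stronger privacy guarantee and proportionally larger noise; for a count query the sensitivity is 1, since one person changes the count by at most one.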
### Governance frameworks comparison table
|Framework / instrument|Primary purpose (one sentence)|Typical users|Notable emphasis|Representative sources|
|---|---|---|---|---|
|OECD AI Principles|Establish shared democratic values-based principles for trustworthy AI policy and practice.|Governments, policymakers, industry.|Human rights and democratic values; practical flexibility.|[https://www.oecd.org/en/topics/ai-principles.html](https://www.oecd.org/en/topics/ai-principles.html)|
|UNESCO AI ethics recommendation|Provide global normative guidance for ethical AI anchored in rights, dignity, and oversight.|Member states, regulators, civil society.|Human dignity, transparency, fairness, human oversight.|[https://unesdoc.unesco.org/ark%3A/48223/pf0000380455](https://unesdoc.unesco.org/ark%3A/48223/pf0000380455)|
|NIST AI RMF 1.0|Offer structured risk management for AI systems across lifecycle processes and outcomes.|Organizations building/deploying AI.|Risk identification/management; trustworthy AI characteristics.|[https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf](https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf)|
|EU AI Act overview|Regulate AI uses via risk tiers (prohibited, high-risk, transparency, minimal risk) and governance controls.|EU providers/deployers; regulators.|Risk-based constraints; phased compliance timelines; enforcement.|[https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai](https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai)|
|EU AI Act prohibited-practices guidance|Clarify interpretation of prohibited AI practices for compliance.|Providers, deployers, enforcement bodies.|Practical examples and legal explanations of bans.|[https://ai-act-service-desk.ec.europa.eu/sites/default/files/2025-08/guidelines_on_prohibited_artificial_intelligence_practices_established_by_regulation_eu_20241689_ai_act_english_ied3r5nwo50xggpcfmwckm3nuc_112367-1.PDF](https://ai-act-service-desk.ec.europa.eu/sites/default/files/2025-08/guidelines_on_prohibited_artificial_intelligence_practices_established_by_regulation_eu_20241689_ai_act_english_ied3r5nwo50xggpcfmwckm3nuc_112367-1.PDF)|
More: [[AI-written intelligence]]