## Tags
- Part of: [[Game theory]] [[Evolutionary game theory]] [[Artificial Intelligence]] [[Reinforcement learning]] [[Economics]] [[Free energy principle]] [[Machine learning]] [[Mathematics]]
- Related:
- Includes:
- Additional:
## Definitions
- A research direction asking: some [[Systems science|systems]] in the world appear to behave like “[[Agent|agents]]”: they make consistent decisions and sometimes display complex goal-seeking behaviour. Can we develop a robust [[Mathematics|mathematical]] description of such systems, and use it to build provably aligned AI agents? (A minimal formal sketch follows below.)
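  As one illustration of what a “mathematical description of an agent” can look like (an assumption added here, not taken from this note), the textbook picture from [[Game theory|game theory]] and [[Reinforcement learning|reinforcement learning]] treats an agent as an expected-utility maximiser. In the sketch below, $\mathcal{A}$, $\mathcal{O}$, $P$, and $U$ are illustrative symbols: actions, outcomes, the agent's beliefs about outcomes given an action, and its utility function.

  ```latex
  \documentclass{article}
  \usepackage{amsmath, amssymb}
  \begin{document}
  % Illustrative sketch (not from the source note): an agent modelled as an
  % expected-utility maximiser over actions, given beliefs P and utility U.
  \[
    a^{*} \;=\; \arg\max_{a \in \mathcal{A}} \;
    \mathbb{E}_{o \sim P(\cdot \mid a)}\bigl[\, U(o) \,\bigr]
    \;=\; \arg\max_{a \in \mathcal{A}} \sum_{o \in \mathcal{O}} P(o \mid a)\, U(o).
  \]
  \end{document}
  ```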
## Main resources
- [Artificial Intelligence @ MIRI](https://intelligence.org)
## Landscapes
- [Agent Foundations - AI Alignment Forum](https://www.alignmentforum.org/tag/agent-foundations)
- [Artificial Intelligence @ MIRI](https://intelligence.org)
## Additional resources
- [Why Agent Foundations? An Overly Abstract Explanation — LessWrong](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation)