## Tags
- Part of: [[Intelligence]], [[Science]], [[Engineering]], [[Computer science]], [[Technology]], [[Natural science]], [[Mathematics]], [[Formal science]]
- Related: [[Collective Intelligence]], [[General intelligence]], [[Artificial General Intelligence]], [[Theory of Everything in Intelligence]], [[Biological intelligence]]
- Includes:
- Additional:
## Definitions
- A [[Systems theory|system]] that is [[Intelligence|intelligent]] and constructed by humans.
- A branch of [[Computer science]] which develops and studies [[Intelligence|intelligent]] machines.
- Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software which enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined [[goal|goals]].
## Main resources
- [Artificial intelligence - Wikipedia](https://en.wikipedia.org/wiki/Artificial_intelligence)
<iframe src="https://en.wikipedia.org/wiki/Artificial_intelligence" allow="fullscreen" allowfullscreen="" style="height:100%;width:100%; aspect-ratio: 16 / 5; "></iframe>
### Lectures
- Stanford machine learning: [playlist](https://www.youtube.com/playlist?list=PLoROMvodv4rNyWOpJg_Yh4NSqI4Z4vOYy), [another playlist](https://www.youtube.com/playlist?list=PLoROMvodv4rMiGQp3WXShtMGgzqpfVfbU), [Machine Learning Specialization on Coursera](https://www.coursera.org/specializations/machine-learning-introduction)
- Stanford transformers [https://www.youtube.com/playlist?list=PLoROMvodv4rNiJRchCzutFw5ItR_Z27CM](https://www.youtube.com/playlist?list=PLoROMvodv4rNiJRchCzutFw5ItR_Z27CM)
- Stanford generative models including diffusion [https://www.youtube.com/playlist?list=PLoROMvodv4rPOWA-omMM6STXaWW4FvJT8](https://www.youtube.com/playlist?list=PLoROMvodv4rPOWA-omMM6STXaWW4FvJT8)
- Stanford deep learning [https://www.youtube.com/playlist?list=PLoROMvodv4rOABXSygHTsbvUz4G_YQhOb](https://www.youtube.com/playlist?list=PLoROMvodv4rOABXSygHTsbvUz4G_YQhOb)
- Stanford natural language processing with deep learning [https://www.youtube.com/playlist?list=PLoROMvodv4rMFqRtEuo6SGjY4XbRIVRd4](https://www.youtube.com/playlist?list=PLoROMvodv4rMFqRtEuo6SGjY4XbRIVRd4)
- [Search | MIT OpenCourseWare | Free Online Course Materials on Machine Learning](https://ocw.mit.edu/search/?q=machine%20learning), [Search | MIT OpenCourseWare | Free Online Course Materials on AI](https://ocw.mit.edu/search/?q=AI)
- Harvard AI [Harvard CS50’s Artificial Intelligence with Python – Full University Course - YouTube](https://www.youtube.com/watch?v=5NgNicANyqM&t=16s)
- [Neural Networks: Zero to Hero - YouTube](https://www.youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ)
- [What is a Transformer? Neel Nanda - YouTube](https://youtube.com/playlist?list=PL7m7hLIqA0hoIUPhC26ASCVs_VrqcDpAz&si=L5WmZ7a0LCC4ML6y)
### Books
- [fast.ai – fast.ai—Making neural nets uncool again](https://www.fast.ai/)
- [Dive into Deep Learning — Dive into Deep Learning 1.0.3 documentation](https://www.d2l.ai/)
- [Machine Learning with PyTorch and Scikit-Learn by Sebastian Raschka, Yuxi (Hayden) Liu, and Vahid Mirjalili (Amazon)](https://www.amazon.com/Machine-Learning-PyTorch-Scikit-Learn-learning-ebook/dp/B09NW48MR1)
- [Are there any books I should read to learn machine learning from scratch? : r/learnmachinelearning](https://www.reddit.com/r/learnmachinelearning/comments/13y4rzn/are_there_any_books_i_should_read_to_learn/)
- [best AI books - Google search](https://www.google.com/search?q=best+AI+books)
- [best machine learning books - Google search](https://www.google.com/search?q=best+machine+learning+books)
## Landscapes
#### By approach
- [[Symbolic AI]]
- ![[Symbolic AI#Definitions]]
- [[Logic-based AI]]
- [[Knowledge-based systems]]
- [[Expert systems]]
- [[Ontologies]]
- [[Semantic networks]]
- [[Statistical AI]]
- [[Machine learning]]
- ![[Machine learning#Definitions]]
- [[Supervised learning]]
- [[Unsupervised learning]]
- [[Semi-supervised learning]]
- [[Reinforcement learning]]
- [[Probabilistic AI]]
- [[Bayesian AI]]
- [[Quantum machine learning]]
- [[Thermodynamic AI]]
- [[Connectionist AI]]
- [[Neural networks]] and [[Deep Learning]]
- [[Feedforward neural networks]]
- [[Convolutional neural networks]] (CNNs)
- [[Recurrent neural networks]] (RNNs)
- [[Long short-term memory]] (LSTM)
- [[Transformer]]
- [[Graph neural networks]]
- [[Capsule networks]]
- [[Spiking neural networks]]
- [[Quantum neural networks]]
- [[Generative adversarial networks]] (GANs)
- [[Variational autoencoders]] (VAEs)
- [[Diffusion models]]
- [[Flow-based models]]
- [[Attention mechanisms]]
- [[Memory-augmented neural networks]]
- [[Neural turing machine]]
		- [[Neural Cellular Automata]]
- [[Scaling hypothesis]], [[Bitter Lesson]]
- [[Transfer learning]]
- [[Self-supervised learning]]
- [[Contrastive learning]]
- [[Hybrid AI]]
- ![[Hybrid AI#Definitions]]
- [[Neurosymbolic AI]]
- [[Evolutionary AI]]
- [[Genetic algorithms]]
- [[Evolutionary strategies]]
- [[Swarm intelligence]]
- [[Cognitive AI]]
- [[Cognitive architectures]]
- [[Embodied AI]]
- [[Robotics]]
- [[Distributed AI]]
- [[Multi-agent systems]]
- [[Quantum AI]]
- [[Quantum machine learning]]
- [[Quantum neural networks]]
- [[Quantum annealing]]
- [[Biologically-inspired AI]]
- [[Neuromorphic AI]]
- [[Spiking neural networks]]
- [[Reservoir computing]]
- [[Explainable AI]]
#### Crossovers [[Omnidisciplionarity]]
- [[Artificial Intelligence x Biological Intelligence]]
- [[Artificial Intelligence x Biological Intelligence x Collective Intelligence]]
- [[Artificial intelligence x Science]]
- [[Artificial Intelligence x Mathematics]]
- [[AlphaProof]]
- [[Artificial Intelligence x Physics]]
- [[FermiNet]]
- [[Artificial Intelligence x Chemistry]]
- [[Artificial Intelligence x Biology]]
- [[AlphaFold]]
- [[AlphaProteo]]
- [[Artificial Intelligence x Neuroscience]]
- [[Artificial intelligence x Programming]]
- [[Artificial intelligence x Engineering]]
- [[AlphaChip]]
- [[Artificial intelligence x Healthcare]]
- [[Artificial intelligence x Psychotherapy]]
- [[Artificial intelligence x Finance]]
- [[Artificial Intelligence x Generalization]]
I love AI for science (like biology and physics), mathematics, healthcare, education, technology development for good, understanding the nature of intelligence, increasing the standard of living for all, the progress of civilization, and so on. I want to see more of that, please!
I want to see AI applied much more in science, technology, engineering, math, healthcare, altruistic use cases, etc. I want to see it as a tool that generates abundance for everyone. I want the technology to build a better future for all. I want the technology to fight poverty and other world problems and risks. I want the research to help us understand the nature of intelligence. I want the technology to empower all humans who don't want to see the world burn and who are not dictators. I want its power to be used for good. I want the power to not be concentrated. I want to see it developed safely and ethically, in a steerable way. I want people to be compensated properly. I'm trying to push for that and to help work towards these goals more!
I think AI is already technologically disruptive in various industries. AI is everywhere right now, and there's more and more of it, not just GenAI. Stuff like AI for foundational research and engineering in science and math supercharges all sorts of engineering and technology across the board. More and more programmers are using some sort of coding copilot, which is useful, though most of them are not using SotA systems like Claude, Cursor, Perplexity, Replit, etc., often because they don't know about them or because of the points above. Lots of code monkey work, unit testing, simple web dev, etc. is being automated. AI is contributing to nontrivial frontier AI research and development. It's used to design better chips and robots. Lots of translators and certain types of writers are, frankly, done for. Many companies squeeze image/video/text generation for easy profit at all costs, for example in PR or in the entertainment and art industries, but IMO that has recently been giving the technology a bad reputation, as it's often profit over quality and ethics, which sucks; this technology can be used in much better ways there, with more quality and ethics, but the incentives have to be aligned better. Call centers and customer service are being automated (sometimes with better, sometimes worse quality). Autonomous vehicles are now a reality; robot dogs, automated drones, and other machines are already used in surveillance, defence, and wars right now, which I don't want, but some are used for good and useful things too: all sorts of specialized robotics for automation in resource and technology production and for household use cases is in its glory, and humanoid robotics is just emerging. Planning systems are also big in defence and wars (I don't want that). Healthcare is supercharged, for example with disease classification from images (I love AI for healthcare!). The financial market is ML bots fighting each other; recommender systems are everywhere in social media (often useful, but also often a curse); semantic search is everywhere (often useful); visual recognition and editing of photos is used often (often useful); plus optimization of supply chains, better techniques for agriculture (we need more there), automated threat detection in cybersecurity, and optimizations in the energy sector. AI-powered scams etc. also exist, and I want to regulate those harmful use cases. This comes with a lot of dual-use technology.
And I think the big factors limiting AI's impact inside industry, outside of academia, and outside of things like being superhuman in various games like Go, Chess, Dota, Poker, etc., are:
1) the bureaucracy of integrating the technology is slow compared to the progress of the technology
2) people are learning to use the technology very slowly
3) issues around privacy, copyright, ethics in some contexts, and other legal issues
4) engineering to adapt the foundational systems for specific use cases is slower than the progress of the foundational systems
...
AI can be used for bad, good, and neutral things. Let's maximize the good use cases!
#### Applications ([[AI engineering]])
- [[Artificial Intelligence#Crossovers|automating]] mundane tasks (dishes, laundry), [[Artificial intelligence x Healthcare|healthcare]] ([[AMIE]]), [[Artificial intelligence x Programming|programming]] (coding [[AI copilots]] such as GitHub Copilot, [Cursor](https://www.cursor.com/), Replit, and [[autonomous software engineers]]), [[Artificial intelligence x Science|science]] ([[AlphaFold]]), physics ([[FermiNet]]), [[Artificial Intelligence x Mathematics|mathematics]] ([[AlphaProof]]), [[Artificial Intelligence x Engineering|technology]] development ([[AlphaChip]], [[virtual reality]]), [[chatbot]] assistants grounded in reality, [[education]], [[information searching]], minimizing various [[risks]] and [[crises]], [[transportation]], [[manufacturing]], [[security]], [[cybersecurity]], [[energy optimization]], [[supply chain optimization]], [[weather forecasting]], [[agriculture]], [[translation]], [[recommendations]], [[finance]], [[call centers]], [[entertainment]], [[legal services]], [[games]], [[robotics]] for good, etc. by [[prediction|predicting]], [[forecasting]], [[generation|generating]], [[classification]], [[analysis]], [[clustering]], [[segmentation|segmenting]] etc., with [[AI engineering]] methods using [[statistics|statistical]] models, [[deep learning]] models, [[generative AI]] models ([[Large language model|large language models]], image/sound/video models, [[multimodal]] models), [[classification]] models, [[reinforcement learning]] models, [[symbolic AI|expert systems]], etc. by [[building]] and [[training]] models, [[finetuning]], [[prompt engineering]], [[retrieval augmented generation]], [[agent]] and [[multiagent]] frameworks, etc. using [[PyTorch]], [[Keras]], [[Scikit-learn]], [[FastAI]], the [[OpenAI]] or [[Anthropic]] API, [[Llama]] locally or deployed, [[Llamaindex]], [[Langchain]], [[Autogen]], [[LangGraph]], [[Vector database|vector databases]], etc. (a minimal model-training sketch follows below)
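Since the bullet above names building and training models with [[PyTorch]] among the core methods, here is a minimal, generic sketch of that step; the synthetic dataset, layer sizes, and hyperparameters are all illustrative assumptions, not from any specific project.

```python
# Minimal "build and train a model" sketch in PyTorch; everything here
# (synthetic data, layer sizes, learning rate) is an illustrative assumption.
import torch
import torch.nn as nn

X = torch.randn(512, 20)              # synthetic features
y = (X.sum(dim=1) > 0).long()         # synthetic binary labels
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):              # full-batch training for simplicity
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print((model(X).argmax(dim=-1) == y).float().mean().item())  # training accuracy
```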
#### [[AI engineering]] by application
- [[Generative AI]]
- [[Large language model]] (LLM)
- [[o1]]
- [[Text-to-image models]]
- [[Text-to-video models]]
- [[Text-to-3D models]]
- [[Music generation]]
- [[Code generation]]
- [[AlphaGo]]
- [[AlphaZero]]
#### More
- By skill:
	- [\[2311.02462\] Levels of AGI: Operationalizing Progress on the Path to AGI](https://arxiv.org/abs/2311.02462)
	- [[9bb2cfbcdbb8274393aa4b4fd2d4b604_MD5.jpeg|Open: Pasted image 20240115053147.png]]
![[9bb2cfbcdbb8274393aa4b4fd2d4b604_MD5.jpeg]]
- [[Artificial narrow intelligence]]
- [[Artificial General Intelligence]]
- [[Superintelligence]]
- [Outline of artificial intelligence - Wikipedia](https://en.wikipedia.org/wiki/Outline_of_artificial_intelligence)
- <iframe src="https://en.wikipedia.org/wiki/Outline_of_artificial_intelligence" allow="fullscreen" allowfullscreen="" style="height:100%;width:100%; aspect-ratio: 16 / 5; "></iframe>
- [[Algorithm|Algorithms]] and techniques
- [[Search algorithm]]
- [[Optimization search]]
- [[Logic]]
- [[Probabilistic methods for uncertain reasoning]]
- [[Bayesian network]]
- [[Bayesian inference]]
- [[Classification]]
- [[Artificial neural networks]]
- [[Robotics]]
- [[Neuromorphic engineering]]
- [[Cognitive architecture]]
- [[Multiagent system]]
- Applications
- Reasoning and problem solving
- [[Automating science]]
- [[Expert system]]
- [[Automated planning and scheduling]]
- [[Constraint satisfaction]]
- [[Automated theorem proving]]
- [[Knowledge representation]]
- [[Planning]]
- [[Learning]]
- [[Machine learning]]
- [[Natural language processing]]
- [[Image generation]]
- [[Audio generation]]
- [[Video generation]]
- [[Perception]]
- [[Robotics]]
- [[Control theory|Control]]
- [[Social intelligence]]
- [[Game playing]]
- [[Computational creativity]]
- [[Personal assistant]]
- [Map of Artificial Intelligence - YouTube](https://youtu.be/hDWDtH1jnXg?si=CP-4cX70dNz7U4tp)
<iframe title="Map of Artificial Intelligence" src="https://www.youtube.com/embed/hDWDtH1jnXg?feature=oembed" height="113" width="200" allowfullscreen="" allow="fullscreen" style="aspect-ratio: 1.76991 / 1; width: 100%; height: 100%;"></iframe>
- [All Machine Learning algorithms explained in 17 min - YouTube](https://www.youtube.com/watch?v=E0Hmnixke2g)
<iframe title="Map of Biology" src="https://www.youtube.com/embed/E0Hmnixke2g?feature=oembed" height="113" width="200" allowfullscreen="" allow="fullscreen" style="aspect-ratio: 1.76991 / 1; width: 100%; height: 100%;"></iframe>
- [[Images/98bcc7afe4e66c0f5d1d6b65fcc3e519_MD5.jpeg|Open: Pasted image 20241001055944.png]]
![[Images/98bcc7afe4e66c0f5d1d6b65fcc3e519_MD5.jpeg]]
- [[Images/2f712aa9f9992bf03afb1124508a8805_MD5.jpeg|Open: Pasted image 20241001064142.png]]
![[Images/2f712aa9f9992bf03afb1124508a8805_MD5.jpeg]]
- [[Images/e2c3bbe9b975694d5e7e4089ecc9ab12_MD5.jpeg|Open: Pasted image 20241001064410.png]]
![[Images/e2c3bbe9b975694d5e7e4089ecc9ab12_MD5.jpeg]]
- [Generative AI in a Nutshell - how to survive and thrive in the age of AI - YouTube](https://www.youtube.com/watch?v=2IK3DFHRFfw)
<iframe title="Generative AI in a Nutshell - how to survive and thrive in the age of AI" src="https://www.youtube.com/embed/2IK3DFHRFfw?feature=oembed" height="113" width="200" allowfullscreen="" allow="fullscreen" style="aspect-ratio: 1.76991 / 1; width: 100%; height: 100%;"></iframe>
- [GitHub - dair-ai/ML-YouTube-Courses: 📺 Discover the latest machine learning / AI courses on YouTube.](https://github.com/dair-ai/ML-YouTube-Courses)
- [Applications of artificial intelligence - Wikipedia](https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence)
- [[AI engineering]]
	- [[AI engineering#Landscapes]] ![[AI engineering#Landscapes]]
- Phenomena:
- [[Consciousness]]
- [[Artificial consciousness]]
- Related fields:
- [[Statistics]]
- [[Data science]]
- [[Neurotechnology]]
- [[Selfreplicating machines]]
- [[Singularity]]
- [[Recursive self-improvement]]
- [[Intelligence explosion]]
- [[Hive mind]]
	- [[Robot swarm]]
- [[Transhumanism]]
- [[Risks of artificial intelligence]]
- [[AI safety]]
- Theory
- [[Mechanistic interpretability]]
- [[Mathematical theory of artificial intelligence]]
- [[Explainable artificial intelligence]]
- [[Intelligence#Definitions]]
- ![[Intelligence#Definitions]]
- [[Intelligence#Idealizations]]
- ![[Intelligence#Idealizations]]
- [[Artificial General Intelligence#Definitions]]
- ![[Artificial General Intelligence#Definitions]]
- [[Artificial Intelligence x Biological Intelligence x Collective Intelligence]]
- [[Generalization]]
- [[Artificial Intelligence x Generalization]]
- [[Curiosity]]
- [[Agent]], [[Multiagent system]]
Let's make a benchmark that tests AI systems on causal modeling, strong generalization, continuous learning, data and compute efficiency, stability/reliability in symbolic reasoning, agency, more complex tasks across time and space, long-term planning, optimal Bayesian inference, etc. The ultimate benchmark would be giving AI systems all the information that Newton, Maxwell, Boltzmann, Einstein, Feynman, Edward Witten, von Neumann, etc. had before their discoveries in physics or other fields and then seeing if the system could come up with the same or isomorphic discoveries.
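A hypothetical skeleton of such an evaluation harness; the task names, model interface, and scoring below are placeholders I made up to show the shape of the idea, not an existing benchmark.

```python
# Sketch of a "rediscovery" benchmark harness; all names and the scoring
# function are hypothetical placeholders, not an implemented benchmark.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str                      # e.g. "causal modeling", "long-term planning"
    prompt: str                    # only pre-discovery information goes in here
    score: Callable[[str], float]  # 1.0 iff the answer is isomorphic to the discovery

def evaluate(model: Callable[[str], str], tasks: list[Task]) -> dict[str, float]:
    return {t.name: t.score(model(t.prompt)) for t in tasks}

tasks = [Task(
    name="rediscover special relativity",
    prompt="Given only pre-1905 physics (Maxwell, Michelson-Morley, ...), derive the kinematics.",
    score=lambda answer: float("lorentz" in answer.lower()),  # crude placeholder check
)]
print(evaluate(lambda p: "Lorentz transformations ...", tasks))
```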
## State of the art and news
- [AI News • Buttondown](https://buttondown.com/ainews/archive/), various subreddits ([LocalLlama](https://www.reddit.com/r/LocalLLaMA/), [Machine Learning](https://www.reddit.com/r/MachineLearning/), [Singularity](https://www.reddit.com/r/singularity/)), [X](https://x.com), [AI explained](https://www.youtube.com/@aiexplained-official) , [bycloud](https://www.youtube.com/@bycloudAI), [ML street talk](https://www.youtube.com/c/machinelearningstreettalk), [Yannic Kilcher](https://www.youtube.com/@YannicKilcher), [Dwarkesh Patel](https://www.youtube.com/@DwarkeshPatel), [Astral Codex Ten | Scott Alexander | Substack](https://www.astralcodexten.com/), [Hacker News](https://news.ycombinator.com/), [AI Alignment Forum](https://www.alignmentforum.org/), [LessWrong](https://www.lesswrong.com/), 80K hours, Theo Jaffee, Inside View, Future of Life Institute, Lex Fridman, Cognitive Revolution "How AI Changes Everything", Wes Roth, latent.space, etc.
## Future
- [[Computronium]]
- From [The Singularity Is Nearer - Wikipedia](https://en.wikipedia.org/wiki/The_Singularity_Is_Nearer) by [[Ray Kurzweil]]:
[[Images/4ee554bf075eb3a5879c61c1d14e1e51_MD5.jpeg|Open: Pasted image 20240919001041.png]]
![[Images/4ee554bf075eb3a5879c61c1d14e1e51_MD5.jpeg]]
## Brainstorming
[[Thoughts AI technical 11]]
[[Thoughts AI technical 10]]
[[Thoughts AI technical 9]]
[[Thoughts AI technical 8]]
[[Thoughts AI technical 7]]
[[Thoughts AI technical 6]]
[[Thoughts AI technical 5]]
[[Thoughts AI technical 4.5]]
[[Thoughts AI technical 4]]
[[Thoughts AI technical 3]]
[[Thoughts AI technical 2]]
[[Thoughts AI technical]]
[[Thoughts future of AI politics geopolitics futurology]]
[[Thoughts intelligence]]
[[Thoughts intelligence 3]]
[[Thoughts intelligence 2]]
[[Thoughts futurology]]
[[Thoughts comparing AI and biological intelligence]]
[[Thoughts AI]]
[[Thoughts AI x physics]]
[[Thoughts AI science]]
[[Thoughts AI programming coding software engineerin]]
[[Thoughts AI nontechnical]]
[[Thoughts AI nontechnical 2]]
[[Thoughts AI mechinterp]]
You can throw all the math from [[Statistical mechanics]], [[differential geometry]], [[group theory]], linear algebra, statistics, probability, category theory, classical mechanics, topology, graph theory, geometry, functional analysis, signal processing, automata theory, algebra, etc. at understanding the [[Mathematical theory of artificial intelligence]].
Interpretability by Anthropic etc. is one of my favorite fields, and I love to dig deep into it! I was at a workshop by one of the founders of the field, I tried to replicate his paper, and I played with some of the interpretability techniques in code.
[An Extremely Opinionated Annotated List of My Favourite Mechanistic Interpretability Papers v2](https://www.alignmentforum.org/posts/NfFST5Mio7BCAQHPA/an-extremely-opinionated-annotated-list-of-my-favourite-1)
[Mapping the Mind of a Large Language Model](https://www.anthropic.com/research/mapping-mind-language-model)
[Chris Olah - Looking Inside Neural Networks with Mechanistic Interpretability Chris Olah 2023](https://www.youtube.com/watch?v=2Rdp9GvcYOE)
[Open Problems in Mechanistic Interpretability: A Whirlwind Tour | Neel Nanda 2023](https://www.youtube.com/watch?v=EuQjiNrK77M)
[I Am The Golden Gate Bridge & Why That's Important.](https://www.youtube.com/watch?v=QqrGt5GrGfw)
My current model of the biggest AI models is:
Deep learning systems, each with their own architecture, are a weird messy ecosystem of learned emergent interconnected circuits. Various circuits memorize and others generalize, which is on a spectrum. An example of a circuit is an induction head. [In-context Learning and Induction Heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html)
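A toy illustration of the induction-head idea (my own construction for intuition, not code from the paper): on a sequence whose second half repeats the first, an idealized induction head at position t attends to the token right after the previous occurrence of token[t], which is exactly the token it should predict next.

```python
# Idealized induction-head attention pattern on a repeated sequence (toy sketch).
import torch

tokens = torch.tensor([5, 2, 9, 7, 5, 2, 9, 7])  # second half repeats the first

T = len(tokens)
pattern = torch.zeros(T, T)
for t in range(1, T):
    for s in range(t):  # look for an earlier occurrence of the current token
        if tokens[s] == tokens[t] and s + 1 < t:
            pattern[t, s + 1] = 1.0  # attend to the token that followed it last time
print(pattern)  # rows 4-7 each attend one step past their earlier match
```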
These circuits are in superposition [Toy Models of Superposition 2022](https://transformer-circuits.pub/2022/toy_model/index.html) and/or localized and distributed in various ways. They are differently fuzzy and differently stable under random perturbations. They compose into various meta-circuits like indirect object identification. [Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small](https://arxiv.org/abs/2211.00593)
Initial layers of a model encode more low-level feature detectors, and later layers form more composed, complex concept detectors. For example, edge detectors, color detectors, curve detectors, etc. compose into snout detectors and fur detectors, and eventually into dog detectors. [Zoom In: An Introduction to Circuits 2020](https://distill.pub/2020/circuits/zoom-in/), [Curve Detectors 2020](https://distill.pub/2020/circuits/curve-detectors/), [Visualizing Weights 2021](https://distill.pub/2020/circuits/visualizing-weights/)
On top of these layers you can do disentangling and decomposition of features and circuits using sparse autoencoders and other methods, which can be more fine-grained or more coarse-grained. This is done in mechanistic interpretability, a field that reverse engineers AI systems.
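A minimal sketch of the sparse autoencoder idea from the dictionary-learning papers cited below; the dimensions and L1 coefficient are illustrative assumptions, and real SAEs add details (decoder norm constraints, resampling dead features, etc.).

```python
# Minimal sparse autoencoder (SAE) sketch for disentangling features from activations.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)  # overcomplete feature dictionary
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, acts: torch.Tensor):
        feats = torch.relu(self.encoder(acts))       # sparse feature coefficients
        return self.decoder(feats), feats

sae = SparseAutoencoder(d_model=512, d_hidden=4096)
acts = torch.randn(64, 512)                          # stand-in for residual-stream activations
recon, feats = sae(acts)
l1_coeff = 1e-3                                      # sparsity pressure (assumed value)
loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().sum(dim=-1).mean()
loss.backward()  # trained over many activations, sparse `feats` become interpretable features
```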
And I see LLMs as semantic vector search engines with weak generalization capabilities. They have an internal ecosystem of vector representations of features and heuristics that you can retrieve with prompt queries ([Francois Chollet's description](https://x.com/fchollet/status/1709242747293511939)). They retrieve compressed knowledge and (sometimes less, sometimes more fuzzy) vector programs that are more concrete or abstract, with weak generalization capabilities and (sometimes better, sometimes worse) composition. They can technically memorize compressed vector representations of various concrete and abstract programs (heuristics) and knowledge to some level of granularity, with weak generalization. But they can also encode almost arbitrary generalizing circuits as we enhance our reverse engineering knowledge and our techniques for steering the training and inference process. The new reinforcement learning chain-of-thought paradigm in OpenAI's o1 [Learning to Reason with LLMs](https://openai.com/index/learning-to-reason-with-llms/) goes more towards retrieving reasoning heuristics and composing them. [Is o1-preview reasoning?](https://www.youtube.com/watch?v=nO6sDk6vO0g) It's paradoxical how they can compose some features, but on others they fail utterly lol. They're specialized intelligences in different ways than humans are specialized intelligences. [General Intelligence: Define it, measure it, build it](https://www.youtube.com/watch?v=nL9jEy99Nh0), [[o1]], [Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models](https://x.com/JJitsev/status/1842727628463128968)
You want a perfect sweet spot between memorization and generalization for optimal intelligence.
This paper is also great: "We observe empirically the presence of four learning phases: comprehension, grokking, memorization, and confusion. We find representation learning to occur only in a "Goldilocks zone" (including comprehension and grokking) between memorization and confusion. We find on transformers the grokking phase stays closer to the memorization phase (compared to the comprehension phase), leading to delayed generalization. The Goldilocks phase is reminiscent of "intelligence from starvation" in Darwinian evolution, where resource limitations drive discovery of more efficient solutions." [Towards Understanding Grokking: An Effective Theory of Representation Learning](https://arxiv.org/abs/2205.10343) , [Explaining grokking through circuit efficiency](https://arxiv.org/abs/2309.02390)
Also transformers, now one of the most popular neural network architectures, are technically Turing complete (only infinite memory is missing, which is what neural Turing machines try to solve), so you can simulate any program you want [Attention is Turing Complete](https://www.jmlr.org/papers/volume22/20-302/20-302.pdf) and [Memory Augmented Large Language Models are Computationally Universal](https://arxiv.org/abs/2301.04589); lately chain of thought also makes LLMs more computationally universal [Chain of Thought Empowers Transformers to Solve Inherently Serial Problems](https://twitter.com/denny_zhou/status/1835761801453306089).
Here they play with gates like XOR computed in superposition at scale [Toward A Mathematical Framework for Computation in Superposition](https://www.lesswrong.com/posts/2roZtSr5TGmLjXMnT/toward-a-mathematical-framework-for-computation-in).
Here they found emergent finite automata of HTML in the weights [Towards Monosemanticity: Decomposing Language Models With Dictionary Learning](https://transformer-circuits.pub/2023/monosemantic-features), which they then extended in [Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet](https://transformer-circuits.pub/2024/scaling-monosemanticity/).
Here they found an emergent general trigonometric algorithm for a specialized task (modular addition) in the weights [Progress measures for grokking via mechanistic interpretability, reverse engineering modular addition](https://arxiv.org/abs/2301.05217).
Here they found a causal chess board state in the weights that can be manipulated [Chess-GPT's Internal World Model](https://adamkarvonen.github.io/machine_learning/2024/01/03/chess-world-models.html), and here an Othello board state [Actually, Othello-GPT Has A Linear Emergent World Representation](https://www.neelnanda.io/mechanistic-interpretability/othello).
Here they play with causal graphs in weights [Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models](https://arxiv.org/abs/2403.19647v1).
The Hydra effect shows how removing one part of a neural network makes other parts "adapt": later components shift their behaviour to compensate for the loss. [The Hydra Effect: Emergent Self-repair in Language Model Computations](https://arxiv.org/abs/2307.15771)
Here they use (I believe) the symbolic RASP programming language to understand what the weights do and to implement algorithms [Thinking Like Transformers](https://arxiv.org/abs/2106.06981) and [What Algorithms can Transformers Learn? A Study in Length Generalization](https://arxiv.org/abs/2310.16028).
Here they analyze learned general symmetries. [A Toy Model of Universality: Reverse Engineering How Networks Learn Group Operations 2023](https://arxiv.org/abs/2302.03025)
Here they talk about reverse engineering OpenFold, an open-source version of the AlphaFold protein folding AI system! [Mechanistic Interpretability - Stella Biderman | Stanford MLSys #70](https://www.youtube.com/watch?v=P7sjVMtb5Sg), [Chemistry Nobel goes to developers of AlphaFold AI that predicts protein structures](https://www.nature.com/articles/d41586-024-03214-7)
The flexibility of deep learning is magical, absolutely necessary, and useful for a lot of tasks, but on other tasks it can be tragic if we don't reverse engineer it properly: the systems can be less reliable, resilient, stable, steerable, etc. than we need. That can be improved by reverse engineering and thus steering. There is less of this flexibility in symbolic AI and neurosymbolic AI, but those can be more efficient.
But current mainstream AI systems are slowly morphing into neurosymbolic AI.
Various math AIs like AlphaGeometry and AlphaProof combine an LLM with symbolic systems such as Lean [AI achieves silver-medal standard solving International Mathematical Olympiad problems](https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/)
o1 does its reasoning with chain-of-thought RL and a reward model, not just pure deep learning [Introducing OpenAI o1-preview](https://openai.com/index/introducing-openai-o1-preview/)
AlphaCode uses MCTS and sampling [Competitive programming with AlphaCode](https://deepmind.google/discover/blog/competitive-programming-with-alphacode/)
AlphaFold for protein folding uses a graph network (with attention), which is one type of inductive bias and can technically be seen as neurosymbolic. [AlphaFold 3 predicts the structure and interactions of all of life's molecules](https://blog.google/technology/ai/google-deepmind-isomorphic-alphafold-3-ai-model/)
etc.
The field of trying to understand the mind of AI is exploding!
If the scaling hypothesis believers are right, as they have been to a certain degree so far, then [[superintelligence]] is coming soon. However, if they're wrong, all the hundreds of billions and potentially trillions of dollars invested could be viewed as one of the biggest bets that became one of the biggest wastes of resources in human history. [Can AI Scaling Continue Through 2030?](https://epochai.org/blog/can-ai-scaling-continue-through-2030), [X](https://x.com/EpochAIResearch/status/1826038729263219193), [$125B for Superintelligence? 3 Models Coming, Sutskever's Secret SSI, & Data Centers (in space)... - YouTube](https://youtu.be/QCcJtTBvSKk). Microsoft etc. want to build a $100 billion supercomputer, for example. OpenAI's [[o1]] showed new inference-time scaling laws, so we will see how far this goes.
In another 6 months we will possibly have o1 (full), Orion/GPT-5, Claude 3.5 Opus, Gemini 2 (maybe with AlphaProof and AlphaCode integrated), Grok 3, and possibly Llama 4.
The AI capability I'm most interested in: if you gave a system all of classical mechanics, could it derive general relativity and quantum mechanics from it? That seems to be a stronger kind of out-of-distribution generalization than current types of systems can do, but I'm open to being mistaken. Also give it all (or most of) the known empirical data from experiments before the phase shift, and have it derive them from those too.
LLMs are such extremely fascinating systems relative to all the things they are capable of doing when they approximate the training data manifold by curve fitting with attention and interpolate on top of it with all sorts of vector program combinations. And it still boggles my mind how the models can sometimes generalize far out of distribution with just curve fitting, by getting into a generalizing short-program circuit, which often lies in a flat local minimum, when they grok!
Models are the data
Memorization is the first step towards generalization
Weight decay in deep learning incentivizes sparse generalizing circuits instead of inefficient distributed lookup table memorizing circuits
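In PyTorch this pressure is a single optimizer argument; the decay value below is an illustrative assumption (grokking setups often use unusually large decay).

```python
# Weight decay as the sparsity/simplicity pressure, sketched with AdamW.
import torch

model = torch.nn.Linear(128, 128)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
# AdamW applies decoupled decay each step: w <- w - lr * wd * w,
# so solutions that need fewer/smaller weights (generalizing circuits)
# are favored over large distributed lookup tables.
```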
Can all the missing capabilities and steering of AI systems be achieved in deep learning by incentivizing the emergent growth of them as grokked robust symbolic generalizing circuits encoded in matrix multiplications with nonlinearities?
It would be great to have a mathematical steering model that makes AI models trained on any arbitrary structured (mathematical) data grok that mathematical structure as a generalizing circuit
Grokking in mechanistic interpretability of neural networks shows how learning symbolic algorithms on a flexible nonsymbolic substrate arrives as a sudden metastable phase shift: the nonsymbolic substrate's parts settle into a configuration that mathematically corresponds to computing the symbolic algorithm
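A minimal sketch of the classic setup where this phase shift is observed (modular addition, in the spirit of the grokking papers cited above); the architecture, data split, and hyperparameters are assumptions, and whether this exact toy configuration groks depends on them (the papers used small transformers).

```python
# Grokking setup sketch: train on half of all (a, b) -> (a + b) mod P pairs and
# watch test accuracy jump long after train accuracy saturates. Sizes are assumed.
import torch
import torch.nn as nn

P = 97
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))  # all (a, b) inputs
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2 :]

model = nn.Sequential(nn.Embedding(P, 128), nn.Flatten(),
                      nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(50_000):  # train far past the point where train accuracy saturates
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            acc = (model(pairs[test_idx]).argmax(-1) == labels[test_idx]).float().mean()
        print(step, round(loss.item(), 4), round(acc.item(), 3))
```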
The implementation details of all the matrix-operations black magic in the code of low-level deep learning engineering are just such fascinating wizardry
It's still weird that multiplying and adding numbers together can compress information and generalize so well in deep learning
How do we formally define deception/lying so we can localize it in AI systems using more formal mathematical analytical methods instead of statistical vibes?
I tend to forget that so many tricks we use in deep learning in for example transformers are less than 10 years old, wtf
Even though LLMs can (for most tasks they're trained on) do just weak generalization by interpolation on the training data manifold, that is still extremely useful in so many ways: for math and coding, reformulating things, re-explaining things (for example using examples), knowledge retrieval, synthesizing knowledge, structuring knowledge, synthesizing stories, combining concepts, etc.! It's unbelievable how relatively good and useful in practice they are at so many of these tasks!
Mechanistic interpretability is function deapproximation
Here are additional extracted thoughts about AI mathematics, theory and engineering, continuing from most all-encompassing to most concrete:
Bitter lesson: Is all we need hidden in trainable structure of training data?
The model is the data, and if we feed it a ton of data from tons of modalities (not just human text, but also, for example, all sorts of synthetic data from physics simulations, etc.), it might be possible to design data such that we get a lot of emergent generalizing, technically superintelligent circuits
If you overfit on the entire world, you are basically done.
Machines are superhuman at many many dimensional manipulation and visualization
We will create more and more predictive models about how deep learning works
Black box AI models will be reverse engineered
Reverse engineering AI systems is the most interesting and the most important thing
Technical AI redteaming is machine learning whitehacking
For some tasks we will need unconstrained, creative, open-ended alien intelligence to solve them, so we cannot fully steer all AI systems. Complete reverse engineering and formal verification might not even be possible, because the systems are evolutionary, chaotic, fuzzy statistical madness, like organisms are to some extent, and will most likely never be fully interpretable and controllable, only approximately, which is still useful. Where we need that control, we should have it.
Would mechanistic interpretability find out that Sora approximates wonky Navier-Stokes equations for fluid dynamics?
Would mechanistic interpretability find out that AlphaFold approximates current or better symbolic equations for protein folding?
Hallucinations in LLMs are decreasing thanks to a lot of new research and engineering techniques, but it will probably always be effective to ground them externally in real time, unless the weights are somehow constantly updated and we reverse engineer the models with mechanistic interpretability to a good enough approximation: figuring out how exactly everything is stored and encoded in the weights, manipulating the internals for perfect representations of facts and programs, and doing effective, less faulty reasoning over them that minimizes hallucinations to a good enough level.
You can tell when deep learning code was written by a metamathemagician or by an empirical alchemist engineer
Do you do frequent normalizations in your mental frameworks or do your gradients love to explode at slight perturbations?
GELU activation function adds in some gel to prevent dead neurons that ReLU suffers from
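A quick check of that intuition: for negative pre-activations ReLU's gradient is exactly zero (the "dead" regime), while GELU still passes a small gradient through, so the neuron can recover.

```python
# ReLU vs GELU gradients on negative inputs (the dead-neuron intuition).
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.5, 2.0], requires_grad=True)
F.relu(x).sum().backward()
print(x.grad)   # tensor([0., 0., 1., 1.]) -- no gradient for negative inputs

x.grad = None
F.gelu(x).sum().backward()
print(x.grad)   # small nonzero gradients on the negative side too
```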
AI: Wins silver medal in international math olympiad, something that has been considered as an absolute AI win for a long time
People, desensitized from the recent AI hype: Nothing ever happens Yawn
AI gives unconstrained creativity
"AI is just a fad" says while he uses tools that use machine learning algorithms everywhere he steps without even realizing it
Inside of you are a million dynamically on-the-fly constructed experts forming higher-order experts
AI for the benefit of all sentient beings
Growing robust neural circuits in my garden
Stochastic parrots can fly so high
We will steer superintelligence
Fullbody strength training on caffeine, creatine, and protein, with Leopold's situational awareness of imminent superintelligence in the first ear, Karpathy's GPT-2 from scratch in the second ear, Stanford lectures on machine learning and transformers in the third ear, Jeremy Howard's fastai practical deep learning for coders in the fourth ear, Francois Chollet's algorithmic information theoretic model of general intelligence in the fifth ear, Dive into Deep Learning in the sixth ear, Machine Learning with PyTorch and Scikit-Learn in the seventh ear, DeepLearning.AI's agentic LLM workflows in the eighth ear, The AI Timeline's latest AI research explained simply in the ninth ear, Buttondown AI news in the tenth ear, the AI explained YouTube channel in the eleventh ear, bycloud AI news in the twelfth ear, Wes Roth AI news in the thirteenth ear, David Shapiro's AI futures in the fourteenth ear, /r/singularity in the fifteenth ear, /r/MachineLearning in the sixteenth ear, /r/LocalLLaMA in the seventeenth ear, Neel Nanda's reverse engineering of transformers in the eighteenth ear, ARENA mechanistic interpretability in the nineteenth ear
Grokking in reverse engineering of AI systems is the ultimate nerdsnipe
Mixture of agents made of mixture of agents made of mixture of agents made of mixture of agents made of mixture of agents made of mixture of agents made of mixture of agents made of mixture of agents made of mixture of agents made of mixture of agents made of mixture of agents made of mixture of agents made of...
Approximating a differentiable curve-fitted solution approximating all functions using a grokked Fourier series algorithm?
A Fourier series approximating any differentiable curve-fitted solution?
Duality?
Taylor series approximations? Spline interpolation? Gaussian mixture models? Support vector machines? Decision trees? Random forests? Wavelets?
General universal approximators of arbitrary functions?
Generalized approximation theorem?
Space of all possible general universal approximators?
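As a tiny empirical instance of all these universal-approximation musings (my own toy, nothing beyond stock PyTorch assumed): a one-hidden-layer MLP curve-fits sin(x) on an interval, which is the universal approximation theorem at work in its weakest, interpolation-only form.

```python
# One-hidden-layer MLP as a universal approximator of sin(x) on [-pi, pi].
import math
import torch
import torch.nn as nn

x = torch.linspace(-math.pi, math.pi, 256).unsqueeze(1)
y = torch.sin(x)
mlp = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = ((mlp(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
print(loss.item())  # fits well inside the interval; extrapolation outside still fails
```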
GraphRAG sounds promising; I just tested it for the first time. Can't wait for other neurosymbolic approaches, fundamentally embedded into the architecture or using LLMs in a composite system! Better interpretability of neurosymbolics will also bring better steerability and generalization, and therefore more novel thoughts!
Mainstream LLM benchmarks suck and are full of contamination. AI explained has a private noncontaminated reasoning benchmark. There you can see how the models are actually getting better, and that we're not really "stuck at GPT-4 level intelligence for over a year now".
One of my favorite ways of learning math with language models is prompting them to go step by step using examples through the various mathematical equations transforming data
Memorizing the benchmarks is all you need
AI systems need more of the brain's centers implemented, not just the language and visual centers
Soon we'll be duplicating and merging layers in biological systems too and duplicating and merging biological and nonbiological systems together
Autists (depth-first search)
Schizos (breadth-first search)
Autismophrenia, depth-first search of the breadth of all possible topics in parallel
Technically you can make LLMs learn new things by putting what you said into short-term memory (the context window, which disappears with a new chat when you use some wrapper over the raw model) or into long-term memory (into a (vector) database, or "into the neurons" by training, though that's not really done in practice yet)
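A toy sketch of the long-term-memory-in-a-vector-database idea; the embedding function here is a random stand-in (real systems use a learned embedder whose vectors put semantically similar texts close together), so this only shows the mechanics of store-and-retrieve.

```python
# Toy vector-database memory: embed facts, retrieve the nearest one for a query,
# then prepend it to the prompt (i.e., back into short-term memory).
import numpy as np

def embed(text: str) -> np.ndarray:   # random stand-in for a learned embedder
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

memory = {fact: embed(fact) for fact in [
    "The user's cat is named Noether.",
    "The user studies mechanistic interpretability.",
]}

query_vec = embed("What is my cat called?")
best = max(memory, key=lambda fact: memory[fact] @ query_vec)  # cosine similarity
print(best)  # with a real embedder this would retrieve the cat fact
```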
the math of neural networks is 10000000 simplifications in a minute
I find it cool that the form of Xavier initialization is not empirical guessing; there is actually a mathematical derivation behind it
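The derivation in one line: for y = Wx with fan_in inputs, Var(y_i) ≈ fan_in * Var(W) * Var(x), so keeping Var(W) = 1/fan_in preserves forward variance, and balancing the backward pass too gives Glorot's compromise Var(W) = 2/(fan_in + fan_out), i.e. a uniform bound of sqrt(6/(fan_in + fan_out)). PyTorch ships this built in:

```python
# Xavier/Glorot uniform init and the bound its derivation predicts.
import math
import torch
import torch.nn as nn

layer = nn.Linear(256, 512)                   # fan_in=256, fan_out=512
nn.init.xavier_uniform_(layer.weight)
bound = math.sqrt(6 / (256 + 512))
print(layer.weight.abs().max().item() <= bound + 1e-6)  # True: weights lie in [-bound, bound]
```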
For deep learning systems, mechanistic interpretability is a good approach in my opinion, because when we find features and circuits, we are able to do causal interventions, and thus steer the model
The typology of features and circuits has been explored a lot in CNNs before (1) and is now starting to be explored in transformers on language (2). We have only recently been able to decipher superposition more (3).
1: [Chris Olah - Looking Inside Neural Networks with Mechanistic Interpretability Chris Olah 2023](https://www.youtube.com/watch?v=2Rdp9GvcYOE), [Zoom In: An Introduction to Circuits 2020](https://distill.pub/2020/circuits/zoom-in/), [Curve Detectors 2020](https://distill.pub/2020/circuits/curve-detectors/), [Visualizing Weights 2021](https://distill.pub/2020/circuits/visualizing-weights/)
2: [Open Problems in Mechanistic Interpretability: A Whirlwind Tour | Neel Nanda 2023](https://www.youtube.com/watch?v=EuQjiNrK77M), [An Extremely Opinionated Annotated List of My Favourite Mechanistic Interpretability Papers v2 2024](https://www.alignmentforum.org/posts/NfFST5Mio7BCAQHPA/an-extremely-opinionated-annotated-list-of-my-favourite-1)
3: [Toy Models of Superposition 2022](https://transformer-circuits.pub/2022/toy_model/index.html), [Towards Monosemanticity: Decomposing Language Models With Dictionary Learning 2023](https://transformer-circuits.pub/2023/monosemantic-features/index.html), [Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet 2024](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html)
I think you'll find what you're looking for with whichever tools you're currently using. There are more specific and more general, simpler and more complex, etc., features and circuits depending on what kind of architecture and training data you have. You can find fur detectors in image models trained on animals. Finite state automata of HTML are found in models trained on code. Induction heads are a more common and simpler circuit in the attention blocks of transformers across different training data. Indirect object identification is a more complex circuit. E.g. [An Extremely Opinionated Annotated List of My Favourite Mechanistic Interpretability Papers v2 2024](https://www.alignmentforum.org/posts/NfFST5Mio7BCAQHPA/an-extremely-opinionated-annotated-list-of-my-favourite-1). One of the more universal attempts is: [A Toy Model of Universality: Reverse Engineering How Networks Learn Group Operations 2023](https://arxiv.org/abs/2302.03025)
For deep learning systems, mechanistic interpretability is a good approach in my opinion, because when we find features and circuits we are able to do causal interventions, and so steer the model (the Golden Gate Bridge Claude meme came about when a Claude 3 Sonnet variant was steered with sparse autoencoders and was absolutely obsessed with the Golden Gate Bridge and didn't talk about anything else, for any question :D or you can turn up happiness, hatred, love, different values, better code, etc. [Mapping the Mind of a Large Language Model 2024](https://www.anthropic.com/news/mapping-mind-language-model), [I Am The Golden Gate Bridge & Why That's Important.](https://www.youtube.com/watch?v=QqrGt5GrGfw)). Similarly, I steered an LLM through a sparse autoencoder in Neel Nanda's workshop. :D But the existing methods are still not sufficient: not 100% efficient and not interpreting everything.
Architectures that change through development and learning will change some features and circuits and not others, depending on how generic they are and what stage of training you are in. You can also reverse engineer in real time while training, and so explore circuit-formation phases and see different phase shifts, which is mega cool, for example with this paper I tried: [Progress measures for grokking via mechanistic interpretability, reverse-engineering transformers learned on modular addition with a learned emergent generalizing trigonometric functions circuit 2023](https://arxiv.org/abs/2301.05217)
I'm all for trying to hardcode inductive biases (circuits) into AI systems, but it's also interesting to reverse engineer what features and circuits are emergently learned by deep learning, which can be many times more efficient, or impossible to hardcode by humans. Insights from reverse engineering deep learning systems can potentially be used to design new, more interpretable and steerable architectures from scratch. Symbolic and neurosymbolic systems wouldn't need this reverse engineering so much because they would be more interpretable right out of the gate, but no one has successfully scaled them yet, so there is definitely some reason why black box (more white box over time as we reverse engineer it) deep learning is state of the art in so many tasks.
We get to sample the AI capabilities exponential just once every couple of years, because it takes a while to build the supercomputers and train models on top of them
Is AI overhyped in the short term and underestimated in the long term?
I think the current AI boom might crash because of way too early, too big, overly inflated expectations, but then AI will quickly boom again in a few years when new systems get released that are orders of magnitude scaled up, or algorithmically improved, or with smarter data engineering, or all of these, or something else. A lot of the current inflated expectations will turn out to be true in a few years anyway, but so many of them are too early. And some exponentials are sampled too discretely. I think this will happen again and again. Booms and crashes will be closer and closer to each other: faster and faster, more compressed, Gartner hype cycles closer to each other over time. A global exponential made of closer and closer local sigmoids. This is how I see the current technological singularity.
Are we getting to the point where AI is too intelligent (under certain definitions of intelligence) for the regular folk, so AI companies have to nerf it to increase its usage lol.
LLMs are just the beginning of AI
Will AGI be bayesian?
"Learn to use AI" is the new "Learn to code"
## Deep dives
The biggest limitations of current AI systems are probably: creating more complex systematic coherent reasoning, planning, generalization, search, agency (autonomy), memory, factual groundedness, online/continuous learning, software and hardware energetic and algorithmic efficiency, human-like ethical reasoning, and controllability. Current systems are relatively weak at these for more complex tasks, but we are making progress, whether through composing LLMs into multiagent systems, scaling, higher quality data and training, poking around how they work inside and thus controlling them, better mathematical models of how learning works and using those insights, modified or overhauled architectures, etc. Embodied robotics is also getting attention recently, and all top AGI labs are working on and investing in these things to varying degrees. Here are some works:
Survey of LLMs: [[2312.03863] Efficient Large Language Models: A Survey](<https://arxiv.org/abs/2312.03863>), [[2311.10215] Predictive Minds: LLMs As Atypical Active Inference Agents](<https://arxiv.org/abs/2311.10215>), [A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications](<https://arxiv.org/abs/2402.07927>)
Reasoning: [Human-like systematic generalization through a meta-learning neural network | Nature](<https://www.nature.com/articles/s41586-023-06668-3>), [[2305.20050] Let's Verify Step by Step](<https://arxiv.org/abs/2305.20050>), [[2302.00923] Multimodal Chain-of-Thought Reasoning in Language Models](<https://arxiv.org/abs/2302.00923>), [[2310.09158] Learning To Teach Large Language Models Logical Reasoning](<https://arxiv.org/abs/2310.09158>), [[2303.09014] ART: Automatic multi-step reasoning and tool-use for large language models](<https://arxiv.org/abs/2303.09014>), [AlphaGeometry: An Olympiad-level AI system for geometry - Google DeepMind](<https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/>), (Devin AI programmer [Cognition | Introducing Devin, the first AI software engineer](https://www.cognition-labs.com/introducing-devin)), ([[2402.09171] Automated Unit Test Improvement using Large Language Models at Meta](https://arxiv.org/abs/2402.09171)), ([GPT-5: Everything You Need to Know So Far - YouTube](https://www.youtube.com/watch?v=Zc03IYnnuIA)), ([[2402.03620] Self-Discover: Large Language Models Self-Compose Reasoning Structures](https://arxiv.org/abs/2402.03620), [x.com](https://twitter.com/ecardenas300/status/1769396057002082410)), ([[2402.18312] How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning](https://arxiv.org/abs/2402.18312), [x.com](https://twitter.com/fly51fly/status/1764279536794169768)), [Magic](http://magic.dev), ([The power of prompting | Microsoft Research](https://www.microsoft.com/en-us/research/blog/the-power-of-prompting/)), Flow engineering ([AlphaCodium](https://www.codium.ai/blog/alphacodium-state-of-the-art-code-generation-for-code-contests/)), ([Introducing Stable Cascade — Stability AI](https://stability.ai/news/introducing-stable-cascade)), ([[2403.12373] RankPrompt: Step-by-Step Comparisons Make Language Models Better Reasoners](https://arxiv.org/abs/2403.12373))
Robotics: [Mobile ALOHA - A Smart Home Robot - Compilation of Autonomous Skills - YouTube](https://www.youtube.com/watch?v=zMNumQ45pJ8), [Eureka! Extreme Robot Dexterity with LLMs | NVIDIA Research Paper - YouTube](https://youtu.be/sDFAWnrCqKc), [Shaping the future of advanced robotics - Google DeepMind](https://deepmind.google/discover/blog/shaping-the-future-of-advanced-robotics/), [Optimus - Gen 2 | Tesla - YouTube](https://www.youtube.com/watch?v=cpraXaw7dyc), [Atlas Struts - YouTube](https://www.youtube.com/shorts/SFKM-Rxiqzg), [Figure Status Update - AI Trained Coffee Demo - YouTube](https://www.youtube.com/watch?v=Q5MKo7Idsok), [Curiosity-Driven Learning of Joint Locomotion and Manipulation Tasks - YouTube](https://www.youtube.com/watch?v=Qob2k_ldLuw)
Multiagent systems: [[2402.01680] Large Language Model based Multi-Agents: A Survey of Progress and Challenges](<https://arxiv.org/abs/2402.01680>) (AutoDev: Automated AI-Driven Development [[2403.08299] AutoDev: Automated AI-Driven Development](https://arxiv.org/abs/2403.08299) )
Modified/alternative architectures: [Mamba (deep learning architecture) - Wikipedia](<https://en.wikipedia.org/wiki/Mamba_(deep_learning_architecture)>), [[2305.13048] RWKV: Reinventing RNNs for the Transformer Era](<https://arxiv.org/abs/2305.13048>), [V-JEPA: The next step toward advanced machine intelligence](<https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/>), [Active Inference](<https://mitpress.mit.edu/9780262045353/active-inference/>)
Agency: [[2305.16291] Voyager: An Open-Ended Embodied Agent with Large Language Models](<https://arxiv.org/abs/2305.16291>), [[2309.07864] The Rise and Potential of Large Language Model Based Agents: A Survey](<https://arxiv.org/abs/2309.07864>), [Agents | Langchain](<https://python.langchain.com/docs/modules/agents/>), [GitHub - THUDM/AgentBench: A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)](<https://github.com/THUDM/AgentBench>), [[2401.12917] Active Inference as a Model of Agency](<https://arxiv.org/abs/2401.12917>), [The Free Energy Principle approach to Agency - YouTube](https://www.youtube.com/watch?v=zMDSMqtjays), [Artificial Curiosity Since 1990](<https://people.idsia.ch/~juergen/artificial-curiosity-since-1990.html>)
Factual groundedness: [[2312.10997] Retrieval-Augmented Generation for Large Language Models: A Survey](<https://arxiv.org/abs/2312.10997>), [Perplexity](<https://www.perplexity.ai/>), [ChatGPT - Consensus](<https://chat.openai.com/g/g-bo0FiWLY7-consensus>)
Memory: larger context windows ([Gemini 10 million token context window](https://twitter.com/mattshumer_/status/1759804492919275555)) or [vector databases](<https://en.wikipedia.org/wiki/Vector_database>) ([[2403.11901] Larimar: Large Language Models with Episodic Memory Control](https://arxiv.org/abs/2403.11901))
Hardware efficiency: Extropic ([Ushering in the Thermodynamic Future - Litepaper](https://www.extropic.ai/future)), tinygrad, Groq ([x.com](https://twitter.com/__tinygrad__/status/1769388346948853839)), ['A single chip to outperform a small GPU data center': Yet another AI chip firm wants to challenge Nvidia's GPU-centric world — Taalas wants to have super specialized AI chips | TechRadar](https://www.techradar.com/pro/a-single-chip-to-outperform-a-small-gpu-data-center-yet-another-ai-chip-firm-wants-to-challenge-nvidias-gpu-centric-world-taalas-wants-to-have-super-specialized-ai-chips), new Nvidia GPUs ([NVIDIA Just Started A New Era of Supercomputing... GTC2024 Highlight - YouTube](https://www.youtube.com/watch?v=GkBX9bTlNQA)), Etched ([Etched | The World's First Transformer ASIC](https://www.etched.com/)), [ultra-high processor advance for AI and driverless (TechXplore)](https://techxplore.com/news/2023-12-ultra-high-processor-advance-ai-driverless.html), ([[2302.06584] Thermodynamic AI and the fluctuation frontier](https://arxiv.org/abs/2302.06584)), analog computing ([x.com](https://twitter.com/dmvaldman/status/1767745899407753718)), neuromorphics ([Neuromorphic engineering - Wikipedia](https://en.wikipedia.org/wiki/Neuromorphic_engineering)), [Homepage | Cerebras](https://www.cerebras.net/)
Online/continuous learning: [Online machine learning - Wikipedia](https://en.wikipedia.org/wiki/Online_machine_learning), [[2302.00487] A Comprehensive Survey of Continual Learning: Theory, Method and Application](https://arxiv.org/abs/2302.00487)
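A minimal online-learning sketch, assuming scikit-learn is available: a linear classifier is updated one mini-batch at a time with `partial_fit` on a synthetic stream, instead of being fit once on a fixed dataset.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")  # logistic regression trained via SGD
classes = np.array([0, 1])              # all classes must be declared up front

for step in range(100):                 # simulated data stream
    X = rng.standard_normal((32, 5))    # mini-batch of 32 samples
    y = (X[:, 0] > 0).astype(int)       # synthetic labels from one feature
    model.partial_fit(X, y, classes=classes)  # incremental update

X_test = rng.standard_normal((5, 5))
print(model.predict(X_test))            # model adapts as data keeps arriving
```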
Meta-learning: [Meta-learning (computer science) - Wikipedia](https://en.wikipedia.org/wiki/Meta-learning_(computer_science)), [Paired open-ended trailblazer (POET) - Alper Ahmetoglu](https://alpera.xyz/blog/1/)
Planning: [[2402.01817] LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks](<https://arxiv.org/abs/2402.01817>), [[2401.11708v1] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs](<https://arxiv.org/abs/2401.11708v1>), [[2305.16151] Understanding the Capabilities of Large Language Models for Automated Planning](<https://arxiv.org/abs/2305.16151>)
Generalizing: [[2402.10891] Instruction Diversity Drives Generalization To Unseen Tasks](<https://arxiv.org/abs/2402.10891>), [Automated discovery of algorithms from data | Nature Computational Science](<https://www.nature.com/articles/s43588-024-00593-9>), [[2402.09371] Transformers Can Achieve Length Generalization But Not Robustly](<https://arxiv.org/abs/2402.09371>), [[2310.16028] What Algorithms can Transformers Learn? A Study in Length Generalization](<https://arxiv.org/abs/2310.16028>), [[2307.04721] Large Language Models as General Pattern Machines](<https://arxiv.org/abs/2307.04721>), [A Tutorial on Domain Generalization | Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining](<https://dl.acm.org/doi/10.1145/3539597.3572722>), [[2311.06545] Understanding Generalization via Set Theory](<https://arxiv.org/abs/2311.06545>), [[2310.08661] Counting and Algorithmic Generalization with Transformers](<https://arxiv.org/abs/2310.08661>), [Neural Networks on the Brink of Universal Prediction with DeepMind's Cutting-Edge Approach | Synced](<https://syncedreview.com/2024/01/31/neural-networks-on-the-brink-of-universal-prediction-with-deepminds-cutting-edge-approach/>), [[2401.14953] Learning Universal Predictors](<https://arxiv.org/abs/2401.14953>), [Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks | Nature Communications](<https://www.nature.com/articles/s41467-021-23103-1>), [Natural language instructions induce compositional generalization in networks of neurons | Nature Neuroscience](https://www.nature.com/articles/s41593-024-01607-5), Francois Chollet on measuring intelligence and generalisation ([[1911.01547] On the Measure of Intelligence](https://arxiv.org/abs/1911.01547), [x.com](https://twitter.com/fchollet/status/1763692655408779455), [#51 FRANCOIS CHOLLET - Intelligence and Generalisation - YouTube](https://youtu.be/J0p_thJJnoo)), [[2403.09629] Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking](https://arxiv.org/abs/2403.09629)
Search: AlphaGo ( [x.com](https://twitter.com/polynoamial/status/1766616044838236507) ), AlphaCode 2 Technical Report ( https://storage.googleapis.com/deepmind-media/AlphaCode2/AlphaCode2_Tech_Report.pdf ) , [[o1]]
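In miniature, the generate-and-verify search pattern behind these results: sample many candidates, score each with a verifier, keep the best. `propose` and `score` below are hypothetical stand-ins for an LLM sampler and a test harness; the "task" here is just guessing a hidden number.

```python
import random

HIDDEN = 42  # stands in for the unknown correct solution

def propose(rng: random.Random) -> int:
    """Stand-in for sampling a candidate solution from a model."""
    return rng.randint(0, 100)

def score(candidate: int) -> float:
    """Stand-in for a verifier, e.g. running unit tests."""
    return -abs(candidate - HIDDEN)

rng = random.Random(0)
candidates = [propose(rng) for _ in range(1000)]
best = max(candidates, key=score)
print(best)  # with enough samples, search alone finds the target
```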
It is quite possible (and a large fraction of researchers think it likely) that research on controlling these inscrutable matrices is not developing fast enough compared to capabilities research (which expands what these systems can do), so we may see more and more cases where AI systems do things we did not intend.
We then have no reliable way to turn off such behaviors with existing methods [Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training \ Anthropic](<https://www.anthropic.com/news/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training>), as seen recently when GPT-4 started outputting total chaos after an update [OpenAI's ChatGPT Went Completely Off the Rails for Hours](<https://www.thedailybeast.com/openais-chatgpt-went-completely-off-the-rails-for-hours>), when Gemini turned out more "woke" than intended ([Google Has a New 'Woke' AI Problem With Gemini - Business Insider](https://www.businessinsider.com/google-gemini-woke-images-ai-chatbot-criticism-controversy-2024-2), [The self-unalignment problem — AI Alignment Forum](https://www.alignmentforum.org/posts/9GyniEBaN3YYTqZXn/the-self-unalignment-problem)), or in the steady stream of new jailbreaks that bypass the guardrails [[2307.15043] Universal and Transferable Adversarial Attacks on Aligned Language Models](<https://arxiv.org/abs/2307.15043>).
Regarding definitions of AGI, DeepMind's is good: [Levels of AGI: Operationalizing Progress on the Path to AGI](https://arxiv.org/abs/2311.02462). OpenAI's, though quite vague, is also decent: highly autonomous systems that outperform humans at most economically valuable work. This thread surveys various definitions and their pros and cons: [9 definitions of Artificial General Intelligence (AGI) and why they are flawed](https://twitter.com/IntuitMachine/status/1721845203030470956). See also [Universal Intelligence: A Definition of Machine Intelligence](https://arxiv.org/abs/0712.3329), and Karl Friston has good definitions too: [KARL FRISTON - INTELLIGENCE 3.0 - YouTube](https://youtu.be/V_VXOdf1NMw?si=8sOkRmbgzjrkvkif&t=1898).
In terms of predictions of when AGI arrives: people around Effective Accelerationism, Singularity, Metaculus, and LessWrong/Effective Altruism, and various influential people at top AGI labs, have very short timelines, often within the 2020s. [Singularity Predictions 2024 by some people big in the field](https://www.reddit.com/r/singularity/comments/18vawje/singularity_predictions_2024/kfpntso/), [Date Weakly General AI is Publicly Known | Metaculus](https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/)
When someone calls LLMs "just statistics", you can just as reductively say that humans are "just autocompleting predictions about input signals that are compared to actual signals" (using a version of Bayesian inference): [Predictive coding - Wikipedia](https://en.wikipedia.org/wiki/Predictive_coding), [Visual processing - Wikipedia](https://en.wikipedia.org/wiki/Visual_processing), [Free energy principle - Wikipedia](https://en.wikipedia.org/wiki/Free_energy_principle), [Inner screen model of consciousness: applying free energy principle to study of conscious experience - YouTube](https://www.youtube.com/watch?v=yZWjjDT5rGU&pp=ygUzZnJlZSBlbmVyZ3kgcHJpbmNpcGxlIGFwcGxpZWQgdG8gdGhlIGJyYWluIHJhbXN0ZWFk) (global neuronal workspace theory + integrated information theory + recurrent processing theory + predictive processing theory + neurorepresentationalism + dendritic integration theory: [An integrative, multiscale view on neural theories of consciousness](https://www.cell.com/neuron/fulltext/S0896-6273%2824%2900088-6), [Models of consciousness - Wikipedia](https://en.wikipedia.org/wiki/Models_of_consciousness?wprov=sfla1), more models: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8146510/). Or that humans are "just bioelectricity and biochemistry" ([Bioelectric networks: the cognitive glue enabling evolutionary scaling from physiology to mind | Animal Cognition](https://link.springer.com/article/10.1007/s10071-023-01780-3)), or "just particles" ([Electromagnetic theories of consciousness - Wikipedia](https://en.wikipedia.org/wiki/Electromagnetic_theories_of_consciousness), [On Connectome and Geometric Eigenmodes of Brain Activity: The Eigenbasis of the Mind?](https://qri.org/blog/eigenbasis-of-the-mind), [An Integrated World Modeling Theory (IWMT) of Consciousness | Frontiers](https://www.frontiersin.org/articles/10.3389/frai.2020.00030/full), [Integrated world modeling theory expanded: Implications for the future of consciousness - PubMed](https://pubmed.ncbi.nlm.nih.gov/36507308/), [The Free Energy Principle approach to Agency - YouTube](https://youtu.be/zMDSMqtjays?si=MRXTcQ6s8o_KwNXd), [Synthetic Sentience: Can Artificial Intelligence become conscious? | Joscha Bach | CCC #37c3 - YouTube](https://youtu.be/Ms96Py8p8Jg?si=HYx2lf8DrCkMcf8b)). Or you can say that the whole universe is just a big differential equation. None of these reductions tell you anything specific about the concrete implementation details or the dynamics actually happening there!
There is also this recurring survey of AI researchers' priorities and predictions, whose forecast intervals have shrunk by roughly half each year: [AI experts make predictions for 2040. I was a little surprised. | Science News](<https://www.youtube.com/watch?v=g7TghURVC6Y>), [Thousands of AI Authors on the Future of AI](https://arxiv.org/abs/2401.02843):
"In the largest survey of its kind, 2,778 researchers who had published in top-tier artificial intelligence (AI) venues gave predictions on the pace of AI progress and the nature and impacts of advanced AI systems The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).
Most respondents expressed substantial uncertainty about the long-term value of AI progress: While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including misinformation, authoritarian control, and inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more."
[ML Code Challenges - Deep-ML](https://www.deep-ml.com/)
[[Omnidisciplinarity]]
## Resources
Stanford machine learning [https://www.youtube.com/playlist?list=PLoROMvodv4rNyWOpJg_Yh4NSqI4Z4vOYy](https://www.youtube.com/playlist?list=PLoROMvodv4rNyWOpJg_Yh4NSqI4Z4vOYy)
Stanford machine learning [https://www.youtube.com/playlist?list=PLoROMvodv4rMiGQp3WXShtMGgzqpfVfbU](https://www.youtube.com/playlist?list=PLoROMvodv4rMiGQp3WXShtMGgzqpfVfbU)
Stanford transformers [https://www.youtube.com/playlist?list=PLoROMvodv4rNiJRchCzutFw5ItR_Z27CM](https://www.youtube.com/playlist?list=PLoROMvodv4rNiJRchCzutFw5ItR_Z27CM)
Stanford generative models including diffusion [https://www.youtube.com/playlist?list=PLoROMvodv4rPOWA-omMM6STXaWW4FvJT8](https://www.youtube.com/playlist?list=PLoROMvodv4rPOWA-omMM6STXaWW4FvJT8)
Stanford deep learning [https://www.youtube.com/playlist?list=PLoROMvodv4rOABXSygHTsbvUz4G_YQhOb](https://www.youtube.com/playlist?list=PLoROMvodv4rOABXSygHTsbvUz4G_YQhOb)
Karpathy neural networks zero to hero [https://www.youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ](https://www.youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ)
Stanford natural language processing with deep learning [https://www.youtube.com/playlist?list=PLoROMvodv4rMFqRtEuo6SGjY4XbRIVRd4](https://www.youtube.com/playlist?list=PLoROMvodv4rMFqRtEuo6SGjY4XbRIVRd4)
MIT deep learning [https://www.youtube.com/playlist?list=PLTZ1bhP8GBuTCqeY19TxhHyrwFiot42_U](https://www.youtube.com/playlist?list=PLTZ1bhP8GBuTCqeY19TxhHyrwFiot42_U)
Stanford artificial intelligence [https://www.youtube.com/playlist?list=PLoROMvodv4rO1NB9TD4iUZ3qghGEGtqNX](https://www.youtube.com/playlist?list=PLoROMvodv4rO1NB9TD4iUZ3qghGEGtqNX)
Stanford machine learning with graphs [https://www.youtube.com/playlist?list=PLoROMvodv4rPLKxIpqhjhPgdQy7imNkDn](https://www.youtube.com/playlist?list=PLoROMvodv4rPLKxIpqhjhPgdQy7imNkDn)
Stanford natural language understanding [https://www.youtube.com/playlist?list=PLoROMvodv4rOwvldxftJTmoR3kRcWkJBp](https://www.youtube.com/playlist?list=PLoROMvodv4rOwvldxftJTmoR3kRcWkJBp)
Stanford reinforcement learning [https://www.youtube.com/playlist?list=PLoROMvodv4rOSOPzutgyCTapiGlY2Nd8u](https://www.youtube.com/playlist?list=PLoROMvodv4rOSOPzutgyCTapiGlY2Nd8u)
Stanford meta-learning [https://www.youtube.com/playlist?list=PLoROMvodv4rNjRoawgt72BBNwL2V7doGI](https://www.youtube.com/playlist?list=PLoROMvodv4rNjRoawgt72BBNwL2V7doGI)
Stanford artificial intelligence [https://www.youtube.com/playlist?list=PLoROMvodv4rPgrvmYbBrxZCK_GwXvDVL3](https://www.youtube.com/playlist?list=PLoROMvodv4rPgrvmYbBrxZCK_GwXvDVL3)
Stanford machine learning theory [https://www.youtube.com/playlist?list=PLoROMvodv4rP8nAmISxFINlGKSK4rbLKh](https://www.youtube.com/playlist?list=PLoROMvodv4rP8nAmISxFINlGKSK4rbLKh)
Stanford computer vision [https://www.youtube.com/playlist?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC](https://www.youtube.com/playlist?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC)
[https://www.youtube.com/playlist?list=PLSVEhWrZWDHQTBmWZufjxpw3s8sveJtnJ](https://www.youtube.com/playlist?list=PLSVEhWrZWDHQTBmWZufjxpw3s8sveJtnJ)
Stanford statistics [https://www.youtube.com/playlist?list=PLoROMvodv4rOpr_A7B9SriE_iZmkanvUg](https://www.youtube.com/playlist?list=PLoROMvodv4rOpr_A7B9SriE_iZmkanvUg)
Stanford methods in AI [https://www.youtube.com/playlist?list=PLoROMvodv4rO1NB9TD4iUZ3qghGEGtqNX](https://www.youtube.com/playlist?list=PLoROMvodv4rO1NB9TD4iUZ3qghGEGtqNX)
[https://www.youtube.com/playlist?list=PLrxfgDEc2NxZJcWcrxH3jyjUUrJlnoyzX](https://www.youtube.com/playlist?list=PLrxfgDEc2NxZJcWcrxH3jyjUUrJlnoyzX)
Stanford MIT robotics [https://www.youtube.com/playlist?list=PLkx8KyIQkMfUmB3j-DyP58ThDXM7enA8x](https://www.youtube.com/playlist?list=PLkx8KyIQkMfUmB3j-DyP58ThDXM7enA8x) [https://www.youtube.com/playlist?list=PLkx8KyIQkMfUSDs2hvTWzaq-cxGl8Ha69](https://www.youtube.com/playlist?list=PLkx8KyIQkMfUSDs2hvTWzaq-cxGl8Ha69) [https://www.youtube.com/playlist?list=PL65CC0384A1798ADF](https://www.youtube.com/playlist?list=PL65CC0384A1798ADF) [https://www.youtube.com/playlist?list=PLoROMvodv4rMeercb-kvGLUrOq4HR6BZD](https://www.youtube.com/playlist?list=PLoROMvodv4rMeercb-kvGLUrOq4HR6BZD) [https://www.youtube.com/playlist?list=PLN1iOWWHLJz3ndzRIvpbby75G2_2pYYrL](https://www.youtube.com/playlist?list=PLN1iOWWHLJz3ndzRIvpbby75G2_2pYYrL)
MIT machine learning [https://www.youtube.com/playlist?list=PLxC_ffO4q_rW0bqQB80_vcQB09HOA3ClV](https://www.youtube.com/playlist?list=PLxC_ffO4q_rW0bqQB80_vcQB09HOA3ClV) [https://www.youtube.com/playlist?list=PLnvKubj2-I2LhIibS8TOGC42xsD3-liux](https://www.youtube.com/playlist?list=PLnvKubj2-I2LhIibS8TOGC42xsD3-liux)
MIT efficient machine learning [https://www.youtube.com/playlist?list=PL80kAHvQbh-pT4lCkDT53zT8DKmhE0idB](https://www.youtube.com/playlist?list=PL80kAHvQbh-pT4lCkDT53zT8DKmhE0idB)
MIT linear algebra in machine learning [https://www.youtube.com/playlist?list=PLUl4u3cNGP63oMNUHXqIUcrkS2PivhN3k](https://www.youtube.com/playlist?list=PLUl4u3cNGP63oMNUHXqIUcrkS2PivhN3k)
Principles of Deep Learning Theory [https://arxiv.org/abs/2106.10165](https://arxiv.org/abs/2106.10165) [https://www.youtube.com/watch?v=YzR2gZrsdJc](https://www.youtube.com/watch?v=YzR2gZrsdJc) [https://www.youtube.com/watch?v=pad023JIXVA](https://www.youtube.com/watch?v=pad023JIXVA)
Active Inference book [https://mitpress.mit.edu/9780262045353/active-inference/](https://mitpress.mit.edu/9780262045353/active-inference/)
Geometric deep learning [https://geometricdeeplearning.com/](https://geometricdeeplearning.com/)
Mechanistic interpretability [https://www.neelnanda.io/mechanistic-interpretability](https://www.neelnanda.io/mechanistic-interpretability)
Topological data analysis [https://www.youtube.com/playlist?list=PLzERW_Obpmv_UW7RgbZW4Ebhw87BcoXc7](https://www.youtube.com/playlist?list=PLzERW_Obpmv_UW7RgbZW4Ebhw87BcoXc7)
Hinton AI [Neural Networks for Machine Learning — Geoffrey Hinton, UofT [FULL COURSE] - YouTube](https://www.youtube.com/playlist?list=PLLssT5z_DsK_gyrQ_biidwvPYCRNGI3iv)
[Mathematics for Machine Learning and Data Science Specialization](https://www.deeplearning.ai/courses/mathematics-for-machine-learning-and-data-science-specialization/)
[Deep Learning Course for Beginners - YouTube](https://www.youtube.com/watch?v=HJd1I3FdSnY)
[Generative Adversarial Networks (GANs) Specialization](https://www.deeplearning.ai/courses/generative-adversarial-networks-gans-specialization/)
[AI for Good Specialization - DeepLearning.AI](https://www.deeplearning.ai/courses/ai-for-good/)
## More GitHub resources
[GitHub - patrickloeber/ml-study-plan: The Ultimate FREE Machine Learning Study Plan](https://github.com/patrickloeber/ml-study-plan)
[GitHub - dair-ai/ML-YouTube-Courses: 📺 Discover the latest machine learning / AI courses on YouTube.](https://github.com/dair-ai/ML-YouTube-Courses)
[GitHub - yazdotai/machine-learning-video-courses: Comprehensive list of machine learning videos](https://github.com/yazdotai/machine-learning-video-courses)
[GitHub - mirerfangheibi/Machine-Learning-Resources: Free and High-Quality Materials to Study Deep Learning](https://github.com/mirerfangheibi/Machine-Learning-Resources)
[ML Resources](https://sgfin.github.io/learning-resources/#ml)
[GitHub - therealsreehari/Learn-Data-Science-For-Free: This repositary is a combination of different resources lying scattered all over the internet. The reason for making such an repositary is to combine all the valuable resources in a sequential manner, so that it helps every beginners who are in a search of free and structured learning resource for Data Science. For Constant Updates Follow me in Twitter.](https://github.com/therealsreehari/Learn-Data-Science-For-Free)
[GitHub - openlists/MathStatsResources](https://github.com/openlists/MathStatsResources)
[GitHub - mdozmorov/Statistics_notes: Statistics, data analysis tutorials and learning resources](https://github.com/mdozmorov/Statistics_notes)
[GitHub - Machine-Learning-Tokyo/AI_Curriculum: Open Deep Learning and Reinforcement Learning lectures from top Universities like Stanford, MIT, UC Berkeley.](https://github.com/Machine-Learning-Tokyo/AI_Curriculum)
[GitHub - bentrevett/machine-learning-courses: A collection of machine learning courses.](https://github.com/bentrevett/machine-learning-courses)
[GitHub - Developer-Y/cs-video-courses: List of Computer Science courses with video lectures.](https://github.com/Developer-Y/cs-video-courses?tab=readme-ov-file#artificial-intelligence)
[GitHub - tigerneil/awesome-deep-rl: For deep RL and the future of AI.](https://github.com/tigerneil/awesome-deep-rl)
[GitHub - Developer-Y/math-science-video-lectures: List of Science courses with video lectures](https://github.com/Developer-Y/math-science-video-lectures)
[GitHub - Machine-Learning-Tokyo/Math_resources](https://github.com/Machine-Learning-Tokyo/Math_resources)
[GitHub - dair-ai/Mathematics-for-ML: 🧮 A collection of resources to learn mathematics for machine learning](https://github.com/dair-ai/Mathematics-for-ML)
[Foundations of Machine Learning](https://bloomberg.github.io/foml/#lectures)
[Data Science and Machine Learning Resources — Jon Krohn](https://www.jonkrohn.com/resources)
https://www.kdnuggets.com/10-github-repositories-to-master-machine-learning
[GitHub - exajobs/university-courses-collection: A collection of awesome CS courses, assignments, lectures, notes, readings & examinations available online for free.](https://github.com/exajobs/university-courses-collection?tab=readme-ov-file#artificial-intelligence)
[GitHub - prakhar1989/awesome-courses: :books: List of awesome university courses for learning Computer Science!](https://github.com/prakhar1989/awesome-courses?tab=readme-ov-file#artificial-intelligence)
[GitHub - owainlewis/awesome-artificial-intelligence: A curated list of Artificial Intelligence (AI) courses, books, video lectures and papers.](https://github.com/owainlewis/awesome-artificial-intelligence)
[GitHub - josephmisiti/awesome-machine-learning: A curated list of awesome Machine Learning frameworks, libraries and software.](https://github.com/josephmisiti/awesome-machine-learning)
[GitHub - academic/awesome-datascience: :memo: An awesome Data Science repository to learn and apply for real world problems.](https://github.com/academic/awesome-datascience?tab=readme-ov-file#the-data-science-toolbox)
[GitHub - ChristosChristofidis/awesome-deep-learning: A curated list of awesome Deep Learning tutorials, projects and communities.](https://github.com/ChristosChristofidis/awesome-deep-learning)
[GitHub - guillaume-chevalier/Awesome-Deep-Learning-Resources: Rough list of my favorite deep learning resources, useful for revisiting topics or for reference. I have got through all of the content listed there, carefully. - Guillaume Chevalier](https://github.com/guillaume-chevalier/Awesome-Deep-Learning-Resources?tab=readme-ov-file#online-classes)
[GitHub - MartinuzziFrancesco/awesome-scientific-machine-learning: A curated list of awesome Scientific Machine Learning (SciML) papers, resources and software](https://github.com/MartinuzziFrancesco/awesome-scientific-machine-learning)
[GitHub - SE-ML/awesome-seml: A curated list of articles that cover the software engineering best practices for building machine learning applications.](https://github.com/SE-ML/awesome-seml)
[GitHub - jtoy/awesome-tensorflow: TensorFlow - A curated list of dedicated resources http://tensorflow.org](https://github.com/jtoy/awesome-tensorflow)
[GitHub - altamiracorp/awesome-xai: Awesome Explainable AI (XAI) and Interpretable ML Papers and Resources](https://github.com/altamiracorp/awesome-xai)
[GitHub - ujjwalkarn/Machine-Learning-Tutorials: machine learning and deep learning tutorials, articles and other resources](https://github.com/ujjwalkarn/Machine-Learning-Tutorials)
[GitHub - kiloreux/awesome-robotics: A list of awesome Robotics resources](https://github.com/kiloreux/awesome-robotics)
[GitHub - jbhuang0604/awesome-computer-vision: A curated list of awesome computer vision resources](https://github.com/jbhuang0604/awesome-computer-vision)
[GitHub - dk-liang/Awesome-Visual-Transformer: Collect some papers about transformer with vision. Awesome Transformer with Computer Vision (CV)](https://github.com/dk-liang/Awesome-Visual-Transformer)
[GitHub - ChanganVR/awesome-embodied-vision: Reading list for research topics in embodied vision](https://github.com/ChanganVR/awesome-embodied-vision)
[GitHub - EthicalML/awesome-production-machine-learning: A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning](https://github.com/EthicalML/awesome-production-machine-learning)
[GitHub - wangyongjie-ntu/Awesome-explainable-AI: A collection of research materials on explainable AI/ML](https://github.com/wangyongjie-ntu/Awesome-explainable-AI)
[GitHub - jphall663/awesome-machine-learning-interpretability: A curated list of awesome responsible machine learning resources.](https://github.com/jphall663/awesome-machine-learning-interpretability)
[GitHub - JShollaj/awesome-llm-interpretability: A curated list of Large Language Model (LLM) Interpretability resources.](https://github.com/JShollaj/awesome-llm-interpretability)
[GitHub - MinghuiChen43/awesome-deep-phenomena: A curated list of papers of interesting empirical study and insight on deep learning. Continually updating...](https://github.com/MinghuiChen43/awesome-deep-phenomena)
[GitHub - Nikasa1889/awesome-deep-learning-theory: A curated list of awesome Deep Learning theories that shed light on the mysteries of DL](https://github.com/Nikasa1889/awesome-deep-learning-theory)
[[2106.10165] The Principles of Deep Learning Theory](https://arxiv.org/abs/2106.10165)
[GitHub - awesomedata/awesome-public-datasets: A topic-centric list of HQ open datasets.](https://github.com/awesomedata/awesome-public-datasets)
[GitHub - jsbroks/awesome-dataset-tools: 🔧 A curated list of awesome dataset tools](https://github.com/jsbroks/awesome-dataset-tools)
[GitHub - mint-lab/awesome-robotics-datasets: A collection of useful datasets for robotics and computer vision](https://github.com/mint-lab/awesome-robotics-datasets)
[GitHub - kelvins/awesome-mlops: :sunglasses: A curated list of awesome MLOps tools](https://github.com/kelvins/awesome-mlops)
[GitHub - Bisonai/awesome-edge-machine-learning: A curated list of awesome edge machine learning resources, including research papers, inference engines, challenges, books, meetups and others.](https://github.com/Bisonai/awesome-edge-machine-learning)
## Resources applications and subfields
[GitHub - yuzhimanhua/Awesome-Scientific-Language-Models: A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery](https://github.com/yuzhimanhua/Awesome-Scientific-Language-Models)
[GitHub - georgezouq/awesome-ai-in-finance: 🔬 A curated list of awesome LLMs & deep learning strategies & tools in financial market.](https://github.com/georgezouq/awesome-ai-in-finance)
[GitHub - jyguyomarch/awesome-conversational-ai: A curated list of delightful Conversational AI resources.](https://github.com/jyguyomarch/awesome-conversational-ai)
[GitHub - theimpossibleastronaut/awesome-linguistics: A curated list of anything remotely related to linguistics](https://github.com/theimpossibleastronaut/awesome-linguistics)
[GitHub - timzhang642/3D-Machine-Learning: A resource repository for 3D machine learning](https://github.com/timzhang642/3D-Machine-Learning)
[GitHub - yenchenlin/awesome-adversarial-machine-learning: A curated list of awesome adversarial machine learning resources](https://github.com/yenchenlin/awesome-adversarial-machine-learning)
[GitHub - chbrian/awesome-adversarial-examples-dl: A curated list of awesome resources for adversarial examples in deep learning](https://github.com/chbrian/awesome-adversarial-examples-dl)
[GitHub - fepegar/awesome-medical-imaging: Awesome list of software that I use to do research in medical imaging.](https://github.com/fepegar/awesome-medical-imaging)
[GitHub - awesome-NeRF/awesome-NeRF: A curated list of awesome neural radiance fields papers](https://github.com/awesome-NeRF/awesome-NeRF)
[GitHub - vsitzmann/awesome-implicit-representations: A curated list of resources on implicit neural representations.](https://github.com/vsitzmann/awesome-implicit-representations)
[GitHub - weihaox/awesome-neural-rendering: Resources of Neural Rendering](https://github.com/weihaox/awesome-neural-rendering)
[GitHub - zhoubolei/awesome-generative-modeling: Bolei's archive on generative modeling](https://github.com/zhoubolei/awesome-generative-modeling)
[GitHub - XindiWu/Awesome-Machine-Learning-in-Biomedical-Healthcare-Imaging: A list of awesome selected resources towards the application of machine learning in Biomedical/Healthcare Imaging, inspired by](https://github.com/XindiWu/Awesome-Machine-Learning-in-Biomedical-Healthcare-Imaging)
[GitHub - hoya012/awesome-anomaly-detection: A curated list of awesome anomaly detection resources](https://github.com/hoya012/awesome-anomaly-detection)
[GitHub - subeeshvasu/Awsome_Deep_Geometry_Learning: A list of resources about deep learning solutions on 3D shape processing](https://github.com/subeeshvasu/Awsome_Deep_Geometry_Learning)
[GitHub - subeeshvasu/Awesome-Neuron-Segmentation-in-EM-Images: A curated list of resources for 3D segmentation of neurites in EM images](https://github.com/subeeshvasu/Awesome-Neuron-Segmentation-in-EM-Images)
[GitHub - subeeshvasu/Awsome_Delineation](https://github.com/subeeshvasu/Awsome_Delineation)
[GitHub - subeeshvasu/Awsome-GAN-Training: A curated list of resources related to training of GANs](https://github.com/subeeshvasu/Awsome-GAN-Training)
[GitHub - nashory/gans-awesome-applications: Curated list of awesome GAN applications and demo](https://github.com/nashory/gans-awesome-applications)
[GitHub - tstanislawek/awesome-document-understanding: A curated list of resources for Document Understanding (DU) topic](https://github.com/tstanislawek/awesome-document-understanding)
[GitHub - matthewvowels1/Awesome-Video-Generation: A curated list of awesome work on video generation and video representation learning, and related topics.](https://github.com/matthewvowels1/Awesome-Video-Generation)
[GitHub - datamllab/awesome-fairness-in-ai: A curated list of awesome Fairness in AI resources](https://github.com/datamllab/awesome-fairness-in-ai)
## Other resources
[GitHub - n2cholas/awesome-jax: JAX - A curated list of resources https://github.com/google/jax](https://github.com/n2cholas/awesome-jax)
[GitHub - benedekrozemberczki/awesome-gradient-boosting-papers: A curated list of gradient boosting research papers with implementations.](https://github.com/benedekrozemberczki/awesome-gradient-boosting-papers)
[GitHub - benedekrozemberczki/awesome-monte-carlo-tree-search-papers: A curated list of Monte Carlo tree search papers with implementations.](https://github.com/benedekrozemberczki/awesome-monte-carlo-tree-search-papers)
[GitHub - igorbarinov/awesome-data-engineering: A curated list of data engineering tools for software developers](https://github.com/igorbarinov/awesome-data-engineering)
[GitHub - oxnr/awesome-bigdata: A curated list of awesome big data frameworks, ressources and other awesomeness.](https://github.com/oxnr/awesome-bigdata)
[GitHub - benedekrozemberczki/awesome-decision-tree-papers: A collection of research papers on decision, classification and regression trees with implementations.](https://github.com/benedekrozemberczki/awesome-decision-tree-papers)
[GitHub - chihming/awesome-network-embedding: A curated list of network embedding techniques.](https://github.com/chihming/awesome-network-embedding)
## More resources
[[AI techy words audio short]]
[[AI techy words audio short important]]
[[AI techy words audio long]]
[[AI techy words audio long important]]
[[AI mathcode short]]
[[AI mathcode short important]]
[[AI mathcode long]]
[[AI mathcode long important]]
[[AI nontechy words visual short]]
[[AI nontechy words visual short important]]
[[AI nontechy words visual long]]
[[AI nontechy words visual long important]]
[[AI nontechy words audio short]]
[[AI nontechy words audio short important]]
[[AI nontechy words audio long]]
[[AI nontechy words audio long important]]
[[AI techy words visual short]]
[[AI techy words visual short important]]
[[AI techy words visual long]]
[[AI techy words visual long important]]
[[Resources theory reverse engineering mechinterp and alignment AI]]
[[Resources AI SoTA]]
[[Resources AI SoTA practice]]
[[Resources AI basics]]
[[Resources AI advanced 1]]
[[AI tools to try]]
[[Prompts 4]]
[[Prompts 3]]
[[Prompts 2]]
[[Prompts]]
[[Cursor prompts]]
## Deep dives
- [[Theory of Everything in Intelligence]]
- ![[Theory of Everything in Intelligence#Definitions]]
## State of the art
- [State of AI report 2024 October](https://www.youtube.com/watch?v=CyOL_4K2Nyo)
- [AI Index Report 2024 – Artificial Intelligence Index](https://aiindex.stanford.edu/report/)
Top 10 Takeaways:
1. AI beats humans on some tasks, but not on all. AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.
2. Industry continues to dominate frontier AI research. In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.
3. Frontier models get way more expensive. According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.
4. The United States leads China, the EU, and the U.K. as the leading source of top AI models. In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union’s 21 and China’s 15.
5. Robust and standardized evaluations for LLM responsibility are seriously lacking. New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.
6. Generative AI investment skyrockets. Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.
7. The data is in: AI makes workers more productive and leads to higher quality work. In 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI’s potential to bridge the skill gap between low- and high-skilled workers. Still, other studies caution that using AI without proper oversight can lead to diminished performance.
8. Scientific progress accelerates even further, thanks to AI. In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications— from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.
9. The number of AI regulations in the United States sharply increases. The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.
10. People across the globe are more cognizant of AI’s potential impact—and more nervous. A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 37% in 2022.
## Links
[[Links AI]]
[[Links AI x quantum computing]]
[[Links AI x psychology]]
[[Links AI theory]]
[[Links AI technical]]
[[Links AI SOTA research]]
[[Links AI SOTA practice]]
[[Links AI SOTA practice(1)]]
[[Links AI science]]
[[Links AI programming]]
[[Links AI physics]]
[[Links AI nontechnical]]
[[Links AI neuroscience]]
[[Links AI mechinterp]]
[[Links AI math]]
[[Links AI healthcare biology]]
[[Links AI geopolitics politics futurology governanc]]
[[Links AI for technology development]]
[[Links AI for neuroscience]]
[[Links AI for material science]]
[[Links AI biology]]
[[Links AI basics]]
## Written by AI (may include factually incorrect information)
# Comprehensive List of AI Topics
## Foundations of Artificial Intelligence (AI)
- **Symbolic AI (Good Old-Fashioned AI):** An early approach to AI that uses human-readable symbols and explicitly coded rules to represent knowledge and logic, exemplified by rule-based expert systems and logical reasoning programs ([Symbolic AI vs. machine learning in natural language processing](https://multilingual.com/issues/may-june-2020/symbolic-ai-vs-machine-learning-in-natural-language-processing/#:~:text=processing%20multilingual,concepts%20as%20well%20as%20logic)).
- **Heuristic Search & Planning:** Techniques for exploring possible states or action sequences to reach goals efficiently, often guided by heuristic functions (estimates); for example, the A* algorithm uses heuristics to find optimal paths in problem spaces ([Ivan Kristianto Singgih posted on LinkedIn](https://www.linkedin.com/posts/ivan-kristianto-singgih-56a5652b_objectdetection-yolov8-deepsort-activity-7065714021345230849-5ZnA#:~:text=Ivan%20Kristianto%20Singgih%20posted%20on,each%20pixel%20in%20an)). (A minimal A* sketch appears after this list.)
- **Knowledge Representation & Reasoning (KRR):** The area of AI focused on encoding information about the world in structured forms (like logic, ontologies, or semantic networks) that computers can use to draw inferences and solve complex problems ([Understanding knowledge reasoning in AI systems](https://telnyx.com/learn-ai/knowledge-reasoning#:~:text=Knowledge%20representation%20and%20reasoning%20,its%20significance%20in%20AI%20development)) ([Understanding knowledge reasoning in AI systems](https://telnyx.com/learn-ai/knowledge-reasoning#:~:text=Knowledge%20representation%20and%20reasoning%20,like%20reasoning)).
- **Automated Reasoning & Theorem Proving:** The use of algorithms to automatically infer new facts or prove logical statements from known premises (as in Prolog or other logic systems), enabling machines to solve problems by reasoning over symbolic knowledge ([Understanding knowledge reasoning in AI systems](https://telnyx.com/learn-ai/knowledge-reasoning#:~:text=Knowledge%20representation%20refers%20to%20structuring,easier%20to%20model%20complex%20knowledge)).
- **Expert Systems:** AI programs that emulate the decision-making of human experts using a knowledge base of if-then rules and facts; they were early successful applications of symbolic AI in domains like medical diagnosis and troubleshooting ([What is Ethical AI?](https://www.holisticai.com/blog/what-is-ethical-ai#:~:text=Ethical%20AI%20refers%20to%20the,and%20respect%20for%20human%20values)) ([Understanding knowledge reasoning in AI systems](https://telnyx.com/learn-ai/knowledge-reasoning#:~:text=Knowledge%20representation%20refers%20to%20structuring,easier%20to%20model%20complex%20knowledge)).
- **Cognitive Architectures:** Frameworks (like SOAR or ACT-R) that attempt to model human cognition in software, integrating memory, perception, and reasoning modules to achieve human-like general intelligence in problem-solving ([Neuro-symbolic AI - Wikipedia](https://en.wikipedia.org/wiki/Neuro-symbolic_AI#:~:text=Neuro,114%20and%20efficient%20machine%20learning)) ([Neuro-symbolic AI - Wikipedia](https://en.wikipedia.org/wiki/Neuro-symbolic_AI#:~:text=unconscious,10)).
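As referenced above, a minimal A* sketch on a 2D grid (0 = free, 1 = wall), using Manhattan distance as the admissible heuristic; the grid and unit step costs are illustrative assumptions.

```python
import heapq

def astar(grid, start, goal):
    """Return the cost of a shortest path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # heuristic
    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry, a cheaper route was already found
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # -> 6 (the path must detour around walls)
```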
## Machine Learning Paradigms
- **Machine Learning (General):** A broad field of AI where algorithms improve their performance on tasks by learning patterns from data, rather than through explicit programming ([What is Machine Learning? | Answer from SUSE Defines](https://www.suse.com/suse-defines/definition/machine-learning/#:~:text=Machine%20learning%20is%20a%20field,models%20built%20from%20sample%20inputs)). This enables computers to **learn from experience** and make data-driven predictions or decisions.
- **Supervised Learning:** A machine learning approach where models are trained on labeled examples (input-output pairs) to learn a mapping from inputs to outputs, so that they can predict the correct output for new inputs ([Supervised Machine Learning?. Supervised learning is the machine… | by Sibt-e-ali Baqar | Medium](https://medium.com/@sibteali786/supervised-machine-learning-8b13417d5c76#:~:text=,a%20set%20of%20training%20examples)). *(E.g., learning to classify images with known labels.)* (A minimal scikit-learn example appears after this list.)
- **Unsupervised Learning:** A type of machine learning that finds hidden patterns or intrinsic structures in data without any labeled responses, for example by clustering similar data points or reducing dimensionality of data ([Unsupervised Learning: Definition, Explanation, and Use Cases | Vation Ventures](https://www.vationventures.com/glossary/unsupervised-learning-definition-explanation-and-use-cases#:~:text=Unsupervised%20learning%20is%20a%20type,patterns%20or%20grouping%20in%20data)).
- **Reinforcement Learning (RL):** An area of ML where an *agent* learns to make decisions by interacting with an environment and receiving feedback in the form of rewards or penalties, thereby learning a policy that maximizes cumulative reward ([Reinforcement Learning — Beginner’s Approach Chapter -I | by Shashwat Tiwari | Analytics Vidhya | Medium](https://medium.com/analytics-vidhya/reinforcement-learning-beginners-approach-chapter-i-689f999cf572#:~:text=According%20to%20Wikipedia)). *(This trial-and-error paradigm has achieved feats like agents learning to play Atari games or Go at superhuman levels.)*
- **Semi-Supervised Learning:** Techniques that combine a small amount of labeled data with a large amount of unlabeled data during training, allowing models to leverage unlabeled examples to improve learning when labeled data is scarce ([Active Learning Definition | DeepAI](https://deepai.org/machine-learning-glossary-and-terms/active-learning#:~:text=Active%20Learning%20Definition%20,queries%20a%20teacher%20for%20guidance)).
- **Self-Supervised Learning:** An approach where the model creates its own labels from the data (for example, by predicting missing parts of input), enabling learning from unlabeled data by solving surrogate tasks, which has been crucial for large language models ([Solving a machine-learning mystery | MIT News](https://news.mit.edu/2023/large-language-models-in-context-learning-0207#:~:text=Solving%20a%20machine,from%20poetry%20to%20programming%20code)) ([AI Atlas #9: Transformers | Glasswing Ventures](https://glasswing.vc/blog/ai-atlas-9-transformers/#:~:text=Transformers%20are%20a%20type%20of,data%2C%20such%20as%20natural%20language)).
- **Transfer Learning:** The practice of leveraging knowledge learned in one problem or domain (typically via a pretrained model) and applying it to a different but related problem, which can significantly speed up learning and improve performance when data is limited ([Understanding knowledge reasoning in AI systems](https://telnyx.com/learn-ai/knowledge-reasoning#:~:text=Knowledge%20representation%20refers%20to%20structuring,easier%20to%20model%20complex%20knowledge)).
- **Active Learning:** A learning strategy where the algorithm actively selects the most informative data points to be labeled by an oracle (e.g. a human), aiming to achieve high accuracy with fewer labeled examples by focusing on uncertain or representative samples ([Active Learning Definition | DeepAI](https://deepai.org/machine-learning-glossary-and-terms/active-learning#:~:text=Active%20Learning%20Definition%20,queries%20a%20teacher%20for%20guidance)).
- **Online Learning:** A model training regime where the algorithm updates incrementally as each new data point arrives, rather than training on a fixed batch, allowing the model to adapt continuously to streaming data (useful for dynamic environments) ([Understanding knowledge reasoning in AI systems](https://telnyx.com/learn-ai/knowledge-reasoning#:~:text=Knowledge%20representation%20and%20reasoning%20,like%20reasoning)).
- **Ensemble Learning:** Methods that combine multiple models (weak learners) to produce a more robust predictor, such as bagging, boosting (e.g. AdaBoost), or stacking, often achieving higher accuracy than individual models ([What is Ethical AI?](https://www.holisticai.com/blog/what-is-ethical-ai#:~:text=Ethical%20AI%20refers%20to%20the,and%20respect%20for%20human%20values)).
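A minimal supervised-learning example to make the paradigm concrete, assuming scikit-learn: fit a classifier on labeled examples, then evaluate on held-out data. The dataset and model are arbitrary illustrative choices.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # labeled examples (inputs, outputs)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)        # learn the input -> label mapping
print(accuracy_score(y_test, model.predict(X_test)))  # generalization check
```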
## Deep Learning and Neural Networks
- **Artificial Neural Networks (ANNs):** Computing systems inspired by biological neurons, composed of layers of interconnected “neurons” (weighted units); ANNs learn to perform tasks by adjusting connection weights based on data, enabling pattern recognition and prediction ([Artificial neural networks learn better when | EurekAlert!](https://www.eurekalert.org/news-releases/971905#:~:text=image%3A%C2%A0Artificial%20neural%20networks%20are%20computing,view%20more)).
- **Deep Learning:** A subset of ML that uses **multi-layer neural networks** to learn data representations with multiple levels of abstraction, dramatically advancing the state-of-the-art in vision, speech, and many other areas ([(PDF) Deep Learning](https://www.researchgate.net/publication/277411157_Deep_Learning#:~:text=Deep%20learning%20allows%20computational%20models,have%20shone%20light%20on%20sequential)). (Deep learning’s multi-layer approach allows it to automatically learn features from raw data, given large datasets and compute power.)
- **Convolutional Neural Networks (CNNs):** A class of deep neural networks specialized for grid-like data such as images, which use convolutional layers to automatically extract local features (like edges or textures) from images; CNNs have driven breakthroughs in image classification and object detection ([Convolutional Neural Networks: A Comprehensive Guide | by Jorgecardete | The Deep Hub | Medium](https://medium.com/thedeephub/convolutional-neural-networks-a-comprehensive-guide-5cc0b5eae175#:~:text=C%20onvolutional%20Neural%20Networks%2C%20commonly,to%20process%20and%20classify%20images)).
- **Recurrent Neural Networks (RNNs):** Neural networks designed for sequential data that maintain an internal state (memory) to capture information from earlier steps in a sequence, enabling tasks like language modeling or time-series prediction by handling temporal dependencies ([Recurrent Neural Networks (RNN)](https://www.linkedin.com/pulse/recurrent-neural-networks-rnn-bluechip-technologies-asia-mg5ec#:~:text=Recurrent%20Neural%20Networks%20,applications%20of%20Recurrent%20Neural%20Networks)). *Variants include LSTMs and GRUs, which mitigate issues like short-term memory in vanilla RNNs.*
- **Transformers:** A neural network architecture using self-attention mechanisms to process sequences in parallel, rather than step-by-step as RNNs do, capturing long-range context efficiently ([AI Atlas #9: Transformers | Glasswing Ventures](https://glasswing.vc/blog/ai-atlas-9-transformers/#:~:text=Transformers%20are%20a%20type%20of,data%2C%20such%20as%20natural%20language)). *Transformers have revolutionized natural language processing, powering large language models that achieve remarkable performance on translation, question-answering, and text generation tasks.*
- **Generative Adversarial Networks (GANs):** A framework in which two neural networks – a *generator* and a *discriminator* – are trained adversarially; the generator tries to create realistic data (e.g. images) while the discriminator learns to distinguish fake from real, pushing the generator to produce increasingly realistic outputs ([Generative Adversarial Networks. Generative Adversarial Networks (GANs)… | by Marco Del Pra | Medium](https://medium.com/@marcodelpra/generative-adversarial-networks-dba10e1b4424#:~:text=learning%20systems%20adept%20at%20replicating,of%20the%20time)). (GANs have been used to create photorealistic images, deepfakes, and art styles.)
- **Autoencoders:** Neural networks trained to compress data into a lower-dimensional code (encoding) and then reconstruct it back to the original; by learning this reconstruction task, autoencoders discover important features in the data, useful for dimensionality reduction, denoising, or representation learning ([Sparse Autoencoder · Dataloop](https://dataloop.ai/library/model/tag/sparse_autoencoder/#:~:text=A%20Sparse%20Autoencoder%20is%20a,anomaly%20detection%2C%20and%20feature%20learning)). (A minimal autoencoder sketch appears after this list.)
- **Deep Reinforcement Learning:** The combination of deep neural networks with reinforcement learning techniques, enabling agents to handle high-dimensional state spaces (like raw pixels in video games) and learn complex behaviors ([Reinforcement Learning — Beginner’s Approach Chapter -I | by Shashwat Tiwari | Analytics Vidhya | Medium](https://medium.com/analytics-vidhya/reinforcement-learning-beginners-approach-chapter-i-689f999cf572#:~:text=There%20will%20be%20a%20much,the%20art%20is%20progressing%20rapidly)). *Notable results include DeepMind’s AlphaGo and AlphaZero, where deep RL achieved superhuman gameplay by combining neural networks with game simulations and reward feedback.*
- **Neuro-Symbolic Systems:** Hybrid AI systems that integrate neural networks with symbolic reasoning, aiming to fuse the pattern recognition strength of subsymbolic AI with the explainability and logical reasoning of symbolic AI ([Neuro-symbolic AI - Wikipedia](https://en.wikipedia.org/wiki/Neuro-symbolic_AI#:~:text=Neuro,114%20and%20efficient%20machine%20learning)). (This approach seeks to address limitations of purely neural or purely symbolic systems by combining them for robust reasoning ([Neuro-symbolic AI - Wikipedia](https://en.wikipedia.org/wiki/Neuro-symbolic_AI#:~:text=Neuro,114%20and%20efficient%20machine%20learning)).)
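To make the autoencoder idea above concrete, a tiny PyTorch sketch: compress 28×28 inputs to a 16-dimensional code and reconstruct them, minimizing reconstruction error. The random batch stands in for real images.

```python
import torch
from torch import nn

model = nn.Sequential(          # encoder + decoder in one stack
    nn.Flatten(),
    nn.Linear(28 * 28, 16),     # encoder: 784 pixels -> 16-dim code
    nn.ReLU(),
    nn.Linear(16, 28 * 28),     # decoder: 16-dim code -> 784 pixels
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 1, 28, 28)   # stand-in batch of 'images'
for step in range(200):
    recon = model(x).view(64, 1, 28, 28)
    loss = loss_fn(recon, x)    # reconstruction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(loss.item())              # should fall well below the initial loss
```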
## Natural Language Processing (NLP)
- **Natural Language Processing:** A field of AI focused on enabling computers to understand, interpret, and generate human language. It encompasses tasks like speech recognition, language understanding, and text generation, allowing applications such as translation, summarization, and sentiment analysis ([AI and the Challenge of Causal Reasoning and Reasoning under Uncertainty](https://www.linkedin.com/pulse/ai-challenge-causal-reasoning-under-uncertainty-prof-ahmed-banafa-xb1hc#:~:text=Causal%20Inference%3A)) ([Top Challenges for Artificial Intelligence in 2025 - BuddyX Theme](https://buddyxtheme.com/top-challenges-for-artificial-intelligence/#:~:text=3,fields%20like%20healthcare%20and%20finance)).
- **Language Models & Large Language Models (LLMs):** Language models predict or generate text based on learned patterns in language data. Recent **LLMs** (like GPT-3 or GPT-4) are massive neural networks trained on enormous text corpora, capable of producing remarkably coherent and contextually relevant text and answering questions in a human-like way ([Solving a machine-learning mystery | MIT News](https://news.mit.edu/2023/large-language-models-in-context-learning-0207#:~:text=Solving%20a%20machine,from%20poetry%20to%20programming%20code)). (A toy bigram language model appears after this list.)
- **Machine Translation:** The use of AI to automatically translate text or speech from one language to another. Modern systems often use sequence-to-sequence neural networks (such as transformer-based models) to achieve high-quality translations between many languages, surpassing earlier rule-based approaches ([AI Atlas #9: Transformers | Glasswing Ventures](https://glasswing.vc/blog/ai-atlas-9-transformers/#:~:text=Transformers%20are%20a%20type%20of,data%2C%20such%20as%20natural%20language)).
- **Speech Recognition (ASR):** Converting spoken language into text using AI models. This involves analyzing audio waveforms and using acoustic and language models (often deep networks) to transcribe words – as done by digital assistants like Siri or Alexa, which can understand voice commands ([Types of Artificial Intelligence | IBM](https://www.ibm.com/think/topics/artificial-intelligence-types#:~:text=,decisions%20on%20when%20to%20apply)) ([Types of Artificial Intelligence | IBM](https://www.ibm.com/think/topics/artificial-intelligence-types#:~:text=limited%20memory%20AI%20capabilities%20to,NLP%29%20and%20Limited)).
- **Speech Synthesis (Text-to-Speech):** Generating spoken audio from text. AI-driven TTS systems (using neural networks like WaveNet or Tacotron) produce natural-sounding speech, enabling applications from audiobooks to voice assistants that *speak* with human-like intonation.
- **Conversational AI & Chatbots:** AI systems that engage in dialogue with users in natural language, ranging from simple scripted chatbots to advanced agents powered by LLMs. They handle tasks like customer service or personal assistance by understanding user queries and generating appropriate responses ([Types of Artificial Intelligence | IBM](https://www.ibm.com/think/topics/artificial-intelligence-types#:~:text=,Limited%20Memory%20AI%20to%20understand)) ([Top Challenges for Artificial Intelligence in 2025 - BuddyX Theme](https://buddyxtheme.com/top-challenges-for-artificial-intelligence/#:~:text=3,fields%20like%20healthcare%20and%20finance)).
- **Information Extraction:** Techniques for automatically extracting structured information (names, relationships, events, etc.) from unstructured text. For example, an AI system can read news articles and identify entities and their relationships to populate a knowledge graph.
- **Sentiment Analysis:** The use of NLP to determine the emotional tone or opinion expressed in text (positive, negative, neutral). This is widely used in analysis of social media, reviews, or customer feedback to gauge public sentiment ([Ethics in AI: Ensuring fairness, transparency, and accountability in the age of algorithms](https://www.linkedin.com/pulse/ethics-ai-ensuring-fairness-transparency-accountability-age-algorithms-fhpjc#:~:text=Transparency%20in%20AI)) ([Ethics in AI: Ensuring fairness, transparency, and accountability in the age of algorithms](https://www.linkedin.com/pulse/ethics-ai-ensuring-fairness-transparency-accountability-age-algorithms-fhpjc#:~:text=AI%20systems%2C%20particularly%20those%20that,that%20aligns%20with%20ethical%20standards)).
- **Question Answering Systems:** AI systems that can answer questions posed in natural language by finding and formulating answers from a knowledge source. Examples range from IBM’s Watson (which used QA to win *Jeopardy!*) to open-domain QA models that leverage large text corpora or the web ([AI Atlas #9: Transformers | Glasswing Ventures](https://glasswing.vc/blog/ai-atlas-9-transformers/#:~:text=Transformers%20are%20a%20type%20of,data%2C%20such%20as%20natural%20language)).
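The toy bigram language model promised above: count word-pair frequencies, then generate text by repeatedly sampling the next word given the current one. Real LLMs replace this count table with a deep network conditioned on long contexts; the corpus here is a made-up toy.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def generate(start: str, length: int, rng: random.Random) -> str:
    words = [start]
    for _ in range(length):
        nxt = counts[words[-1]]
        if not nxt:
            break  # dead end: no observed successor
        choices, weights = zip(*nxt.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 8, random.Random(0)))
```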
## Computer Vision
- **Computer Vision:** The field of AI that enables machines to interpret and understand visual information from the world (images and videos). It involves tasks like recognizing objects, detecting events, and reconstructing scenes – essentially giving computers the ability to “see” ([6 ways AI is transforming healthcare | World Economic Forum](https://www.weforum.org/stories/2025/03/ai-transforming-global-health/#:~:text=AI%20can%20interpret%20brain%20scans)) ([6 ways AI is transforming healthcare | World Economic Forum](https://www.weforum.org/stories/2025/03/ai-transforming-global-health/#:~:text=AI%20can%20spot%20more%20bone,fractures%20than%20humans%20can)).
- **Image Classification:** The task of assigning a label to an entire image based on its content ([Seeing is Believing: Mastering Image Classification with Python - pago](https://pagorun.medium.com/seeing-is-believing-mastering-image-classification-with-python-e24cc1a697e1#:~:text=pago%20pagorun,analyzing%20the%20image%27s%20features)). For example, an AI model takes an image and labels it as a *cat*, *dog*, or *face* – a foundational vision capability achieved with deep learning (e.g. CNNs trained on ImageNet yield high-accuracy classifiers; a short sketch follows this list).
- **Object Detection:** The task of not only recognizing objects in an image but also locating them with bounding boxes ([Ivan Kristianto Singgih posted on LinkedIn](https://www.linkedin.com/posts/ivan-kristianto-singgih-56a5652b_objectdetection-yolov8-deepsort-activity-7065714021345230849-5ZnA#:~:text=Ivan%20Kristianto%20Singgih%20posted%20on,each%20pixel%20in%20an)). Object detection algorithms (like YOLO or Faster R-CNN) can identify multiple objects (e.g. cars, pedestrians) in a single image or video frame and mark their positions.
- **Image Segmentation:** Dividing an image into regions by classifying each pixel into a category, effectively outlining the shapes of objects or areas (e.g. separating foreground objects from background) ([Ivan Kristianto Singgih posted on LinkedIn](https://www.linkedin.com/posts/ivan-kristianto-singgih-56a5652b_objectdetection-yolov8-deepsort-activity-7065714021345230849-5ZnA#:~:text=Ivan%20Kristianto%20Singgih%20posted%20on,each%20pixel%20in%20an)). Segmentation provides a fine-grained understanding of images, useful in medical imaging (isolating organs/tumors) or autonomous driving (road, pedestrian, vehicle regions).
- **Facial Recognition:** A biometric technology that identifies or verifies individuals by analyzing images of their faces ([What is Facial Recognition? - AWS](https://aws.amazon.com/what-is/facial-recognition/#:~:text=Facial%20recognition%20is%20a%20way,an%20image%20of%20their%20face)) ([Facial Recognition - (Intro to Cognitive Science) - Fiveable](https://fiveable.me/key-terms/introduction-cognitive-science/facial-recognition#:~:text=Facial%20Recognition%20,features%20from%20images%20or%20videos)). Modern face recognition systems use deep learning to map facial features and have applications ranging from unlocking smartphones to surveillance, while raising important privacy and ethical considerations.
- **Video Analysis & Activity Recognition:** AI techniques for analyzing video streams to detect events, activities or anomalies. For instance, recognizing actions (like running vs. walking) or detecting unusual events in security footage; this builds on object detection and sequence modeling to interpret temporal visual data.
- **OCR (Optical Character Recognition):** Using computer vision to detect and read text in images (for example, scanning printed documents or street signs). AI-based OCR can handle diverse fonts and layouts, converting images to machine-readable text for indexing or translation.
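A short image-classification sketch for the list above, assuming torchvision ≥ 0.13 (which bundles pretrained weights together with their preprocessing transforms); `cat.jpg` is a placeholder path:

```python
# Hedged sketch: classify one image with a pretrained ResNet-18.
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights
from PIL import Image

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()           # resize, crop, normalize

img = Image.open("cat.jpg").convert("RGB")  # placeholder input image
batch = preprocess(img).unsqueeze(0)        # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)
label = weights.meta["categories"][logits.argmax().item()]
print(label)                                # e.g. "tabby cat"
```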
## Robotics and Embodied AI
- **Robotics:** A branch of AI that deals with designing and controlling robots – machines that perform tasks in the physical world. It integrates perception (e.g. computer vision), planning, and control so that robots can navigate, manipulate objects, and interact with their environment autonomously.
- **Autonomous Vehicles:** Self-driving cars and drones that use AI for perception and navigation. They rely on sensors (cameras, LiDAR, radar) and AI models to detect lanes, obstacles, and pedestrians, and to make driving decisions in real time, aiming to transport passengers or goods safely without human drivers ([Understanding AI and Machine Learning: A Guide for Kid - JetLearn](https://www.jetlearn.com/blog/ai-and-machine-learning#:~:text=Understanding%20AI%20and%20Machine%20Learning%3A,so%20they%20can%20navigate)).
- **Robot Navigation & SLAM:** Techniques that allow robots to move through unknown environments by simultaneously building a map and localizing themselves within it (Simultaneous Localization and Mapping). SLAM algorithms enable, for example, a robot vacuum to map your house or a drone to stabilize flight in an unknown area; a minimal odometry sketch follows this list.
- **Manipulation and Grasping:** Methods in robotics for a robot arm or hand to handle objects—identifying how to grasp an object and exert appropriate force. This involves AI for recognizing objects and planning motions so robots can pick-and-place items or use tools in factories and homes.
- **Human-Robot Interaction:** The study and design of systems in which robots and humans communicate or collaborate. This includes natural language interfaces, gesture recognition, and safety mechanisms so that robots (from industrial cobots to social robots) can work alongside people effectively and safely.
- **Swarm Robotics:** A field where large numbers of simple robots coordinate in a decentralized way, inspired by social insects like ants or bees. Each robot follows simple local rules, leading to emergent collective behavior that can accomplish tasks like area coverage or synchronized movement as a group.
- **Soft Robotics:** The design of robots with flexible, soft bodies (often inspired by biological organisms like octopuses), which can adapt to their environment. AI control algorithms for soft robots handle their continuous dynamics and can enable new types of movement and safe interaction with humans.
- **Edge and Mobile Robotics (Edge AI):** Deployment of AI models on robots and IoT devices with limited computing power (edge devices). This involves optimizing algorithms so that drones, mobile robots, or AR devices can perform AI tasks (vision, speech) on-board in real time without relying on cloud computing.
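To make the navigation bullet concrete, here is a minimal dead-reckoning sketch: integrating velocity commands into a 2D pose estimate, i.e. the prediction step that full SLAM systems correct with map and sensor observations. Pure Python, no particular robot assumed:

```python
# Dead-reckoning (odometry integration) for a unicycle-model robot.
import math

def integrate_odometry(x, y, theta, v, omega, dt):
    """Advance the pose by one time step.

    v: forward velocity (m/s), omega: angular velocity (rad/s), dt: seconds.
    """
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta = (theta + omega * dt) % (2 * math.pi)
    return x, y, theta

pose = (0.0, 0.0, 0.0)
for _ in range(10):                # drive forward while turning gently
    pose = integrate_odometry(*pose, v=1.0, omega=0.1, dt=0.1)
print(pose)
```

In practice this estimate drifts as sensor noise accumulates – which is exactly why SLAM pairs such predictions with observation-based corrections.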
## Multi-Agent Systems and Collective AI
- **Multi-Agent Systems:** Systems in which multiple intelligent agents (software or robots) interact within an environment, each with their own goals or behaviors. These agents may cooperate or compete, and the system studies coordination mechanisms, communication, and emergent behaviors in scenarios like distributed problem-solving or simulated economies ([Digital Alpha Platforms - Multi-Agent Architectures - LinkedIn](https://www.linkedin.com/posts/digitalalpha_multi-agent-architectures-activity-7179160349344980992-NMbA#:~:text=Digital%20Alpha%20Platforms%20,https)).
- **Game Theory and AI:** The use of game-theoretic principles in AI to model strategic interactions between agents with possibly conflicting interests. This underpins AI for auctions, negotiations, and any scenario where agents learn optimal strategies (as in economic games or self-driving cars negotiating right-of-way).
- **Swarm Intelligence:** An approach to AI where the collective behavior of decentralized, self-organized agents leads to intelligent outcomes, taking inspiration from nature (bird flocking, ant colonies). Algorithms like Ant Colony Optimization and Particle Swarm Optimization use this principle to solve complex optimization problems through agent cooperation.
- **Distributed AI:** AI algorithms that run across multiple machines or agents that share information and computation. This includes federated learning (multiple agents training a model without sharing raw data) and distributed consensus algorithms, allowing scalability and data privacy in AI computations.
- **Collaborative Filtering & Recommender Systems:** Typically treated as a subfield of ML, but from a multi-agent perspective the many users and items act as interacting "agents"; algorithms learn from the behavior of many users to make personalized recommendations (as in movie or product recommendation systems). A toy sketch follows below.
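A toy user-based collaborative-filtering sketch for the recommender bullet above; the ratings matrix is invented for illustration (0 = unrated):

```python
# Score unseen items for a user via similarity-weighted votes of other users.
import numpy as np

R = np.array([[5, 4, 0, 0],    # rows: users, cols: items (0 = unrated)
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

target = 0
sims = np.array([cosine(R[target], R[u]) for u in range(len(R))])
sims[target] = 0.0                          # exclude the user themself
scores = sims @ R / (sims.sum() + 1e-9)     # similarity-weighted ratings
unseen = np.where(R[target] == 0)[0]
print(unseen[np.argmax(scores[unseen])])    # best unrated item for user 0
```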
## Probabilistic AI and Uncertainty
- **Bayesian Networks:** Graphical models that represent probabilistic relationships among a set of variables (nodes), with directed edges indicating conditional dependencies. They enable reasoning under uncertainty by encoding joint probability distributions and updating beliefs when new evidence appears (used in diagnosis, forecasting, etc.).
- **Hidden Markov Models (HMMs):** Statistical models for sequences where the system being modeled is assumed to be a Markov process with unobserved (hidden) states. HMMs were a cornerstone for speech recognition and biosequence analysis by modeling sequences of observations probabilistically.
- **Markov Decision Processes (MDPs):** A mathematical framework for modeling decision-making in situations where outcomes are partly random and partly under the control of a decision-maker. MDPs formally underpin reinforcement learning problems, defining states, actions, transition probabilities, and rewards ([Reinforcement Learning — Beginner’s Approach Chapter -I | by Shashwat Tiwari | Analytics Vidhya | Medium](https://medium.com/analytics-vidhya/reinforcement-learning-beginners-approach-chapter-i-689f999cf572#:~:text=According%20to%20Wikipedia)); a value-iteration sketch follows this list.
- **Probabilistic Programming:** Programming paradigms and languages (like Stan, PyMC, or Pyro) that allow specification of probabilistic models and automate the inference process. This makes it easier for AI systems to reason with uncertainty and update beliefs with Bayesian inference in complex models.
- **Causal Inference:** Techniques focused on discovering and utilizing cause-and-effect relationships from data, rather than just correlations ([AI and the Challenge of Causal Reasoning and Reasoning under Uncertainty](https://www.linkedin.com/pulse/ai-challenge-causal-reasoning-under-uncertainty-prof-ahmed-banafa-xb1hc#:~:text=Causal%20Inference%3A)). In AI, causal inference aims to allow models to understand the impact of interventions and answer counterfactual questions (e.g., "Would this patient have improved if we had given a different treatment?"), using frameworks like Pearl’s do-calculus and causal graphs.
- **Causal AI:** An emerging subfield combining AI and causal inference, which seeks AI models that can reason about interventions and not just make predictions ([Why artificial intelligence needs to understand consequences - Nature](https://www.nature.com/articles/d41586-023-00577-1#:~:text=Nature%20www,realize%20interventions%20and%20counterfactuals)) ([AI and the Challenge of Causal Reasoning and Reasoning under ...](https://www.linkedin.com/pulse/ai-challenge-causal-reasoning-under-uncertainty-prof-ahmed-banafa-xb1hc#:~:text=,from%20observational%20or%20experimental%20data)). By learning causal structures, AI systems become more robust and explainable, understanding **why** something happens, not just **what** is happening.
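A small value-iteration sketch for the MDP bullet above, on a hand-made 2-state, 2-action problem (all numbers are illustrative):

```python
# Value iteration: iterate V(s) <- max_a [ R(s,a) + gamma * E[V(s')] ].
import numpy as np

gamma = 0.9
# P[s, a, s'] = transition probability; R[s, a] = expected reward
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(200):
    Q = R + gamma * P @ V          # Q[s, a] = R[s, a] + gamma * sum_s' P * V
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
print(V, Q.argmax(axis=1))         # optimal values and the greedy policy
```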
## Evolutionary and Bio-Inspired Computation
- **Genetic Algorithms (GAs):** Optimization algorithms inspired by biological evolution: candidate solutions play the role of individuals in a population and evolve over generations via selection, crossover, and mutation to find high-quality solutions. GAs have been applied to engineering design, scheduling, and evolving neural network weights; a minimal sketch follows this list.
- **Genetic Programming:** An extension of GAs where the structures being evolved are computer programs or algorithms themselves. The fittest programs (judged by how well they solve a given problem) are selected and varied to evolve programs that perform a task automatically, sometimes rediscovering algorithms.
- **Evolutionary Strategies and CMA-ES:** Continuous optimization techniques that evolve a *population of solution vectors* by using strategies for mutation and selection, often maintaining a covariance matrix to adapt the search distribution (Covariance Matrix Adaptation). These excel at solving complex continuous optimization problems without gradient information.
- **Swarm Optimization Algorithms:** Optimization algorithms inspired by swarm behavior. In Particle Swarm Optimization (PSO), a swarm of candidate solutions moves through the search space, each influenced by its own and its neighbors’ best-known positions; in Ant Colony Optimization, simulated ants deposit pheromones to find shortest paths in graphs (useful for routing problems).
- **Artificial Life and Cellular Automata:** Simulations of life-like systems where simple rules at the level of individuals or cells lead to complex emergent phenomena. Cellular automata (like Conway’s Game of Life) and artificial life experiments contribute to understanding how complexity can arise and are used in creative AI and modeling ecosystems.
- **Neuroevolution:** The application of evolutionary algorithms to optimize neural network parameters or architectures (weights, structures, or hyperparameters). Approaches like NEAT (NeuroEvolution of Augmenting Topologies) evolve neural network topologies along with weights, and have produced novel neural architectures without human design.
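A minimal genetic-algorithm sketch for the GA bullet above, evolving bitstrings toward the all-ones optimum ("OneMax") – a standard toy objective, not any particular application:

```python
# Tiny GA: tournament selection, one-point crossover, bit-flip mutation.
import random

L, POP, GENS, MUT = 20, 30, 50, 0.05
fitness = lambda bits: sum(bits)              # OneMax: count the 1s
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]

for _ in range(GENS):
    def tournament():
        return max(random.sample(pop, 3), key=fitness)
    nxt = []
    while len(nxt) < POP:
        a, b = tournament(), tournament()
        cut = random.randrange(1, L)          # one-point crossover
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < MUT) for bit in child]
        nxt.append(child)
    pop = nxt

print(max(map(fitness, pop)))                 # usually 20 (the optimum)
```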
## AI Ethics, Fairness, and Society
- **Ethical AI:** The practice of designing and deploying AI systems in a manner aligned with moral values and societal norms, emphasizing principles like fairness, transparency, accountability, and respect for privacy ([What is Ethical AI?](https://www.holisticai.com/blog/what-is-ethical-ai#:~:text=Ethical%20AI%20refers%20to%20the,and%20respect%20for%20human%20values)). Ethical AI seeks to prevent biases in AI decisions ([Ethics in AI: Ensuring fairness, transparency, and accountability in the age of algorithms](https://www.linkedin.com/pulse/ethics-ai-ensuring-fairness-transparency-accountability-age-algorithms-fhpjc#:~:text=Transparency%20in%20AI)), ensure explainability of models, and avoid harm to individuals or groups.
- **AI Bias and Fairness:** The analysis and mitigation of biases in AI systems. Since AI models can inadvertently learn societal biases from data, techniques in this area aim to detect unfair outcomes (e.g., along lines of race or gender) and adjust algorithms or data to ensure equitable decisions ([Ethics in AI: Ensuring fairness, transparency, and accountability in the age of algorithms](https://www.linkedin.com/pulse/ethics-ai-ensuring-fairness-transparency-accountability-age-algorithms-fhpjc#:~:text=Bias%20in%20AI%20can%20stem,outcomes%20or%20groups%20over%20others)) ([Ethics in AI: Ensuring fairness, transparency, and accountability in the age of algorithms](https://www.linkedin.com/pulse/ethics-ai-ensuring-fairness-transparency-accountability-age-algorithms-fhpjc#:~:text=AI%20systems%2C%20particularly%20those%20that,that%20aligns%20with%20ethical%20standards)).
- **Explainability and Interpretability:** Methods to make AI decisions understandable to humans. **Explainable AI (XAI)** provides human-interpretable justifications for model outputs, often through techniques that highlight important features or by simplifying complex models ([Top Challenges for Artificial Intelligence in 2025 - BuddyX Theme](https://buddyxtheme.com/top-challenges-for-artificial-intelligence/#:~:text=3,fields%20like%20healthcare%20and%20finance)). This transparency is crucial for trust in domains like healthcare or finance.
- **Privacy-Preserving AI:** Approaches like federated learning and differential privacy that enable AI models to learn from data without compromising personal or sensitive information. These techniques let AI use insights from user data (e.g., training on your smartphone inputs) while mathematically protecting individual data points from disclosure; a small differential-privacy sketch follows this list.
- **AI Safety and Alignment:** The field concerned with ensuring that advanced AI systems operate reliably and remain under human control, without unintended harmful behaviors ([Google CEO Sundar Pichai and the Future of AI | The Circuit](https://www.yeschat.ai/blog-Google-CEO-Sundar-Pichai-and-the-Future-of-AI-The-Circuit-33920#:~:text=AI%20safety%20is%20the%20field,importance%20of%20responsible%20AI%20development)). *AI alignment* specifically focuses on aligning AI goals with human values – a critical issue as AI systems become more powerful, to prevent scenarios where AI pursues objectives detrimental to humanity.
- **Responsible AI Governance:** The development of policies, regulations, and frameworks to govern AI deployment (e.g., guidelines for autonomous weapons or facial recognition use). This includes AI ethics boards in organizations, international agreements on AI principles, and compliance with laws like the EU AI Act to balance innovation with public welfare ([AI And The Law – Navigating The Future Together | United Nations University](https://unu.edu/article/ai-and-law-navigating-future-together#:~:text=provide%20a%20viable%20substitute%20for,obstacles%20to%20this%20revolutionary%20technology)) ([AI And The Law – Navigating The Future Together | United Nations University](https://unu.edu/article/ai-and-law-navigating-future-together#:~:text=historical%20data%2C%20which%20may%20contain,decisions%20made%20by%20AI%20systems)).
- **Existential Risk and AI:** A topic of debate focusing on long-term risks that highly advanced AI (especially artificial general intelligence or superintelligence) could pose to humanity’s future. It calls for proactive research into safety measures to ensure *superintelligent AI*, if achieved, would remain beneficial and under control ([Google CEO Sundar Pichai and the Future of AI | The Circuit](https://www.yeschat.ai/blog-Google-CEO-Sundar-Pichai-and-the-Future-of-AI-The-Circuit-33920#:~:text=AI%20safety%20is%20the%20field,importance%20of%20responsible%20AI%20development)) ([Google CEO Sundar Pichai and the Future of AI | The Circuit](https://www.yeschat.ai/blog-Google-CEO-Sundar-Pichai-and-the-Future-of-AI-The-Circuit-33920#:~:text=Artificial%20General%20Intelligence%20)).
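As a concrete instance of privacy-preserving AI, here is a sketch of the Laplace mechanism from differential privacy: a count query is answered with noise scaled to sensitivity/epsilon, so that no single individual's presence changes the answer distribution much (dataset and parameters are illustrative):

```python
# Laplace mechanism: true count + Laplace(sensitivity / epsilon) noise.
import numpy as np

def private_count(data, predicate, epsilon=0.5, sensitivity=1.0):
    """Differentially private count of records matching `predicate`."""
    true_count = sum(predicate(x) for x in data)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [23, 35, 45, 52, 61, 29, 41]           # toy dataset
print(private_count(ages, lambda a: a > 40))  # noisy answer near 4
```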
## Artificial General Intelligence (AGI) and Future Directions
- **Artificial General Intelligence (AGI):** A hypothetical future AI that possesses broad cognitive abilities at the human level or beyond, able to understand or learn *any* intellectual task that a human can ([Google CEO Sundar Pichai and the Future of AI | The Circuit](https://www.yeschat.ai/blog-Google-CEO-Sundar-Pichai-and-the-Future-of-AI-The-Circuit-33920#:~:text=Artificial%20General%20Intelligence%20)). Unlike narrow AI systems, which excel only in specific domains, AGI would transfer learning across tasks and exhibit common sense reasoning and adaptability akin to human intelligence.
- **Artificial Superintelligence (ASI):** An even more speculative category of AI referring to an intellect that far surpasses the brightest human minds in essentially all areas, including scientific creativity, general wisdom, and social skills. Often discussed in the context of the long-term impact of AI, ASI would exceed human capabilities and is currently purely theoretical ([Types of Artificial Intelligence | IBM](https://www.ibm.com/think/topics/artificial-intelligence-types#:~:text=)).
- **Machine Consciousness and Cognitive Computing:** Explorations into whether and how an AI system might achieve conscious awareness or subjective experience. While currently in the realm of philosophy and theoretical research, this topic intersects with cognitive science and neuroscience to understand if consciousness is substrate-independent or emergent from certain computational processes ([The Bidirectionality of Neuroscience and Artificial Intelligence - Exploring the Bidirectional Relationship Between Artificial Intelligence and Neuroscience - NCBI Bookshelf](https://www.ncbi.nlm.nih.gov/books/n/nap27764/sec_ch2/#:~:text=THE%20ROLE%20OF%20AI%20IN,COGNITIVE%20NEUROSCIENCE)) ([The Bidirectionality of Neuroscience and Artificial Intelligence - Exploring the Bidirectional Relationship Between Artificial Intelligence and Neuroscience - NCBI Bookshelf](https://www.ncbi.nlm.nih.gov/books/n/nap27764/sec_ch2/#:~:text=Pavlick%20described%20a%20%E2%80%9Cvirtuous%20cycle%E2%80%9D,ways%20of%20improving%20neural%20networks)).
- **Quantum AI:** The integration of quantum computing with AI algorithms, leveraging quantum phenomena to potentially accelerate learning or solve problems intractable for classical computers. Quantum machine learning is an emerging research area, aiming to use quantum circuits to perform tasks like state classification or speeding up optimization and sampling for AI models.
- **Federated Learning:** A distributed learning approach where model training is spread across multiple devices or servers holding local data, with only model updates (gradients or weights) being aggregated centrally. This allows training AI on sensitive data (like mobile user data or healthcare records) without that data ever leaving its source, preserving privacy while still benefiting from collective learning; a toy averaging sketch follows this list.
- **AutoML and Neural Architecture Search:** Techniques to automate the design of AI models and hyperparameter tuning. AutoML systems can search for the best model architectures or learning settings for a given problem, making AI more accessible and sometimes discovering unconventional architectures (via methods including reinforcement learning or evolution to design networks).
- **Multimodal AI:** Systems that combine multiple types of data (text, images, audio, etc.) in a unified model. For example, models like CLIP and DALL-E understand both language and vision, enabling image generation from text or vice versa. Multimodal AI reflects how humans learn from many modalities at once and is advancing capabilities in creative content generation and robotics.
- **AI for Science and Discovery:** Applying AI to accelerate scientific research, such as using deep learning to predict protein structures (DeepMind’s AlphaFold), to design new materials, or to control plasma in fusion reactors. These cutting-edge applications show AI not just solving industry problems but contributing to fundamental scientific advancements.
- **AI and Creativity:** AI systems increasingly participate in creative tasks – from generating art and music to assisting in writing novels or designing products. Creative AI, using techniques like GANs or transformers, is producing original paintings, musical compositions, and designs ([AI in Creative Fields: The Next Frontier for Art, Music, and Writing | CloudxLab Blog](https://cloudxlab.com/blog/ai-in-creative-fields-the-next-frontier-for-art-music-and-writing/#:~:text=Artificial%20Intelligence%20,implications%20for%20creators%20and%20consumers)) ([AI in Creative Fields: The Next Frontier for Art, Music, and Writing | CloudxLab Blog](https://cloudxlab.com/blog/ai-in-creative-fields-the-next-frontier-for-art-music-and-writing/#:~:text=Overview%20of%20AI%20in%20Art,Creation)), raising questions about authorship and the nature of creativity while also providing novel tools for human creators.
- **Technological Singularity:** A theoretical future point where AI improvement becomes self-perpetuating and exponential, resulting in intelligence far beyond human comprehension or control. Often associated with the emergence of superintelligence, discussions around the singularity involve speculation on the profound societal and existential implications if such a scenario were to occur.
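To illustrate the federated-learning entry above, here is a one-round model-averaging toy (a simplification of FedAvg, which normally iterates over many rounds of local training): each simulated client fits a linear model on its own private data, and only the weight vectors are shared with the server:

```python
# One-round federated averaging on synthetic linear-regression clients.
import numpy as np

def local_fit(X, y):
    """Least-squares weights for one client's private data."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):                          # 5 clients, private datasets
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=40)
    clients.append((X, y))

local_weights = [local_fit(X, y) for X, y in clients]
global_w = np.mean(local_weights, axis=0)   # server averages updates only
print(global_w)                             # close to [2.0, -1.0]
```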
## Interdisciplinary AI (AI + X)
- **AI and Neuroscience:** A two-way exchange where insights from brain research inspire new AI algorithms (e.g., neural networks, spiking neural nets) ([The Intersection of Neuroscience and AI: Understanding the Human Brain | Aster](https://www.asterhospitals.in/blogs-events-news/aster-cmi-bangalore/intersection-of-neuroscience-and-ai-understanding-human-brain#:~:text=Neural%20networks%2C%20which%20are%20complex,computer%20interfaces%20is%20on%20the)), and AI models help neuroscientists to analyze neural data and model cognition ([The Intersection of Neuroscience and AI: Understanding the Human Brain | Aster](https://www.asterhospitals.in/blogs-events-news/aster-cmi-bangalore/intersection-of-neuroscience-and-ai-understanding-human-brain#:~:text=Unraveling%20the%20Mysteries%20of%20the,Brain%20with%20AI)). This intersection has led to neuromorphic computing (hardware mimicking brain processes) and uses of AI to map brain activity or develop brain-machine interfaces.
- **AI and Psychology/Cognitive Science:** AI techniques are used to simulate cognitive processes (memory, learning, problem-solving) to test theories of mind, while cognitive science findings (like human heuristics or cognitive biases) inform the development of AI that more closely mimics human thinking. This yields cognitive architectures and insights into making AI decisions more human-like in rationale.
- **AI and Economics (Computational Economics):** The integration of AI with economic modeling and game theory, where AI agents simulate market behaviors, optimize auctions and pricing, or learn strategies in economic games ([AI and economics editorial (PDF)](https://www.cs.toronto.edu/~cebly/Papers/editorial.pdf#:~:text=While%20the%20distance%20between%20AI,of%20economic%20institutions%20and%20decisions)). Conversely, economic principles (like incentive design and mechanism design) are used to improve multi-agent AI systems – for example, using market mechanisms to allocate resources among AI agents.
- **AI and Law:** Applying AI to legal tasks (such as document review, legal research, contract analysis, and predicting case outcomes) to increase efficiency in law firms and courts ([How Is AI Changing the Legal Profession?](https://pro.bloomberglaw.com/insights/technology/how-is-ai-changing-the-legal-profession/#:~:text=In%20recent%20years%2C%20related%20technological,based%20on%20a%20short%20prompt)). It also encompasses developing legal frameworks for AI (regulating AI’s use) and addressing questions of liability and ethics when AI systems make decisions that have legal consequences ([AI And The Law – Navigating The Future Together | United Nations University](https://unu.edu/article/ai-and-law-navigating-future-together#:~:text=The%20first%20such%20obstacle%20is,have%20a%20right%20to%20be)) ([AI And The Law – Navigating The Future Together | United Nations University](https://unu.edu/article/ai-and-law-navigating-future-together#:~:text=practitioners%20and%20facilitate%20improved%20access,to%20justice)).
- **AI and Healthcare:** The use of AI to improve medical diagnosis (e.g., detecting diseases from medical images with greater accuracy ([6 ways AI is transforming healthcare | World Economic Forum](https://www.weforum.org/stories/2025/03/ai-transforming-global-health/#:~:text=AI%20can%20interpret%20brain%20scans))), personalize treatment recommendations, assist in drug discovery by predicting molecule behavior, and monitor patients (via wearable data and predictive analytics). AI in healthcare has shown promise in early detection of conditions and supporting overwhelmed healthcare systems ([Artificial intelligence in healthcare: transforming the practice of medicine - PMC](https://pmc.ncbi.nlm.nih.gov/articles/PMC8285156/#:~:text=Artificial%20intelligence%20,of%20AI%20augmented%20healthcare%20systems)).
- **AI and Education:** AI-driven personalized learning systems and intelligent tutoring that adapt to student needs, providing individualized exercises and feedback ([AI in Education: The Rise of Intelligent Tutoring Systems | Park University](https://www.park.edu/blog/ai-in-education-the-rise-of-intelligent-tutoring-systems/#:~:text=Personalized%20Learning)). Other applications include automated grading, AI teaching assistants (answering student questions), and educational data mining to improve curricula. The goal is to enhance learning outcomes by tailoring education to each learner’s pace and style with AI ([AI in Education: The Rise of Intelligent Tutoring Systems | Park University](https://www.park.edu/blog/ai-in-education-the-rise-of-intelligent-tutoring-systems/#:~:text=Intelligent%20tutoring%20systems%20tailor%20educational,deeper%20connections%20with%20the%20material)).
- **AI and Finance:** Widespread use of AI for algorithmic trading (making split-second decisions on trades), fraud detection (spotting anomalous transactions that may indicate fraud), risk management (modeling financial risk and market trends), and personalized banking (chatbots and credit scoring) ([What Is Artificial Intelligence in Finance? | IBM](https://www.ibm.com/think/topics/artificial-intelligence-finance#:~:text=Artificial%20intelligence%20,in%20the%20financial%20services%20industry)) ([What Is Artificial Intelligence in Finance? | IBM](https://www.ibm.com/think/topics/artificial-intelligence-finance#:~:text=AI%20is%20revolutionizing%20how%20financial,more%20personalized%20interactions%2C%20faster%20and)). Financial institutions leverage machine learning to analyze vast datasets for insights and to automate routine processes, while ensuring compliance and managing ethical issues like fairness in lending.
- **AI and Art/Creativity:** The convergence of AI with creative fields – AI algorithms generate paintings, music, poetry, and designs. For instance, GANs have produced artwork exhibited in galleries, and language models write fiction or assist in scriptwriting ([AI in Creative Fields: The Next Frontier for Art, Music, and Writing | CloudxLab Blog](https://cloudxlab.com/blog/ai-in-creative-fields-the-next-frontier-for-art-music-and-writing/#:~:text=Artificial%20Intelligence%20,implications%20for%20creators%20and%20consumers)) ([AI in Creative Fields: The Next Frontier for Art, Music, and Writing | CloudxLab Blog](https://cloudxlab.com/blog/ai-in-creative-fields-the-next-frontier-for-art-music-and-writing/#:~:text=Overview%20of%20AI%20in%20Art,Creation)). Artists are increasingly collaborating with AI as a tool, raising both excitement for new art forms and debates about the nature of creativity and intellectual property.
- **AI for Social Good:** Interdisciplinary efforts where AI is applied to address societal and global challenges – such as using AI for environmental monitoring and climate modeling, disaster prediction and response, improving accessibility for people with disabilities (e.g., AI-driven assistive technologies), and humanitarian efforts like analyzing satellite imagery to guide relief work. These projects highlight the potential of AI to contribute positively to society when guided by human insight and values.
## AI Tools, Libraries, and Platforms
- **TensorFlow:** An end-to-end open-source platform for machine learning developed by Google, which provides a comprehensive ecosystem of tools and libraries for building and deploying ML models (especially deep neural networks) ([TensorFlow | Google Open Source Projects](https://opensource.google/projects/tensorflow#:~:text=TensorFlow%20is%20an%20end,tools%2C%20libraries%2C%20and%20community%20resources)). TensorFlow is widely used in both research and industry for its scalability and production-ready capabilities (with TensorFlow Lite for mobile and TensorFlow Serving for deployment).
- **PyTorch:** An open-source machine learning library initially developed by Facebook AI Research, known for its dynamic computation graph and intuitive Python interface ([PyTorch (Machine Learning Library) - Lightcast](https://lightcast.io/open-skills/skills/KSWXHT30GQY9B4QSXC5O/pytorch-machine-learning-library#:~:text=PyTorch%20is%20an%20open%20source,and%20natural%20language%20processing)). PyTorch is popular in research for deep learning due to its flexibility and has been adopted in industry as well (it powers many computer vision and NLP systems, and supports hardware acceleration).
- **scikit-learn:** A free, open-source ML library for Python that offers a broad range of efficient implementations of classical machine learning algorithms (for classification, regression, clustering, etc.) ([Technology Skill: Scikit-learn - O*NET](https://www.onetonline.org/search/tech/example?e=Scikit-learn&j=15-2051.00#:~:text=Technology%20Skill%3A%20Scikit,classification%2C%20regression%20and%20clustering)). Scikit-learn focuses on ease of use, a clean API, and integration with other scientific Python libraries, making it a go-to tool for data mining and smaller-scale ML tasks; a few-line example follows this list.
- **Keras:** An open-source neural-network library that provides a high-level API for building and training deep learning models ([What is Keras | IGI Global Scientific Publishing](https://www.igi-global.com/dictionary/temporal-analysis-and-prediction-of-ambient-air-quality-using-remote-sensing-deep-learning-and-geospatial-technologies/104787#:~:text=What%20is%20Keras%20,neural%20network%20library%20that)). Keras, now part of TensorFlow, simplifies constructing neural networks by offering intuitive building blocks (layers, optimizers, loss functions) and was key in making deep learning accessible to beginners and rapid prototyping.
- **OpenAI Gym:** A toolkit for developing and comparing reinforcement learning algorithms by providing a standard set of environments (such as games, simulations, robotic tasks) where agents can be trained and evaluated ([GeekLiB/openAI-gym - GitHub](https://github.com/GeekLiB/openAI-gym#:~:text=OpenAI%20Gym%20is%20a%20toolkit,you%20access%20to%20an)). Researchers use OpenAI Gym to benchmark RL algorithms consistently on tasks like controlling a cart-pole or playing Atari games.
- **ROS (Robot Operating System):** An open-source robotics middleware and framework that provides essential tools, libraries, and conventions for developing robot applications ([What is ROS? - Ubuntu](https://ubuntu.com/robotics/what-is-ros#:~:text=What%20is%20ROS%3F%20,reuse%20code%20between%20robotics%20applications)). ROS facilitates message-passing between sensors, actuators, and control nodes in a robot, and includes packages for localization, mapping, perception, and simulation, becoming a standard in robotics research and development.
- **Apache Spark (MLlib):** A big-data processing engine that includes **MLlib**, a scalable machine learning library. It allows AI algorithms to run on large datasets distributed across clusters, supporting tasks like large-scale clustering, classification, and collaborative filtering using the power of distributed computing.
- **Hugging Face Transformers:** A popular open-source library that provides pre-trained models and easy-to-use interfaces for state-of-the-art NLP models (transformers). It allows practitioners to download models like BERT, GPT, or T5 and fine-tune or use them for tasks such as text classification, Q&A, and translation with just a few lines of code.
- **Jupyter Notebooks:** While not an AI library per se, Jupyter notebooks are an indispensable tool in the AI workflow, allowing interactive coding, visualization, and documentation. They enable data scientists to experiment with models, track results, and share reproducible research, thus supporting the AI development process in an accessible format.
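A few-line scikit-learn example (see the scikit-learn entry above) showing the typical workflow – load data, split, fit, score; the iris dataset and random forest are just convenient defaults:

```python
# Classic scikit-learn workflow: split, fit, evaluate.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")   # typically > 0.9
```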
Each of these topics represents a rich area of study and application within artificial intelligence. Together, they span the **foundational concepts**, major **subfields**, key **methodologies**, cutting-edge **tools**, and diverse **applications** of AI – illustrating both the breadth and depth of this rapidly evolving field.
More: [[AI-written artificial intelligence]]