## Tags
- Part of: [[Intelligence]]
- Related: [[Artificial Intelligence x Biological Intelligence x Collective Intelligence]]
- Includes:
- Additional:

## Definitions
- [[Artificial Intelligence]] x [[Biological intelligence]]

## Contents
- Current AI models (different architectures in different paradigms) are superhuman at some things (like memory, chess, Go, some mathematics, capacity, not dying, some science, and so on), but still subhuman in other contexts (like autonomously creating value in our economic system with long-term coherence, often causal reasoning, continual learning, adaptability, and so on).
- Let's make a benchmark testing AI systems for causal modeling, strong generalization, continual learning, data and compute efficiency, stability/reliability in symbolic reasoning, agency, more complex tasks across time and space, long-term planning, optimal Bayesian inference, etc. The ultimate benchmark would be giving AI systems all the information that Newton, Maxwell, Boltzmann, Einstein, Feynman, Edward Witten, von Neumann, etc. had before their discoveries in physics or other fields, and then seeing if the system could come up with the same or isomorphic discoveries.
- Mainstream AI uses [[Artificial neural networks]]; do we use something closer to a [[Spiking Neural Network]]? [Spiking neural network - Wikipedia](https://en.wikipedia.org/wiki/Spiking_neural_network)
- Mainstream AI uses [[backpropagation]]; do we use something closer to the [[forward-forward algorithm]] and [[hebbian learning]]? [\[2212.13345\] The Forward-Forward Algorithm: Some Preliminary Investigations](https://arxiv.org/abs/2212.13345)
- Mainstream AI is simple linear algebra, but do we model neurons with electrical dynamics such as the [[Hodgkin–Huxley model]]?
[Dendrites: Why Biological Neurons Are Deep Neural Networks - YouTube](https://www.youtube.com/watch?v=hmtQPrH-gC4)
- Mainstream AI is a static program; are we [[selfregulating]], [[selforganizing]] [[dynamical systems]] like [[neural cellular automata]]? [Growing Neural Cellular Automata](https://distill.pub/2020/growing-ca/)
- Is the brain doing predictive coding? [Predictive coding - Wikipedia](https://en.wikipedia.org/wiki/Predictive_coding) Is the brain [[bayesian]], doing [[active inference]]? [Active Inference: The Free Energy Principle in Mind, Brain, and Behavior | Books Gateway | MIT Press](https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind)
- Is the brain's [[electromagnetism]] relevant? [Frontiers | Don’t forget the boundary problem! How EM field topology can address the overlooked cousin to the binding problem for consciousness](https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2023.1233119/full)
- Are oscillations relevant? [Neural oscillation - Wikipedia](https://en.wikipedia.org/wiki/Neural_oscillation)
- The [Complexity]() of biological cells is beyond comprehension; is that relevant? Are microtubules fringe? Is Penrose's [[godel]]ian [[quantum gravity]] [[microtubules]]-collapsing-the-[[wave function]] madness right? Maybe Penrose is right and we need a quantum gravity supercomputer for AGI lol.
[Orchestrated objective reduction - Wikipedia](https://en.wikipedia.org/wiki/Orchestrated_objective_reduction) 1:44:35 [Joscha Bach Λ Ben Goertzel: Conscious Ai, LLMs, AGI - YouTube](https://www.youtube.com/watch?v=xw7omaQ8SgA&feature=youtu.be)
- The biggest limitations of current AI systems are probably getting more complex systematic coherent reasoning, planning, generalization, search, agency (autonomy), memory, factual groundedness, online/continual learning, energetic and algorithmic efficiency in software and hardware, human-like ethical reasoning, and controllability into these systems, which are still relatively weak at all of this on more complex tasks. But we are making progress, whether through composing LLMs into multiagent systems, scaling, higher-quality data and training, poking around how models work inside and thus controlling them, better mathematical models of how learning works and using those insights, or modified or overhauled architectures. Embodied robotics is also getting attention recently, and all top AGI labs are working on and investing in these things to varying degrees.
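The Hebbian-learning alternative to backpropagation mentioned in the bullets above can be sketched with a toy example. This uses Oja's stabilized variant of Hebb's rule (my choice of variant, not something these notes specify), where every weight update is purely local — pre- and post-synaptic activity only, no global error signal:

```python
import numpy as np

# Toy sketch of Hebbian learning as a local alternative to backpropagation:
# each weight update uses only pre-synaptic activity x and post-synaptic
# activity y. Oja's variant adds a decay term (y^2 * w) so weights stay bounded.

rng = np.random.default_rng(0)
# Anisotropic inputs: variance 4 along the first axis, so the first
# principal component is the first coordinate axis.
data = rng.normal(size=(500, 3)) * np.array([2.0, 1.0, 0.5])

w = rng.normal(size=3)
eta = 0.005
for _ in range(20):                  # a few passes over the data
    for x in data:
        y = w @ x                    # post-synaptic activity
        w += eta * y * (x - y * w)   # Hebbian term y*x minus Oja decay y^2*w

# Oja's rule drives w toward the first principal component with unit norm.
print(np.round(np.abs(w), 2))
```

This illustrates why such rules are attractive biologically: the neuron learns a useful feature (here, the dominant input direction) from a signal available at the synapse itself.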
## Main resources
- [Exploring the Intersection of Artificial Intelligence and Neuroscience | Internet Networking AI](https://jpvasseur.me/ai-and-neuroscience)

## Brain storming
"A lot of very evolutionarily old behaviors are hardwired in us really hard and would most likely develop in isolation as well thanks to genetics, but we also learn many behaviors throughout our lives, while genes also seem to predispose us to a lot of higher-level behaviors.

Imitation learning is a big part of how we learn, but there are also other kinds of learning that don't involve imitation, otherwise no novel and generalizing behaviors would emerge.

There's also reinforcement learning, a major form of which is learning and adapting from feedback in the form of a reward signal that labels behavior as correct or incorrect, without showing any examples of correct behavior that could be imitated. That is scientifically pretty well established to work for biological organisms.

A big factor is also probably something along the lines of evolutionary divergent search optimizing for novelty, combined with convergently optimizing some evolutionary objectives approximately encoded as basic needs in our motivation engines.

The more I look for all the kinds of learning algorithms the brain and biology in general might be using, the more I'm fascinated by their complexity and open-endedness."

[https://www.youtube.com/watch?v=_2vx4Mfmw-w](https://www.youtube.com/watch?v=_2vx4Mfmw-w)
https://www.researchgate.net/publication/46424802_Abandoning_Objectives_Evolution_Through_the_Search_for_Novelty_Alone
[https://www.youtube.com/watch?v=DxBZORM9F-8](https://www.youtube.com/watch?v=DxBZORM9F-8)

A lot of Kenneth Stanley's arguments can be summarized as: greatness cannot be only planned. Rage against only maximizing predefined objectives; embrace more divergent search, full of discovery, novelty, and accidental epiphany with serendipity.

"Is evolution intelligence?
I think evolution is a law of the natural sciences that has its own equations, just like we have other equations in physics and the other natural sciences. I think evolution is currently the most intelligent algorithm that exists, because it has emergently created human general intelligence: us. And we are also physical systems that can be described by equations, including, I think, our intelligence. And I think evolution, like all other laws in the natural sciences, is emergent from the laws of fundamental physics, such as the standard model of particle physics, with general relativity still not integrated into our model of the universe.

https://youtu.be/lhYGXYeMq_E?si=iqgtA1rGMi1hEbrx&t=2197 I agree a lot with this section on evolutionary algorithms (36:47). Kenneth Stanley, who was at OpenAI and with whom I agree a lot, argues that the algorithm behind open-ended divergent evolution created all the beautiful, creative, interesting diversity of novel organisms that we see everywhere. Thus evolution also creates all collective intelligences, such as ants and humans, and essentially, indirectly through us, the AI technologies we now see everywhere. Technically, one could also argue that people together with AIs form a kind of collective intelligence. There is nothing more fundamentally creative yet.

There probably isn't a single objective in evolution, as many AI people see it; instead, evolution learns many different emergent objectives in a gigantic space of all possible objectives, through something like guided divergent search that makes heavy use of mutation and selection. And in practice, systems like AlphaEvolve show that hybrid combinations of gradient-based methods with evolutionary algorithms are now among the best methodologies for novel discoveries that we have. I think even more symbolic methods should be stuffed into the hybrid at a more fundamental level.
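The mutation-and-selection ingredient described above can be sketched as a minimal (1+λ) evolutionary loop on a toy bit-string objective. This is only the convergent half of the story; open-ended novelty search would replace the fixed fitness function with a novelty measure over behaviors. The target string and parameters here are purely illustrative:

```python
import random

# Minimal (1+λ) evolutionary loop: mutate a parent, keep the best of
# parent + offspring (elitist selection), repeat.

def fitness(genome):
    # Toy convergent objective: how many bits match a fixed target string.
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    return sum(g == t for g, t in zip(genome, target))

def mutate(genome, rate=0.1):
    # Flip each bit independently with probability `rate`.
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)
parent = [random.randint(0, 1) for _ in range(8)]
for generation in range(200):
    offspring = [mutate(parent) for _ in range(10)]   # λ = 10 mutants
    parent = max(offspring + [parent], key=fitness)   # selection keeps the best
    if fitness(parent) == 8:
        break

print(fitness(parent))
```

On a unimodal toy landscape like this, pure convergent selection finds the optimum easily; the deceptive landscapes in the "Abandoning Objectives" paper linked above are exactly where it fails and novelty-driven divergence helps.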
"

I think in practice any predictive machine, biological or not, is constrained by its architectural biases, finite data, finite computational resources for modeling, finite and limited sense modalities, finite and limited perspectives as an agent in a bigger complex system, etc. So every biological and nonbiological information processing system always lives in its evolutionary niche, never fully universal. But generality is a spectrum, and it can be evaluated in a lot of possible ways. The space of all possible intelligences is so fascinating to me in general :D

[[Thoughts comparing AI and biological intelligence]]

## Resources
[[Links comparing AI and biological intelligence Ne]]

#### Intersection between the transformer machine learning architecture and the brain
1. Hippocampus and Spatial Information:
   - Researchers have found that the hippocampus, a brain structure critical for memory, can be modeled as a type of neural network similar to transformers. This model tracks spatial information in a way that parallels the brain's inner workings, suggesting that transformers can mimic certain brain functions related to memory and spatial awareness[1].
2. Neuron-Astrocyte Networks:
   - A hypothesis has been proposed that neuron-astrocyte networks in the brain can naturally implement the core computations performed by transformers. Astrocytes, non-neuronal cells in the brain, play a role in information processing and could be key to understanding how transformers might be biologically implemented[2][3].
3. Grid Cells and Spatial Representation:
   - Transformers have been shown to replicate the spatial representations of grid cells in the hippocampus. Grid cells help animals understand their position in space, and transformers can determine their current location by analyzing past states and movements, similar to how grid cells function[4][5].

#### Computational and Biological Parallels
1. Self-Attention Mechanism:
   - The self-attention mechanism in transformers, which lets them process inputs by considering the relationships between all elements, has been difficult to interpret biologically. However, it has been suggested that the tripartite synapse (a connection involving an astrocyte and two neurons) could perform the role of normalization in the transformer's self-attention operation[2].
2. Energy Efficiency and Learning:
   - Unlike transformers, which require massive amounts of data and energy for training, the human brain operates on a much smaller energy budget and learns efficiently from limited data. This difference highlights the brain's superior efficiency and adaptability compared to current AI models[2][3].

#### Implications for AI and Neuroscience
1. Improving AI Models:
   - Insights from neuroscience can help improve AI models by providing a better understanding of how the brain processes information. For instance, understanding the role of astrocytes in brain function could lead to more biologically plausible AI architectures[3].
2. Understanding Brain Disorders:
   - Studying the parallels between transformers and brain function could also provide new hypotheses for how brain disorders and diseases affect astrocyte function, potentially leading to new therapeutic approaches[2].

In conclusion, while transformers and the human brain share some similarities in their hierarchical organization and information processing capabilities, significant differences remain. The brain's complexity and efficiency far surpass current AI models, but ongoing research continues to bridge the gap, offering valuable insights for both fields[1][2][3][4][5].
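A minimal NumPy sketch of the self-attention operation discussed above, with the softmax step marked as the normalization that the tripartite-synapse hypothesis maps onto astrocytes. Shapes and weight matrices here are illustrative, not taken from any of the cited papers:

```python
import numpy as np

# Minimal single-head scaled dot-product self-attention: every position
# attends to every other, and a softmax normalizes the attention weights.

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise relevance of positions
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax normalization step
    return weights @ V                              # each output mixes all values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                         # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

The softmax division is the part with no obvious single-neuron analogue, which is why the astrocyte-normalization proposal in [2] is interesting.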
Citations:
1. [How Transformers Seem to Mimic Parts of the Brain | Quanta Magazine](https://www.quantamagazine.org/how-ai-transformers-mimic-parts-of-the-brain-20220912/)
2. [Building transformers from neurons and astrocytes](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10450673/)
3. [AI models are powerful, but are they biologically plausible? | MIT News | Massachusetts Institute of Technology](https://news.mit.edu/2023/ai-models-astrocytes-role-brain-0815)
4. [The Neural Network in Our Heads: How Transformer Architectures Mirror the Human Brain](https://www.brown-tth.com/post/the-neural-network-in-our-heads-how-transformer-architectures-mirror-the-human-brain)
5. [\[2112.04035\] Relating transformers to models and neural representations of the hippocampal formation](https://arxiv.org/abs/2112.04035)