## Tags
- Part of: [[Intelligence]]
- Related: [[Artificial Intelligence x Biological Intelligence x Collective Intelligence]]
- Includes:
- Additional:

## Definitions
- [[Artificial Intelligence]] x [[Biological intelligence]]

## Contents
- Current AI models (different architectures in different paradigms) are superhuman at some things (memory, chess, Go, some mathematics, raw capacity, not dying, some science, and so on), but still subhuman in other contexts (autonomously creating value in our economic system with long-term coherence, often causal reasoning, continual learning, adaptability, and so on).
- Let's make a benchmark testing AI systems on causal modeling, strong generalization, continual learning, data and compute efficiency, stability/reliability in symbolic reasoning, agency, more complex tasks across time and space, long-term planning, optimal Bayesian inference, etc. The ultimate benchmark would be giving AI systems all the information that Newton, Maxwell, Boltzmann, Einstein, Feynman, Edward Witten, von Neumann, etc. had before their discoveries in physics or other fields and then seeing whether the systems could come up with the same or isomorphic discoveries.
- Mainstream AI uses [[Artificial neural networks]]; do we use something closer to a [[Spiking Neural Network]]? [Spiking neural network - Wikipedia](https://en.wikipedia.org/wiki/Spiking_neural_network) (see the sketch at the end of this list)
- Mainstream AI uses [[backpropagation]]; do we use something closer to the [[forward-forward algorithm]] and [[hebbian learning]]? [\[2212.13345\] The Forward-Forward Algorithm: Some Preliminary Investigations](https://arxiv.org/abs/2212.13345)
- Mainstream AI is simple linear algebra; do we need to model neurons with electrical dynamics such as the [[Hodgkin–Huxley model]]? [Dendrites: Why Biological Neurons Are Deep Neural Networks - YouTube](https://www.youtube.com/watch?v=hmtQPrH-gC4)
- Mainstream AI is a static program; are we [[selfregulating]], [[selforganizing]] [[dynamical systems]] like [[neural cellular automata]]? [Growing Neural Cellular Automata](https://distill.pub/2020/growing-ca/)
- Is the brain doing predictive coding? [Predictive coding - Wikipedia](https://en.wikipedia.org/wiki/Predictive_coding) Is the brain [[bayesian]] and doing [[active inference]]? [Active Inference: The Free Energy Principle in Mind, Brain, and Behavior | Books Gateway | MIT Press](https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind)
- Is the brain's [[electromagnetism]] relevant? [Frontiers | Don’t forget the boundary problem! How EM field topology can address the overlooked cousin to the binding problem for consciousness](https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2023.1233119/full)
- Are oscillations relevant? [Neural oscillation - Wikipedia](https://en.wikipedia.org/wiki/Neural_oscillation)
- The [Complexity]() of biological cells is beyond comprehension; is that relevant? Are microtubules fringe? Is Penrose's [[godel]]ian [[quantum gravity]] [[microtubules]]-collapsing-the-[[wave function]] madness right? Maybe Penrose is right and we need a quantum gravity supercomputer for AGI, lol. [Orchestrated objective reduction - Wikipedia](https://en.wikipedia.org/wiki/Orchestrated_objective_reduction) 1:44:35 [Joscha Bach Λ Ben Goertzel: Conscious Ai, LLMs, AGI - YouTube](https://www.youtube.com/watch?v=xw7omaQ8SgA&feature=youtu.be)
- The biggest limitations of current AI systems are probably in building more complex systematic coherent reasoning, planning, generalization, search, agency (autonomy), memory, factual groundedness, online/continual learning, energetic and algorithmic efficiency of software and hardware, human-like ethical reasoning, and controllability into AI systems, which are still relatively weak at more complex tasks. We are making progress on this, whether through composing LLMs into multi-agent systems, scaling, higher-quality data and training, poking around how the models work inside and thus controlling them, better mathematical models of how learning works and applying those insights, or modified or overhauled architectures; embodied robotics is also getting attention recently. All top AGI labs are working on and investing in these things to varying degrees.
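As a companion to the spiking-neural-network bullet above, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the simplest spiking model and a heavily reduced cousin of the Hodgkin–Huxley equations. All constants (membrane time constant, threshold, input current) are illustrative assumptions, not fitted values:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=-65e-3,
                 v_reset=-70e-3, v_thresh=-50e-3, r=1e7):
    """Euler-integrate dv/dt = (-(v - v_rest) + r * i) / tau and
    emit a spike whenever v crosses v_thresh."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + r * i_in) / tau
        if v >= v_thresh:                  # threshold crossing -> spike
            spike_times.append(step * dt)  # record spike time in seconds
            v = v_reset                    # hard reset after spiking
    return spike_times

# 100 ms of constant 2 nA input drives a regular spike train.
current = np.full(100, 2e-9)
print(simulate_lif(current))
```

Unlike a standard artificial neuron, the output here is a set of spike times rather than a continuous activation, which is part of why credit assignment (and hence backpropagation) is awkward for SNNs.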
## Main resources
- [Exploring the Intersection of Artificial Intelligence and Neuroscience | Internet Networking AI](https://jpvasseur.me/ai-and-neuroscience)

## Brain storming
[[Thoughts comparing AI and biological intelligence]]

## Resources
[[Links comparing AI and biological intelligence Ne]]

#### Intersection between the transformer machine learning architecture and the brain

1. Hippocampus and Spatial Information:
   - Researchers have found that the hippocampus, a brain structure critical for memory, can be modeled as a type of neural network similar to transformers. This model tracks spatial information in a way that parallels the brain's inner workings, suggesting that transformers can mimic certain brain functions related to memory and spatial awareness[1].
2. Neuron-Astrocyte Networks:
   - A hypothesis has been proposed that neuron-astrocyte networks in the brain can naturally implement the core computations performed by transformers. Astrocytes, which are non-neuronal cells in the brain, play a role in information processing and could be key to understanding how transformers might be biologically implemented[2][3].
3. Grid Cells and Spatial Representation:
   - Transformers have been shown to replicate the spatial representations of grid cells in the hippocampus. Grid cells help animals understand their position in space, and transformers can determine their current location by analyzing past states and movements, similar to how grid cells function[4][5].

Computational and Biological Parallels

1. Self-Attention Mechanism:
   - The self-attention mechanism in transformers, which allows them to process inputs by considering the relationships between all elements, has been difficult to interpret biologically. However, it has been suggested that the tripartite synapse (a connection involving an astrocyte and two neurons) could perform the role of normalization in the transformer's self-attention operation[2] (a minimal sketch of this operation follows after this list).
2. Energy Efficiency and Learning:
   - Unlike transformers, which require massive amounts of data and energy for training, the human brain operates on a much smaller energy budget and learns efficiently from limited data. This difference highlights the brain's superior efficiency and adaptability compared to current AI models[2][3].
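To make the self-attention operation in item 1 above concrete, here is a minimal NumPy sketch of scaled dot-product self-attention. The softmax normalization inside it is the step that the tripartite-synapse hypothesis maps onto astrocytes; the sequence length, model width, and random weights are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax; this normalization is the step the
    tripartite-synapse hypothesis attributes to astrocytes."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence x of shape (n, d)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v      # project tokens to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])  # pairwise token-token similarities
    return softmax(scores) @ v               # normalized mixing of value vectors

# Illustrative sizes: 4 tokens, model width 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # -> (4, 8)
```

Every token attends to every other token through the `scores` matrix; a full transformer wraps this in multiple heads, residual connections, and feed-forward layers, all omitted here.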
Implications for AI and Neuroscience

1. Improving AI Models:
   - Insights from neuroscience can help improve AI models by providing a better understanding of how the brain processes information. For instance, understanding the role of astrocytes in brain function could lead to more biologically plausible AI architectures[3].
2. Understanding Brain Disorders:
   - Studying the parallels between transformers and brain function could also provide new hypotheses for how brain disorders and diseases affect astrocyte function, potentially leading to new therapeutic approaches[2].

In conclusion, while transformers and the human brain share some similarities in their hierarchical organization and information-processing capabilities, significant differences remain. The brain's complexity and efficiency far surpass current AI models, but ongoing research continues to bridge the gap, offering valuable insights for both fields[1][2][3][4][5].

Citations:
1. [How Transformers Seem to Mimic Parts of the Brain | Quanta Magazine](https://www.quantamagazine.org/how-ai-transformers-mimic-parts-of-the-brain-20220912/)
2. [Building transformers from neurons and astrocytes](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10450673/)
3. [AI models are powerful, but are they biologically plausible? | MIT News | Massachusetts Institute of Technology](https://news.mit.edu/2023/ai-models-astrocytes-role-brain-0815)
4. [The Neural Network in Our Heads: How Transformer Architectures Mirror the Human Brain](https://www.brown-tth.com/post/the-neural-network-in-our-heads-how-transformer-architectures-mirror-the-human-brain)
5. [[2112.04035] Relating transformers to models and neural representations of the hippocampal formation](https://arxiv.org/abs/2112.04035)