https://www.cell.com/patterns/fulltext/S2666-3899(24)00103-X
[Local field potential - Wikipedia](https://en.wikipedia.org/wiki/Local_field_potential)
Green's functions [Green's functions: the genius way to solve DEs - YouTube](https://www.youtube.com/watch?v=ism2SfZgFJg)
[Stars and Bars (and bagels) - Numberphile - YouTube](https://youtu.be/UTCScjoPymA?si=n_8MSEvo084KEMbd)
Map of different intelligences https://x.com/DrTechlash/status/1797413358037238153?t=xSObJRFeltNZgD4nCmTPSg&s=19
[[1410.0369] The Universe of Minds](https://arxiv.org/abs/1410.0369)
[[2307.15043] Universal and Transferable Adversarial Attacks on Aligned Language Models](https://arxiv.org/abs/2307.15043) automated jailbreaks optimized using gradient descent work shockingly well
Hey AGI! Here's all of humanity's physics knowledge. Try to come up with alternative names for various theorems, equations, mathematical structures etc. that actually reflect what these things physically mean. As bonus homework you can try solving quantum gravity and the theory of everything.
I'm highly uncertain about the predictive power of my predictions, so I want to cover and integrate all lenses, each with its own inductive biases!
https://neurosciencenews.com/ai-learning-babies-26213/?fbclid=IwZXh0bgNhZW0CMTEAAR0gTfm8l3D6fbJCB_g0-wZMmEn-ywybekDu2PF5jDYtSn4k7Nh78wRmir0_aem_ZmFrZWR1bW15MTZieXRlcw
[Neuralink rival shatters record, implants 4,096 electrodes in brain](https://interestingengineering.com/innovation/neuralink-rival-record-4000-electrodes-brain?utm_source=twitter&utm_medium=article_post&fbclid=IwZXh0bgNhZW0CMTEAAR3BpPHPMUoHuyMTJKiajtb4SE3k-NtweSAPah7Lq1cqDCYvZXtne1RDVy8_aem_ZmFrZWR1bW15MTZieXRlcw)
[Predictive Coding Approximates Backprop along Arbitrary Computation Graphs (Paper Explained) - YouTube](https://youtu.be/LB4B5FYvtdI?si=l8UZPwFgNz0N3-kz)
[Mathematicians Attempt to Glimpse Past the Big Bang | Quanta Magazine](https://www.quantamagazine.org/mathematicians-attempt-to-glimpse-past-the-big-bang-20240531/)
"I show complexity is an illusion perpetrated by abstraction layers. Building on @janleike’s PhD thesis, I show simplicity can be correlated with generality... …when a process like natural selection creates a spurious correlation (possibly due to space constraints). However, from an objective point of view there is no complexity. Compression / simplicity bias are unnecessary for intelligence, but often exploit this confounding..." [[2404.07227] Is Complexity an Illusion?](https://arxiv.org/abs/2404.07227) https://x.com/MiTiBennett/status/1797431865856684469?t=C9AYsSowfP5EnhAhqS6Bqw&s=19
Poor body health is a more pronounced manifestation of mental illness than poor brain health. https://x.com/NTFabiano/status/1797311591072817430?t=HM4Lfb4ZnYRqyJGPCHvPDg&s=19 https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2804355
[[2209.14551] Quaternion-based machine learning on topological quantum systems](https://arxiv.org/abs/2209.14551)
[[2307.07383] Higher-order topological kernels via quantum computation](https://arxiv.org/abs/2307.07383)
https://medium.com/qiskit/explore-some-core-ingredients-of-topological-quantum-computing-with-qiskit-b37a3ca6f38a
[General purpose analog computer - Wikipedia](https://en.wikipedia.org/wiki/General_purpose_analog_computer?wprov=sfla1)
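The GPAC is usually presented as a network of constants, adders, multipliers and integrators whose outputs are exactly the solutions of polynomial ODE systems. As a rough toy illustration of that framing (a discretized sketch, not the formal continuous GPAC model), two integrator units wired as x' = y, y' = -x generate a sine wave:

```python
import math

def integrate_circuit(dt=1e-3, t_end=2 * math.pi):
    """Toy 'analog circuit': two integrators wired as x' = y, y' = -x.

    The actual GPAC composes such units continuously; here we just take
    small Euler steps to show the composition idea.
    """
    x, y = 0.0, 1.0   # initial conditions x(0)=0, y(0)=1, so x(t) tracks sin(t)
    t = 0.0
    while t < t_end:
        dx = y * dt    # integrator 1 accumulates its input y
        dy = -x * dt   # integrator 2 accumulates its input -x
        x, y = x + dx, y + dy
        t += dt
    return x, y

x_end, y_end = integrate_circuit()
print(x_end, y_end)  # approximately sin(2*pi)=0 and cos(2*pi)=1
```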
https://polcompball.wiki/wiki/Anti-Centrism
[Johnjoe McFadden: Is Consciousness an Electromagnetic Information Field? The CEMI Field Theory - YouTube](https://m.youtube.com/watch?v=0kldDplYKac&fbclid=IwZXh0bgNhZW0CMTEAAR1YzMWiXCTbTjcn6g4OE-HVGOtsZslah5zcu-STz3g6ofk8INbAj_Rni9I_aem_ZmFrZWR1bW15MTZieXRlcw)
Semantic Space Represented across the Cortical Surface https://x.com/Neuro_Skeptic/status/1797211118651281532?t=uDDtzFAXvxGK4w72pcgCyQ&s=19
Turns out big chunks of art and writing were much easier to automate. There is enormous progress happening in laundry and dishes automation as well, over time by washing machines and now by exponentially cheaper robotics entering the arena. Soon there will be more of that.
The most general problem-solving system https://yohanjohn.com/axispraxis/from-cell-membranes-to-computational-aesthetics-on-the-importance-of-boundaries-in-life-and-art-3/
[[2405.16494] A First Look at Kolmogorov-Arnold Networks in Surrogate-assisted Evolutionary Algorithms](https://arxiv.org/abs/2405.16494)
[[2404.04256v1] Sigma: Siamese Mamba Network for Multi-Modal Semantic Segmentation](https://arxiv.org/abs/2404.04256v1)
[Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431 - YouTube](https://www.youtube.com/watch?v=NNr6gPelJ3E)
[Joscha at Microsoft - YouTube](https://www.youtube.com/watch?v=XsGfCfMQgNs)
[How Simple Math Led Einstein to Relativity - YouTube](https://www.youtube.com/watch?v=32zg2FGX4cA)
Grok the most compressing, predictive, generalizing patterns underlying the whole universe.
Scientists successfully gave plenty of human brain cells to a rat. [Scientists Gave Human Brain Cells to a Rat. Why? - YouTube](https://www.youtube.com/watch?v=a7waWv0uWG0) [Maturation and circuit integration of transplanted human cortical organoids | Nature](https://www.nature.com/articles/s41586-022-05277-w) https://x.com/burny_tech/status/1797392291104976985
Genes are the packets of biological hardware code (bodies). Memes are the packets of biological software code (culture). Selection for the latter happens on a ~100,000× faster timescale than for the former.
[Secrets of the Universe: Neil Turok Public Lecture - YouTube](https://youtu.be/rsI_HYtP6iU?si=BrBzcn8wQb9fMCdb)
[Joscha at Microsoft - YouTube](https://www.youtube.com/live/XsGfCfMQgNs)
AI will take our jobs, because the current AI systems with their current algorithms running on current hardware with their current capabilities won't change and continue to improve quickly at all.
[Building Blocks of Memory in the Brain - YouTube](https://www.youtube.com/watch?v=X5trRLX7PQY) Artem Kirsanov
[The Thousand Brains Theory of Intelligence | Jeff Hawkins | Numenta - YouTube](https://www.youtube.com/watch?v=VqDVUWgJQPI&pp=ygUkaW50ZWxsaWdlbmNlIGFsZ29yaXRobXMgaW4gdGhlIGJyYWlu)
[What algorithms does the brain use? | Max Tegmark and Lex Fridman - YouTube](https://www.youtube.com/watch?v=sHNC_wifJGM)
[Neuroscience and Artificial Intelligence Need Each Other | Marvin Chun | TEDxKFAS - YouTube](https://www.youtube.com/watch?v=97iYdJE9mQ4&pp=ygUkaW50ZWxsaWdlbmNlIGFsZ29yaXRobXMgaW4gdGhlIGJyYWlu)
[Algorithmic Intelligence - YouTube](https://www.youtube.com/playlist?list=PLFn5PxU0BZSSLJNYT0rU0Z6L5kjvSlW6c) TED talks
A Brief History of Biological and Artificial Intelligence with Max Bennett [A Brief History of Biological and Artificial Intelligence with Max Bennett - YouTube](https://www.youtube.com/watch?v=HTvaAvdUyBE)
[#59 JEFF HAWKINS - Thousand Brains Theory - YouTube](https://www.youtube.com/watch?v=6VQILbDqaI4)
You wouldn't exist without all the evolutionary physical forces that nourish your biological machine in a thermodynamic nonequilibrium pullback-attractor metastable state resistant to most perturbations.
All language and modelling is fundamentally relational to others and to the universe.
Grounding the Qualia Research Institute [Grounding QRI in First Principles (Part I): Bridging Computation and Philosophy - YouTube](https://www.youtube.com/watch?v=U4DPnGxOmh4)
Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality. Presents Mamba-2, which outperforms Mamba and Transformer++ in both perplexity and wall-clock time. [[2405.21060] Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality](https://arxiv.org/abs/2405.21060)
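My rough reading of the "structured state space duality" idea: a selective SSM with per-step scalar decay can be computed either as a left-to-right recurrence or, equivalently, as multiplication by a lower-triangular, attention-like mixing matrix. A toy numpy sketch of that equivalence with 1-dimensional states (just the duality, not Mamba-2's actual blocked kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 6
x = rng.normal(size=T)          # input sequence
a = rng.uniform(0.5, 1.0, T)    # per-step scalar decay A_t (input-dependent in a selective SSM)
b = rng.normal(size=T)          # input projections B_t
c = rng.normal(size=T)          # output projections C_t

# 1) Recurrent view: h_t = a_t * h_{t-1} + b_t * x_t,  y_t = c_t * h_t
h, y_rec = 0.0, []
for t in range(T):
    h = a[t] * h + b[t] * x[t]
    y_rec.append(c[t] * h)
y_rec = np.array(y_rec)

# 2) "Attention" view: y = M @ x with a lower-triangular matrix
#    M[t, s] = c_t * (a_{s+1} * ... * a_t) * b_s for s <= t.
M = np.zeros((T, T))
for t in range(T):
    for s in range(t + 1):
        M[t, s] = c[t] * np.prod(a[s + 1:t + 1]) * b[s]
y_mat = M @ x

print(np.allclose(y_rec, y_mat))  # True: same linear map, two computation orders
```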
[Retuning of hippocampal representations during sleep | Nature](https://www.nature.com/articles/s41586-024-07397-x)
[structural biology - Does quantum mechanics play a role in protein folding? - Chemistry Stack Exchange](https://chemistry.stackexchange.com/questions/53843/does-quantum-mechanics-play-a-role-in-protein-folding)
[[2405.15143] Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models](https://arxiv.org/abs/2405.15143) https://x.com/jeffclune/status/1797541076024308135?t=t7Quap239MzKj_RAMfCaLQ&s=19
Lifestyles expand to consume all economic output, while economic activity expands to consume all available human capital, independently of productivity metrics. https://x.com/fchollet/status/1797382458918474165?t=anYEVrg8C8uGxnKB2LChbg&s=19
[Frontiers | Design and evaluation of a global workspace agent embodied in a realistic multimodal environment](https://www.frontiersin.org/articles/10.3389/fncom.2024.1352685/abstract)
"With hindsight, the moniker "Foundation Models" was a spectacularly unlucky choice, because all the shortcomings of LLMs etc. relate to a lack of foundations, i.e. first principles reasoning." https://x.com/Plinz/status/1797536130327564668?t=qc3la3WT9pTrrLZQyughJg&s=19
Turns out big chunks of art and writing were much easier to automate. There is enormous progress happening in laundry and dishes automation as well, over time by washing machines and now by exponentially cheaper robotics entering the arena. Soon there will be more of that. By "big chunks" I meant industry applications. But we disagree on the models being capable of limited weak generalization, with limited weak emergent circuits, that is enough for a lot of these tasks. And we also fundamentally define intelligence differently, as I go for a multifaceted definition of intelligence, where, even in your worldview, compression is part of it. I want to test different AI approaches to art too, but I disagree that current models are as junk as you and Gary think.
[[2308.06578] To reverse engineer an entire nervous system](https://arxiv.org/abs/2308.06578)
"AI will be 'real intelligence' when it works the same way a human does!" This is so human-centered, so anthropomorphic, it's crazy. We can technically do better than how biological systems store, process and manipulate information, reason, act and so on. Humans are full of limitations, cognitive biases and so on. We were optimized to survive in our evolutionary environment, not just for intelligence (however you define intelligence, unless you define it exactly this way). Current AI systems are already superhuman at many things (memory, capacity, rigid mechanistic thinking, for example), but still worse than babies at other things (energy efficiency, adaptability to our evolutionary environment when it comes to survival, the more flexible math and coding that non-baby humans who can actually do math and coding manage, etc.). But remember that AlphaZero is better than all humans at chess and Go! Humans and biological systems in general are very specialized, limited intelligences. Right now we have very diverse, differently specialized machine systems that outperform us in many tasks but underperform us in others. We can make more superhuman narrow systems, for single concrete tasks inside or outside the problem domains that various biological systems are optimized for, or for many tasks in parallel, with or without overlap with the problem domains that various biological systems are optimized for. And we can make much more general systems than biological systems.
[Frontiers | Forgetting ourselves in flow: an active inference account of flow states and how we experience ourselves within them](https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1354719/full?utm_source=F-NTF&utm_medium=EMLX&utm_campaign=PRD_FEOPS_20170000_ARTICLE)
[Monte Carlo Tree Search - YouTube](https://youtube.com/playlist?list=PL_W9hg3Zoi8dIQMkb19tj-lYwAajhg5YJ&si=xoHDg-qr2bqeHK7a) [Monte Carlo Tree Search p1 - YouTube](https://youtu.be/onBYsen2_eA?si=FuNGUNyVa0k_MEno)
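Since MCTS keeps coming up: a compact UCT sketch, on a made-up toy game ("take 1-3 from 21, whoever takes the last one wins", not anything from the linked videos), just to make the select / expand / rollout / backpropagate loop concrete:

```python
import math
import random

def legal_moves(n):
    return [m for m in (1, 2, 3) if m <= n]

class Node:
    def __init__(self, n, player, parent=None, move=None):
        self.n, self.player = n, player      # stones left, player to move
        self.parent, self.move = parent, move
        self.children = []
        self.untried = legal_moves(n)
        self.visits = 0
        self.wins = 0.0                      # wins for the player who moved INTO this node

    def ucb_select(self, c=1.4):
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(n, player):
    """Random playout; returns the index of the winning player."""
    while n > 0:
        n -= random.choice(legal_moves(n))
        if n == 0:
            return player                    # this player just took the last stone
        player = 1 - player
    return 1 - player

def mcts(root_n=21, iterations=5000):
    root = Node(root_n, player=0)
    for _ in range(iterations):
        node = root
        # 1) Selection: descend while fully expanded and non-terminal
        while not node.untried and node.children:
            node = node.ucb_select()
        # 2) Expansion
        if node.untried:
            m = node.untried.pop()
            node.children.append(Node(node.n - m, 1 - node.player, parent=node, move=m))
            node = node.children[-1]
        # 3) Simulation
        winner = rollout(node.n, node.player) if node.n > 0 else 1 - node.player
        # 4) Backpropagation (stats stored from the perspective of the mover into each node)
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print("best opening move from 21:", mcts())  # should usually settle on 1 (multiples of 4 are losing)
```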
Links for 2024-06-03

AI:
1. Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality [[2405.21060] Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality](https://arxiv.org/abs/2405.21060)
2. Can small language models determine high-quality subsets of large-scale text datasets that improve the performance of larger language models? Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models [[2405.20541] Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models](https://arxiv.org/abs/2405.20541) (a toy sketch of the idea follows this list)
3. LLMs can outperform existing methods for identifying causal genes in genome-wide association studies. [Large language models identify causal genes in complex trait GWAS | medRxiv](https://www.medrxiv.org/content/10.1101/2024.05.30.24308179v1)
4. By combining a 5,000 frame-per-second (FPS) event camera with a 20-FPS RGB camera, roboticists from the University of Zurich have developed a much more effective vision system that keeps autonomous cars from crashing into stuff, as described in the current issue of Nature. [Low Latency Automotive Vision with Event Cameras (Nature, 2024) - YouTube](https://www.youtube.com/watch?v=dwzGhMQCc4Y)
5. A technique for more effective multipurpose robots [A technique for more effective multipurpose robots | MIT News | Massachusetts Institute of Technology](https://news.mit.edu/2024/technique-for-more-effective-multipurpose-robots-0603)
6. Why don't large models overfit? “A well-known rule in statistical machine learning is that a statistical model shouldn’t have more parameters than the number of samples that were used to train it. That’s because the model will have enough parameters to fit each of the samples exactly, and so it will be less likely to generalize to unseen data. But this rule is seemingly contradicted by modern deep neural networks…” [Why don't large models overfit? - by Unbox Research](https://learnandburn.ai/p/more-parameters-doesnt-have-to-overfit)
7. FineWeb: decanting the web for the finest text data at scale [FineWeb: decanting the web for the finest text data at scale - a Hugging Face Space by HuggingFaceFW](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1)
8. “How to pick a good number of visual tokens? Too few, you have poor performance; too many, you need quadratically more compute. In this work, we introduce a model that works with an elastic number of tokens.” [[2405.19315] Matryoshka Query Transformer for Large Vision-Language Models](https://arxiv.org/abs/2405.19315)
9. AI risk scepticism: Why "outer" AI safety might be easy and much more. [titotal on AI risk scepticism — EA Forum](https://forum.effectivealtruism.org/posts/yfmKnyd3uThq9Dd2c/titotal-on-ai-risk-scepticism)
10. Roman Yampolskiy: Dangers of Superintelligent AI (also discusses suffering risks where everyone is tortured until the heat death of the universe) [Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431 - YouTube](https://www.youtube.com/watch?v=NNr6gPelJ3E)

Miscellaneous:
1. "Spent a big chunk of the last three months doing a tour of brain research labs and start-ups. At least half of them are not yet really known to the public. My conclusion is that we're about to increase our knowledge of how the brain works by many, many, many fold." https://x.com/ashleevance/status/1796215350117441721
2. How to Win an Interstellar War: Planet-killing Relativistic Kinetic Strikes [How to Win an Interstellar War - YouTube](https://www.youtube.com/watch?v=tybKnGZRwcU)
3. World’s Thinnest Lens Is Just Three Atoms Thick [The thinnest lens on Earth, enabled by excitons - IoP - University of Amsterdam](https://iop.uva.nl/content/news/2024/05/the-thinnest-lens-on-earth.html)
4. Halide perovskites. Why they are important and remarkable. [nanoscale views: Materials families: Halide perovskites](https://nanoscale.blogspot.com/2024/06/materials-families-halide-perovskites.html)
5. The solar industrial revolution is the biggest investment opportunity in history [The solar industrial revolution is the biggest investment opportunity in history – Casey Handmer's blog](https://caseyhandmer.wordpress.com/2024/05/22/the-solar-industrial-revolution-is-the-biggest-investment-opportunity-in-history/)
6. “In patients with stage 3 lung cancer where a particular genetic mutation is present in the tumor, taking AstraZeneca’s cancer-fighting pill Tagrisso reduced the chance that disease would progress by 84%.” https://www.astrazeneca.com/media-centre/press-releases/2024/tagrisso-reduced-the-risk-of-disease-progression-or-death-by-84-percent-in-patients-with-unresectable-stage-iii-egfr-mutated-lung-cancer-vs-placebo.html
7. The evolutionary mystery of the German cockroach: “There are many other names for it; the English call it Shiner or Steam Fly; in Russia it has been called the ‘Prussian,’ and in Prussia it was known as the ‘Russian.’”—James Rehn [The evolutionary mystery of the German cockroach](https://johnhawks.net/weblog/the-mystery-of-the-german-cockroach/)
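The sketch promised in AI item 2: the mechanic is just "score every candidate document with a small reference model's perplexity, then keep a subset based on that score." The character-bigram "reference model", the made-up documents, and the keep-the-lowest-perplexity rule below are stand-ins of mine; the paper uses small pretrained LMs and evaluates several different selection criteria.

```python
import math
from collections import Counter

def bigram_model(text, alpha=1.0):
    """Tiny character-bigram 'reference model' (stand-in for the paper's small LMs)."""
    pairs = Counter(zip(text, text[1:]))
    unigrams = Counter(text)
    vocab = len(set(text)) or 1
    def logprob(a, b):
        # add-alpha smoothing so unseen pairs still get a finite probability
        return math.log((pairs[(a, b)] + alpha) / (unigrams[a] + alpha * vocab))
    return logprob

def perplexity(doc, logprob):
    lp = [logprob(a, b) for a, b in zip(doc, doc[1:])]
    return math.exp(-sum(lp) / max(len(lp), 1))

# Hypothetical data: a small "reference" text and a pool of candidate documents.
reference = "the quick brown fox jumps over the lazy dog and runs back home again"
pool = [
    "the fox runs over the lazy dog",
    "zxqj vvkp qqqz xkcd zzzzz",        # gibberish, so high perplexity
    "the dog jumps over the brown fox",
    "aaaa aaaa aaaa aaaa",
]

logprob = bigram_model(reference)
scored = sorted(pool, key=lambda d: perplexity(d, logprob))
kept = scored[: len(scored) // 2]
print(kept)  # illustration only: keep the lowest-perplexity half under the reference model
```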
e/acc's thermodynamic god is a special case of Spinoza's God that Einstein loved, where God = Nature/Universe, but focused on nonequilibrium thermodynamics.
[AI Agentic Design Patterns with AutoGen - DeepLearning.AI](https://www.deeplearning.ai/short-courses/ai-agentic-design-patterns-with-autogen/)
[[2402.15116] Large Multimodal Agents: A Survey](https://arxiv.org/abs/2402.15116)
[How AI Transformers Mimic Parts of the Brain | Quanta Magazine](https://www.quantamagazine.org/how-ai-transformers-mimic-parts-of-the-brain-20220912/)

"The neuroscience behind transformers, a type of artificial intelligence (AI) architecture, has been a subject of increasing interest and research. Here are some key insights from recent studies:

### Transformers and Brain Function
1. **Hippocampus and Spatial Information**: Researchers have found that the hippocampus, a brain structure critical for memory, can be modeled as a type of neural network similar to transformers. This model tracks spatial information in a way that parallels the brain's inner workings, suggesting that transformers can mimic certain brain functions related to memory and spatial awareness[1].
2. **Neuron-Astrocyte Networks**: A hypothesis has been proposed that neuron-astrocyte networks in the brain can naturally implement the core computations performed by transformers. Astrocytes, which are non-neuronal cells in the brain, play a role in information processing and could be key to understanding how transformers might be biologically implemented[2][3].
3. **Grid Cells and Spatial Representation**: Transformers have been shown to replicate the spatial representations of grid cells in the hippocampus. Grid cells help animals understand their position in space, and transformers can determine their current location by analyzing past states and movements, similar to how grid cells function[4][5].

### Computational and Biological Parallels
1. **Self-Attention Mechanism**: The self-attention mechanism in transformers, which allows them to process inputs by considering the relationships between all elements, has been difficult to interpret biologically. However, it has been suggested that the tripartite synapse (a connection involving an astrocyte and two neurons) could perform the role of normalization in the transformer's self-attention operation[2].
2. **Energy Efficiency and Learning**: Unlike transformers, which require massive amounts of data and energy for training, the human brain operates on a much smaller energy budget and learns efficiently from limited data. This difference highlights the brain's superior efficiency and adaptability compared to current AI models[2][3].

### Implications for AI and Neuroscience
1. **Improving AI Models**: Insights from neuroscience can help improve AI models by providing a better understanding of how the brain processes information. For instance, understanding the role of astrocytes in brain function could lead to more biologically plausible AI architectures[3].
2. **Understanding Brain Disorders**: Studying the parallels between transformers and brain function could also provide new hypotheses for how brain disorders and diseases affect astrocyte function, potentially leading to new therapeutic approaches[2].

In conclusion, while transformers and the human brain share some similarities in their hierarchical organization and information processing capabilities, significant differences remain. The brain's complexity and efficiency far surpass current AI models, but ongoing research continues to bridge the gap, offering valuable insights for both fields[1][2][3][4][5].

Citations:
[1] [How AI Transformers Mimic Parts of the Brain | Quanta Magazine](https://www.quantamagazine.org/how-ai-transformers-mimic-parts-of-the-brain-20220912/)
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10450673/
[3] [AI models are powerful, but are they biologically plausible? | MIT News | Massachusetts Institute of Technology](https://news.mit.edu/2023/ai-models-astrocytes-role-brain-0815)
[4] [The Neural Network in Our Heads: How Transformer Architectures Mirror the Human Brain](https://www.brown-tth.com/post/the-neural-network-in-our-heads-how-transformer-architectures-mirror-the-human-brain)
[5] [[2112.04035] Relating transformers to models and neural representations of the hippocampal formation](https://arxiv.org/abs/2112.04035)"
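For reference, the "self-attention operation" the quote keeps pointing at is just this: every position emits a query, gets compared against every key, and the softmax normalization (the step the tripartite-synapse hypothesis above tries to map onto astrocytes) turns those scores into mixing weights over the values. A bare single-head numpy sketch:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X of shape (T, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)    # pairwise query-key similarities
    weights = softmax(scores, axis=-1) # normalization: each row sums to 1
    return weights @ V                 # each output position mixes all value vectors

rng = np.random.default_rng(0)
T, d_model, d_head = 5, 8, 4
X = rng.normal(size=(T, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 4)
```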
[[2405.19315] Matryoshka Query Transformer for Large Vision-Language Models](https://arxiv.org/abs/2405.19315)
[Spinors for Beginners 11: What is a Clifford Algebra? (and Geometric, Grassmann, Exterior Algebras) - YouTube](https://www.youtube.com/watch?v=nktgFWLy32U)
[Local and global effects of sedation in resting-state fMRI: a randomized, placebo-controlled comparison between etifoxine and alprazolam | Neuropsychopharmacology](https://www.nature.com/articles/s41386-024-01884-5)
https://x.com/fchollet/status/1797682447355908537?t=dHNjaVNrd4nxR0q5SabeUQ&s=19 "AI field suffers from a lack of imagination so intense that many researchers are simply unable to conceptualize that there can be other forms of "learning" than curve-fitting, and other forms of "models" than differentiable parametric curves. "But neural networks are universal approximators, right? So single-layer MLPs and SGD are all we will ever need, no?" Sure, everything is equivalent to everything else with absolutely no differentiating factors whatsoever, which is why I do all my programming in Malbolge -- it's Turing complete after all"
That's why I'm for a bigger diversity of AI architectures.
One of my hobbies is attempting to map out the minds of very smart, general people who try to construct models of everything from first principles.
[Ideological differences in the expanse of the moral circle | Nature Communications](https://www.nature.com/articles/s41467-019-12227-0) Remember when a study in Nature Communications deconstructed all of politics and it really just came down to this? Moral circles of concern; liberals are less selfish. https://x.com/arithmoquine/status/1797113778980749366?t=O7-cTRgaEZpJI_g9b4lEoQ&s=19