To increase intelligence you have to increase entropy, but encode the newer, not-yet-defined structures that have more degrees of freedom and more potential for complexity using richer patterns from sensory data across modalities, by inducing correlations with them across various time scales across the whole neural network, in representations that are as efficiently compressed, dynamical, topologically efficient, and heterarchical as possible. [Shtetl-Optimized » Blog Archive » The First Law of Complexodynamics](https://scottaaronson.blog/?p=762)
Automate scanning of all papers proposing various mathematical models of fundamental physics, intelligence, and neurophenomenology, together with their empirical tests, and synthesize them into one mathematical framework in as compatible a way as possible, prioritizing the most predictive ones
Is the brain or the universe Turing complete, hyper-Turing complete, or something more? Beyond the notion of uncomputable functions, maybe there's more math we don't know yet
Qualia completeness
[[2405.11804] (Perhaps) Beyond Human Translation: Harnessing Multi-Agent Collaboration for Translating Ultra-Long Literary Texts](https://arxiv.org/abs/2405.11804)
Reverse-engineer the equations of physics, intelligence, and neurophenomenology
[[2405.09818] Chameleon: Mixed-Modal Early-Fusion Foundation Models](https://arxiv.org/abs/2405.09818)
https://scholar.google.de/citations?view_op=view_citation&hl=de&user=eGgZmPgAAAAJ&citation_for_view=eGgZmPgAAAAJ:zYLM7Y9cAGgC
[Evaluating the Bayesian causal inference model of intentional binding through computational modeling | Scientific Reports](https://www.nature.com/articles/s41598-024-53071-7)
https://www.cell.com/neuron/abstract/S0896-6273(24)00121-1
https://www.cell.com/neuron/fulltext/S0896-6273(24)00280-0
[What objective is STaR optimizing?](https://justinchiu.netlify.app/blog/star/)
[John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI - YouTube](https://youtu.be/Wo95ob_s_NI?si=mE6Lv_ozprzK0CA3)
Classifying mathematical problems as linear and nonlinear is like classifying the Universe into bananas and non-bananas.
Statistical mechanics of deep learning https://twitter.com/CalcCon/status/1791131005530861773?t=drwM_yzoA67U229p1ma_IQ&s=19
https://www.lesswrong.com/posts/NvwjExA7FcPDoo3L7/are-there-cognitive-realms
Intelligence isn't a single vector
[[2404.10636] What are human values, and how do we align AI to them?](https://arxiv.org/abs/2404.10636)
https://www.tandfonline.com/eprint/AEW59WDGSNAYFXHW3UED/full?target=10.1080/17588928.2024.2349546
[Animal brain inspired AI game changer for autonomous robots](https://www.nanowerk.com/news2/robotics/newsid=65229.php?fbclid=IwZXh0bgNhZW0CMTEAAR0_JBO0BAV49O82lI64IEMQeYAmzZwCqvMO3iqvo5CDDe0alxwQOTnfrV4_aem_Aav0U7C2b1_ZMVLTagm0CXutQp7w0-K5QsfMs9dF81Xtvib7d4iff507QxVOkZEAgg7VaWG6IXb23oDXx5BT4jUi)
[The Neurocircuitry of Fear, Stress, and Anxiety Disorders | Neuropsychopharmacology](https://www.nature.com/articles/npp200983)
Coultrafinitism: natural numbers are nonexistent approximating idealizations; only ungraspable irrational numbers actually explain fundamental phenomena
It's interesting how both neuroscience and artificial neural network mechanistic interpretability are dealing with neurons encoding superpositions of features
[Cognition Emerges From Neural Dynamics - Earl K Miller, Wave Club, April 2 2024 - YouTube](https://youtu.be/yHMTgb8CuDE?si=HSHi9dR_3tlGQ2bE)
[This Algorithm Could Make a GPT-4 Toaster Possible - YouTube](https://youtu.be/rVzDRfO2sgs?si=GD2ShSQKTp8PNHkN)
[EP 238 Sam Sammane on Humanity's Role in an AI-Dominated Future - The Jim Rutt Show](https://www.jimruttshow.com/sam-sammane/)
I think artificial neural nets or neurosymbolic systems most likely won't have all the hardcoded evolutionary priors that humans accumulated over millions of years, including the rogue priors that a subset of our population has. We would have to explicitly hardcode those, if we knew how. I don't think rogue priors will emerge by default; the space of all possible emergent AI patterns is too vast for that. And imitation learning won't get them there.
What kind of digital-twin world model, synchronized with sensory data in real time, does the brain implement, and how do we mathematize it?
We're a (nonlinear) wave trying to know itself, which of course has an in-built fundamental degree of uncertainty due to the uncertainty principle, where detail in the frequency and spatial domains trade off against each other. We're a wave with frequencies across the spectrum, with frequency-position tradeoffs all across the frequency range. We're a wave that can only know itself partially, yet it seeks to know itself fully. Hence freedom, hence longing, hence neither oneness nor twoness nor n-ness.
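For reference, the tradeoff invoked here is just the standard Fourier/Gabor uncertainty relation; a minimal textbook statement (nothing brain-specific) for any finite-energy signal $f$ with Fourier transform $\hat f$:

```latex
% Gabor/Fourier uncertainty: the spreads of |f|^2 in time and of |\hat f|^2
% in angular frequency cannot both be made arbitrarily small.
\sigma_t \, \sigma_\omega \;\ge\; \frac{1}{2},
\qquad
\sigma_t^2 = \frac{\int (t - \bar t)^2\, |f(t)|^2 \, dt}{\int |f(t)|^2 \, dt},
\qquad
\sigma_\omega^2 = \frac{\int (\omega - \bar\omega)^2\, |\hat f(\omega)|^2 \, d\omega}{\int |\hat f(\omega)|^2 \, d\omega}
```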
https://twitter.com/algekalipso/status/1790666971216073078?t=nVsPan8PcbWz_HYJwPASmQ&s=19
I think neurosymbolics is needed to work out weird edge cases in feature learning
And when you try to replicate this, it's interesting that it happens only with the newest OpenAI model, not with the older one, Claude, or Gemini
https://twitter.com/burny_tech/status/1790797383988588632?t=h17wC75X4uObmx_cgiiPcA&s=19
https://benwheatley.github.io/blog/2024/04/30-13.54.02.html
[[2310.02877] Stationarity without mean reversion in improper Gaussian processes](https://arxiv.org/abs/2310.02877)
Life is omniperiodic
[[2312.02799] Conway's Game of Life is Omniperiodic](https://arxiv.org/abs/2312.02799)
https://twitter.com/Dragonmaurizio/status/1790529872734925153?t=FbV7qpKLQDfCo-Shd-Gxgg&s=19
What is your brain's eigendecomposition of the Laplacian matrix corresponding to its structure?
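A minimal sketch of what that question means operationally, assuming some weighted structural connectivity matrix `W` (here just a random symmetric stand-in, not real connectome data): form the graph Laplacian `L = D - W` and take its eigendecomposition; the low-eigenvalue eigenvectors are the smooth "harmonics" of the network.

```python
import numpy as np

# Hypothetical stand-in for a structural connectivity matrix (symmetric, nonnegative).
rng = np.random.default_rng(0)
W = rng.random((64, 64))
W = (W + W.T) / 2          # symmetrize
np.fill_diagonal(W, 0.0)   # no self-loops

# Graph Laplacian: L = D - W, with D the diagonal degree matrix.
D = np.diag(W.sum(axis=1))
L = D - W

# Eigendecomposition. For a symmetric Laplacian the eigenvalues are real and
# nonnegative; eigenvectors with small eigenvalues vary smoothly over the graph
# ("low spatial frequencies"), those with large eigenvalues vary quickly.
eigenvalues, eigenvectors = np.linalg.eigh(L)

print(eigenvalues[:5])     # smallest "graph frequencies" (the first is ~0)
print(eigenvectors[:, 1])  # Fiedler vector: the coarsest nontrivial mode
```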
[John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI - YouTube](https://youtu.be/Wo95ob_s_NI?si=Ilujcyjj1Z6gfjl1)
GPT-4 Omni's text+audio+image+video multimodality is just the beginning. Nvidia is cooking true omnimodality.
https://twitter.com/burny_tech/status/1790813746056536149?t=tyBnf-tdsPD2OMP824Dwqg&s=19
It seems we're learning that deep learning is mostly about the data. If you want to know where it will really take off, look to areas where you can continuously generate increasingly diverse but consistently high-quality data. That leads you to quantum chemistry:
https://twitter.com/TimothyDuignan/status/1790720737860604215?t=eoEEdhQiivMnip8xeLGrVA&s=19
[[2405.05961] Towards comprehensive coverage of chemical space: Quantum mechanical properties of 836k constitutional and conformational closed shell neutral isomers consisting of HCNOFSiPSClBr](https://arxiv.org/abs/2405.05961)
[Brian Keating: Cosmology, Astrophysics, Aliens & Losing the Nobel Prize | Lex Fridman Podcast #257 - YouTube](https://www.youtube.com/watch?v=nhGwJLXzHs8)
Shard - a proof-of-concept for an infinitely scalable distributed system composed of consumer hardware for training and running ML models!
Features:
- Data + Pipeline Parallel for handling arbitrarily large models
- Algorithmic load balancing for throughput optimization
- Fault tolerance for unreliable machines
https://twitter.com/AkshGarg03/status/1790824537904554351
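Not Shard's actual code, just a toy sketch of what "pipeline parallel with fault tolerance over unreliable consumer machines" can look like; all names, the worker list, and the retry policy are invented for illustration:

```python
import random

# Toy pipeline parallelism with naive fault tolerance: a model is split into
# stages, each stage is shipped to a (possibly flaky) consumer machine, and a
# stage is rerouted to another machine if the chosen one "drops out".

def stage_a(x): return x * 2          # stand-ins for shards of a real model
def stage_b(x): return x + 3
def stage_c(x): return x ** 2

STAGES = [stage_a, stage_b, stage_c]
WORKERS = ["laptop-1", "desktop-7", "laptop-2", "old-macbook"]

def run_on_worker(worker, fn, x, failure_rate=0.3):
    """Pretend to ship fn(x) to `worker`; sometimes the machine vanishes."""
    if random.random() < failure_rate:
        raise ConnectionError(f"{worker} went offline")
    return fn(x)

def forward(x):
    for stage in STAGES:
        # Crude load balancing + fault tolerance: keep rerouting the stage to
        # randomly chosen machines until one of them stays online.
        while True:
            worker = random.choice(WORKERS)
            try:
                x = run_on_worker(worker, stage, x)
                break
            except ConnectionError:
                continue
    return x

print(forward(5))  # ((5*2)+3)**2 = 169
```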
[[2405.07987] The Platonic Representation Hypothesis](https://arxiv.org/abs/2405.07987)
"Maximum effectors are entities that seek to have the largest effect possible in the cosmos, irrespective of the direction in which this effect will take us. There's no good reason to be a maximum effector, but the fact that intelligence can be applied to this task, in light of the fact that this task alone is also in principle the one that has the largest effect, makes them actually a threat at the cosmic scale. They are the first thing that a super-cooperator cluster will probably want to coordinate against. Then you have entropy maximizers, which are not quite maximum effectors, but they're pretty close. They essentially seek to accelerate the heat death of the universe, irrespective of the states along the way. Again, it makes no sense, but it can cause large enough effect sizes that it's worth considering. Third, you have pure replicators, which constrain their search using rationality and cognition, to possibilities where the information pattern continues to exist and maximizes its number of copies. A lot of pure replicators can be coordinated with, although a solid foundation is lacking for long-term cooperation and integration into cooperation structures. Then you have gradients of goal achievers and utility function maximizers, which are generally pretty dangerous, but can be much more reasonable than pure replicators. Then you have, finally, entities that care about consciousness one way or another. In turn, you have within it clusters of mind designs that comprehend this cooperation ladder and can reason about it. And then you have actualized cooperation clusters that are aiming towards the generic direction of wellness for all sentients and access to full spectrum qualia.
I think this whole reasoning is universal and that other civilizations elsewhere in the cosmos would likely stumble upon it as well, so we can acausally, and then causally, coordinate with them."
https://twitter.com/algekalipso/status/1790869913038672274
How to navigate current incentives to incentivize what I see as the ideal, a mostly decentralized but cooperating ecosystem of superclusters that together maximize the total amount of deeply fulfilling, interesting experiences happening in the whole universe, is what I think about daily. Politics, technology, memetics, economy... Let's eventually beat the heat death of the universe too.
[Brain Oscillations in Field Potentials: Epiphenomenon or Causally Efficacious? - YouTube](https://youtu.be/myx914hfcic?si=pRYTacbcpe8N8i8e)
[Where Minds Come From: the scaling of collective intelligence, and what it means for AI and you - YouTube](https://youtu.be/44W9Mw4AGT8?si=zYjwVPWZz9XsL3KB)
[Bryan Johnson: Meditation, Ketamine, Consciousness, Longevity, Sleep, Immortality - YouTube](https://www.youtube.com/watch?v=PXkhhHPUud4)
https://www.researchgate.net/publication/378769596_Oscillating_Spacetime_The_Foundation_of_the_Universe
General model of general intelligence
Latent space of latent spaces
Attractor/Singularity in the latent space of latent spaces
https://twitter.com/SmokeAwayyy/status/1790646534117458042?t=D-hL1GnF5iib_FXQy4GdEg&s=19
[Forms of life, forms of mind | Dr. Michael Levin | Life after Death: in another world, at another scale](https://thoughtforms.life/life-after-death-in-another-world-at-another-scale/)
Links for 2024-05-16
AI:
1. With spatial intelligence, AI will understand the real world: “Eons ago, the first creatures developed hardware to turn light into sight. Then their small neural nets turn sight into insight. Then large neural nets turn insight into foresight, enabling reasoning, planning, and actions. They become embodied agents that learn to perceive and interact with a complex world, bootstrapping intelligence along the way.” — Jim Fan [Fei-Fei Li: With spatial intelligence, AI will understand the real world | TED Talk](https://www.ted.com/talks/fei_fei_li_with_spatial_intelligence_ai_will_understand_the_real_world)
2. OpenAI Co-founder Says AI Will Replace Him in 5 Years — John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI [John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI](https://www.dwarkeshpatel.com/p/john-schulman)
3. Artificial intelligence is revolutionizing mathematics, according to a leading mathematician. [Why mathematics is set to be revolutionized by AI](https://www.nature.com/articles/d41586-024-01413-w)
4. Scientists use generative AI to answer complex questions in physics [Scientists use generative AI to answer complex questions in physics | MIT News | Massachusetts Institute of Technology](https://news.mit.edu/2024/scientists-use-generative-ai-complex-questions-physics-0516)
5. RACER: Learning Epistemic Risk-Sensitive Policies with Online RL [RACER](https://sites.google.com/view/racer-epistemic-rl)
6. People cannot distinguish GPT-4 from a human in a Turing test. In a pre-registered Turing test we found GPT-4 is judged to be human 54% of the time. [[2405.08007] People cannot distinguish GPT-4 from a human in a Turing test](https://arxiv.org/abs/2405.08007)
7. SciFIBench, a scientific figure interpretation benchmark for LMMs. GPT-4o leads in 3/4 evaluations. [GitHub - jonathan-roberts1/SciFIBench: Accompanying repo for the 'SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation' project](https://github.com/jonathan-roberts1/SciFIBench)
8. Topoformer: brain-like topographic organization in Transformer language models through spatial querying and reweighting — “LLMs fundamentally differ from the human language system. One notable difference is that biological brains are spatially organized, while current LLMs are not. This paper is addressing this issue.” [Topoformer](https://tahabinhuraib.github.io/topoformer.github.io/)
9. Today’s AI models are impressive. Teams of them will be formidable [Today’s AI models are impressive. Teams of them will be formidable](https://www.economist.com/science-and-technology/2024/05/13/todays-ai-models-are-impressive-teams-of-them-will-be-formidable) [no paywall: https://archive.is/ilYK9]
10. Safety researchers leave: Ilya Sutskever (co-founder & Chief Scientist) and Jan Leike have quit OpenAI. https://www.lesswrong.com/posts/JSWF2ZLt6YahyAauE/ilya-sutskever-and-jan-leike-resign-from-openai
11. PolyAI secures near $500mn valuation in boost to UK’s AI ambitions [PolyAI secures near $500mn valuation in boost to UK’s AI ambitions](https://www.ft.com/content/928a22ba-1d04-4865-b854-5ebc5e53ea61) [no paywall: https://archive.is/h7e78]
Miscellaneous:
1. A new way to detect light in the brain using magnetic resonance imaging (MRI) could enable researchers to map changes in gene expression, map anatomical connections between cells, or reveal how cells communicate with each other. [Using MRI, engineers have found a way to detect light deep in the brain | MIT News | Massachusetts Institute of Technology](https://news.mit.edu/2024/using-mri-engineers-have-found-way-detect-light-deep-brain-0510)
2. “The priests of Tlaloc believed the tears of innocent children to be particularly pleasing to the god...The ritual began with the bones of the children being broken, their hands or their feet burned, and carvings etched into their flesh...Insufficient tears from the children were believed to result in insufficient rains for the crops that year, so no brutality was spared.” [California Public Schools Remove Aztec Chant | National Review](https://www.nationalreview.com/corner/a-win-for-parents-a-loss-for-aztec-worship-in-schools/)
This AI copyright stuff is misguided about how the technology works, IMO. The models aren't copying; they're a combination of memorizing and generalizing using abstract representations, much like human brains do when they learn. The differences are that the models' generalizations are weaker and of a slightly different type when you test them across multiple benchmarks, and that the brain's learning algorithm probably isn't gradient descent but something like forward-forward propagation: very similar principles, on a biological substrate instead of a silicon one, with slightly different but similar architectures that have different advantages and disadvantages. They're different in many ways, yes, but memorization is not the fundamental difference. Statistical learning theory has the bias-variance tradeoff, and pure copying would be overfitting, which is not what gradient descent on neural networks with attention mechanisms is doing. I'm all for supporting artists financially, for example with universal basic income and better economic incentives for artists to flourish, but the copyright arguments just feel unscientific on the technical side.
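As a toy illustration of the bias-variance point (pure copying ≈ overfitting), a small polynomial-regression sketch with data and degrees chosen arbitrarily by me: a degree-14 fit on 15 noisy points nearly memorizes the training set yet generalizes worse than a degree-3 fit.

```python
import numpy as np

# Fit polynomials of increasing degree to noisy samples of a simple function.
# A very high-degree fit "memorizes" the training noise (low train error, high
# test error); a moderate-degree fit generalizes better.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 14):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```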
https://twitter.com/AndrewYNg/status/1791134037178020308?t=7ghHE9j-5Akp3Yyat_ZNog&s=19
"When building complex workflows, I see developers getting good results with this process:
- Write quick, simple prompts and see how it does.
- Based on where the output falls short, flesh out the prompt iteratively. This often leads to a longer, more detailed, prompt, perhaps even a mega-prompt.
- If that’s still insufficient, consider few-shot or many-shot learning (if applicable) or, less frequently, fine-tuning.
- If that still doesn’t yield the results you need, break down the task into subtasks and apply an agentic workflow."
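A schematic version of that escalation ladder in code; `call_llm` and `good_enough` are placeholders of my own, not any particular vendor's API or Ng's implementation:

```python
# Escalation ladder: simple prompt -> detailed prompt -> few-shot -> agentic decomposition.

def call_llm(prompt: str) -> str:
    # Placeholder: swap in an actual model API call here.
    return f"[model output for: {prompt[:40]}...]"

def good_enough(output: str) -> bool:
    # Placeholder: swap in tests, a rubric, or human review.
    return False

def solve(task: str, detailed_instructions: str,
          examples: list[str], subtasks: list[str]) -> str:
    # 1. Quick, simple prompt.
    out = call_llm(task)
    if good_enough(out):
        return out
    # 2. Iteratively fleshed-out, more detailed ("mega") prompt.
    out = call_llm(f"{detailed_instructions}\n\nTask: {task}")
    if good_enough(out):
        return out
    # 3. Few-/many-shot: prepend worked examples.
    out = call_llm("\n\n".join(examples) + f"\n\nTask: {task}")
    if good_enough(out):
        return out
    # 4. Agentic workflow: decompose into subtasks and chain the outputs.
    context = ""
    for sub in subtasks:
        context = call_llm(f"{detailed_instructions}\n\nSo far: {context}\n\nSubtask: {sub}")
    return context

print(solve("Summarize this contract", "Be precise and cite clauses.",
            ["Example contract -> example summary"],
            ["extract parties", "list obligations", "draft summary"]))
```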
"All philosophers should be encouraged to write a paragraph outlining their core view of the world and their non-trivial contributions to the sense making commons.
Here is David Pearce's:
All that matters is the pleasure-pain axis. Pain and pleasure disclose the world’s inbuilt metric of (dis)value. Our overriding ethical obligation is to minimise suffering. After we have reprogrammed the biosphere to wipe out experience below “hedonic zero”, we should build a “triple S” civilisation based on gradients of superhuman bliss. The nature of ultimate reality baffles me. But intelligent moral agents will need to understand the multiverse if we are to grasp the nature and scope of our wider cosmological responsibilities. My working assumption is non-materialist physicalism. Formally, the world is completely described by the equation(s) of physics, presumably a relativistic analogue of the universal Schrödinger equation. Tentatively, I’m a wavefunction monist who believes we are patterns of qualia in a high-dimensional complex Hilbert space. Experience discloses the intrinsic nature of the physical: the “fire” in the equations. The solutions to the equations of QFT or its generalisation yield the values of qualia. What makes biological minds distinctive, in my view, isn’t subjective experience per se, but rather non-psychotic binding. Phenomenal binding is what consciousness is evolutionarily “for”. Without the superposition principle of QM, our minds wouldn’t be able to simulate fitness-relevant patterns in the local environment. When awake, we are quantum minds running subjectively classical world-simulations. I am an inferential realist about perception. Metaphysically, I explore a zero ontology: the total information content of reality must be zero on pain of a miraculous creation of information ex nihilo. Epistemologically, I incline to a radical scepticism that would be sterile to articulate. Alas, the history of philosophy twinned with the principle of mediocrity suggests I burble as much nonsense as everyone else."
https://twitter.com/algekalipso/status/1791149923444064322?t=v14ChT,Cx31BQcB6-q-8RBg&s=19
"
[Distributed and dynamical communication: a mechanism for flexible cortico-cortical interactions and its functional roles in visual attention | Communications Biology](https://www.nature.com/articles/s42003-024-06228-z)
[[2405.06686] Word2World: Generating Stories and Worlds through Large Language Models](https://arxiv.org/abs/2405.06686)
https://twitter.com/togelius/status/1790815597720170612?t=w8sZt6lmAl1TiwoGoaiYIw&s=19
https://twitter.com/MatthewSacchet/status/1791127001765642476?t=Hj_ksR4paBYbPDiMDel7XQ&s=19
https://www.cell.com/heliyon/fulltext/S2405-8440(24)07254-2
AI winters and summers
https://twitter.com/davidad/status/1790992589073654090?t=IN7oTsoDcpxYiMOi83-8Mg&s=19
Links for 2024-05-17
AI:
1. Chameleon: Mixed-Modal Early-Fusion Foundation Models — matches or exceeds the performance of much larger models, including Gemini Pro and GPT-4V… [[2405.09818] Chameleon: Mixed-Modal Early-Fusion Foundation Models](https://arxiv.org/abs/2405.09818)
2. OpenAI strikes Reddit deal to train ChatGPT on its posts [OpenAI strikes Reddit deal to train its AI on your posts - The Verge](https://www.theverge.com/2024/5/16/24158529/reddit-openai-chatgpt-api-access-advertising)
3. Waymo says its robotaxis are now making 50,000 paid trips every week. If the company is getting 50,000 rides a week, that means it receives an average of 300 bookings every hour, or five bookings every minute. [Waymo says its robotaxis are now making 50,000 paid trips every week](https://www.engadget.com/waymo-says-its-robotaxis-are-now-making-50000-paid-trips-every-week-130005096.html)
4. Google presents CAT3D: Create Anything in 3D with Multi-View Diffusion Models [CAT3D: Create Anything in 3D with Multi-View Diffusion Models](https://cat3d.github.io/)
Neuroscience and Psychology:
1. Most neurons in the cortex remain silent, even during sensory stimulation and behavior. What are these silent neurons good for? This is known as the “dark matter problem” of the brain. https://www.cell.com/neuron/fulltext/S0896-6273(24)00276-9
2. What you learn and how you learn it can lead to important differences in neural activity structure. These differences play an important role in later adaptation. [De novo motor learning creates structure in neural activity that shapes adaptation | Nature Communications](https://www.nature.com/articles/s41467-024-48008-7)
3. Spurious reconstruction from brain activity [[2405.10078] Spurious reconstruction from brain activity](https://arxiv.org/abs/2405.10078)
4. "IQ has both a direct effect on the probability of inventing which is almost five times as large as that of having a high-income father, and an indirect effect through education" [PDF] https://www.nber.org/system/files/working_papers/w24110/w24110.pdf
[MIT gives AI the power to 'reason like humans' by creating new hybrid AI architecture | Live Science](https://www.livescience.com/technology/artificial-intelligence/mit-gives-ai-the-power-to-reason-like-humans-by-creating-hybrid-architecture)
[[2310.19791] LILO: Learning Interpretable Libraries by Compressing and Documenting Code](https://arxiv.org/abs/2310.19791)
[[2312.08566] Learning adaptive planning representations with natural language guidance](https://arxiv.org/abs/2312.08566)
[[2402.18759] Learning with Language-Guided State Abstractions](https://arxiv.org/abs/2402.18759)
[Life’s Secret Ingredient? USC Scientist Discovers New “Rule of Biology”](https://scitechdaily.com/lifes-secret-ingredient-usc-scientist-discovers-new-rule-of-biology/)
[Frontiers | Selectively advantageous instability in biotic and pre-biotic systems and implications for evolution and aging](https://www.frontiersin.org/articles/10.3389/fragi.2024.1376060/full)
[Never-Repeating Tiles Can Safeguard Quantum Information | Quanta Magazine](https://www.quantamagazine.org/never-repeating-tiles-can-safeguard-quantum-information-20240223/)
Degrees of freedom in metta (loving-kindness meditation):
From/to:
Localized, delocalized representations (or centralized, decentralized)
Representations corresponding to real or imaginary people, beings, or things (imaginary as in not present in our shared hallucination)
Grabby, nongrabby process
Mine:
Universal care and love for all sentience, all humanity, wanting flourishing and growth of freedom, wellbeing, intelligence, meaning, fulfillment
Care for more local community
Stronger single bonds, to people, other beings, real or imaginary, or even math
Gemini 1.5 Pro math finetune https://twitter.com/OriolVinyalsML/status/1791521517211107515?t=W4ohQv6q2TfLMAAoS-eyEg&s=19
[Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet](https://transformer-circuits.pub/2024/scaling-monosemanticity/)
https://x.com/austinc3301/status/1793043799020609794?t=Uzcl11umqVaBTdMOyCsjIw&s=19
https://x.com/AlexTamkin/status/1792962650114273565?t=lZS_vWM2nKxd40F0WTbCyA&s=19
Anthropic is testing a model 4x the compute of Claude 3 Opus
[Reflections on our Responsible Scaling Policy \ Anthropic](https://www.anthropic.com/news/reflections-on-our-responsible-scaling-policy)