"This Special Issue is motivated by the Kolmogorov theory of consciousness (KT) and related mathematical frameworks in cognition and consciousness, including active inference and predictive coding. It invites contributions that shed light on the intricate link between brain dynamics and the experiential phenomena they induce. Central to this discourse is the proposition that agents (computational entities) construct compressive models (algorithms) of the world to track world data and guide action planning through objective function evaluation. This perspective underscores the profound impact of model mathematical structure on the brain's dynamic trajectories, or the "dynamical landscape," and on the resulting qualia (structured experience). It opens exciting avenues for empirical investigation and methodological innovation, leveraging advanced concepts from dynamical systems theory, geometry, topology, algorithmic information theory, and critical phenomena theory. Success in this endeavor promises to significantly enrich fields like fundamental neuroscience, computational neuropsychiatry, and artificial intelligence, offering novel insights and approaches.
Titled “The Mathematics of Structured Experience: Exploring Dynamics, Topology, and Complexity in the Brain”, this Special Issue aims to explore several key areas in this program:
Characteristics of compressive world models: We aim to delve into the nature of the world models created and run by natural and artificial agents. What do we mean, precisely, by a world model? What is the connection between program structure and the resulting dynamics? What is the role of symmetry and criticality in shaping world models and programs, and how do they enable agents to encapsulate the world's complexity in a comprehensible form?
Mapping models to dynamical systems: A crucial exploration will be how compressive world model characteristics translate into the workings of agents as dynamical systems and their features, especially recurrent neural networks. Topics include the study of geometry and topology of invariant manifolds, dimensionality reduction (manifold hypothesis), dynamical latent spaces, and their connection with algorithmic concepts such as compression and symmetry. We invite contributions that use tools from dynamical systems theory, geometry, topology, and criticality to characterize and understand the underlying dynamics of biological or artificial systems and their relation to the data they generate (neuroimaging/neurophysiology or other data).
Empirical paradigms for validation: This Special Issue also seeks to address the design of experimental paradigms aimed at validating these concepts. Specifically, we are interested in establishing connections between features derived from structured experience reports or other behavior and the observed structure in the brain (or, more generally, complex systems) dynamics (e.g., as measured by neuroimaging techniques) using the tools mentioned in the previous points. Application areas include the study of states of consciousness and disorders of consciousness, as well as non-human consciousness, using currently available datasets or through the design of specific experiments.
Implications for the design of artificial intelligence and computational models of the brain: How does this perspective influence research on artificial systems or computational models of the brain? What are the design principles inspired by the mathematics of algorithmic agenthood that can be used in artificial intelligence or computational neuroscience?
Through these explorations, this Special Issue aims to stimulate a multidisciplinary dialogue that bridges abstract mathematical concepts in algorithmic information theory, dynamics, geometry, and topology with neural phenomena and, ultimately, first-person experience. Our objective is to enhance our understanding of how the brain encodes, processes, and manifests structured experience, and how this understanding can inform new computational models and therapeutic approaches for neuroscience and clinical neuropsychiatry, as well as artificial intelligence."
[Entropy | Special Issue : The Mathematics of Structured Experience: Exploring Dynamics, Topology, and Complexity in the Brain](https://www.mdpi.com/journal/entropy/special_issues/YTT8WR1417)
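A minimal sketch of one tool from the Special Issue's program above: dimensionality reduction under the manifold hypothesis, where high-dimensional "neural" recordings are secretly driven by a low-dimensional latent dynamical system. Everything here (the latent limit cycle, the linear readout, the noise level) is an illustrative assumption on synthetic data, not anything prescribed by the issue:

```python
# Toy illustration of the manifold hypothesis: 100-dimensional "recordings"
# generated by a 2D latent dynamical system, recovered via PCA.
# All parameters below are arbitrary illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Latent dynamics: a 2D limit cycle (the compressed "world model" state).
t = np.linspace(0, 20 * np.pi, 2000)
latent = np.stack([np.cos(t), np.sin(t)], axis=1)          # shape (2000, 2)

# Embed into 100 "neurons" via a random linear readout plus noise.
readout = rng.normal(size=(2, 100))
activity = latent @ readout + 0.1 * rng.normal(size=(2000, 100))

# PCA: variance should concentrate in ~2 components, revealing the
# low-dimensional invariant manifold hiding in the high-dimensional data.
centered = activity - activity.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
variance_ratio = singular_values**2 / np.sum(singular_values**2)
print("variance explained by first 3 PCs:", variance_ratio[:3].round(3))
```

The point is just that the algorithmic story (compression) and the geometric story (a low-dimensional manifold of trajectories) meet in the spectrum of the data.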
[OSF](https://osf.io/preprints/psyarxiv/zg27u)
The best way to predict the future is to build it
Break the suboptimal meta-attractors across all scales
Destroy fear of failure
https://arxiv.org/abs/1504.03303
[An Observation on Generalization - YouTube](https://www.youtube.com/watch?v=AKMuA_TVz3A)
[Sholto Douglas & Trenton Bricken - How to Build & Understand GPT-7's Mind - YouTube](https://youtu.be/UTuuTTnjxMQ?si=eBuo9jCqcn_t3ntX)
https://arxiv.org/abs/2403.17844
[Entropy | Free Full-Text | Shared Protentions in Multi-Agent Active Inference](https://www.mdpi.com/1099-4300/26/4/303)
Fuck environmental influences incentivizing me to be more selfish.
"I"'m never fucking doing that ever, "I" fucking hate selfishness.
I want to do so many things to change the world while constantly feeling limited by my limited agency, energy, time, capabilities, etc.
I already systematically accelerate work on all of these factors a lot, but more is needed.
I'm not gonna just accept the highly suboptimal current status quo, or the narratives that it's not changeable. Bullshit, we can do so much to make it better when we are not demoralized; current power structures aren't a given at all. They are changing constantly, and we can accelerate that change.
What I fear the most is a smaller and smaller, wealthier and more powerful group of people having access to more and more intelligent systems relative to everyone else. In general: extreme centralization and concentration of power in the hands of a few who aren't really aligned with everyone, instead of in the hands of everyone.
[Collective intelligence: A unifying concept for integrating biology across scales and substrates | Communications Biology](https://www.nature.com/articles/s42003-024-06037-4)
[x.com](https://twitter.com/daphne_cor/status/1773766547892285539?t=ScANwQ1jeptB2-A08STG7A&s=19)
https://arxiv.org/abs/2403.19648
https://arxiv.org/abs/2212.07677
https://arxiv.org/abs/2305.15027
When someone sneezes, instead of "God bless you" say "peer-reviewed studies bless you". (For a second, live in ignorance of the replication crisis and of peer review degrading yearly.)
https://www.pnas.org/doi/10.1073/pnas.2306732120
We need more people living on the boundary of deep empiricism and deep theory
[x.com](https://twitter.com/burny_tech/status/1773501036453347464)
[Making AI accessible with Andrej Karpathy and Stephanie Zhan - YouTube](https://www.youtube.com/watch?v=c3b-JASoPi0)
Andrej Karpathy: Current AI systems are imitation learners, but for superhuman AIs we will need better reinforcement learning, like in AlphaGo. The model should self-play, be in the loop with itself and its own psychology, to achieve superhuman levels of intelligence.
"We've got these next word prediction things. Do you think there's a path towards building a physicist or a Von Neumann type model that has a mental model of physics that's self-consistent and can generate new ideas for how do you actually do Fusion? How do you get faster than light if it's even possible? Is there any path towards that or is it a fundamentally different Vector in terms of these AI model developments?"
"I think it's fundamentally different in one aspect. I guess what you're talking about maybe is just capability question because the current models are just not good enough and I think there are big rocks to be turned here and I think people still haven't really seen what's possible in the space at all and roughly speaking I think we've done step one of AlphaGo. We've done imitation learning part, there's step two of AlphaGo which is the RL and people haven't done that yet and I think it's going to fundamentally be the part that is actually going to make it work for something superhuman. I think there's big rocks in capability to still be turned over here and the details of that are kind of tricky but I think this is it, we just haven't done step two of AlphaGo. Long story short we've just done imitation.
I don't think people appreciate, for example, number one, how terrible the data collection is for things like ChatGPT. Say you have a problem: some prompt is some kind of mathematical problem, and a human comes in and gives the ideal solution to that problem. The problem is that the human psychology is different from the model psychology. What's easy or hard for the human is different from what's easy or hard for the model. And so the human kind of fills out some kind of a trace that comes to the solution, but some parts of that are trivial to the model and some parts of that are a massive leap that the model doesn't understand, and so you're kind of just losing it, and then everything else is polluted by that later. So fundamentally, what you need is for the model to practice itself how to solve these problems. It needs to figure out what works for it or does not work for it. Maybe it's not very good at four-digit addition, so it's going to fall back and use a calculator, but it needs to learn that for itself based on its own capability and its own knowledge. So that's number one: that's totally broken, I think, but it's a good initializer though for something agent-like.
And then the other thing is, we're doing reinforcement learning from human feedback, but that's a super weak form of reinforcement learning; it doesn't even count as reinforcement learning. I think the equivalent in AlphaGo of RLHF is what I call a vibe check. Imagine if you wanted to train AlphaGo with RLHF: you would be giving two people two boards and asking which one they prefer, and then you would take those labels and train a reward model, and then you would RL against that. What are the issues with that? Number one, it's just vibes of the board; that's what you're training against. Number two, if it's a reward model that's a neural net, then it's very easy to overfit to that reward model for the model you're optimizing over, and it's going to find all these spurious ways of hacking that massive model; that's the problem.
AlphaGo gets around these problems because it has a very clear objective function you can RL against. RLHF is nowhere near RL; it's silly. And the other thing is, imitation learning is super silly. RLHF is a nice improvement, but it's still silly. I think people need to look for better ways of training these models, so that the model is in the loop with itself and its own psychology, and I think there will probably be unlocks in that direction."
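A toy sketch of the reward-model failure mode Karpathy describes: a policy hill-climbing a learned proxy reward finds spurious optima that the true objective never endorsed. The 1D action space, the polynomial reward model, and the hill-climbing loop are all made-up illustrative assumptions, not how RLHF is actually implemented:

```python
# Toy illustration of reward hacking: optimizing a learned proxy reward
# diverges from the true objective once the policy leaves the labeled
# region. Purely illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(1)

def true_reward(x):
    # The "real" objective (AlphaGo's win/loss would be the analogue).
    return -(x - 1.0) ** 2

# "Human preference" labels: noisy true-reward samples on a narrow range.
xs = rng.uniform(-1, 2, size=30)
ys = true_reward(xs) + 0.1 * rng.normal(size=30)

# Reward model: a high-degree polynomial fit, prone to wild extrapolation.
proxy_coeffs = np.polyfit(xs, ys, deg=9)
proxy = lambda x: np.polyval(proxy_coeffs, x)

# "RL": hill-climb the proxy reward, with no constraint keeping the
# policy inside the region where the reward model was trained.
x = 0.0
for _ in range(200):
    candidate = x + rng.normal(scale=0.3)
    if proxy(candidate) > proxy(x):
        x = candidate

print(f"x={x:.2f}: proxy reward {proxy(x):.1f}, true reward {true_reward(x):.1f}")
# Typically the proxy reward is huge while the true reward has collapsed:
# the policy "hacked" the reward model instead of getting better at the task.
```

In the AlphaGo analogy, the clear objective function (win/loss) is what removes the proxy, and with it the hack.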
https://wiki.positivemarker.org/wiki/Theory_of_aging
https://journals.sagepub.com/doi/full/10.1177/0269881120916143
[Memories are made by breaking DNA — and fixing it](https://www.nature.com/articles/d41586-024-00930-y)
[x.com](https://twitter.com/SmokeAwayyy/status/1773436262856450232) AGI System Flow Chart for Solving Complex Problems
[Frontiers | Cognition Without Neural Representation: Dynamics of a Complex System](https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2021.643276/full)
https://arxiv.org/abs/2403.09863
[Cryptography's Mathematical 'Worlds': Which One Do We Live In? - YouTube](https://www.youtube.com/watch?v=RjzSFa03i2U)
[Entropy | Free Full-Text | Revealing the Dynamics of Neural Information Processing with Multivariate Information Decomposition](https://www.mdpi.com/1099-4300/24/7/930)
https://www.lesswrong.com/posts/BaEQoxHhWPrkinmxd/announcing-neuronpedia-as-a-platform-to-accelerate-research
Computational memetics
1. Definition and Scope
- What is computational memetics?
- Key concepts and terminology
- Relationship to other fields (e.g., cognitive science, evolutionary psychology, artificial intelligence)
2. Historical Context
- Origins of memetics (e.g., Richard Dawkins' "The Selfish Gene")
- Development of computational memetics
- Milestones and significant contributions
3. Theoretical Foundations
- Meme theory
- Evolutionary algorithms
- Information theory
- Complex systems theory
4. Meme Representation and Encoding
- Meme structure and components
- Meme encoding schemes (e.g., binary, symbolic, neural)
- Meme metadata and annotations
5. Meme Transmission and Propagation
- Meme replication and mutation
- Meme fitness and selection
- Meme networks and communities
- Meme spreading dynamics (e.g., viral, endemic; see the sketch after this outline)
6. Meme Evolution and Adaptation
- Meme recombination and hybridization
- Meme drift and divergence
- Meme-environment interactions
- Meme co-evolution and symbiosis
7. Meme Cognition and Processing
- Meme perception and recognition
- Meme interpretation and understanding
- Meme memory and retrieval
- Meme creativity and generation
8. Applications and Use Cases
- Meme-based communication and language
- Meme-driven social dynamics and collective behavior
- Meme-inspired optimization and problem-solving
- Meme engineering and design
9. Challenges and Future Directions
- Meme ethics and regulation
- Meme privacy and security
- Meme explainability and interpretability
- Meme-based artificial general intelligence
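As referenced in point 5, here is a minimal sketch of meme spreading dynamics as an SIR-style compartmental model, where "infection" is adopting and sharing a meme and "recovery" is losing interest. The model form and all rates are illustrative assumptions:

```python
# Minimal SIR-style model of meme spreading (point 5 of the outline):
# S = susceptible (haven't seen the meme), I = infected (actively sharing),
# R = recovered (bored of it). All rates are illustrative assumptions.

def simulate_meme(beta=0.3, gamma=0.1, days=120, dt=1.0):
    s, i, r = 0.999, 0.001, 0.0   # fractions of the population
    history = []
    for _ in range(int(days / dt)):
        new_adopters = beta * s * i * dt   # exposure through sharing
        new_bored = gamma * i * dt         # losing interest
        s -= new_adopters
        i += new_adopters - new_bored
        r += new_bored
        history.append(i)
    return history

history = simulate_meme()
peak_day = max(range(len(history)), key=history.__getitem__)
print(f"meme peaks around day {peak_day}, "
      f"max share actively spreading: {max(history):.2%}")
```

The ratio beta/gamma plays the role of R0: above 1 the meme goes viral, at or below 1 it fizzles out. That is a crude way to cash out the viral vs. fizzle distinction; a genuinely endemic regime would need reinfection or population turnover on top of this.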
[What if we simulated biology using physics - YouTube](https://www.youtube.com/watch?v=ncC-GMzF9RY)
[Sholto Douglas & Trenton Bricken - How to Build & Understand GPT-7's Mind - YouTube](https://www.youtube.com/watch?v=UTuuTTnjxMQ)
There’s a cultural trend in the West (or at least in the US) where cynicism is equated to seriousness/realism. So, the more cynical the take is, the more seriously people will take it — even if it’s cynicism taken to the point of absurd conspiracy theory.
So, naturally, the most “doomer” takes on everything are taken to be the most realistic by the masses.
The downside, obviously, is that the insistence that things are awful and can never get any better breeds a mindset that shoots down any attempts to make things better; lacking any sort of hope for the future is entirely self-defeating.
[Navigating Between Phenomena and Artefacts: A Critical History of Neuroscientific Interpretations of Brain Waves](https://qri.org/blog/brain-waves-history)
[Multiple complex developmental disorder - Wikipedia](https://en.wikipedia.org/wiki/Multiple_complex_developmental_disorder)
wish there were more interpretability visualizations on how diffusion models & transformers implement invariants. most basic example being how 3d transformations of a face are encoded in the latent space that outputs 2d. feels like stylegan3 was peak (albeit not rly 3d) [x.com](https://twitter.com/mayfer/status/1773608454797525311)
The more one cares about one's current physical hardware using currently available methods, the more likely one is to survive until longevity escape velocity, and until merging with arbitrary hardware becomes possible in better ways than current methods (if that happens in our lifetimes and will be available for everyone, but I'm optimistic about it).
[Konrad Kording | Causal Perturbations to Whole Brain Emulate C. Elegans - YouTube](https://www.youtube.com/watch?v=r3UMIyhzew0)
[Jonathan Gorard: Quantum Gravity & Wolfram Physics Project - YouTube](https://www.youtube.com/watch?v=ioXwL-c1RXQ)
How many nested recursive circular layers of self-awareness about being self-aware about being self-aware are you on?
Infinitely looping meta-layers of self-awareness about the infinite loop formed by the meta-layers of self-awareness
Overall I have the feeling that it's really something between such a pessimistic view and an overly optimistic view. Utopian realism ❤️ I think a lot of people are genuinely, honestly trying to create the best possible world for everyone with good intentions, while a lot of people, for example, just accumulate power for themselves, plus various combinations, and other types of people. Sometimes being cooperative or altruistic is more evolutionarily advantageous; sometimes the opposite is.
My feeling of what the world is like changes a lot depending on my current (intense) mood and on whatever has most recently grabbed my attention. People without money for a roof over their heads? Everything is unjust; all the power structures enabling these inequalities must be changed! Someone improves life in a small town in Africa through a UBI-like method? The world is full of amazing people and there is hope for change! Someone (accidentally, unintentionally) hurt me emotionally? I don't want to socially interact ever again! Someone said something nice to me? Life is full of the best people and I can trust everyone! Someone used technology to improve a social situation by connecting communication between people? Awesome! Someone is using technology to manipulate people! Aaaaa! Elysium, 1984, Terminator, dystopian futures? Aaaaaa! Utopian futures where technology uplifts everyone and we travel across galaxies? Yeeeeee!
I think that with thinking that has a strong enough negative bias we can find problems in everything, but we can also keep the hope that through action everything can be improved. Even though people are still in poverty, and even though some can't afford a roof over their heads, and even though house prices keep rising while incomes mostly don't, etc., I think it's equally important to notice the good statistics, e.g. that poverty and diseases are declining everywhere thanks to science and technology! But I see the biggest value in AI, the ultimate accelerator of all sciences and technologies, with the potential for a post-scarcity, post-labour economy for all creatures! Aaaa, I need to save up more money so that I can better assist in building that future! I like the motto: The best way to predict the future is to build it!
[Nick Bostrom | Life and Meaning in an AI Utopia - YouTube](https://www.youtube.com/watch?v=o28s-mnykdE)
I had a dream about Calabi-Yau manifolds simulated on a computer in a Turing-complete Game of Life cellular automaton, simulated on a computer in a different Turing-complete cellular automaton, simulated on a computer in yet another Turing-complete cellular automaton, and so on infinitely.
https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/
Seeing this, I like the argument that a lot of intelligence will need a lot of compute, which might be a very limiting physical factor for recursive self-improvement with an intelligence explosion (foom): for better intelligence, we (or the AI) need to build a lot of data center infrastructure, which takes a lot of resources that have to be gathered in our economy with limited access, etc.
But maybe soon we'll actually scale up potentially better techniques than the transformers in LLMs, like some neurosymbolic approaches with reinforcement learning, that might be much more effective and need less compute for better performance, but that still might need a lot of compute relative to brains.
Or will there soon be an empirically scaled, even better algorithmic and/or hardware improvement that gets compute efficiency to the level of brains? Not sure about it.
But seeing the recent developments in current SotA AI systems, I feel like we're already slowly but surely transitioning to more neurosymbolic approaches with reinforcement learning. Or maybe we need to add more explicit symbolic program synthesis to the neurosymbolic hybrid architecture? Or better objective functions incentivizing generalization? Or fundamentally more RL?
When it comes to biology, it's this massive hierarchy of scales, where every layer depends on the layer below and each scale co-causes the others in many ways, as described by the free energy principle; maybe that's what we will actually need to implement for more energy-efficient AI systems. What about training embodied agents in synthetic, more realistic, real-life-like simulations trying to grasp all these scales, in an Active Inference style? I wonder if we have enough computation to approximate it well enough; maybe it will only be possible in real life, similarly to how we train humans, for more human-like energy-efficient AI, while current AI systems might be superhuman in different ways while lacking human-like energy efficiency.
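A minimal sketch of the free-energy-principle machinery mentioned above, reduced to its simplest discrete case: an agent scoring beliefs by variational free energy, which for this tiny exact model coincides with Bayesian inference. The two-state generative model and its probabilities are arbitrary illustrative assumptions; full active inference adds policies selected by expected free energy:

```python
# Minimal discrete free-energy sketch: the agent scores beliefs q(s) by
# variational free energy F = E_q[ln q(s) - ln p(o, s)], and the belief
# that minimizes F is the Bayesian posterior. The generative model below
# is an arbitrary illustrative assumption.
import numpy as np

# Generative model: 2 hidden states, 2 observations.
prior = np.array([0.5, 0.5])                  # p(s)
likelihood = np.array([[0.9, 0.1],            # p(o | s), rows = observations
                       [0.1, 0.9]])

def free_energy(q, observation):
    joint = likelihood[observation] * prior   # p(o, s)
    return float(np.sum(q * (np.log(q) - np.log(joint))))

def update_beliefs(observation):
    # The exact posterior minimizes F; gradient descent on F finds the same.
    posterior = likelihood[observation] * prior
    return posterior / posterior.sum()

q = update_beliefs(observation=0)
print("posterior over hidden states:", q.round(3))
print("free energy at posterior:", round(free_energy(q, 0), 3))
print("free energy at prior:   ", round(free_energy(prior, 0), 3))
# F at the posterior equals -ln p(o), the surprise; any other belief
# scores strictly higher, which is what "minimizing free energy" buys.
```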
[AI comes up with battery design that uses 70 per cent less lithium | New Scientist](https://www.newscientist.com/article/2411374-ai-comes-up-with-battery-design-that-uses-70-per-cent-less-lithium/#:~:text=Researchers%20used%20AI%20to%20design,lithium%20than%20some%20competing%20designs)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9505413/#:~:text=Various%20applications%20of%20AI%20are,and%20useful%20in%20biological%20research
https://arxiv.org/abs/2403.19647v1
[x.com](https://twitter.com/SuryaGanguli/status/1765401784103960764)
https://arxiv.org/abs/2403.02579