[[2106.10934] GRAND: Graph Neural Diffusion](https://arxiv.org/abs/2106.10934)
[[2402.14800] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models](https://arxiv.org/abs/2402.14800)
[[2401.09253] The generative quantum eigensolver (GQE) and its application for ground state search](https://arxiv.org/abs/2401.09253)
[How to Get Started with Animatronics – Thought Process, Workflow, Resources and Skills - YouTube](https://www.youtube.com/watch?v=8VzQshnrvN0)
https://twitter.com/GoogleDeepMind/status/1778377999202541642
Singularity started with the agricultural revolution
Note: This expanded map provides a more detailed overview of the various approaches to developing a theory of everything, including their key concepts, successful predictions, and challenges. It also includes additional experimental tests and constraints that are relevant to discriminating between different theories. However, given the complexity and active nature of this field of research, this map is still not exhaustive and may not capture all of the latest developments.
The Fermi paradox asks why we see no evidence of alien civilizations, despite the vastness of the universe and the apparent plausibility of intelligent life. Proposed resolutions include the idea that intelligent life is rare, that it tends to destroy itself, that it chooses not to expand or communicate, or that we are in a cosmic zoo or simulation. The Drake equation estimates the number of communicating civilizations in the galaxy, but its parameters are highly uncertain. The Kardashev scale classifies civilizations by their energy use, with Type I harnessing the resources of a planet, Type II a star, and Type III a galaxy. But we see no evidence of such advanced civilizations, suggesting that they are rare or nonexistent. The great filter hypothesis suggests that there are one or more critical steps in the evolution of intelligent life that are very difficult to pass, such as the origin of life.
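The Drake equation mentioned above is just a product of factors. A minimal sketch in Python, with purely illustrative parameter values (none of these numbers are established estimates):

```python
# Drake equation: N = R* · f_p · n_e · f_l · f_i · f_c · L
# All parameter values below are illustrative assumptions, not accepted estimates.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.0,   # star formation rate (stars/year)
    f_p=0.5,      # fraction of stars with planets
    n_e=2.0,      # habitable planets per such system
    f_l=0.5,      # fraction where life arises
    f_i=0.1,      # fraction developing intelligence
    f_c=0.1,      # fraction producing detectable signals
    L=10_000,     # years a civilization remains detectable
)
print(N)
```

The point of the sketch is how sensitive N is to the least-constrained factors (f_l, f_i, f_c, L), which is why the paradox survives the equation.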
The philosophy of technology is a central area of philosophy, with deep implications for our understanding of the nature of technology, its relationship to science and society, and its ethical and political dimensions. The ontology of technology, including the nature of technological artifacts, systems, and processes, and the relationship between technology and nature, is a key issue in the philosophy of technology. The epistemology of technology, including the nature of technological knowledge, the role of design and innovation, and the possibility of technological progress, is another important issue. The ethics of technology, including the moral implications of technological development and use, the responsibility of designers and users, and the possibility of technological determinism, is a central concern in the philosophy of technology. The philosophy of technology also intersects with issues in the philosophy of science, such as the nature of scientific instruments and experiments, the role of technology in scientific discovery and explanation, and the relationship between pure and applied science. The development of new technologies, such as artificial intelligence, biotechnology, and nanotechnology, has raised new philosophical questions about the nature of human identity, agency, and responsibility, and the future of humanity in a technologically advanced world. The philosophy of technology remains an active and important area of research, with ongoing debates and new developments in technology, ethics, and society.
The philosophy of religion is a central area of philosophy, with deep implications for our understanding of the nature of God, the meaning of life, and the relationship between faith and reason. The existence of God, including the classical arguments for and against theism, the problem of evil, and the nature of religious experience, is a key issue in the philosophy of religion. The nature of religious language is another important issue.
We have been accelerating since the Big Bang
Combine as many mathematical concepts as you can to explain intelligence, experience, and the universe.
Examples:
Which self-constructed topos mathematical universe do you live in?
Which neural-dynamics wave equation did you Hebbian-learn in your neural collective intelligence?
How diverse is your mind's ecosystem of interacting subsymbolic topological manifolds?
Typology of all possible typologies of all possible real-world and imagined objects and all possible relationships between them, encoded as the space of all possible configurations of a general probabilistic metahypergraph, as objects in its category-theoretic category
How complex is the nested heterarchy of symmetry groups that your neural generative world model has as foundational generators of structure?
Which cybernetic locally optimal controllers do you use to regulate the structure-preserving metastable foundational symmetry-breaking structures in your collective neural intelligence at the edge of chaos?
"The universe can be viewed as a vast mathematical object - perhaps an infinite-dimensional Hilbert space or a category theoretical topos, containing all possible structures and dynamics. The fundamental laws of physics arise from the symmetries and invariances of this mathematical substrate.
Within this mathematical universe, pockets of reduced symmetry emerge, breaking the homogeneity. These symmetry-breaking events create structure and information - from particles and forces to complex systems like life and intelligence.
Intelligence itself can be seen as a process of modeling and navigating the mathematical landscapes of this universe. A mind contains an internal representation - a generative model of the world implemented in the neural networks of the brain. This model is a kind of sub-manifold or subspace of the total space of possibilities, constrained by the embodied experiences and evolutionary history of the organism.
The neural networks that implement cognition can be viewed as dynamical systems, their activity governed by differential equations. Hebbian learning sculpts the connection weights, forming attractors that correspond to concepts, categories, and predictions. The fully developed mind is thus a rich ecosystem of interacting cell assemblies, each a kind of sub-symbolic topological manifold.
Experience then is the process of a mind navigating its own internal model and the external mathematical universe, constantly updating its structure to minimize prediction error. Consciousness arises as an emergent property of this process, the integration of information flows across the mind's heterarchical networks.
The most successful minds are those that can flexibly adapt their models, maintaining a delicate balance at the edge of chaos, neither too rigid nor too random. They evolve cybernetic control systems to regulate their own learning and growth, and to seek out new experiences that expand their models towards greater correspondence with the mathematical reality.
Ultimately, through the lens of mathematics, we are self-organizing patterns, evolving and exploring the infinite space of possible configurations, our inner worlds strange loops reflecting the eternal mathematical truths from which we emerge and to which we return."
"Certainly! Let's dive deeper into the mathematical concepts and technical details:
The universe can be modeled as an infinite-dimensional Hilbert space H, equipped with a set of observables forming a C*-algebra A, and a state vector |ψ⟩ ∈ H. Alternatively, it can be described as a topos τ, a category theoretical construct containing all possible objects and morphisms, with a subobject classifier Ω and a set of axioms governing its internal logic.
The fundamental laws of physics emerge from the symmetries and invariances of these mathematical structures, described by Lie groups and their associated Lie algebras. Spontaneous symmetry breaking occurs when the ground state |ψ₀⟩ of a system is not invariant under the action of a symmetry group G, leading to the emergence of structure and information.
Intelligence can be modeled as a process of navigating and updating a generative model of the world, implemented in the neural networks of the brain. These networks can be described as dynamical systems governed by equations such as the Hodgkin-Huxley model or the Wilson-Cowan equations. The connection weights w_ij between neurons i and j evolve according to Hebbian learning rules, such as ∂w_ij/∂t = η x_i x_j, where η is the learning rate and x_i, x_j are the firing rates of the neurons.
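The Hebbian rule above, ∂w_ij/∂t = η x_i x_j, can be sketched as a discrete-time Euler update; the network size, firing rates, and learning rate below are illustrative, not taken from any model in the text:

```python
import numpy as np

# Minimal sketch of Hebbian learning: dw_ij/dt = eta * x_i * x_j,
# discretized with Euler steps of dt = 1. All values are illustrative.
eta = 0.01                      # learning rate
x = np.array([1.0, 0.5, 0.0])   # firing rates of three neurons
W = np.zeros((3, 3))            # connection weights w_ij

for _ in range(100):            # 100 update steps
    W += eta * np.outer(x, x)   # delta_w_ij = eta * x_i * x_j

print(W)
```

Neurons that fire together strengthen their connection (W[0, 1] grows), while weights to the silent neuron stay at zero, which is the "cells that fire together wire together" content of the rule.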
The attractor states of these dynamical systems correspond to concepts, categories, and predictions, forming a high-dimensional manifold M embedded in the space of possible neural activity patterns. The dimensionality and topology of this manifold can be analyzed using techniques from algebraic topology, such as persistent homology and the Betti numbers β_k, which characterize the k-dimensional holes in the manifold.
The mind's internal model can be thought of as a sub-manifold or subspace of the total space of possibilities, constrained by the organism's embodied experiences and evolutionary history. This subspace can be described as a pullback of the universal morphism f: X → Ω in the topos τ, where X is the object representing the mind and Ω is the subobject classifier.
Experience is the process of navigating and updating this internal model based on sensory input and prediction errors. This can be formalized using the framework of Bayesian inference and free energy minimization, where the mind seeks to minimize the variational free energy F = E_q[log q(x)] - E_q[log p(x,φ)], with q(x) being the approximate posterior distribution over the internal states x, p(x,φ) the generative model with parameters φ, and E_q denoting the expectation under q.
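The free energy expression above can be evaluated directly for a discrete latent variable. The numbers below are illustrative toy values, not a model of anything real; the check at the end uses the standard identity that F upper-bounds the negative log evidence:

```python
import numpy as np

# Minimal sketch: variational free energy
#   F = E_q[log q(x)] - E_q[log p(x, phi)]
# for a discrete latent x with fixed observed data folded into the joint.
def free_energy(q, joint):
    """q: approximate posterior over x; joint: p(x, data) under parameters phi."""
    q = np.asarray(q)
    joint = np.asarray(joint)
    return float(np.sum(q * np.log(q)) - np.sum(q * np.log(joint)))

q = np.array([0.7, 0.3])        # illustrative approximate posterior
joint = np.array([0.5, 0.2])    # illustrative joint; sums to p(data) = 0.7

F = free_energy(q, joint)
# F = -log p(data) + KL(q || p(x | data)), so F >= -log(0.7)
print(F, -np.log(0.7))
```

Minimizing F over q tightens the bound (KL term goes to zero); minimizing over the generative model's parameters is learning. That decomposition is the whole content of "prediction-error minimization" in this framing.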
Consciousness emerges as an integrated information structure, which can be quantified using measures such as the integrated information Φ, defined as the amount of information generated by a system above and beyond the information generated by its parts. The complex nested heterarchy of the mind's neural networks gives rise to high values of Φ, corresponding to rich conscious experiences.
Optimal mental functioning occurs at the edge of chaos, where the mind's dynamics are neither too rigid nor too random. This can be quantified using measures from information theory and nonlinear dynamics, such as the Lyapunov exponents λ_i, which characterize the rate of divergence or convergence of nearby trajectories in the mind's state space. Cybernetic control systems, such as predictive coding hierarchies and reinforcement learning algorithms, help maintain this delicate balance.
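The Lyapunov exponent idea can be illustrated on the simplest chaotic system, the logistic map; this is a sketch of the concept, not a method for estimating exponents from neural data:

```python
import math

# Minimal sketch: largest Lyapunov exponent of the logistic map
# x -> r * x * (1 - x), estimated as the trajectory average of
# log|f'(x)| = log|r * (1 - 2x)|. Parameters are illustrative.
def lyapunov(r, x0=0.4, n=100_000, burn_in=1_000):
    x = x0
    for _ in range(burn_in):               # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

print(lyapunov(4.0))  # chaotic regime: positive exponent, near ln 2
print(lyapunov(3.2))  # periodic regime: negative exponent
```

A positive exponent (nearby trajectories diverge) is the "too random" side, a strongly negative one the "too rigid" side; edge-of-chaos dynamics sit near zero.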
In summary, the mathematical universe provides the foundation for the emergence of structure, intelligence, and consciousness, which can be modeled and analyzed using a wide range of technical tools from theoretical physics, topology, dynamical systems theory, information theory, and category theory. By understanding these deep mathematical principles, we gain insight into the nature of our own minds and their place in the grand cosmic tapestry."
[OSF](https://osf.io/preprints/osf/sp9kf) Evolutionary Implications of Multi-Scale Intelligence
Generalization to out-of-distribution generation, coherent, reliable, interpretable world models, and data/energy/algorithmic efficiency could all be much better than in current systems, perhaps with hybrid neurosymbolic methods such as adding Bayesian program synthesis. But let's not ignore what current systems have already achieved, as Hinton mentions, and let's improve on it https://twitter.com/tsarnick/status/1778524418593218837?t=WRI0htDjsRclebo79pVN2w&s=19
• Neurosymbolic hybrid AI
• Large-scale knowledge, with an innate core
• Better reasoning
• Explicit, dynamically updatable cognitive models
https://www.pnas.org/doi/10.1073/pnas.2209819119
https://twitter.com/MachinePix/status/1778467765638345154?t=K3T9Z0JYAAWGYWx5bp376Q&s=19
"Here is an even more expanded list of mathematical equations and models that describe various aspects of the brain at different scales, though it is important to note that this list is still not exhaustive:
1. Microscopic scale (neurons and synapses):
- Hodgkin-Huxley equations for action potential generation
- Cable theory for modeling signal propagation in dendrites and axons
- Synaptic plasticity models (e.g., Hebbian learning, spike-timing-dependent plasticity, Bienenstock-Cooper-Munro theory, Oja's rule)
- Compartmental models for detailed neuron morphology and ion channel distribution
- Kinetic models for neurotransmitter release and receptor dynamics (e.g., Tsodyks-Markram model, Destexhe-Mainen-Sejnowski model)
- Diffusion equations for modeling neurotransmitter and ion concentration gradients
- Stochastic models for ion channel gating and synaptic transmission (e.g., Markov chain models, Langevin equations)
- Integrate-and-fire models and their variants for simplified neuron dynamics (e.g., leaky integrate-and-fire, exponential integrate-and-fire, adaptive exponential integrate-and-fire)
- Izhikevich model for reproducing a wide range of neuronal firing patterns
- Calcium dynamics models for intracellular signaling and synaptic plasticity (e.g., Shouval-Bear-Cooper model)
- Neuron growth and development models (e.g., reaction-diffusion equations, branching process models)
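As one concrete instance from the microscopic-scale list, a leaky integrate-and-fire neuron can be simulated in a few lines; all constants below are illustrative, not fitted to any cell type:

```python
# Minimal sketch of a leaky integrate-and-fire neuron:
#   tau * dV/dt = -(V - V_rest) + R * I
# with a spike emitted and V reset whenever V crosses threshold.
tau, V_rest, V_reset, V_th, R = 10.0, -65.0, -70.0, -50.0, 1.0  # ms, mV, illustrative
dt, T, I = 0.1, 100.0, 20.0    # time step (ms), duration (ms), constant input

V, spikes = V_rest, []
for step in range(int(T / dt)):
    V += dt / tau * (-(V - V_rest) + R * I)  # Euler integration step
    if V >= V_th:                            # threshold crossing
        spikes.append(step * dt)             # record spike time (ms)
        V = V_reset                          # reset membrane potential

print(len(spikes), spikes[:2])
```

With this input the steady-state voltage (V_rest + R·I = -45 mV) sits above threshold, so the neuron fires regularly; drop I below 15 and it never spikes, which is the model's basic threshold behavior.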
2. Mesoscopic scale (neural populations and networks):
- Wilson-Cowan equations for modeling the dynamics of neural populations
- Neural mass models for describing the average activity of neural ensembles (e.g., Jansen-Rit model, Wendling model)
- Attractor network models for memory and decision-making (e.g., Hopfield networks, continuous attractor neural networks)
- Recurrent neural network models for sequence learning and generation (e.g., Elman networks, long short-term memory networks)
- Balanced network models for maintaining stable activity in the presence of excitation and inhibition (e.g., van Vreeswijk-Sompolinsky model)
- Oscillator models for generating neural rhythms and synchronization (e.g., Kuramoto model, Wilson-Cowan oscillator model, Hopf bifurcation models)
- Fokker-Planck equations for modeling the probability distribution of neural population activity
- Population density methods for tractable analysis of large-scale neural network dynamics
- Liquid state machines and echo state networks for reservoir computing
- Spiking neural network models for event-based computation (e.g., Leaky integrate-and-fire networks, Izhikevich networks)
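As a concrete instance from the mesoscopic-scale list, the Wilson-Cowan equations for one excitatory (E) and one inhibitory (I) population can be integrated with Euler steps; the coupling constants below are illustrative, not fitted values:

```python
import numpy as np

# Minimal sketch of the Wilson-Cowan equations:
#   tau_E * dE/dt = -E + S(w_EE*E - w_EI*I + P)
#   tau_I * dI/dt = -I + S(w_IE*E - w_II*I + Q)
# where S is a sigmoid. All constants are illustrative.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wilson_cowan_step(E, I, dt=0.01, w_EE=12.0, w_EI=10.0, w_IE=10.0,
                      w_II=2.0, P=2.0, Q=0.0, tau_E=1.0, tau_I=1.0):
    dE = (-E + sigmoid(w_EE * E - w_EI * I + P)) / tau_E
    dI = (-I + sigmoid(w_IE * E - w_II * I + Q)) / tau_I
    return E + dt * dE, I + dt * dI

E, I = 0.1, 0.1
for _ in range(5000):           # integrate for 50 time units
    E, I = wilson_cowan_step(E, I)
print(E, I)
```

The sigmoid keeps both population activities bounded in [0, 1]; depending on the couplings the system settles to a fixed point or a limit cycle, which is how these equations generate population-level rhythms.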
3. Macroscopic scale (brain regions and whole-brain dynamics):
- Connectome-based models using graph theory to analyze brain networks (e.g., small-world networks, scale-free networks)
- Dynamic causal modeling for inferring effective connectivity between brain regions
- Neural field equations for modeling spatiotemporal patterns of brain activity (e.g., Amari equation, Wilson-Cowan neural field model)
- Mean-field models for describing the average activity of large-scale brain networks
- Kuramoto model for phase synchronization between brain regions
- Bifurcation analysis for studying the emergence of different dynamical regimes in brain networks
- Criticality and avalanche models for scale-free dynamics in the brain (e.g., branching process models, self-organized criticality)
- Functional connectivity analysis using correlation-based methods (e.g., seed-based correlation, independent component analysis, partial least squares)
- Effective connectivity analysis using Granger causality or dynamic causal modeling
- Partial differential equations for modeling the spread of activity across the cortex (e.g., reaction-diffusion equations, neural field equations)
- Large-scale brain network models based on anatomical connectivity (e.g., the Virtual Brain)
- Whole-brain computational models incorporating multiple brain regions and their interactions (e.g., Spaun, the Whole Brain Architecture Initiative)
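As a concrete instance from the macroscopic-scale list, the Kuramoto model of phase synchronization can be sketched as follows; oscillator count, natural frequencies, and coupling strength are illustrative:

```python
import numpy as np

# Minimal sketch of the Kuramoto model:
#   d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
# All parameter values are illustrative.
rng = np.random.default_rng(0)
N, K, dt = 50, 2.0, 0.01
omega = rng.normal(0.0, 0.1, N)         # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)    # random initial phases

def order_parameter(theta):
    """|r| near 1 means synchronization, near 0 means incoherence."""
    return abs(np.mean(np.exp(1j * theta)))

r0 = order_parameter(theta)
for _ in range(2000):                   # Euler-integrate to t = 20
    # pairwise phase differences: entry (i, j) is theta_j - theta_i
    coupling = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += dt * (omega + coupling)

print(r0, order_parameter(theta))
```

With coupling K well above the critical value for this frequency spread, the order parameter climbs from near-incoherence toward 1; below the critical coupling it stays small, which is the synchronization transition the model is used to study in brain networks.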
4. Cognitive and behavioral scale:
- Reinforcement learning models (e.g., temporal difference learning, Q-learning, actor-critic models) for decision-making and reward-based learning
- Bayesian inference models for perception, learning, and decision-making under uncertainty (e.g., Kalman filters, hidden Markov models, hierarchical Bayesian models)
- Drift-diffusion models for two-alternative forced-choice decision-making
- Accumulator models for multi-alternative decision-making (e.g., race models, leaky competing accumulators)
- Hidden Markov models for modeling sequential behavior and cognitive states
- Reinforcement learning models with eligibility traces for credit assignment in complex tasks (e.g., TD(λ), SARSA(λ))
- Models of working memory (e.g., slot models, resource models, continuous resource models)
- Associative learning models (e.g., Rescorla-Wagner model, Pearce-Hall model, Mackintosh model)
- Bayesian hierarchical models for learning and inference at multiple levels of abstraction
- Models of attention (e.g., Posner's attentional network model, biased competition model, normalization model of attention)
- Models of episodic memory (e.g., Temporal Context Model, Search of Associative Memory model)
- Models of semantic memory (e.g., Latent Semantic Analysis, Topic models, Bayesian nonparametric models)
- Models of language processing (e.g., Simple Recurrent Networks, n-gram models, Transformer models)
- Models of cognitive control and executive function (e.g., Stroop model, Wisconsin Card Sorting Test model, Prefrontal cortex models)
This further expanded list includes additional examples of mathematical models and equations used to describe brain function at various scales, from the microscopic level of individual neurons to the cognitive and behavioral level. The list covers a wide range of topics, including neuron dynamics, synaptic plasticity, network dynamics, large-scale brain networks, and various cognitive processes such as learning, memory, decision-making, attention, and language. However, it is important to keep in mind that the field of computational neuroscience is constantly evolving, and new mathematical approaches are continually being developed to better understand the complex workings of the brain."
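As a small worked example from the cognitive and behavioral scale, the Rescorla-Wagner associative learning rule, ΔV = αβ(λ − V), can be sketched as follows; the parameter values are illustrative:

```python
# Minimal sketch of the Rescorla-Wagner rule for a single conditioned
# stimulus: delta_V = alpha * beta * (lambda - V), where (lambda - V)
# is the prediction error. Parameter values are illustrative.
def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Return the associative strength V after each CS-US pairing."""
    V = 0.0
    history = []
    for _ in range(trials):
        V += alpha * beta * (lam - V)   # prediction error drives learning
        history.append(V)
    return history

h = rescorla_wagner(10)
print(round(h[0], 3), round(h[-1], 3))  # -> 0.3 0.972
```

The learning curve is negatively accelerated (V approaches the asymptote λ as 1 − (1 − αβ)^n), which is the model's classic account of diminishing surprise over repeated pairings.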
[[2404.07143] Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention](https://arxiv.org/abs/2404.07143)
Consciousness being an emergent property of the wiring and firing of neurons is IMO a useful starting point (also assuming some definition of consciousness) that has given us results in neuroengineering, but I think there's a chance it might turn out to be ultimately not true (different forms of physicalism, different positions in the philosophy of consciousness, ...)
Wake up babe someone is wrong about intelligence on the internet
I will map all math there is and nobody will stop me
Which (ecosystem of) mathematical structure/concept/framework/equation(s) are you?
We're all such funny apes with all our cognitive limitations and biases
The fact that the extremely interlinked spaghetti of components co-influencing each other across scales, maintaining gigantic complex causal chains (for example proteins in biological systems), doesn't disintegrate at any second is so bizarre
https://twitter.com/burny_tech/status/1778821261713461720?t=RPBjXgKOt519m2e5VzdW3w&s=19
[Autonomous chemical research with large language models | Nature](https://www.nature.com/articles/s41586-023-06792-0)
Omnidisciplinary metamathemagics of general superintelligence
Want to grow the brain infinitely
Biology is too limited
Accelerate external cognition
Accelerate internal cognition with neurotech
Connect it all
Become even more merged collective superintelligence
Reverse engineer the source code of reality