In the realm of chemistry, the periodic table organizes elements based on their atomic number and electron configuration, which determine their chemical properties. Chemical bonding, including ionic, covalent, and metallic bonds, arises from the interactions between valence electrons of atoms. Molecular geometry, dictated by the Valence Shell Electron Pair Repulsion (VSEPR) theory, influences the shape and reactivity of molecules. Thermodynamics governs the flow of heat and energy in chemical systems, with the laws of thermodynamics describing the conservation of energy, the increase of entropy, and the unattainability of absolute zero temperature.

The complexity of life emerges from the intricate organization of biological systems across multiple scales. The central dogma of molecular biology outlines the flow of genetic information from DNA to RNA to proteins, which perform the majority of cellular functions. The structure and function of proteins are determined by their amino acid sequence and the resulting three-dimensional folding. Cell signaling pathways, involving receptors, transducers, and effectors, allow cells to respond to external stimuli and coordinate their activities.

The diversity of life on Earth is the result of billions of years of evolution, driven by the processes of mutation, natural selection, and genetic drift. The fossil record provides evidence for the gradual changes in species over time, while comparative genomics reveals the molecular similarities and differences between organisms. The theory of evolution by natural selection, proposed by Charles Darwin and Alfred Russel Wallace, explains how beneficial traits become more prevalent in populations over generations.

Earth's systems, including the geosphere, hydrosphere, atmosphere, and biosphere, are interconnected and shaped by complex feedback loops.
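The central dogma's DNA → RNA → protein flow described above can be sketched in a few lines of Python. The gene fragment is an arbitrary illustrative example and the codon table holds only a handful of entries, but the codon assignments themselves (AUG = start/methionine, UAA = stop, etc.) follow the standard genetic code:

```python
# Transcription: the DNA coding strand maps to mRNA by replacing thymine (T)
# with uracil (U); translation then reads the mRNA three bases (one codon)
# at a time until a stop codon.
def transcribe(dna_coding_strand: str) -> str:
    """mRNA carries the coding-strand sequence with T replaced by U."""
    return dna_coding_strand.upper().replace("T", "U")

# A few entries of the standard codon table: AUG = start/methionine,
# UUU = phenylalanine, GGC = glycine, UAA = stop.
CODONS = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def translate(mrna: str) -> list[str]:
    """Read codons from index 0 until a stop codon (assumes a clean reading frame)."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODONS.get(mrna[i:i + 3], "?")
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

mrna = transcribe("ATGTTTGGCTAA")  # arbitrary example gene fragment
print(mrna)             # AUGUUUGGCUAA
print(translate(mrna))  # ['Met', 'Phe', 'Gly']
```

A real translator would also locate the start codon and handle frame shifts; this sketch only shows why the sequence alone determines the protein.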
The carbon cycle, for example, involves the exchange of carbon between the atmosphere, oceans, and living organisms, regulating Earth's climate. Plate tectonics, driven by convection currents in the mantle, leads to the creation and destruction of Earth's crust, the formation of mountains, and the occurrence of earthquakes and volcanic eruptions.

The study of the universe has revealed its astonishing scale and complexity. The cosmic microwave background radiation, a remnant of the Big Bang, provides insight into the early universe and supports the theory of cosmic inflation. The formation and evolution of stars, from protostars to main-sequence stars to supernovae or black holes, is governed by the balance between gravity and nuclear fusion. The discovery of exoplanets, planets orbiting other stars, has expanded our understanding of planetary systems and the potential for life beyond Earth.

Mathematics, the foundation of quantitative science, provides the tools for modeling, analyzing, and predicting natural phenomena. Differential equations describe the rates of change in systems, from the motion of fluids to the growth of populations. Chaos theory reveals the inherent unpredictability of certain deterministic systems, such as weather patterns, due to their sensitivity to initial conditions. Fractals, self-similar patterns that exhibit complexity at every scale, are found in natural structures ranging from coastlines to blood vessels.

The frontiers of science continue to expand, driven by advances in technology and the collective efforts of researchers worldwide. From the exploration of the quantum realm to the search for the unification of gravity with quantum mechanics, from the mapping of the human brain to the quest for sustainable energy solutions, science pushes the boundaries of our understanding and shapes the future of our civilization.
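The sensitivity to initial conditions mentioned above can be demonstrated with the logistic map x_{n+1} = r*x_n*(1 - x_n) at the standard chaotic parameter choice r = 4. This is a generic illustrative sketch, not a model of any system named in the text: two trajectories starting one part in a billion apart become completely decorrelated within a few dozen iterations.

```python
def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 60) -> list[float]:
    """Iterate the logistic map x -> r*x*(1-x), a classic chaotic system at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # perturb the initial condition by one part in a billion

# Early on the trajectories are indistinguishable...
print(abs(a[5] - b[5]))    # ~4e-8, still tiny
# ...but the gap grows roughly exponentially (Lyapunov exponent ln 2 at r = 4),
# so by step 60 the two deterministic trajectories bear no resemblance.
print(abs(a[60] - b[60]))
```

This is exactly why weather forecasts degrade: the dynamics are deterministic, but any measurement error in the initial state is amplified exponentially.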
As we stand at the threshold of a new era of scientific discovery, the frontiers of human knowledge continue to expand at an unprecedented pace. From the exploration of the quantum realm to the search for life beyond Earth, from the mapping of the human brain to the engineering of artificial intelligence, the questions that drive scientific inquiry are as profound and compelling as ever. The pursuit of science, as a quest for understanding and a means of improving the human condition, remains one of the greatest and most noble endeavors of our species. [[1504.03303] Ultimate Intelligence Part II: Physical Measure and Complexity of Intelligence](https://arxiv.org/abs/1504.03303) [[2403.17844] Mechanistic Design and Scaling of Hybrid Architectures](https://arxiv.org/abs/2403.17844) https://twitter.com/daphne_cor/status/1773766547892285539?t=ScANwQ1jeptB2-A08STG7A&s=19 [[2403.19648] Human-compatible driving partners through data-regularized self-play reinforcement learning](https://arxiv.org/abs/2403.19648) [[2212.07677] Transformers learn in-context by gradient descent](https://arxiv.org/abs/2212.07677) [[2305.15027] A Rigorous Link between Deep Ensembles and (Variational) Bayesian Methods](https://arxiv.org/abs/2305.15027) https://twitter.com/burny_tech/status/1773501036453347464 [Theory of aging - Positive Marker](https://wiki.positivemarker.org/wiki/Theory_of_aging) https://twitter.com/SmokeAwayyy/status/1773436262856450232 AGI System Flow Chart for Solving Complex Problems [[2403.09863] Towards White Box Deep Learning](https://arxiv.org/abs/2403.09863) [Which Computational Universe Do We Live In? - YouTube](https://www.youtube.com/watch?v=RjzSFa03i2U) [The Beauty of Life Through The Lens of Physics - YouTube](https://www.youtube.com/watch?v=ncC-GMzF9RY) wish there were more interpretability visualizations on how diffusion models & transformers implement invariants. 
most basic example being how 3D transformations of a face are encoded in the latent space that outputs 2D. feels like StyleGAN3 was peak (albeit not really 3D) https://twitter.com/mayfer/status/1773608454797525311 [[2403.19647v1] Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models](https://arxiv.org/abs/2403.19647v1) https://twitter.com/SuryaGanguli/status/1765401784103960764 [[2403.02579] Geometric Dynamics of Signal Propagation Predict Trainability of Transformers](https://arxiv.org/abs/2403.02579) https://twitter.com/burny_tech/status/1774167901810889200 [The Biosingularity is Near - by Anatoly Karlin](https://www.nooceleration.com/p/the-biosingularity-is-near) https://twitter.com/StuartHameroff/status/1717323310248431688 [[2403.15796] Understanding Emergent Abilities of Language Models from the Loss Perspective](https://arxiv.org/abs/2403.15796) [[2403.19154] STaR-GATE: Teaching Language Models to Ask Clarifying Questions](https://arxiv.org/abs/2403.19154?fbclid=IwAR1EHCIImLmZotFz1Nh_2exUa4BdeyThbYS96UzQx99CjCGO1z3SxR3gPiI) [NeurIPS 2023](https://nips.cc/virtual/2023/84295) Topological Deep Learning: Going Beyond Graph Data response: https://twitter.com/JonHaidt/status/1774571680511508601 [Dr Ben Goertzel - Why OpenAI Really Fired Sam Altman: AI Is A Threat To Humanity - YouTube](https://www.youtube.com/watch?v=33op-5suPjo) adding symbolic stuff like this one I showed you, DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning [[2006.08381] DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning](https://arxiv.org/abs/2006.08381) hmm yea [GitHub - oneHuster/Meta-Learning-Papers: A classified list of meta learning papers based on
realm.](https://github.com/oneHuster/Meta-Learning-Papers) [[2004.05439] Meta-Learning in Neural Networks: A Survey](https://arxiv.org/abs/2004.05439) [[2301.08028] A Survey of Meta-Reinforcement Learning](https://arxiv.org/abs/2301.08028) [A survey of deep meta-learning | Artificial Intelligence Review](https://link.springer.com/article/10.1007/s10462-021-10004-4) https://www.sciencedirect.com/science/article/pii/S003132032200067X https://www.sciencedirect.com/science/article/pii/S0950705122004737 Learning to Learn without Gradient Descent by Gradient Descent: [[1611.03824] Learning to Learn without Gradient Descent by Gradient Descent](https://arxiv.org/abs/1611.03824) Learning to Optimize: [[1606.01885] Learning to Optimize](https://arxiv.org/abs/1606.01885) Evolved Policy Gradients: [[1802.04821] Evolved Policy Gradients](https://arxiv.org/abs/1802.04821) Evolving Deep Neural Networks [[1703.00548] Evolving Deep Neural Networks](https://arxiv.org/abs/1703.00548) [[1812.09584] Meta Architecture Search](https://arxiv.org/abs/1812.09584) [[1901.11117] The Evolved Transformer](https://arxiv.org/abs/1901.11117) [[2006.08084] Neural Execution Engines: Learning to Execute Subroutines](https://arxiv.org/abs/2006.08084) [[2010.12621] Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks](https://arxiv.org/abs/2010.12621) + generalization [[1902.00741] Distinction Graphs and Graphtropy: A Formalized Phenomenological Layer Underlying Classical and Quantum Entropy, Observational Semantics and Cognitive Computation](https://arxiv.org/abs/1902.00741) [[1901.08162] Causal Reasoning from Meta-reinforcement Learning](https://arxiv.org/abs/1901.08162) [[2303.17768] Scalable Bayesian Meta-Learning through Generalized Implicit Gradients](https://arxiv.org/abs/2303.17768) [[2005.04589] Maximal Algorithmic Caliber and Algorithmic Causal Network Inference: General Principles of Real-World General Intelligence?](https://arxiv.org/abs/2005.04589) 
[[2102.10581] Patterns of Cognition: Cognitive Algorithms as Galois Connections Fulfilled by Chronomorphisms On Probabilistically Typed Metagraphs](https://arxiv.org/abs/2102.10581) symmetries is all you need [[2112.03321] Noether Networks: Meta-Learning Useful Conserved Quantities](https://arxiv.org/abs/2112.03321) [Noether Networks: Meta-Learning Useful Conserved Quantities (w/ the authors) - YouTube](https://www.youtube.com/watch?v=Xp3jR-ttMfo) [[2403.19887] Jamba: A Hybrid Transformer-Mamba Language Model](https://arxiv.org/abs/2403.19887) Hybridize all ML architectures, create a neurosymbolic shapeshifting übermensch adapting to everything, generalizing to everything https://twitter.com/burny_tech/status/1774848819642937513 [[1703.10987] On the Impossibility of Supersized Machines](https://arxiv.org/abs/1703.10987) [[2402.16714] Quantum linear algebra is all you need for Transformer architectures](https://arxiv.org/abs/2402.16714) [[2403.19186v1] Optimization hardness constrains ecological transients](https://arxiv.org/abs/2403.19186v1) [[2301.08727] Neural Architecture Search: Insights from 1000 Papers](https://arxiv.org/abs/2301.08727) [[1908.00709] AutoML: A Survey of the State-of-the-Art](https://arxiv.org/abs/1908.00709) [[1810.13306] Automated Machine Learning: From Principles to Practices](https://arxiv.org/abs/1810.13306) "Why do I believe that complicated nonlinear interactions can be approximated by linear models (specific genetic networks)? Because evolution has done it.
a good analysis of my intuition here" https://twitter.com/lu_sichu/status/1774906175454155021 https://twitter.com/BrianRoemmele/status/1774657222280466858?t=ltTdYDXNENO_LnnT6qZyBw&s=19 [[2403.17101] AI Consciousness is Inevitable: A Theoretical Computer Science Perspective](https://arxiv.org/abs/2403.17101) [[2404.03325] Embodied Neuromorphic Artificial Intelligence for Robotics: Perspectives, Challenges, and Research Development Stack](https://arxiv.org/abs/2404.03325) [Gödel's Incompleteness Theorem (First) - YouTube](https://www.youtube.com/watch?v=zNAvtgBDv8w) [[2404.02258] Mixture-of-Depths: Dynamically allocating compute in transformer-based language models](https://arxiv.org/abs/2404.02258) https://twitter.com/TheSeaMouse/status/1775782800362242157 https://twitter.com/IntuitMachine/status/1775555323496984673?t=hA9o7t4_FZVzMznUUhhIPw&s=19 [Quanta Magazine](https://www.quantamagazine.org/in-new-paradox-black-holes-appear-to-evade-heat-death-20230606/) [[2210.02769] Artificial virtuous agents in a multiagent tragedy of the commons](https://arxiv.org/abs/2210.02769) https://twitter.com/burny_tech/status/1774952762536587545 https://twitter.com/spiantado/status/1775217414613131714?t=epgQbvHiowemq8AHeclJJA&s=19 [[1512.02742] Relative Entropy in Biological Systems](https://arxiv.org/abs/1512.02742) https://twitter.com/turboblitzzz/status/1767679094479573334?t=7K6mfhMoBiUMDVLSCGZzVg&s=19 Transformers and diffusion models are way too parameter heavy to be practical. A simple temporal sequence Boltzmann machine can simulate the videos of bouncing balls 100,000x more efficiently than Sora.
https://twitter.com/BorisMPower/status/1775026903960965543?t=4N0tLEjqgZ3XoFfC7Knrxw&s=19 AI companies landscape https://pbs.twimg.com/media/GKL2I_3XUAETc9K?format=jpg&name=large https://twitter.com/pronounced_kyle/status/1775411468076482926?t=jP8bpTF_xMK5xwClzmaFmQ&s=19 Become all belief networks in parallel https://twitter.com/Liv_Boeree/status/1775538750719955056?t=SWHqOrEh8xLv9tKSxiAxLw&s=19 https://twitter.com/burny_tech/status/1777306690171412916 [A neural speech decoding framework leveraging deep learning and speech synthesis | Nature Machine Intelligence](https://www.nature.com/articles/s42256-024-00824-8) https://twitter.com/charliebholtz/status/1770566082400813060/ sdxl-turbo: near-realtime image generation from audio https://twitter.com/lifan__yuan/status/1775217887701278798 antiaging https://twitter.com/_sviridov_/status/1777332900297347220 [[2404.04125] No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance](https://arxiv.org/abs/2404.04125) [[2404.02280] Demonstration of logical qubits and repeated error correction with better-than-physical error rates](https://arxiv.org/abs/2404.02280) Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences [[2404.03715] Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences](https://arxiv.org/abs/2404.03715) [[2404.03648] AutoWebGLM: Bootstrap And Reinforce A Large Language Model-based Web Navigating Agent](https://arxiv.org/abs/2404.03648) [[2404.02893] ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline](https://arxiv.org/abs/2404.02893) [[2402.19427] Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models](https://arxiv.org/abs/2402.19427) Google releases model with new Griffin architecture that outperforms transformers [The Quantum Simulation Hypothesis | David Chalmers (Part 1/3) - 
YouTube](https://www.youtube.com/watch?v=5r9V1ryksnw) [[2404.05405] Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws](https://arxiv.org/abs/2404.05405) https://twitter.com/ZeyuanAllenZhu/status/1777513016592040248 https://twitter.com/omarsar0/status/1777709227319968034?t=pNiPoM95FfYEFMSA5jsCMA God is Mathematics, not an entity that knows Mathematics. God is everything in the world. All the knowledge, ... We never know what it is but certainly not a thing we can imagine in our own way. Such God is not against our perception of the world, it is our understanding of the world. It is all the sciences we come up to, and the more we understand the world, the nearer we get to the being. This expanded map covers even more areas of mathematics and their applications, showcasing the incredible breadth and depth of the field. However, it is still not a complete list, as mathematics continues to grow and evolve, with new connections and applications being discovered all the time. https://twitter.com/burny_tech/status/1777843540480999708 Representing this map visually, one might use a multi-layered graph structure, where each node (representing fields/subfields/concepts) connects through lines (indicating relationships and applications), sometimes crossing between layers (formal to natural sciences) to reflect interdisciplinary methodologies. Icons or colors could differentiate primary fields for clarity. Such a format would maximize both density and access to information, serving as a valuable educational and research resource. **This map is a simplified representation of the vast and interconnected world of science. Each field and subfield has its own rich history, theories, and methods, and new discoveries are constantly being made.** **This map is merely a glimpse into the vast and intricate world of science. Each field and subfield contains a wealth of knowledge waiting to be explored!** - **Logic**: Symbolic Logic, Modal Logic, Philosophical Logic. 
- **Earth Sciences**: Geology, Meteorology, Oceanography, Seismology, Paleontology, Climatology. This detailed map provides deeper insight into the sciences, illustrating not only the breadth and scope of individual fields but also the extensive interconnectedness that propels contemporary scientific inquiry and technological innovation. Each branch and sub-discipline could be further delineated and expanded into myriad niche specialties, stressing the ever-evolving and boundless nature of human knowledge. [[2404.05782] Dynamical stability and chaos in artificial neural network trajectories along training](https://arxiv.org/abs/2404.05782) "When the entropy is zero bits, this is sometimes referred to as unity, where there is no uncertainty at all – no freedom of choice – no information." Zero ontology? https://twitter.com/burny_tech/status/1777871137109610957 This further expanded version includes an even greater breadth and depth of subfields, concepts, and examples across the various areas of formal and natural sciences, as well as their philosophical foundations and far future possibilities. The relationships between fields are highlighted through shared concepts, methods, and applications, as well as through the transdisciplinary sciences that bridge multiple domains. However, even this highly detailed map is still a compressed representation of the vast scope of human knowledge, and many important details, nuances, and emerging areas are necessarily omitted. The map could be further expanded and refined, delving into ever more specialized subfields and cutting-edge research topics. Nonetheless, I hope this provides a comprehensive overview of the incredible richness and complexity of scientific inquiry, and the profound implications it has for our understanding of the world and our place in it. 
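The "zero bits" remark quoted above is just Shannon entropy, H = -sum_i p_i * log2(p_i), evaluated at a degenerate distribution: when one outcome has probability 1 there is no uncertainty, no freedom of choice, and H = 0. A minimal check in Python (the example distributions are arbitrary):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)).

    Terms with p = 0 contribute nothing (the 0*log 0 convention),
    and p = 1 contributes -1*log2(1) = 0, so both are skipped.
    """
    return sum(-p * math.log2(p) for p in probs if 0 < p < 1)

print(shannon_entropy([1.0]))       # 0: certainty ("unity"), no information
print(shannon_entropy([0.5, 0.5]))  # 1.0: one fair coin flip of uncertainty
print(shannon_entropy([0.25] * 4))  # 2.0: four equally likely outcomes, two bits
```

Entropy is maximized by the uniform distribution and collapses to zero exactly when the outcome is predetermined, which is what the quote calls "unity."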
I'm with you on looking into neurosymbolic methods for better scores on math, like AlphaGeometry, or TacticAI for football tactics, or DreamCoder's Bayesian program synthesis, or going back to AlphaZero. But ignoring/denying evidence of LLMs improving over time (better performance relative to size, gigantic context windows, GPT-4 now with apparently solid improvements, tons of engineering hacks, etc.) because it doesn't fit your perspective feels like extremely motivated reasoning and strong selection/confirmation bias. https://twitter.com/GanjinZero/status/1777926220132626753?t=Z9p1IYo0DQMyLX3Kr0jZxw&s=19 https://twitter.com/QuantumGenMat/status/1758707212120420723?t=lFqANqZJfv-c9D8GUaLuKw&s=19 https://twitter.com/Kat__Woods/status/1778062631632588950?t=nocOy1uP3VL43yZ0s93d4g&s=19 [[2404.07103] Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs](https://arxiv.org/abs/2404.07103) “by combining AlphaGeometry with Wu's method we set a new state-of-the-art for automated theorem proving on IMO-AG-30, solving 27 out of 30 problems, the first AI method which outperforms an IMO gold medalist.” [[2404.06405] Wu's Method can Boost Symbolic AI to Rival Silver Medalists and AlphaGeometry to Outperform Gold Medalists at IMO Geometry](https://arxiv.org/abs/2404.06405) [Lie algebra visualized: why are they defined like that? - YouTube](https://www.youtube.com/watch?v=gj4kvpy1eCE)