[5] A Pilot Study Evaluating the Stigma and Public Perception about the Causes of Depression and Schizophrenia https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3481715/

Here is a gigantic, detailed mind map of the field of astrophysics:

```mermaid
graph TD
    A[Astrophysics] --> B[Cosmology]
    A --> C[Stellar Astrophysics]
    A --> D[Galactic Astrophysics]
    A --> E[High Energy Astrophysics]
    A --> F[Astrometry and Celestial Mechanics]
    A --> G[Astronomical Instrumentation and Techniques]
    B --> B1[Big Bang Theory]
    B --> B2[Cosmic Microwave Background]
    B --> B3[Dark Matter]
    B --> B4[Dark Energy]
    B --> B5[Structure Formation]
    B --> B6[Cosmic Inflation]
    B --> B7[Cosmological Parameters]
    C --> C1[Stellar Structure and Evolution]
    C --> C2[Stellar Atmospheres]
    C --> C3[Stellar Nucleosynthesis]
    C --> C4[Star Formation]
    C --> C5[Stellar Pulsations and Oscillations]
    C --> C6[Stellar Magnetic Fields]
    C --> C7[Stellar Winds and Mass Loss]
    C --> C8[Supernovae and Stellar Remnants]
    D --> D1[Galactic Structure]
    D --> D2[Galactic Dynamics]
    D --> D3[Interstellar Medium]
    D --> D4[Molecular Clouds]
    D --> D5[Star Clusters]
    D --> D6[Galactic Chemical Evolution]
    D --> D7[Active Galactic Nuclei]
    D --> D8[Galactic Magnetic Fields]
    E --> E1[X-ray Astronomy]
    E --> E2[Gamma-ray Astronomy]
    E --> E3[Cosmic Rays]
    E --> E4[Neutron Stars and Pulsars]
    E --> E5[Black Holes]
    E --> E6[Accretion Disks]
    E --> E7[Jets and Outflows]
    E --> E8[Gravitational Waves]
    F --> F1[Astrometric Techniques]
    F --> F2[Celestial Reference Systems]
    F --> F3[Ephemerides]
    F --> F4[Orbital Dynamics]
    F --> F5[Celestial Mechanics]
    F --> F6[Tidal Interactions]
    F --> F7[Relativistic Celestial Mechanics]
    G --> G1[Telescopes and Optics]
    G --> G2[Detectors and Imaging Devices]
    G --> G3[Astronomical Spectroscopy]
    G --> G4[Interferometry]
    G --> G5[Adaptive Optics]
    G --> G6[Data Reduction and Analysis]
    G --> G7[Computational Astrophysics]
    G --> G8[Astronomical Software and Databases]
```

This mind map breaks down the vast field of astrophysics into its main branches and sub-topics. Each branch represents a major area of study within astrophysics:

1. Cosmology: the study of the universe as a whole, its origin, evolution, and ultimate fate.
2. Stellar Astrophysics: the study of the formation, evolution, and properties of stars.
3. Galactic Astrophysics: the study of the structure, formation, and evolution of galaxies.
4. High Energy Astrophysics: the study of high-energy phenomena in the universe, such as X-rays, gamma rays, and cosmic rays.
5. Astrometry and Celestial Mechanics: the study of the positions, motions, and dynamics of celestial objects.
6. Astronomical Instrumentation and Techniques: the development and application of instruments and methods for observing and analyzing astronomical phenomena.

Each of these main branches is further divided into more specific sub-topics, providing a comprehensive overview of the diverse and complex field of astrophysics. This mind map can serve as a guide for students, researchers, and anyone interested in exploring the various aspects of astrophysics and understanding how they interconnect.

This expanded map provides a more comprehensive and mathematically rigorous overview of the Standard Model, but it is still far from exhaustive. The Standard Model is an incredibly rich and complex theory, and fully exploring its mathematical foundations would require delving into many more advanced topics in theoretical and mathematical physics.

These equations provide a mathematical foundation for understanding and quantifying complexity in information theory, computer science, mathematics, and other related fields. They help in analyzing the information content, computational resources, geometric properties, dynamical behavior, and structural intricacy of complex systems.
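The complexity fragment above refers to equations that are not reproduced in these notes. As a reference point only (my assumption about what kind of equations are meant, not the originals), two standard complexity measures of this kind are Shannon entropy and Kolmogorov complexity:

```latex
% Shannon entropy of a discrete random variable X (information content)
H(X) = -\sum_{i} p(x_i) \log_2 p(x_i)

% Kolmogorov complexity of a string s relative to a universal machine U
% (length of the shortest program that outputs s)
K_U(s) = \min \{\, |p| : U(p) = s \,\}
```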
This diagram attempts to capture many of the key areas and concepts in neuroscience, from the gross anatomical structures of the nervous system to the cellular and molecular underpinnings of neural function, as well as the various technologies used to study the brain and the disorders that can affect it. However, given the vast scope and complexity of neuroscience, this diagram is necessarily simplified and incomplete. It should be viewed as a high-level overview rather than an exhaustive representation of the field.

[1] [Introduction to Gibbs free energy (video) | Khan Academy](https://www.khanacademy.org/science/ap-chemistry-beta/x2eef969c74e0d802:applications-of-thermodynamics/x2eef969c74e0d802:gibbs-free-energy-and-thermodynamic-favorability/v/introduction-to-gibbs-free-energy)
[Quanta Magazine](https://www.quantamagazine.org/how-physics-gifted-math-with-a-new-geometry-20200729/)
[1] [Quanta Magazine](https://www.quantamagazine.org/string-theory-meets-loop-quantum-gravity-20160112/)
[5] https://www.physicsforums.com/threads/loop-quantum-gravity-or-string-theory.1010608/

François Chollet's mental model for LLMs: LLMs are stores of programs. Querying an LLM means selecting a program from this latent program space and running it on your data, and the ability of LLMs to interpolate between these programs is what makes them so flexible.
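Chollet's program-store metaphor can be caricatured in a few lines. This is purely illustrative (the `PROGRAMS` table and the fuzzy-matching "selection" are my invention): real LLMs interpolate between programs in a continuous latent space rather than picking a discrete entry, which is exactly the flexibility the metaphor highlights.

```python
import difflib

# A toy "program store": small programs indexed by a task description.
# Querying = selecting the best-matching program and running it on the data.
PROGRAMS = {
    "reverse the list": lambda xs: list(reversed(xs)),
    "sort the list": lambda xs: sorted(xs),
    "sum the list": lambda xs: sum(xs),
}

def query(prompt, data):
    """Select the stored program whose description best matches the prompt."""
    key = max(PROGRAMS,
              key=lambda k: difflib.SequenceMatcher(None, prompt, k).ratio())
    return PROGRAMS[key](data)

print(query("please sort this list", [3, 1, 2]))  # [1, 2, 3]
```

An LLM, on this view, is the continuous version: the prompt does not retrieve one of finitely many programs but pins down a point in a space where nearby programs blend into each other.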
https://twitter.com/PicoPaco17/status/1783983717037342793?t=PuNtVE7Qu5prckE6dgwv2A&s=19
[François Chollet - Creating Keras 3 - YouTube](https://youtu.be/oe6fuxhGVRE?si=5rRkf8xvvsegv1Tc)
[[2404.11018] Many-Shot In-Context Learning](https://arxiv.org/abs/2404.11018)
[[2404.18930] Hallucination of Multimodal Large Language Models: A Survey](https://arxiv.org/abs/2404.18930)
[The Beauty of Life Through The Lens of Physics - YouTube](https://www.youtube.com/watch?v=ncC-GMzF9RY)
[[2302.00843] Computational Dualism and Objective Superintelligence](https://arxiv.org/abs/2302.00843)
https://twitter.com/dlevenstein/status/1785311847928713579?t=9ryTq9_E8Q8Wd89Vk_4b0g&s=19

Ilya: next-token prediction is enough for AGI
https://twitter.com/ns123abc/status/1785504804619608367?t=j74nJZlDvMJasxvreNzAew&s=19
[[2310.13018] Getting aligned on representational alignment](https://arxiv.org/abs/2310.13018)
https://twitter.com/ZimingLiu11/status/1785483967719981538?t=08imquqmzvPs2sMuDzAtaA&s=19
[[2404.19756] KAN: Kolmogorov-Arnold Networks](https://arxiv.org/abs/2404.19756)

Med Gemini
https://twitter.com/alan_karthi/status/1785117444383588823?t=nehPygbWzxnoqw4OXnV5mw&s=19
[[2404.18416] Capabilities of Gemini Models in Medicine](https://arxiv.org/abs/2404.18416)
https://twitter.com/Dr_Singularity/status/1785403555525837287?t=TU0gVBq13pM-KFxfkDpdbw&s=19
https://twitter.com/iScienceLuvr/status/1785135037379199162?t=97c7yB2ecBXQombJFCzq0g&s=19
[[2404.18021] CRISPR-GPT: An LLM Agent for Automated Design of Gene-Editing Experiments](https://arxiv.org/abs/2404.18021)

US AI regulation
https://twitter.com/nearcyan/status/1784864119491100784?t=0hIccKIeyMdfl7bW5Sa2VQ&s=19

LLMs history
https://twitter.com/jannchie/status/1784621770018058651?t=kvpbtPXJxnGOnab9Lz3VOw&s=19
https://twitter.com/burny_tech/status/1784291567210988019?t=yuy3mjhATUjWuroNgEf4mg&s=19
https://twitter.com/teortaxesTex/status/1784202972559298895?t=G1qmVwpBXkuZpFO7BAoLoA&s=19
Or
https://twitter.com/cheng_pengyu/status/1780965366531006887?t=N-fYiFyMiORTBCPqsyT1lA&s=19
[Quanta Magazine](https://www.quantamagazine.org/pioneering-quantum-physicists-win-nobel-prize-in-physics-20221004/)
https://twitter.com/getjonwithit/status/1784258756202688675?t=jb3V1gUPSXsOJJKB0mBGrg&s=19

AI infrastructure landscape
https://twitter.com/chiefaioffice/status/1783932905355362745?t=B2mEpTPiwurIRv6fJuPBDA&s=19

[Amazon Grows To Over 750,000 Robots As World's Second-Largest Private Employer Replaces Over 100,000 Humans](https://finance.yahoo.com/news/amazon-grows-over-750-000-153000967.html?guccounter=1)

Claude 3 Opus can simulate a Turing machine. The ability to act as a (universal) Turing machine could, in principle, be the foundation of the ability to reliably perform complex, rigorous calculation and cognition: the kind of tasks where there is an exact right answer, or exact constraints on what counts as a valid next step, so that the ability to pattern-match plausibly is not enough. And that is exactly what people always say is missing from LLMs.
https://twitter.com/ctjlewis/status/1779740038852690393?t=qmuJ2foWJD3lSA5SYP2dPQ&s=19

Extropic opinion
https://twitter.com/0xKyon/status/1784591427462389822?t=NawuhkRNQVRQn-WETT64Nw&s=19
https://twitter.com/Caldwbr/status/1784347239294390389?t=LSfQWUHIAF5N0I0B6rcocg&s=19
https://twitter.com/algekalipso/status/1784359087423291518?t=bjDVd_XMaeMfqUyhFkYwAw&s=19

State space of minds
https://twitter.com/ukc10014/status/1784497737485955279?t=7KpSzwPUDNQOVlsKIBOztQ&s=19

Consciousness in LLMs
[[2402.12422] Simulacra as Conscious Exotica](https://arxiv.org/abs/2402.12422)
[[2402.03175] The Matrix: A Bayesian learning model for LLMs](https://arxiv.org/abs/2402.03175)

OpenELM: a new open language model that employs a layer-wise scaling strategy to allocate parameters efficiently, leading to better efficiency and accuracy; it comes in several sizes: 270M, 450M, 1.1B, and 3B.
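The Turing-machine point above (Claude 3 Opus simulating a Turing machine) is easy to make concrete: a Turing machine is just a state, a head, a tape, and a transition table. A minimal simulator, where the particular machine (a unary incrementer) is a hypothetical example of mine, not one from the linked thread:

```python
def run_turing_machine(tape, rules, state="start", accept="halt", max_steps=1000):
    """Simulate a 1-tape Turing machine.
    rules: {(state, symbol): (new_state, write_symbol, move)}, move in {-1, +1}.
    Blank cells read as "_".
    """
    cells = dict(enumerate(tape))  # sparse tape representation
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = cells.get(head, "_")
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Hypothetical example machine: append one "1" to a unary number.
rules = {
    ("start", "1"): ("start", "1", +1),  # scan right over the 1s
    ("start", "_"): ("halt", "1", +1),   # write a 1 at the end, halt
}
print(run_turing_machine("111", rules))  # "1111"
```

The point of the quoted argument is that every step here is exactly constrained: there is precisely one valid transition per (state, symbol) pair, which is the regime where plausible pattern-matching alone fails.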
https://twitter.com/dair_ai/status/1784608604093292860
https://twitter.com/dair_ai/status/1784608605821370625
https://twitter.com/dair_ai/status/1784608607536848903

Self-Evolution of LLMs: a comprehensive survey of self-evolution approaches in LLMs.
https://twitter.com/dair_ai/status/1784608616210706916

https://twitter.com/AnsongNi/status/1783311827390070941
[[2403.15796] Understanding Emergent Abilities of Language Models from the Loss Perspective](https://arxiv.org/abs/2403.15796)
https://twitter.com/dmvaldman/status/1784699985642307660?t=JsyaaHR2P94RJjvUVRqazA&s=19
https://twitter.com/Kat__Woods/status/1784759081280082067?t=vMmks-MziMZ72ocMFXOtkA&s=19

Maskology is getting better in LLMs
https://twitter.com/teortaxesTex/status/1782910115919675778
[[2402.07927] A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications](https://arxiv.org/abs/2402.07927)

Loop quantum gravity x string theory
[Quanta Magazine](https://www.quantamagazine.org/string-theory-meets-loop-quantum-gravity-20160112/)
https://twitter.com/burny_tech/status/1785320274130243746?t=BLc_1DiahXmERJugrJAYpg&s=19
[[2303.14617] Neural Graph Reasoning: Complex Logical Query Answering Meets Graph Databases](https://arxiv.org/abs/2303.14617)
https://twitter.com/KompendiumProj/status/1773411105429447002?t=I0YWfA3oM1WNoWwzXe2fyw&s=19

Types of anti-aging papers
[Imgur: The magic of the Internet](https://imgur.com/a/ZpIihwU)

[[2404.14394] A Multimodal Automated Interpretability Agent](https://arxiv.org/abs/2404.14394)

Mamba from scratch
https://youtu.be/N6Piou4oYx8?si=Nv7UDWxzWvL2pNBh

Microsoft CEO Satya Nadella: we are in Year 2 of the Intelligence Revolution, and scaling laws will bring greater reasoning, planning, and memory, leading to a new phase of economic growth.
https://twitter.com/tsarnick/status/1785415907713429967
https://twitter.com/tsarnick/status/1785243160589021656

Mathematics is the queen of sciences
[Imgur: The magic of the Internet](https://imgur.com/AiyjQr4)

Machine learning books
https://twitter.com/DionysianAgent/status/1785395743039062368
[Imgur: The magic of the Internet](https://imgur.com/ZzpTIt9)

Andrew Mack (my MATS mentee) found an *unsupervised* method to elicit latent model capabilities, find backdoored outputs (without knowing how to activate the backdoor!), and override safety training.
[Mechanistically Eliciting Latent Behaviors in Language Models — AI Alignment Forum](https://www.alignmentforum.org/posts/ioPnHKFyy4Cw2Gr2x/mechanistically-eliciting-latent-behaviors-in-language-1)

Mathematics vs physics notation
[Imgur: The magic of the Internet](https://imgur.com/3Lb4coU)

[[1706.01428] On the correspondence between thermodynamics and inference](https://arxiv.org/abs/1706.01428)
https://twitter.com/burny_tech/status/1783552920664908038
https://twitter.com/burny_tech/status/1783567241847435661
Some of them included a few attempts at empirical measurement.
[Quanta Magazine](https://www.quantamagazine.org/what-a-contest-of-consciousness-theories-really-proved-20230824/)

Let's synthesize the best of both of these
[Imgur: The magic of the Internet](https://imgur.com/o2CZHC7)

There is now a cow-demic: a large number of cow herds in the US appear to be infected with H5N1.
https://twitter.com/Plinz/status/1783580417188327683
https://twitter.com/tsarnick/status/1783653775233921136?t=1Q0405nRVdiajFmP0eDCPQ&s=19
[Quanta Magazine](https://www.quantamagazine.org/how-selective-forgetting-can-help-ai-learn-better-20240228/)
[[2310.02304] Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation](https://arxiv.org/abs/2310.02304)
[[2310.04444] What's the Magic Word? A Control Theory of LLM Prompting](https://arxiv.org/abs/2310.04444)
[[2404.15676] Beyond Chain-of-Thought: A Survey of Chain-of-X Paradigms for LLMs](https://arxiv.org/abs/2404.15676)
[[2402.17762] Massive Activations in Large Language Models](https://arxiv.org/abs/2402.17762)
[[2404.16014] Improving Dictionary Learning with Gated Sparse Autoencoders](https://arxiv.org/abs/2404.16014)
https://twitter.com/sen_r/status/1783497788120248431?t=nZeA3AGnb8Dgv6nxHMzm0A&s=19
https://twitter.com/Francis_YAO_/status/1783446286479286700?t=5U6RbQoNPA7SdBTJxKwqNQ&s=19
[[2404.15574] Retrieval Head Mechanistically Explains Long-Context Factuality](https://arxiv.org/abs/2404.15574)
[David Wolpert: What Can We Really Know About That Which We Cannot Even Imagine? - YouTube](https://youtu.be/CCmeah2_I_s?si=nMCgfC7pBe0aAJIX)
[COUNTERFEIT PEOPLE. DANIEL DENNETT. (SPECIAL EDITION) - YouTube](https://youtu.be/axJtywd9Tbo?si=IZWQW_kSx4vIOxsy)
https://twitter.com/loreloc_/status/1783532892447994057?t=0CKEUSu7ep1LYvA-aAkrXQ&s=19
https://twitter.com/MarcosArrut/status/1783839110173737088?t=NHvr7LTdJbkP2_pXH0Obkg&s=19
[Imgur: The magic of the Internet](https://imgur.com/5sYRlKe)
[[2404.15758] Let's Think Dot by Dot: Hidden Computation in Transformer Language Models](https://arxiv.org/abs/2404.15758)
https://twitter.com/pmddomingos/status/1783970487271645216?t=kjD9crg_VctRHSDtD59GTQ&s=19
https://twitter.com/burny_tech/status/1782939907427557467?t=1_jePfKdwpUxDIWce_6cew&s=19

Fusion explained
https://twitter.com/Andercot/status/1782888362757558549
[Quanta Magazine](https://www.quantamagazine.org/physics-experiments-spell-doom-for-quantum-collapse-theory-20221020/?fbclid=IwZXh0bgNhZW0CMTEAAR1CKljB88xv-u08ZknOTeGEtat_lGU9lQbU00Svqg2ovBUqj9dFO9lgqY8_aem_AeEr61Akxjke9dbZWPy4UhjenDhf5EbhPmpo6rYEeUUKZKwR8l9TJ9UFD8BltzMvrUg9asMMq5i-RLFcSB-ZdsvW)
[[2404.14387] A Survey on Self-Evolution of Large Language Models](https://arxiv.org/abs/2404.14387)

Recursive self-improvement
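Several links above concern dictionary learning with sparse autoencoders on model activations. A minimal sketch of the basic technique, assuming a plain ReLU SAE with an L1 sparsity penalty (the gated variant in the linked paper adds a separate gating path, not shown here), with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_dict = 16, 64   # activation dim, dictionary size (illustrative)
W_enc = rng.normal(scale=0.1, size=(d_model, d_dict))
b_enc = np.zeros(d_dict)
W_dec = rng.normal(scale=0.1, size=(d_dict, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x, l1=1e-3):
    """Encode activations into a sparse feature vector, decode back,
    and return the reconstruction, the features, and the training loss."""
    f = np.maximum(0.0, x @ W_enc + b_enc)   # sparse features (ReLU)
    x_hat = f @ W_dec + b_dec                # reconstruction
    loss = np.mean((x - x_hat) ** 2) + l1 * np.abs(f).sum(axis=-1).mean()
    return x_hat, f, loss

x = rng.normal(size=(4, d_model))            # a batch of fake "activations"
x_hat, f, loss = sae_forward(x)
print(x_hat.shape, f.shape)  # (4, 16) (4, 64)
```

Training minimizes `loss` over real model activations; the learned rows of `W_dec` are the "dictionary" of candidate interpretable features.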
https://twitter.com/iamgingertrash/status/1782917758377979937?t=jrAD_uLgvx-4_E8ok2B9sw&s=19

Distributed AI computing
https://twitter.com/PrimeIntellect/status/1782772983712379328
[[2404.10642] Self-playing Adversarial Language Game Enhances LLM Reasoning](https://arxiv.org/abs/2404.10642)

AI models landscape
https://twitter.com/chiefaioffice/status/1782799567689232764?t=qE9mWlMX_yeQgnf5Wb0yfA&s=19

[[1611.01576] Quasi-Recurrent Neural Networks](https://arxiv.org/abs/1611.01576)
[[2404.14423] A Compositional Approach to Higher-Order Structure in Complex Systems: Carving Nature at its Joints](https://arxiv.org/abs/2404.14423)
https://twitter.com/Abelaer/status/1783088636751151367
[[2312.04030] Modeling Boundedly Rational Agents with Latent Inference Budgets](https://arxiv.org/abs/2312.04030)
[Quanta Magazine](https://www.quantamagazine.org/ai-starts-to-sift-through-string-theorys-near-endless-possibilities-20240423/)
[[2404.15059] Using deep reinforcement learning to promote sustainable human behaviour on a common pool resource problem](https://arxiv.org/abs/2404.15059)
https://twitter.com/burny_tech/status/1492706452292739073
[Imgur: The magic of the Internet](https://imgur.com/a/U2CN1nt)
[Imgur: The magic of the Internet](https://imgur.com/a/BM4njkE)
[[2404.14928] Graph Machine Learning in the Era of Large Language Models (LLMs)](https://arxiv.org/abs/2404.14928)
[[2404.10369] Biological computations: limitations of attractor-based formalisms and the need for transients](https://arxiv.org/abs/2404.10369)
[AI-engineered enzyme eats entire plastic containers | Research | Chemistry World](https://www.chemistryworld.com/news/ai-engineered-enzyme-eats-entire-plastic-containers/4015620.article)
[Scientists baffled as two lifeforms merge in 'once-in-a-billion-year' event - JOE.co.uk](https://www.joe.co.uk/science/scientists-baffled-as-two-lifeforms-merge-in-once-in-a-billion-year-event-433306)

AI outcomes
https://twitter.com/burny_tech/status/1782666851044000178?t=MxHW87WLpYpYDKUnpIyEgw&s=19
https://twitter.com/SpencrGreenberg/status/1781702814500016486?t=5rkC08Y7MjEoabKNEBx7Ig&s=19

> Our latest experiments show that our Lean co-pilot can automate >80% of the mathematical proof steps (2.3 times better than the previous rule-based baseline aesop).
[[2404.12534] Towards Large Language Models as Copilots for Theorem Proving in Lean](https://arxiv.org/abs/2404.12534)
https://twitter.com/AnimaAnandkumar/status/1782518528098353535?t=WweTdkLQwVhUQCmYTH4Feg&s=19

LLM jailbreaking strategies
https://twitter.com/NannaInie/status/1782358650641600762?t=7JjYczfC3NbK2E_MM-7pSQ&s=19

Genetically engineered soybeans with animal protein
https://twitter.com/simonmaechling/status/1782512994997436840?t=WUsd25xNsKC8uTkb9Vq-ZQ&s=19

[Natural language instructions induce compositional generalization in networks of neurons | Nature Neuroscience](https://www.nature.com/articles/s41593-024-01607-5)

List of smarts
https://twitter.com/tsarnick/status/1782615380470722940?t=BB3sb4wsgMkwXp4mpgax2w&s=19
[Quanta Magazine](https://www.quantamagazine.org/ai-starts-to-sift-through-string-theorys-near-endless-possibilities-20240423/?fbclid=IwZXh0bgNhZW0CMTEAAR1UyAH_xPRturepxopZR-AET7UrbJhjNRcUeNzKBBTcsQ8tIzBiwDZ8oEA_aem_Ac2sXrB9n8HVcGoUDreRkWJkKk1eNvImhkIOwo1jbT9qmf8I4SX2AUpevuzQxnrsfYDWqHZlrUJOcuvPV8bF5LBG)
[Meat Meets Machine! Multiscale Competency Enables Causal Learning - TechRxiv](https://www.techrxiv.org/users/684323/articles/848527-meat-meets-machine-multiscale-competency-enables-causal-learning)
[[2209.06862] Deep learning in a bilateral brain with hemispheric specialization](https://arxiv.org/abs/2209.06862)
[The World as a Neural Network. Frequently Asked Questions. - YouTube](https://m.youtube.com/watch?v=VDX4-CUg1yA&fbclid=IwZXh0bgNhZW0CMTEAAR1iKV18k2siqhe39sKk3VtLL4SbZWPmVAfGi1ynRR7AB4u5t6zJsRbiQjU_aem_AVd3e8HYHcN34UeZRwCHzWYZLFTd7T-E7c1Olbn43z59HMTbc4l9ZfsDFL_WEjaUaE3Q6FOM7kWIaR8v6KCXcVap)
[[2404.09173] TransformerFAM: Feedback attention is working memory](https://arxiv.org/abs/2404.09173)

Attention explained to a child
https://twitter.com/hausdorff_space/status/1778828850686046337?t=UFDfYoKPrRWtSq9Lmk2WIA&s=19
https://twitter.com/martin_casado/status/1782094516335550480?t=o5gw5KJiGS3XHHEgCOlYXQ&s=19
https://twitter.com/emollick/status/1782137189390017010?t=wft1LqtOhnfSZcxoWMLOIA&s=19

Landscape of robotics
https://twitter.com/bindureddy/status/1781879387476070446?t=Ci0pOMK-xesAlN1b-u1CMA&s=19
https://twitter.com/sporadicalia/status/1781746318852886922?t=iYfjvXIdX1ophgcgwQIU8w&s=19
https://twitter.com/BasedBeffJezos/status/1781953736245756070?t=vFF_bOSyN0ULML5stRRqDA&s=19
https://twitter.com/fly51fly/status/1782156857383301371?t=T1ZvR_P0B-p-JGAJHKPQ4w&s=19

Chaos theory books
https://twitter.com/alec_helbling/status/1782170558538391993?t=AEvSukqs5Q1dNzFwRtAeRA&s=19
[[2404.11068] ScaleFold: Reducing AlphaFold Initial Training Time to 10 Hours](https://arxiv.org/abs/2404.11068)
https://twitter.com/LordDreadwar/status/1782191490468372577?t=2fgHO6CNS7izyVaPpp1dow&s=19
https://twitter.com/LordDreadwar/status/1782166643931533720?t=SwHhanLEMnwkJf8gBsrSjg&s=19

AGI will be bound to EM field processing
https://twitter.com/LordDreadwar/status/1778537428283994416?t=CQKfY5rI5eoV3SBR_Xx-IA&s=19
https://twitter.com/teortaxesTex/status/1780357432452989275?t=BwDhHtFj6vALrPb84YJ51w&s=19
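Since a couple of the links above are about attention (the explained-to-a-child thread and TransformerFAM's feedback attention), here is the core mechanism both build on, scaled dot-product attention, in plain NumPy; the shapes are illustrative:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # [n_q, n_k] similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))   # 3 queries
K = rng.normal(size=(5, 8))   # 5 keys
V = rng.normal(size=(5, 4))   # 5 values
out = attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Each output row is a convex combination of the value rows, with mixing weights set by query-key similarity; TransformerFAM's twist is feeding some of those outputs back in as a persistent working-memory block.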