To navigate this uncertain landscape, we will need wisdom as well as knowledge. We must learn to use our power with responsibility and foresight, to consider the long-term consequences of our actions, and to work together across borders and disciplines to solve global problems. We must also cultivate a sense of wonder and curiosity, a love of learning and discovery for its own sake. For it is only by constantly pushing the boundaries of what is known that we can hope to create a better future for ourselves and for generations to come. In the words of Carl Sagan, "Somewhere, something incredible is waiting to be known." Let us go forth and seek that knowledge, and use it to build a world worthy of our highest aspirations. [[2403.04704] Quantum Advantage in Reversing Unknown Unitary Evolutions](https://arxiv.org/abs/2403.04704) [[2403.04121] Can Large Language Models Reason and Plan?](https://arxiv.org/abs/2403.04121) [[2403.03925] Consciousness qua Mortal Computation](https://arxiv.org/abs/2403.03925) [Whatever happened to string theory? - YouTube](https://www.youtube.com/watch?v=eRzQDyw5C3M) [Why Is This Basic Computer Science Problem So Hard? - YouTube](https://www.youtube.com/watch?v=IzSs_gJDVzI) quanta magazine https://royalsocietypublishing.org/doi/10.1098/rsta.2020.0410 (overview and mathematics of emergence) [[2311.17030] Is This the Subspace You Are Looking for? An Interpretability Illusion for Subspace Activation Patching](https://arxiv.org/abs/2311.17030) [The String Theory Iceberg EXPLAINED - YouTube](https://youtu.be/X4PdPnQuwjY?si=a1W6o8nq5_zy5zay) [[2403.03230] Large language models surpass human experts in predicting neuroscience results](https://arxiv.org/abs/2403.03230) Depths of deep learning general theories memes [Discord](https://discord.com/channels/937356144060530778/939939794736271452/1211925068549062666) This is also my favorite conspiracy theory [Imgur: The magic of the Internet](https://imgur.com/91uTv43) [[2403.00504] Learning and Leveraging World Models in Visual Representation Learning](https://arxiv.org/abs/2403.00504) [[2402.15809] Empowering Large Language Model Agents through Action Learning](https://arxiv.org/abs/2402.15809) You say it's "just statistics", "just clearly defined algorithms", but if it were that clear, there wouldn't be so many open questions about how neural networks actually work. The analogy with evolution in humans was meant to show why seeing neural nets only as their learning algorithm is incomplete. See, for example, the many open problems trying to figure out how they work: [Open Problems in Mechanistic Interpretability](https://coda.io/@firstuserhere/open-problems-in-mechanistic-interpretability) There is, for example, a lot of work on the weak representations of physics they learn [[2311.17137] Generative Models: What do they know? Do they know things? Let's find out!](https://arxiv.org/abs/2311.17137) or on why they generalize, but we still don't really know why they emergently learn generalizing circuits or how to improve that further [[2310.16028] What Algorithms can Transformers Learn? A Study in Length Generalization](https://arxiv.org/abs/2310.16028) I see these emergent features as a form of understanding, because feature learning happens in humans too; there are many similarities but also many differences, which we are slowly uncovering, and that in turn lets us, for example, make today's networks more efficient...
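The mechanistic interpretability links above revolve around tools like activation patching. A minimal numpy sketch of the basic operation, with a toy two-layer network and weights invented here (not taken from any of the linked papers):

```python
import numpy as np

# Toy two-layer network with fixed random weights (purely illustrative).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def hidden(x):
    return np.tanh(x @ W1)

def forward(x, patch=None):
    """Run the network; if `patch` is given, overwrite the hidden layer with it."""
    h = hidden(x) if patch is None else patch
    return h @ W2

clean = rng.normal(size=4)    # "clean" input
corrupt = rng.normal(size=4)  # "corrupted" input

# Patching the clean run's hidden activations into the corrupted run
# recovers the clean output exactly, because everything downstream of the
# patch only sees the hidden layer. Subspace patching (the topic of the
# "Interpretability Illusion" paper) would overwrite only a low-dimensional
# direction of `h` instead, which is where the illusions can creep in.
patched = forward(corrupt, patch=hidden(clean))
print(np.allclose(patched, forward(clean)))  # True
```

Full-layer patching is trivially faithful; the open problems start once you try to localize behavior to individual directions or components.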
[To de-risk AI, the government must accelerate knowledge production | by Greg Fodor | Medium](https://gfodor.medium.com/to-de-risk-ai-the-government-must-accelerate-knowledge-production-49c4f3c26aa0) [Quanta Magazine](https://www.quantamagazine.org/how-selective-forgetting-can-help-ai-learn-better-20240228/) https://twitter.com/RobertTLange/status/1765391351854551523 [[2402.18381] Large Language Models As Evolution Strategies](https://arxiv.org/abs/2402.18381) [Quanta Magazine](https://www.quantamagazine.org/new-breakthrough-brings-matrix-multiplication-closer-to-ideal-20240307/) [[2301.04690] A Functorial Perspective on (Multi)computational Irreducibility](https://arxiv.org/abs/2301.04690) [[2402.19473v1] Retrieval-Augmented Generation for AI-Generated Content: A Survey](https://arxiv.org/abs/2402.19473v1) [[2403.03101] KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents](https://arxiv.org/abs/2403.03101) [Dr. Michael Levin on Embodied Minds and Cognitive Agents - YouTube](https://youtu.be/LYyGG9xXpPA?si=j-38R-3qiKv9R0t0) [The String Theory Iceberg EXPLAINED - YouTube](https://www.youtube.com/watch?v=X4PdPnQuwjY) Depends on how you define sentience 🤷‍♂️ There are 465498431654 cognitivist and behaviorist definitions of these words, and each seems to be motivated by different presuppositions, asking different questions and trying to solve different problems.
This review goes over some empirical definitions of consciousness (usually considered distinct from sentience) used in neuroscience and where LLMs stand relative to them, but it's almost a year old [[2308.08708] Consciousness in Artificial Intelligence: Insights from the Science of Consciousness](https://arxiv.org/abs/2308.08708) and this one looks at it more philosophically, and is similarly old [[2303.07103] Could a Large Language Model be Conscious?](https://arxiv.org/abs/2303.07103) My favorite practical set of assumptions on which you can then build empirical models for all of this is probably what the free energy principle camp, Friston et al. [CAN AI THINK ON ITS OWN? - YouTube](https://youtu.be/zMDSMqtjays), is using, with Markov blankets whose increasingly complex dynamics create increasingly complex experiences, or Joscha Bach's coherence-inducing operator [Synthetic Sentience: Can Artificial Intelligence become conscious? | Joscha Bach | CCC #37c3 - YouTube](https://youtu.be/Ms96Py8p8Jg), but I'm open to this whole landscape and don't think any set of assumptions is inherently more true than the others, because I don't see a way to falsify assumptions that live upstream of the empirical models you can falsify. The commonly used benchmarks are heavily abused (plus pretraining on the benchmarks is all you need), but apparently the GPQA Diamond benchmark that Claude 3 dominated is fairly new, targets graduate-level tasks, and is harder to abuse than the others. I think this guy analyzes it critically and pretty well. [The New, Smartest AI: Claude 3 – Tested vs Gemini 1.5 + GPT-4 - YouTube](https://youtu.be/ReO2CWBpUYk?si=m5HoJF6oZnRVUHor) [AI era - Wikipedia](https://en.wikipedia.org/wiki/AI_era) [Towards General Computer Control: A Multimodal Agent For Red Dead Redemption II As A Case Study](https://baai-agents.github.io/Cradle/) The era of quantum gravity computing has arrived (with "an exponential speedup over standard quantum computation").
Any startups in this space already? [[2403.02937] Quantum Algorithms in a Superposition of Spacetimes](https://arxiv.org/abs/2403.02937) [[2403.00910] Computational supremacy in quantum simulation](https://arxiv.org/abs/2403.00910) https://www.scientificamerican.com/article/blood-flow-may-be-key-player/ [Pete Mandik, Meta-Illusionism and Qualia Quietism - PhilArchive](https://philarchive.org/rec/MANMAQ) [ComPromptMized](https://sites.google.com/view/compromptmized) [[2402.19450] Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap](https://arxiv.org/abs/2402.19450) [[2402.18312] How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning](https://arxiv.org/abs/2402.18312) [[1912.01937] Quantum-Inspired Hamiltonian Monte Carlo for Bayesian Sampling](https://arxiv.org/abs/1912.01937) [[2402.10416] Grounding Language about Belief in a Bayesian Theory-of-Mind](https://arxiv.org/abs/2402.10416) [[2403.00745] AtP*: An efficient and scalable method for localizing LLM behaviour to components](https://arxiv.org/abs/2403.00745) Paper: [[2402.02364] The Developmental Landscape of In-Context Learning](https://arxiv.org/abs/2402.02364) [Nova classification - Wikipedia](https://en.wikipedia.org/wiki/Nova_classification) My main goal is to create a big visual map that packs in as much information about mathematics as possible by showing the most important mathematical structures, definitions, and equations, with as little textual filler around them as possible. I want as large a share of the map as possible to be mathematical symbols themselves, all on one giant readable poster! I haven't seen anything like that yet. There are some maps, tables, lists, wikis, and so on 
for mathematics that I draw inspiration from, but I haven't seen anything like a visual map of tons of definitions and equations covering mainly the foundations of mathematics, pure mathematics, and applied mathematics: theoretical physics, systems theory, mathematical biology, AI, and the other applied mathematical sciences and engineering fields I consider most important. They are often too general, or too specific somewhere other than where I want. There is, for example, the map of mathematics by [domain of science](https://www.youtube.com/watch?v=OmJ-4B-mS-Y), or of [physics](https://www.youtube.com/watch?v=ZihywtixUYo) (they have [more](https://www.google.com/search?sca_esv=f032846a98b531f9&sxsrf=ACQVn0976HXiiNvRPJyyV5C4j7DIBC8eyQ:1709279314434&q=map+of+physics&tbm=vid)), [Mathematopia](https://tomrocksmaths.com/2020/12/21/mathematopia-the-adventure-map-of-mathematics/), a [geometric representation of mathematics](https://imgur.com/Tgd6HmA), http://srln.se/mapthematics.pdf , one by [Zooga](https://www.reddit.com/r/math/comments/2av79v/map_of_mathematistan_source_in_comments/), [this Langlands beauty](https://bastian.rieck.me/blog/2020/langlands/), a [few listed on math stackexchange](https://math.stackexchange.com/questions/124709/mind-maps-of-advanced-mathematics-and-various-branches-thereof), and [a Google search finds some more](https://www.google.com/search?sca_esv=6416b2a2bca84fa5&sxsrf=ACQVn08EYgLRVx_d0OEctey6oKUsAtsrOg:1709276455770&q=map+of+mathematics&tbm=isch&source=lnms&sa=X&ved=2ahUKEwjUgeD_vtKEAxXS0AIHHaQOBFoQ0pQJegQICxAB&biw=1920&bih=878&dpr=1). Plus Peak Math is building a big visual interactive [map](https://www.peakmath.org/peakmath-landscape). 
Or there is [The Princeton Companion to Mathematics](https://www.amazon.com/Princeton-Companion-Mathematics-Timothy-Gowers/dp/0691118809) ([pdf](https://sites.math.rutgers.edu/~zeilberg/akherim/PCM.pdf)), and [The Princeton Companion to Applied Mathematics](https://www.amazon.com/Princeton-Companion-Applied-Mathematics/dp/0691150397?ref=d6k_applink_bb_dls&dplnkId=352f8fc3-ee97-4716-817a-e8feea9cd8c2) looks interesting as a book, or there is also the [Mathematical Promenade](https://arxiv.org/abs/1612.06373). Or there is [proof wiki](https://proofwiki.org/wiki/Category:Proofs), but that is mainly about proofs, while I mainly want to put the results in one place as compressed as possible, so that as many of the resulting definitions, equations, and connections between them fit into as little space as possible. Or Quanta Magazine has a map of a bit of [mathematics](https://mathmap.quantamagazine.org/map/) and [physics](https://www.quantamagazine.org/theories-of-everything-mapped-20150803/). Wikipedia also has nice pages on [theoretical physics](https://en.wikipedia.org/wiki/Theoretical_physics) and [mathematical physics](https://en.wikipedia.org/wiki/Mathematical_physics), or https://en.wikipedia.org/wiki/Mathematical_and_theoretical_biology, a trillion pages of [AI theory mathematics](https://arxiv.org/abs/2106.10165) (principles of deep learning theory, statistical learning theory), the [free energy principle](https://arxiv.org/abs/2201.06387),... dynamical systems, systems theory,... Or there is also the [nlab](https://ncatlab.org/nlab/show/HomePage) ([mathematics](https://ncatlab.org/nlab/show/mathematics), [physics](https://ncatlab.org/nlab/show/higher+category+theory+and+physics)), but that is mostly magic from category theory madmen, and I want that to be only a part of my map; category theory is great at connecting the individual mathematical universes ([by Math3ma](https://www.math3ma.com/blog/what-is-category-theory-anyway), [by Southwell](https://www.youtube.com/playlist?list=PLCTMeyjMKRkoS699U0OJ3ymr3r01sI08l)). 
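A trivial way to start such a map is to keep its skeleton as plain data and render it with standard tools. The sketch below emits Graphviz DOT for a hand-picked set of fields and arrows; the field names and edges are my own placeholder choices, not a real taxonomy:

```python
# Hypothetical skeleton of the map: names and arrows are illustrative only.
edges = [
    ("Foundations", "Pure mathematics"),
    ("Pure mathematics", "Category theory"),
    ("Pure mathematics", "Applied mathematics"),
    ("Applied mathematics", "Theoretical physics"),
    ("Applied mathematics", "Mathematical biology"),
    ("Applied mathematics", "Deep learning theory"),
]

def to_dot(edges):
    """Emit a Graphviz DOT digraph; render e.g. with `dot -Tsvg map.dot`."""
    body = "\n".join(f'  "{a}" -> "{b}";' for a, b in edges)
    return "digraph MathMap {\n" + body + "\n}"

print(to_dot(edges))
```

Keeping the structure as data means the definitions and equations themselves can later be attached as node labels while the layout stays automatic.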
This guy has a nice list of [more specific books for the subfields of mathematics](https://www.reddit.com/r/math/comments/kqnfn5/suggestions_for_starting_a_personal_library/gi9k4gj/?context=3). [Why AI still doesn't have creativity like humans do. - YouTube](https://youtu.be/5cBS6COzLN4?si=If5qPSY4ZfqDGXRK) [[2402.11753] ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs](https://arxiv.org/abs/2402.11753) [[2402.17764] The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits](https://arxiv.org/abs/2402.17764) [[2212.13836] Cyclification of Orbifolds](https://arxiv.org/abs/2212.13836) see [Weak-to-strong generalization](https://openai.com/research/weak-to-strong-generalization) [Foundation Model Development Cheatsheet](https://fmcheatsheet.org/) [[2402.19155] Beyond Language Models: Byte Models are Digital World Simulators](https://arxiv.org/abs/2402.19155) [AI Outshines Humans in Creative Thinking - Neuroscience News](https://neurosciencenews.com/ai-creative-thinking-25690/) [[2402.18041] Datasets for Large Language Models: A Comprehensive Survey](https://arxiv.org/abs/2402.18041) [[2402.15555] Deep Networks Always Grok and Here is Why](https://arxiv.org/abs/2402.15555) https://www.g2.com/articles/self-driving-vehicle-statistics [[2402.16845] Neural Operators with Localized Integral and Differential Kernels](https://arxiv.org/abs/2402.16845) [Klarna AI assistant handles two-thirds of customer service chats in its first month](https://www.klarna.com/international/press/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month/) [OpenAI Accuses New York Times of Hiring Someone to 'Hack' Its Products](https://www.businessinsider.com/openai-accuses-new-york-times-hiring-someone-hack-its-products-2024-2?international=true&r=US&IR=T) [Reddit - Dive into anything](https://reddit.com/r/singularity/comments/1b10p2i/chinese_robots_faster_than_optimus/) halting problem trolley meme [Reddit - Dive into anything](https://www.reddit.com/r/okbuddyphd/comments/1b1ostp/your_halting_problem_is_damn_undecidable/) [[2301.05638] Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles](https://arxiv.org/abs/2301.05638) [GitHub - lucidrains/ring-attention-pytorch: Explorations into Ring Attention, from Liu et al. at Berkeley AI](https://github.com/lucidrains/ring-attention-pytorch?tab=readme-ov-file) [[2402.18491] Dynamical Regimes of Diffusion Models](https://arxiv.org/abs/2402.18491) [[2402.18659] Large Language Models and Games: A Survey and Roadmap](https://arxiv.org/abs/2402.18659) "Folks, something seems to be happening... We show that our theory of gravity is valid down to the shortest distances arxiv.org/abs/2402.17844, and that it can explain the expansion of the universe and galactic rotation without dark matter or dark energy arxiv.org/abs/2402.19459" Robotics: [Mobile ALOHA - A Smart Home Robot - Compilation of Autonomous Skills - YouTube](https://www.youtube.com/watch?v=zMNumQ45pJ8), [Eureka! Extreme Robot Dexterity with LLMs | NVIDIA Research Paper - YouTube](https://youtu.be/sDFAWnrCqKc?si=LEhIqEIeHCuQ0W2p), [Shaping the future of advanced robotics - Google DeepMind](https://deepmind.google/discover/blog/shaping-the-future-of-advanced-robotics/), [Optimus - Gen 2 - YouTube](https://www.youtube.com/watch?v=cpraXaw7dyc), [Atlas Struts - YouTube](https://www.youtube.com/shorts/SFKM-Rxiqzg), [Figure Status Update - AI Trained Coffee Demo - YouTube](https://www.youtube.com/watch?v=Q5MKo7Idsok), [Curiosity-Driven Learning of Joint Locomotion and Manipulation Tasks - YouTube](https://www.youtube.com/watch?v=Qob2k_ldLuw) Agency: [[2305.16291] Voyager: An Open-Ended Embodied Agent with Large Language Models](https://arxiv.org/abs/2305.16291), [[2309.07864] The Rise and Potential of Large Language Model Based Agents: A Survey](https://arxiv.org/abs/2309.07864), [Agents | Langchain](https://python.langchain.com/docs/modules/agents/), [GitHub - THUDM/AgentBench: A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)](https://github.com/THUDM/AgentBench), [[2401.12917] Active Inference as a Model of Agency](https://arxiv.org/abs/2401.12917), [CAN AI THINK ON ITS OWN? - YouTube](https://www.youtube.com/watch?v=zMDSMqtjays), [Artificial Curiosity Since 1990](https://people.idsia.ch/~juergen/artificial-curiosity-since-1990.html) Generalizing: [[2402.10891] Instruction Diversity Drives Generalization To Unseen Tasks](https://arxiv.org/abs/2402.10891), [Automated discovery of algorithms from data | Nature Computational Science](https://www.nature.com/articles/s43588-024-00593-9), [[2402.09371] Transformers Can Achieve Length Generalization But Not Robustly](https://arxiv.org/abs/2402.09371), [[2310.16028] What Algorithms can Transformers Learn? A Study in Length Generalization](https://arxiv.org/abs/2310.16028), [[2307.04721] Large Language Models as General Pattern Machines](https://arxiv.org/abs/2307.04721), [A Tutorial on Domain Generalization | Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining](https://dl.acm.org/doi/10.1145/3539597.3572722), [[2311.06545] Understanding Generalization via Set Theory](https://arxiv.org/abs/2311.06545), [[2310.08661] Counting and Algorithmic Generalization with Transformers](https://arxiv.org/abs/2310.08661), [Neural Networks on the Brink of Universal Prediction with DeepMind's Cutting-Edge Approach | Synced](https://syncedreview.com/2024/01/31/neural-networks-on-the-brink-of-universal-prediction-with-deepminds-cutting-edge-approach/), [[2401.14953] Learning Universal Predictors](https://arxiv.org/abs/2401.14953), [Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks | Nature Communications](https://www.nature.com/articles/s41467-021-23103-1) [[2310.04560] Talk like a Graph: Encoding Graphs for Large Language Models](https://arxiv.org/abs/2310.04560) [[2402.05232] Universal Neural Functionals](https://arxiv.org/abs/2402.05232) 
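The gap studied in the length-generalization papers above is roughly the difference between memorizing the training distribution and learning the underlying algorithm. A toy, deliberately extreme illustration (my own construction, not from any of the papers):

```python
# A "memorizer" that has only seen additions of numbers below 100,
# versus the actual addition algorithm.
train_table = {(a, b): a + b for a in range(100) for b in range(100)}

def memorizer(a, b):
    return train_table.get((a, b))  # None outside the training distribution

def algorithm(a, b):
    return a + b  # length-generalizes by construction

print(memorizer(12, 34), algorithm(12, 34))      # 46 46 (in-distribution: both work)
print(memorizer(123, 456), algorithm(123, 456))  # None 579 (only the algorithm generalizes)
```

Real transformers sit somewhere between these two extremes, which is exactly what makes the question of when generalizing circuits emerge interesting.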
Order, all the metastable patterns on the edge of chaos, is harmony. Open-sourced OpenAI tool leads to nuclear fusion control: https://twitter.com/8teAPi/status/1762190250355658885
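"Edge of chaos" has a concrete toy model: in the logistic map, the Lyapunov exponent crosses zero near r ≈ 3.5699, separating the ordered (periodic) regime from the chaotic one. A small stdlib sketch, with parameter values chosen here purely for illustration:

```python
import math

def lyapunov(r, x0=0.2, n=2000, burn=200):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).
    Negative means ordered (periodic orbit), positive means chaotic."""
    x, acc = x0, 0.0
    for i in range(n):
        if i >= burn:  # average log|f'(x)| = log|r*(1-2x)| after transients die out
            acc += math.log(abs(r * (1 - 2 * x)) + 1e-12)
        x = r * x * (1 - x)
    return acc / (n - burn)

print(lyapunov(3.2) < 0, lyapunov(3.9) > 0)  # True True -- order vs chaos
```

Sweeping r between these two values traces the period-doubling route through the edge itself.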