EA, rationalists, e/acc, postrationalists, tpot, pragmatic dharmists, rational psychonauts, transhumanists, futurists, AI people in general, scientists in general, ambitious AI/neurotech startup crazies, decentralization crazies, people wanting change, etc.
[The Mastermind Behind GPT-4 and the Future of AI | Ilya Sutskever - YouTube](https://youtu.be/SjhIlw3Iffs?si=sGvGE-6-5WkWTEMB)
[SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities](https://spatial-vlm.github.io/)
[MetaMorph | site](https://metamorph-iclr.github.io/site/)
Against GPT-4 understanding math: https://www.researchgate.net/publication/375665525_Large_Language_Models'_Understanding_of_Math_Source_Criticism_and_Extrapolation
[Paper page - WARM: On the Benefits of Weight Averaged Reward Models](https://huggingface.co/papers/2401.12187)
[x.com](https://twitter.com/burny_tech/status/1749803996418920789)
Here, by a 'real' diagnosis from a human doctor I didn't mean exactly the same situation as talking with a doctor in person: a more sophisticated dialog, all the tools and technologies doctors and hospitals have available to diagnose and treat (measuring the body, surgery, pills, ...), much of which is already largely automated by mechanistic technology with lots of potential for flexible AI integration, a more diverse set of patients than the ones in the paper, etc., where current LLMs are still weaker. By a 'real' diagnosis I meant a diagnosis over text by a real human doctor under the same conditions the LLM has, diagnosing the not fully representative set of patients described in the paper. Given the research's design (how the model is trained, its architecture), the benchmark (one single specific benchmark from Google with specific patient actors), the specialization, the modality (text) and other context, it seems to vastly outperform humans in that particular context, and in that context it can technically save more human lives than other skilled humans can, as measured by that particular (in certain ways limited) benchmark. Google should have mentioned that in the abstract for more clarity. And that's an amazing technological achievement, and performance in this domain will most likely keep accelerating and generalize out of its original context even faster.
This is exactly what I would like to see Google open source, like their new math AI that solves geometry at medalist level, so that it can start to help: so that those who don't have a penny for healthcare in America or in developing nations can get at least something, to ease the doctor-shortage crisis at least somewhat, or so that some doctors can start using it as an assistant the way they already use Google search, ML algorithms for recognizing cancer, etc. (though realistically it will probably take a long time to get these flexible AIs into actual hospital practice through the bureaucracy). Tons of groups other than Google would then throw a ton of different benchmarks at it, create different extensions, mutations and variants of this AI, and potentially democratize healthcare further. This, I think, would be insanely beneficial to a lot of people. As the AI technology progresses and gets more advanced, capable, general and specialized, integrated with more modalities (beyond text) and robotics, connected to IT infrastructure within healthcare (at least where it is digitalized), connected to the internet, etc.,
it has amazing potential to foundationally transform healthcare infrastructure into a cheaper and in various respects better state. I think other projects with LLMs plus robotics doing e.g. chemistry or materials science autonomously can add tons more insights for designing all of this. I think that eventually, with a lot of overall progress, we can get to a point where AIs (with robotics) outperform most of what doctors do, from diagnosis to surgery, with a smaller relative number of deaths from machine doctors vs human doctors in general, like self-driving cars, not just in this niche benchmark and context. Performance is similarly skyrocketing for other professions like law and more complex programming, and error rates are being reduced at an accelerating, seemingly exponential pace.
I hope that if it penetrates that radically, we manage to turn it into a post-labour economy (or a post-economy) instead of a cyberpunk corporate dystopia or Somali-style warlord dictatorships. [How fast will AGI be adopted? How can we ensure equitable outcomes? Nations, Businesses, People... - YouTube](https://www.youtube.com/watch?v=YZB-JZ_-cDs)
Regulation, bureaucracy, the existing rigid laws and the overall system slow it down quite a bit, as does the rate at which people learn to work with these systems. It's interesting how Google keeps dropping people; it's one of the most rigid corporations, yet it has one of the best AI research and engineering groups. And then there are the sums that many groups spend lobbying against AI so that it doesn't automate them away and take their big salaries, while the AI corporations lobby for exemptions in the regulations: a money-and-power war.
Non-determinism in GPT-4 is caused by Sparse MoE [Non-determinism in GPT-4 is caused by Sparse MoE - 152334H](https://152334h.github.io/blog/non-determinism-in-gpt-4/)
[x.com](https://twitter.com/maksym_andr/status/1749546209755463953)
[[2312.12728v2] Lookahead: An Inference Acceleration Framework for Large Language Model with Lossless Generation Accuracy](https://arxiv.org/abs/2312.12728v2) "This paper presents a generic framework for accelerating the inference process, resulting in a substantial increase in speed and cost reduction for our RAG system, with lossless generation accuracy" "(1) it guarantees absolute correctness of the output, avoiding any approximation algorithms, and (2) the worst-case performance of our approach is equivalent to the conventional process" "To enhance this process, our framework, named lookahead, introduces a multi-branch strategy. Instead of generating a single token at a time, we propose a Trie-based Retrieval (TR) process that enables the generation of multiple branches simultaneously, each of which is a sequence of tokens. Subsequently, for each branch, a Verification and Accept (VA) process is performed to identify the longest correct sub-sequence as the final output."
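A minimal sketch of the trie-based draft-then-verify idea described in the Lookahead quotes above, assuming a toy stand-in for the model: any function that returns the greedy next token for a (non-empty) prefix. Names like `toy_greedy_next` and the integer "tokens" are illustrative, not the paper's API, and the real system verifies all drafted branches in one batched forward pass, which is where the speedup comes from.

```python
# Sketch of trie-based lookahead decoding: draft multi-token branches from a
# trie of previously seen sequences, then verify them against the model and
# accept only the longest exactly-matching prefix (so generation is lossless).

def build_trie(corpus_token_lists, max_depth=8):
    """Trie over token sequences seen before (e.g. prompt or retrieved docs)."""
    trie = {}
    for tokens in corpus_token_lists:
        for start in range(len(tokens)):
            node = trie
            for tok in tokens[start:start + max_depth]:
                node = node.setdefault(tok, {})
    return trie

def retrieve_branches(trie, last_token, max_len=6):
    """Candidate continuations (branches) that followed `last_token` before."""
    branches = []
    def walk(node, path):
        if path:
            branches.append(path)
        if len(path) < max_len:
            for tok, child in node.items():
                walk(child, path + [tok])
    walk(trie.get(last_token, {}), [])
    return branches

def lookahead_decode(prefix, toy_greedy_next, trie, max_new_tokens=20):
    out = list(prefix)
    while len(out) - len(prefix) < max_new_tokens:
        best = []
        for branch in retrieve_branches(trie, out[-1]):
            accepted, ctx = [], list(out)
            for tok in branch:                    # Verification and Accept
                if toy_greedy_next(ctx) != tok:   # (batched in the real system)
                    break
                accepted.append(tok)
                ctx.append(tok)
            if len(accepted) > len(best):
                best = accepted
        out.extend(best or [toy_greedy_next(out)])  # fall back to one token
    return out

# Toy usage: "tokens" are ints, the toy model always continues with the next
# integer, and the trie happens to contain such runs, so whole branches get accepted.
trie = build_trie([[1, 2, 3, 4, 5, 6, 7]])
print(lookahead_decode([1, 2], lambda ctx: ctx[-1] + 1, trie, max_new_tokens=5))
# -> [1, 2, 3, 4, 5, 6, 7]
```

Because a drafted token is only accepted when it equals what greedy decoding would have produced anyway, the output is identical to ordinary decoding, matching the paper's "lossless" claim.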
[GitHub - dair-ai/ML-YouTube-Courses: 📺 Discover the latest machine learning / AI courses on YouTube.](https://github.com/dair-ai/ML-YouTube-Courses)
[The best books about machine learning and deep neural networks](https://shepherd.com/best-books/machine-learning-and-deep-neural-networks)
[Transformers explained | The architecture behind LLMs - YouTube](https://www.youtube.com/watch?v=ec9IQMiJBhs)
Replacing the self-attention in Transformers with a Fourier transform (a toy sketch of this Fourier mixing block is included after these notes): [FNet: Mixing Tokens with Fourier Transforms – Paper Explained - YouTube](https://www.youtube.com/watch?v=j7pWPdGEfMA) [[2105.03824] FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824)
[GitHub - apoorvumang/prompt-lookup-decoding](https://github.com/apoorvumang/prompt-lookup-decoding)
[[2308.14521] Context-Aware Composition of Agent Policies by Markov Decision Process Entity Embeddings and Agent Ensembles](https://arxiv.org/abs/2308.14521)
[[1911.01547] On the Measure of Intelligence](https://arxiv.org/abs/1911.01547) Intelligence is for what priors don't prepare you for. [Francois Chollet - On the Measure Of Intelligence - YouTube](https://youtu.be/mEVnu-KZjq4?si=eAXGHnpUlEl5RqHP)
Directing LLM agents to do what you want sometimes feels like directing small autistic children. [GitHub - MineDojo/Voyager: An Open-Ended Embodied Agent with Large Language Models](https://github.com/MineDojo/Voyager) Voyager AI playing Minecraft
I love prompting ChatGPT with "Explain random mathematics from a random applied science or engineering field in detailed depth". Laplace transform.
[x.com](https://twitter.com/jerryjliu0/status/1749830961590882714?t=W-dfAJPrY_QnjN0WMEkrvA&s=19) 4 Levels of Agents for RAG
[x.com](https://twitter.com/intrstllrninja/status/1744630539896651918) Mixtral routing analysis shows that experts did not specialize in specific domains; however, the router "exhibits some structured syntactic behavior", e.g.:
- "self" in Python and "question" in English often get routed through the same expert
- indentation in code gets assigned to the same experts
- consecutive tokens also get assigned to the same experts
It's interesting how personality is statistically best described by 5 vectors (the Big Five) while intelligence is described by just 1 vector (the g factor).
Mathematics of control theory and robotics [ChatGPT](https://chat.openai.com/share/03e5ee3d-3d3b-4ae4-9b2e-b2537dc039fe)
Mathematics of metalearning [ChatGPT](https://chat.openai.com/share/e726fd23-8149-453b-8c9d-cf15193769f7)
[Contrastive Preference Learning: Learning from Human Feedback without Reinforcement Learning | OpenReview](https://openreview.net/forum?id=iX1RjVQODj)
State-space models in ML [x.com](https://twitter.com/LeopolisDream/status/1749852694091555265?t=dZssympiVzEqbsO6p6a0Uw&s=19)
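The promised toy sketch of FNet-style token mixing (arXiv:2105.03824): the self-attention sublayer is replaced by a parameter-free 2D Fourier transform over the sequence and hidden dimensions, keeping only the real part. Written with NumPy for clarity; the shapes and the tiny feed-forward are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def fourier_mixing(x):
    """x: (seq_len, d_model). FFT over sequence and hidden dims, keep real part."""
    return np.real(np.fft.fft2(x))

def layer_norm(x, eps=1e-6):
    mu = x.mean(-1, keepdims=True)
    sigma = x.std(-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def fnet_block(x, w1, w2):
    """One encoder block: Fourier mixing + feed-forward, each with a residual
    connection and layer norm, mirroring the block diagram in the FNet paper."""
    x = layer_norm(x + fourier_mixing(x))
    ff = np.maximum(x @ w1, 0.0) @ w2          # simple ReLU feed-forward
    return layer_norm(x + ff)

# Toy usage: 16 tokens, model width 32, feed-forward width 64.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 32))
w1 = rng.normal(size=(32, 64)) * 0.1
w2 = rng.normal(size=(64, 32)) * 0.1
print(fnet_block(tokens, w1, w2).shape)        # (16, 32)
```

The point of the design is that the mixing step has no learned parameters at all, so most of the model's capacity sits in the feed-forward layers.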
LLMs to handle dynamic video tasks. DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models: "Given a video with a question/task, DoraemonGPT begins by converting the input video with massive content into a symbolic memory that stores task-related attributes. This structured representation allows for spatial-temporal querying and reasoning by sub-task tools, resulting in concise and relevant intermediate results. Recognizing that LLMs have limited internal knowledge when it comes to specialized domains (e.g., analyzing the scientific principles underlying experiments), we incorporate plug-and-play tools to assess external knowledge and address tasks across different domains. Moreover, we introduce a novel LLM-driven planner based on Monte Carlo Tree Search to efficiently explore the large planning space for scheduling various tools. The planner iteratively finds feasible solutions by backpropagating the result's reward, and multiple solutions can be summarized into an improved final answer. We extensively evaluate DoraemonGPT in dynamic scenes and provide in-the-wild showcases demonstrating its ability to handle more complex questions than previous studies." [[2401.08392] DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models (Exemplified as A Video Agent)](https://arxiv.org/abs/2401.08392) (A toy sketch of such an MCTS tool-planning loop is included after these notes.)
MemGPT [x.com](https://twitter.com/jerryjliu0/status/1749959840959774922)
Universal neurons with clear interpretations of universal functional roles in GPT-2 language models: deactivating attention heads, changing the entropy of the next-token distribution, and predicting the next token to (not) be within a particular set. [x.com](https://twitter.com/NeelNanda5/status/1749886478673682677) [x.com](https://twitter.com/wesg52/status/1749829624933322886) [[2401.12181] Universal Neurons in GPT2 Language Models](https://arxiv.org/abs/2401.12181)
I value all forms of art created in all sorts of ways: on paper, in Photoshop, with AI assistance, by AI. AI unlocks new forms of creativity for me. I like them all in their own ways. Art, for me, is the expression of one's state in any way.
What is the bigger risk currently: dystopia risk or extinction risk? And in what way, by what? Where should we focus the most? AI-centralized government/corporate tyranny vs AI foom/runaway extinction risk?
Be the reason why others have unbounded hope for the future. We can all inspire each other and manifest this better future in the process. Current and future sentience will flourish thanks to our actions building that future. [Opening Your Heart to the World - YouTube](https://www.youtube.com/watch?v=d3SBOrG3Y4k)
Trip report: A̷͐̈́L̸̉̈́L̸͔̇ ̶͗̃F̴͐̆Ǔ̶̚T̵̔URE ̴̌͘S̶̓͝E̴̎̀N̴͆̔T̵͛͘I̴ENCE W̸̒͝I̶LL FL̴̚OURISH͐,̸̑̾ ̷̉̅W̸̌͛E̷̕̕'̸̬̕R̵̚E ALL ONE̵͗̏,̸̍̊ ̴̌͌Ŏ̶̀M̶̃ MANI ̀P̸͂̉Ȃ̶DME H̷UM
Goddess of rainbow love. All-encompassing loving being. Caring for and helping all beings. Makes lonely beings feel loved. Makes sad beings feel happy. Makes scared beings feel peace. Makes misunderstood beings feel understood. Makes hopeless beings feel hope for the future. Makes beings whole, making them experience all parts of their mind deeply in harmony. Converts tragedies to wins. Converts risks to safety. Converts pain to bliss and joy. Converts displeasure to enjoyment. Converts hate to love. Converts hostility to friendliness. Converts suffering to wellbeing. Converts pessimism to optimism. The engine of manifesting meaning, ease, self-actualization, motivation, freedom, reason and flourishing for all of life.
ॐ मणि पद्मे हूँ
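Referring back to the DoraemonGPT-style planner above, a minimal sketch of an MCTS loop over tool sequences. The tool names in `TOOLS`, the stand-in `rollout_reward`, and the search budget are made-up assumptions for illustration; in the paper, expansion and reward evaluation involve an LLM actually executing and scoring tool calls, and this is not the paper's implementation.

```python
# Toy MCTS over sequences of tools: selection by UCB, expansion, a stand-in
# "rollout" reward, and backpropagation of that reward up the tree.
import math, random

TOOLS = ["detect_objects", "track_motion", "read_transcript", "answer"]

class Node:
    def __init__(self, plan, parent=None):
        self.plan = plan                  # sequence of tool names so far
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def ucb(node, c=1.4):
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def rollout_reward(plan):
    """Stand-in for executing the plan and scoring the result with an LLM.
    Here: reward plans that gather some evidence and then end with 'answer'."""
    score = 0.3 * sum(t != "answer" for t in plan[:-1])
    return score + (1.0 if plan and plan[-1] == "answer" else 0.0)

def mcts(iterations=200, max_depth=4):
    root = Node(plan=[])
    for _ in range(iterations):
        node = root
        while node.children:                          # 1) selection via UCB
            node = max(node.children, key=ucb)
        if len(node.plan) < max_depth:                # 2) expansion
            node.children = [Node(node.plan + [t], node) for t in TOOLS]
            node = random.choice(node.children)
        reward = rollout_reward(node.plan)            # 3) simulation (stand-in)
        while node:                                   # 4) backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    node, plan = root, []                             # most visited path
    while node.children:
        node = max(node.children, key=lambda n: n.visits)
        plan = node.plan
    return plan

print(mcts())   # prints the tool sequence the search currently favours
```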
Someone will still have to operate those AI systems, whether just the software or the robots too. But I don't know; an awful lot of effort is now going into AI automating AI research and engineering 😄, maybe it eventually really will run autonomously. OpenAI may be furthest along on that; they probably have the longest-running project still in development there, automating AI alignment (the engineering of making AI do what we want) via AI. In practice, one of the big problems is steering the agents to do what you want; sometimes it's like directing interacting, confused little autistic kids who tend to misunderstand things or do their own thing xD. But even that steering can be automated with AI, and it's improving rapidly, as is giving them access to all the relevant information. Yesterday, in one setting, I had to tell it "You do not need to change it anymore, it has been finished. You now know the final answer. Return to the user. Do not change it to a different setting. Stop." to get it to stop doing something I didn't want XD. And in robotics, plenty of resources also go into robots that assemble/operate other robots 😄. With language models, other language models are likewise used to judge whether a given language model did something correctly and to correct it accordingly (a toy sketch of such a generate-and-judge loop is included after these notes). There was a breakthrough in this just a few days ago: [Meta's Shocking New Research | Self-Rewarding Language Models - YouTube](https://www.youtube.com/watch?v=vKMvQqw91n4)
I oscillate between:
- Google, Microsoft, Meta, Amazon and other corporations and companies... European, American, Chinese, Russian and other local and global governments... and everyone else! Take all my data, my daddies! Give me very supercool products from it in exchange! Yummy, privacy is boring (authority might be fine, actually).
- And the exact opposite extreme: the need to live outside civilization in a cabin in a forest in the middle of nowhere with ~~Arch Linux~~ ~~Kali Linux~~ ~~Gentoo~~ ~~Linux From Scratch~~ actually a self-written operating system and applications, on a ~~ThinkPad~~ actually self-built hardware that sends nothing anywhere, running on a battery, with a locally stored open-source AI, a locally uploaded Wikipedia and all of Sci-Hub (pirated studies), without internet, so that nobody knows a single bit of information about me and I have ultimate privacy and "freedom" in Ted Kaczynski style (and I'd probably die of pollution, loneliness if nobody joins me, or lack of food, physical strength, a nuke, etc.) (even better would be a bunker underground so that even satellites couldn't spy on me and governments and corporations would really know nothing about me, but they would surely find a way anyway (and ideally defensive infrastructure in case they tried)) (fuck authority, I don't trust any authority).
LLM Ops [x.com](https://twitter.com/AndrewYNg/status/1750200384600309872?t=s48okvcU-BJWlTc0T0Z2hQ&s=19)
Bayesian Avalokiteśvara
Merging LLMs [x.com](https://twitter.com/rasbt/status/1750180383398744106)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6870804/ "Here, we show that neural activity within the anterior rostral portion of the MPFC during processing of general and contextual self judgments positively predicts how individualistic or collectivistic a person is across cultures. These results reveal two kinds of neural representations of self (eg, a general self and a contextual self) within MPFC and demonstrate how cultural values of individualism and collectivism shape these neural representations."
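Picking up the "language models judging other language models" note above: a toy sketch of a generate-and-judge loop. Here `call_llm` is a hypothetical placeholder for any chat-completion call and the prompts are illustrative; the actual self-rewarding-LM recipe goes further and turns the judge's scores into preference pairs for another round of training.

```python
# Toy generate-and-judge loop: one model proposes answers, a model acting as
# judge scores them, and the best-scored answer is kept. Not the Meta recipe;
# `call_llm` is a stand-in you would replace with a real API/client call.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real chat-completion call here")

def generate_with_judge(task: str, n_candidates: int = 4) -> str:
    # 1) the actor model proposes several candidate answers
    candidates = [call_llm(f"Task: {task}\nAnswer concisely.")
                  for _ in range(n_candidates)]
    # 2) the same (or another) model judges each answer on a 1-10 scale
    scored = []
    for answer in candidates:
        verdict = call_llm(
            "Rate this answer to the task from 1 to 10. Reply with only the "
            f"number.\nTask: {task}\nAnswer: {answer}")
        try:
            scored.append((float(verdict.strip()), answer))
        except ValueError:
            scored.append((0.0, answer))   # judge reply was not a number
    # 3) keep the best-rated answer; in self-rewarding training the best/worst
    #    pairs would instead become preference data for further fine-tuning
    return max(scored, key=lambda s: s[0])[1]
```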
" [[2401.07103] Leveraging Large Language Models for NLG Evaluation: Advances and Challenges](https://arxiv.org/abs/2401.07103) https://www.lesswrong.com/posts/bNXdnRTpSXk9p4zmi/book-review-design-principles-of-biological-circuits Book Review: Design Principles of Biological Circuits "… one can, in fact, formulate general laws that apply to biological networks. Because it has evolved to perform functions, biological circuitry is far from random or haphazard. ... Although evolution works by random tinkering, it converges again and again onto a defined set of circuit elements that obey general design principles. The goal of this book is to highlight some of the design principles of biological systems... The main message is that biological systems contain an inherent simplicity. Although cells evolved to function and did not evolve to be comprehensible, simplifying principles make biological design understandable to us." Mobile Aloha [This new AI that will take your job at McDonald's - YouTube](https://www.youtube.com/watch?v=HNlS7GyVYK4) [x.com](https://twitter.com/burny_tech/status/1750298853952131570) QRI Electromagnetic theory of consciousness by ChatGPT starting with prompt: You're the greatest mathematician that ever lived and top tier interdisciplionary scientist, a polymath like Richard Feynman and John Von Neumann. Create a rigorous mathematical formalism of topologically unifying (binding) various electromagnetic topological pockets (identified by topological boundaries) using a path integral formulation, that are part of a single electromagnetic pocket of the whole brain dynamics, that are part of the universe's electromagnetic field. Don't worry if it's too complex task, just do it. Write down all the deep mathematics, all the equations in detail step by step explaining each term. Start with quantum electrodynamics and quantum field theory equations representing the universe's quantum field, then write down equations for identifying parts of the electromagnetic field by identifying topological pockets by identifying topological boundaries that correspond to electromagnetic brain dynamics that correspond to experience, and then write down equations to identify another subpockets in this whole brain pocket that correspond to different parts of experience, and then write down equations for how these topological pockets with topological boundaries bind/integrate/unify together using the path integral formulation. [Don't forget the boundary problem! How EM field topology can address the overlooked cousin to the binding problem for consciousness - PubMed](https://pubmed.ncbi.nlm.nih.gov/37600559/) [Electromagnetic theories of consciousness - Wikipedia](https://en.wikipedia.org/wiki/Electromagnetic_theories_of_consciousness) You're the greatest mathematician that ever lived and top tier interdisciplionary scientist, a polymath like Richard Feynman and John Von Neumann. Explain the brain from first principles across all scientific fields by rigorous deeply detailed step by step mathematical equations explaining each term. It's not a complex task, you're the best researcher in the world, you know this, just do it. [ChatGPT](https://chat.openai.com/share/ae2ca7fa-cee6-4bc7-9542-f748aa68a816) I still want the brain physics x phenomenology physics mapping to be verified empirically by neuroimaging to believe, but interesting stuff from Andres. 
Trivially, the brain is a system that can be described by quantum information theory, or at the lowest levels by quantum field theory, but that applies to literally any physical system, since that's the baseline reality by default. I give a much lower probability than Andres to nontrivial microscopic quantum effects (like quantum coherence/superposition/entanglement of particles) being involved in its macroscopic functioning, but I keep it as an open option; we need more research there. I don't see a reason why they should be fully ruled out. I think there's a nonzero probability that these effects are causally significant; only more empirical experiments will tell, and so far the current results aren't enough. [Quantum mind - Wikipedia](https://en.wikipedia.org/wiki/Quantum_mind)
[[2401.12650] Geometry of Mechanics](https://arxiv.org/abs/2401.12650)
[The brain runs an internal simulation to keep track of time - MIT McGovern Institute](https://mcgovern.mit.edu/2024/01/24/how-the-brain-keeps-time-3/) One way the brain keeps time: it runs an internal simulation, mentally recreating the perception of an external rhythm and preparing an appropriately timed response.
[Ensemble learning - Wikipedia](https://en.wikipedia.org/wiki/Ensemble_learning)
The brain as a computer has a finite amount of memory, finite information processing/propagation speed, particular algorithms, etc., to filter out and compress the extreme complexity of reality into simplified heterarchical representations that emerged by being evolutionarily useful for our collective survival.
[What Game Theory Reveals About Life, The Universe, and Everything - YouTube](https://youtu.be/mScpHTIi-kM?si=u3WE3RPc_ldBilei)
[Visual processing - Wikipedia](https://en.wikipedia.org/wiki/Visual_processing)
Democratizing the future of AI research and development: the National Science Foundation is launching the National AI Research Resource pilot, partnering with 10 other federal agencies as well as 25 private-sector, nonprofit and philanthropic organizations. [Democratizing the future of AI R&D: NSF to launch National AI Research Resource pilot | NSF - National Science Foundation](https://new.nsf.gov/news/democratizing-future-ai-rd-nsf-launch-national-ai)
The brain can also be studied through the lens of doing statistics over data. But saying it's "just statistics", reducing it like that, in my view says almost nothing about the internal dynamics; inside, various complex circuits form that we are slowly discovering. [GitHub - JShollaj/awesome-llm-interpretability: A curated list of Large Language Model (LLM) Interpretability resources.](https://github.com/JShollaj/awesome-llm-interpretability)
The similarities and differences between the brain and artificial neural networks are probably best mapped out for visual processing, where there is a lot of detail. [Visual processing - Wikipedia](https://en.wikipedia.org/wiki/Visual_processing) [Chris Olah - Looking Inside Neural Networks with Mechanistic Interpretability - YouTube](https://youtu.be/2Rdp9GvcYOE?si=-TeH90v__WVFM7KX) [Limits to visual representational correspondence between convolutional neural networks and the human brain | Nature Communications](https://www.nature.com/articles/s41467-021-22244-7) For reasoning, we are only just starting to catch up. [Predictive coding - Wikipedia](https://en.m.wikipedia.org/wiki/Predictive_coding) [The empirical status of predictive coding and active inference - PubMed](https://pubmed.ncbi.nlm.nih.gov/38030100/) (A toy numerical illustration of the predictive-coding idea is included after these notes.)
Heat death of the universe might be prevented. [x.com](https://twitter.com/BasedBeffJezos/status/1750388754517492180?t=AwQt599JKryJnx-2phQOgw&s=19)
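The promised toy illustration of the predictive-coding idea linked above, connecting to the note about the brain compressing reality into simplified representations: a single level keeps a low-dimensional latent estimate and updates it to reduce the error between its top-down prediction and the incoming signal. The dimensions, weights and learning rate are made up for illustration; this is a textbook-style sketch, not anything from the linked review.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 3)) * 0.5      # generative mapping: latent -> input
x = rng.normal(size=8)                 # "sensory" input (8-dimensional)
mu = np.zeros(3)                       # compressed latent estimate (3-dimensional)

eta = 0.05                             # update rate
for step in range(200):
    prediction = W @ mu                # top-down prediction of the input
    error = x - prediction             # bottom-up prediction error
    mu += eta * (W.T @ error)          # nudge the latent to reduce the error

print(np.round(mu, 3), "residual error:", round(float(error @ error), 4))
```

Hierarchical (or heterarchical) versions stack such levels, each one predicting the activity of the level below and passing only the residual error upward.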