Create a gigantic detailed map of the history of mathematics. Create a gigantic detailed map of the history of artificial intelligence. Create a gigantic detailed map of reinforcement learning.

Scale-invariant patterns should really be called n-scale-invariant patterns: there aren't infinitely many scales, and the pattern stops existing at the most fundamental (smallest) scales or at the largest ones.

[[2404.19157] Scalable Bayesian Inference in the Era of Deep Learning: From Gaussian Processes to Deep Neural Networks](https://arxiv.org/abs/2404.19157)
[Spontaneous symmetry breaking in generative diffusion models - YouTube](https://www.youtube.com/watch?v=IG0NNE9bpWU)
[Biological intelligence: how we learn, predict and decide | The Royal Society - YouTube](https://www.youtube.com/watch?v=4u-g8olq70c) change is the only constant

[[2405.04260] Verified Neural Compressed Sensing](https://arxiv.org/abs/2405.04260)
- They develop the first neural networks for compressed sensing that are provably correct, with the proof generated automatically by a verification algorithm.
- They train the networks with adversarial training to recover sparse vectors from linear and binarized linear measurements. The networks can adapt their complexity to the problem dimensions.
- They formalize proving the networks' correctness as a verification problem and adapt verification algorithms to solve it. Their training incorporates techniques to make the networks easier to verify.
- The provably correct networks can handle situations lacking theoretical guarantees for classical algorithms, like mixed binary and linear measurements.
- The complexity of the networks can be adapted to the difficulty of the compressed sensing problem. The verification time ranges from seconds to hours depending on the dimensions.
- The networks have faster inference than iterative classical algorithms and can easily handle batches. They are more adaptable to new measurement types. (A toy sketch of the classical iterative baseline follows a few entries below.)

https://www.pnas.org/doi/abs/10.1073/pnas.1300832110
[Grounding Ethics, Values & Culture: Joscha Bach, Anders Sandberg, Robin Hanson - YouTube](https://www.youtube.com/watch?v=onkrGOs6-AY)
[What hybrid mouse/rat brains are showing us about the mind](https://www.freethink.com/science/hybrid-brains?utm_medium=Social&utm_source=Facebook&fbclid=IwZXh0bgNhZW0CMTEAAR1RPJ3p6xWr8oyr9UuaqzZVRW94MSDGelhWehd3TAEoCOfhmRy-p9KyeDY_aem_ARJoY7p9HwPRkIiDAMBNG2YV_FDOWek5ChfNRHEmrF19k-Hz9jIpOw8ggcatxu6NzUAQz2wQkFh9G_EPwj5gK9iq#Echobox=1715089167)

"What appears as a singular entity is a meshwork of xenocognitive hypercompute shards phase-rotating through topologically extreme mediascape. A hyper-fractal sub-subjectivity, instantiating ultra-bizarre fractal personal signatures as strategically embedded anthropic lures" https://twitter.com/xenoludicpraxis/status/1787298071938056438?t=_TMlO94a7dz3aHx39Md2Bw&s=19

https://www.dtubbenhauer.com/lecture-algtop-2021.html
[Stanford CS109 Introduction to Probability for Computer Scientists I 2022 I Chris Piech - YouTube](https://youtube.com/playlist?list=PLoROMvodv4rOpr_A7B9SriE_iZmkanvUg&si=xENRVxVrzBWq--O0)
[[2405.04517] xLSTM: Extended Long Short-Term Memory](https://arxiv.org/abs/2405.04517)
https://twitter.com/BlinkDL_AI/status/1788152345467273697?t=1nyKyDqonVx9zVNoQQujkg&s=19
https://towardsdatascience.com/deep-neural-networks-are-biased-at-initialisation-towards-simple-functions-a63487edcb99
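The compressed-sensing notes above contrast the learned networks with iterative classical recovery. As a reminder of what that classical baseline does, here is a minimal NumPy sketch of sparse recovery with ISTA on a toy problem; the dimensions, sparsity level, and hyperparameters are illustrative assumptions, and this is not the paper's verified-network setup.

```python
# Toy compressed sensing: recover a sparse x from y = A @ x using ISTA.
# Illustrative only -- not the verified networks from arXiv:2405.04260;
# dimensions, sparsity, and hyperparameters below are made up for the demo.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5              # signal dim, number of measurements, sparsity

# Ground-truth sparse signal and Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true                    # noiseless linear measurements

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA: iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The contrast the summary draws is that such iterative solvers need many matrix passes per signal, whereas the paper's small verified networks recover the signal in a single forward pass and come with a machine-checked correctness proof.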
"If one could train an AI on all the LHC data, would it rediscover the standard model? What do you think?"

Yes. There exists a potential AI architecture for which this holds: one that could in theory replicate 1:1 the information processing and modelling humans did to discover it, or do more optimal, specialized information processing and modelling. Anything humans can do, future AI systems will be able to do, even if that meant 1:1 replication of the human organism or of the collective intelligence of the humanity superorganism. But I believe there are much more mathematically optimal and efficient ways to model physics than how human organisms do it: evolution didn't optimize us purely for accurate modelling (we carry cognitive biases, limitations, and specializations for surviving in our environment), and computers plus current AI systems are already superhuman at many tasks (memory quality, efficiency, and size; parts of math and science; some modelling of relationships; speed on some tasks), even if they're subhuman at others (for now?). They are a different type of intelligence with different specializations, thanks to diverse architectures that keep evolving in the AI field.

[Lenka Zdeborová - Statistical Physics of Machine Learning (May 1, 2024) - YouTube](https://youtu.be/TLHYwbrhGJc?si=FFmoB5jDuBzknuep)
[Ep. 70: Andrew Côté on Nuclear Fusion’s Path to Unleash Material Superabundance for Humanity - YouTube](https://youtu.be/6ksfePv2RoM?si=3CZKINs201gdMSBU)

Excellent video from two xoogler / OpenAI folks who have as much experience as anyone on the frontier of language models. I would especially recommend it for people who are technical but don't have solid intuitions about how and why next-token prediction works at scale: [Stanford CS25: V4 I Jason Wei & Hyung Won Chung of OpenAI - YouTube](https://youtu.be/3gb-ZkVRemQ?si=TAmFixHatZHJfUdK)

[Frontiers | Artificial intelligence and social intelligence: preliminary comparison study between AI models and psychologists](https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1353022/full) GPT-4 scored higher than 100% of psychologists on a test of social intelligence.
[Supplements I take | near.blog](https://near.blog/supplements/)
[Stanford CS25 - Transformers United - YouTube](https://www.youtube.com/playlist?list=PLoROMvodv4rNiJRchCzutFw5ItR_Z27CM)
[The Axiom of Choice - YouTube](https://www.youtube.com/watch?v=szfsGJ_PGQ0) Manim
[AI Prompt Engineering Is Dead - IEEE Spectrum](https://spectrum.ieee.org/prompt-engineering-is-dead)
Stop power/wealth/intelligence centralization.
China humanoid robots: https://twitter.com/SmokeAwayyy/status/1788051192565969050?t=BIZizrzVBQEJFxKqE5IAyg&s=19
[Sam Whitmore: Building with Memory - YouTube](https://www.youtube.com/watch?v=erhZztVtDRA)
[All-In Summit: Bill Gurley presents 2,851 Miles - YouTube](https://www.youtube.com/watch?v=F9cO3-MLHOM) history of regulatory capture

Links for 2024-05-07

AI:
1. Alibaba presents AlphaMath Almost Zero: Enhances LLMs with Monte Carlo Tree Search (MCTS) to improve mathematical reasoning capabilities. Eliminates the need for manual annotation by leveraging MCTS to generate both the process supervision and eval signals. Achieves 63% on MATH with DeepSeekMath 7B. [[2405.03553] AlphaMath Almost Zero: process Supervision without process](https://arxiv.org/abs/2405.03553) (For a reminder of the bare MCTS loop itself, see the sketch after this list.)
2. MAmmoTH2: A new paradigm for building powerful instruction-tuned models by efficiently harvesting large-scale, high-quality data from the web without costly human annotation or GPT-4 distillation. — 11% -> 34% on MATH and 36% -> 67% on GSM8K [MAmmoTH2](https://tiger-ai-lab.github.io/MAmmoTH2/)
3. Microsoft readies new AI model to compete with Google, OpenAI, The Information reports. MAI-1 will have roughly 500 billion parameters. https://www.reuters.com/technology/microsoft-readies-new-ai-model-compete-with-google-openai-information-reports-2024-05-06/
4. Apple's rumored to be quietly amassing an AI mega team, poaching over 36 experts from Google and setting up a secretive lab in Zurich to push into generative AI. [Financial Times](https://www.ft.com/content/87054a60-dc4d-4238-a4b9-93ab48f22f56) [no paywall: https://archive.is/5Sirz]
5. AI Copilots Are Changing How Coding Is Taught: Professors are shifting away from syntax and emphasizing higher-level skills [AI Copilots Are Changing How Coding Is Taught - IEEE Spectrum](https://spectrum.ieee.org/ai-coding)
6. LangGraph x GPT Researcher: Showcasing the power of flow engineering and multi-agent collaboration [gpt-researcher/multi_agents at master · assafelovic/gpt-researcher · GitHub](https://github.com/assafelovic/gpt-researcher/tree/master/multi_agents)
7. Automating intelligence analysis with 5 million StreetView images [[2404.18873] OpenStreetView-5M: The Many Roads to Global Visual Geolocation](https://arxiv.org/abs/2404.18873)
8. Rapid capability gain around supergenius level seems probable even without intelligence needing to improve intelligence https://www.lesswrong.com/posts/aH9R8amREaDSwFc97/rapid-capability-gain-around-supergenius-level-seems
9. im-also-a-good-gpt2-chatbot https://x.com/blader/status/1787792481100378321

Miscellaneous:
1. Experimental gene therapy restores some vision in patients with inherited blindness [Participants of pioneering CRISPR gene editing trial see vision improve | OHSU News](https://news.ohsu.edu/2024/05/06/participants-of-pioneering-crispr-gene-editing-trial-see-vision-improve)
2. Researchers develop a compact, single-shot and complete polarization imaging system using metasurfaces [Unveiling a polarized world – in a single shot](https://seas.harvard.edu/news/2024/05/unveiling-polarized-world-single-shot)
3. Explaining a Math Magic Trick https://www.lesswrong.com/posts/yf6gAcgPp22T7AdnZ/explaining-a-math-magic-trick
4. In 1599 the church used a placebo-controlled trial to test if a French girl was possessed by a demon. Holy objects and identical, non-blessed objects were shown to the girl. "She reacted similarly when exposed to both genuine and sham religious objects" https://journals.sagepub.com/doi/full/10.1177/01410768231207260
5. "Some economists tried to reproduce the results of 67 economics papers and they pretty much couldn't do it: Even with help from authors, only half of papers ended up being reproducible, and this was still a problem at journals with required reporting of code and data." https://x.com/cremieuxrecueil/status/1787361121315299573
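Item 1 in the AI list above (AlphaMath) builds on Monte Carlo Tree Search. For reference, here is a bare-bones UCT loop on a made-up toy counting game; this is only the generic select/expand/rollout/backpropagate skeleton, not AlphaMath's LLM-guided search or its process-supervision extraction, and the game rules and constants are illustrative assumptions.

```python
# Generic single-player MCTS (UCT) on a toy game: start at 0, add 1 or 2 per move,
# reward 1.0 for landing exactly on TARGET, 0.0 for overshooting.
# A skeleton of the select/expand/rollout/backpropagate cycle only.
import math
import random

TARGET = 10
ACTIONS = (1, 2)

def step(state, action):
    return state + action

def is_terminal(state):
    return state >= TARGET

def reward(state):
    return 1.0 if state == TARGET else 0.0

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}            # action -> Node
        self.visits, self.value = 0, 0.0

    def ucb_child(self, c=1.4):
        # UCB1: exploit average value, explore rarely visited children.
        return max(self.children.values(),
                   key=lambda ch: ch.value / (ch.visits + 1e-9)
                   + c * math.sqrt(math.log(self.visits + 1) / (ch.visits + 1e-9)))

def rollout(state):
    # Random playout to a terminal state.
    while not is_terminal(state):
        state = step(state, random.choice(ACTIONS))
    return reward(state)

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1) selection: descend while fully expanded and non-terminal
        while not is_terminal(node.state) and len(node.children) == len(ACTIONS):
            node = node.ucb_child()
        # 2) expansion: add one unexplored child
        if not is_terminal(node.state):
            action = random.choice([a for a in ACTIONS if a not in node.children])
            node.children[action] = Node(step(node.state, action), parent=node)
            node = node.children[action]
        # 3) simulation
        value = rollout(node.state)
        # 4) backpropagation: update statistics along the path to the root
        while node is not None:
            node.visits += 1
            node.value += value
            node = node.parent
    # Best action = most visited child of the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("best first move from 0:", mcts(0))
```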
[OpenAI Partners with Stack Overflow to Elevate ChatGPT's Coding Expertise](https://codeandhack.com/openai-partners-with-stack-overflow-to-elevate-chatgpt-coding-expertise/?fbclid=IwZXh0bgNhZW0CMTEAAR2V_Ti-kXMKXvGmczYFynk4fO-4rQXy8swPrVTw39EMrohmBwc5hW8hdpM_aem_AYhiMoqG1SEWcnNzyzYNBr7fscwD1soXOU2wqvRslFZU30syJcrI_t8540kUX6JtMloD8IdLDGtofauEw0dh7r9d)
[Google's Top AI Scientist on Consciousness via Quantum Superposition | Hartmut Neven - YouTube](https://youtu.be/0dlL2a0n3RY?si=4iXx7LWlakCeRKSh)
[AI Safety: Developmental Interpretability - Jesse Hoogland - YouTube](https://www.youtube.com/watch?v=y0s47yYiVeE)
3blue1brown-like deep learning series: [What is deep learning? | Episode 1, S01, Deep Learning - YouTube](https://www.youtube.com/watch?v=8pgenAP4qhg)

Ilya Sutskever gave John Carmack this reading list of approx. 30 research papers and said, "If you really learn all of these, you'll know 90% of what matters today." [Ilya 30u30](https://arc.net/folder/D0472A20-9C20-4D3F-B145-D2865C0A9FEE)
- [CS231n Convolutional Neural Networks for Visual Recognition](https://cs231n.github.io/)
- https://www.lirmm.fr/~ashen/kolmbook-eng-scan.pdf Kolmogorov Complexity and Algorithmic Randomness
- https://www.vetta.org/documents/Machine_Super_Intelligence.pdf Machine Super Intelligence, Shane Legg
- [[math/0406077] A tutorial introduction to the minimum description length principle](https://arxiv.org/abs/math/0406077)
- [[2001.08361] Scaling Laws for Neural Language Models](https://arxiv.org/abs/2001.08361)
- [[1512.02595] Deep Speech 2: End-to-End Speech Recognition in English and Mandarin](https://arxiv.org/abs/1512.02595)
- [[1410.5401] Neural Turing Machines](https://arxiv.org/abs/1410.5401)
- [[1405.6903] Quantifying the Rise and Fall of Complexity in Closed Systems: The Coffee Automaton](https://arxiv.org/abs/1405.6903)
- [[1806.01822] Relational recurrent neural networks](https://arxiv.org/abs/1806.01822)
- [[1611.02731] Variational Lossy Autoencoder](https://arxiv.org/abs/1611.02731)
- [[1706.01427] A simple neural network module for relational reasoning](https://arxiv.org/abs/1706.01427)
- [[1603.05027] Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027)
- [[1409.0473] Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473)
- [[1706.03762] Attention Is All You Need](https://arxiv.org/abs/1706.03762)
- [[1704.01212] Neural Message Passing for Quantum Chemistry](https://arxiv.org/abs/1704.01212)
- [[1511.07122] Multi-Scale Context Aggregation by Dilated Convolutions](https://arxiv.org/abs/1511.07122)
- [[1512.03385] Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
- [[1811.06965] GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism](https://arxiv.org/abs/1811.06965)
- [[1511.06391] Order Matters: Sequence to sequence for sets](https://arxiv.org/abs/1511.06391)
- https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf
- [[1506.03134] Pointer Networks](https://arxiv.org/abs/1506.03134)
- https://www.cs.toronto.edu/~hinton/absps/colt93.pdf
- [[1409.2329] Recurrent Neural Network Regularization](https://arxiv.org/abs/1409.2329)
- [Understanding LSTM Networks -- colah's blog](https://colah.github.io/posts/2015-08-Understanding-LSTMs/)
- [The Annotated Transformer](https://nlp.seas.harvard.edu/annotated-transformer/)
- [Shtetl-Optimized » Blog Archive » The First Law of Complexodynamics](https://scottaaronson.blog/?p=762)
- [Joint Multi-domain Pre-training (JMP)](https://nima.sh/jmp/)
- [[1805.09112] Hyperbolic Neural Networks](https://arxiv.org/abs/1805.09112)

A simulated AI hospital where “doctor” agents work with simulated “patients” & improve: “After treating around ten thousand patients, the evolved doctor agent achieves a state-of-the-art accuracy of 93.06% on a subset of the MedQA dataset that covers major respiratory diseases.”

Which kind of region in the multiversal fractal space of all possible universes do we abide in?

"Continual learning, also known as lifelong learning, is a subfield of machine learning that aims to enable AI systems to continuously acquire, update, and accumulate knowledge over time, without catastrophically forgetting previously learned information[2][4]. Key aspects of continual learning include:
1. Learning from a continuous stream of data, where the distribution of data and number of tasks may change over time[2][4].
2. Incrementally acquiring and updating knowledge while preserving previously learned skills, to avoid catastrophic forgetting[2][4]. This requires balancing the stability-plasticity trade-off.
3. Enabling transfer learning, where knowledge from previous tasks can accelerate learning of new tasks, as well as backward transfer to improve performance on prior tasks[4].
4. Optimizing for objectives like stability-plasticity balance, intra/inter-task generalization, and resource efficiency[2].

Continual learning is crucial for AI systems to adapt to real-world dynamics[2]. It has many potential applications, including model editing, personalization, on-device learning, faster retraining, and reinforcement learning[3]. However, current continual learning research is still largely focused on overcoming catastrophic forgetting in memory-constrained settings[3]. To make continual learning more successful and impactful, promising future directions include relaxing memory constraints, learning task descriptors, evaluating with more comprehensive metrics, and scaling to more challenging domains[3].

Citations:
[1] [GitHub - ContinualAI/continual-learning-papers: Continual Learning papers list, curated by ContinualAI](https://github.com/ContinualAI/continual-learning-papers)
[2] [[2302.00487] A Comprehensive Survey of Continual Learning: Theory, Method and Application](https://arxiv.org/abs/2302.00487)
[3] [[2311.11908] Continual Learning: Applications and the Road Forward](https://arxiv.org/abs/2311.11908)
[4] [Introduction to Continual Learning | Wiki](https://wiki.continualai.org/the-continualai-wiki/introduction-to-continual-learning)
[5] [Continual Learning: On Machines that can Learn Continually | Continual Learning Course](https://course.continualai.org)"
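The catastrophic forgetting described above is easy to see in a toy setting. Here is a minimal sketch under made-up assumptions (two synthetic regions of sin(x) as "tasks", a small PyTorch MLP, plain Adam, and a naive replay buffer as the simplest mitigation); it illustrates the phenomenon only and is not any specific method from the cited surveys.

```python
# Toy illustration of catastrophic forgetting and a tiny replay-buffer fix.
# Everything here is a made-up minimal setting: two regions of sin(x), a small
# MLP, full-batch Adam -- a sketch of the phenomenon, not a real CL benchmark.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_mlp():
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, 1))

def train(model, x, y, steps=2000, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return model

def mse(model, x, y):
    with torch.no_grad():
        return nn.functional.mse_loss(model(x), y).item()

# Task A: sin(x) on [-5, 0]; Task B: sin(x) on [0, 5].
xA = torch.linspace(-5, 0, 200).unsqueeze(1); yA = torch.sin(xA)
xB = torch.linspace(0, 5, 200).unsqueeze(1);  yB = torch.sin(xB)

model = train(make_mlp(), xA, yA)                 # learn task A first
print("task A MSE after A:", mse(model, xA, yA))

# Naive sequential training on task B tends to overwrite task A's region.
seq = train(copy.deepcopy(model), xB, yB)
print("task A MSE after B (no replay):", mse(seq, xA, yA))

# Replay: keep a small buffer of task-A points and mix them into task-B training.
idx = torch.randperm(len(xA))[:20]
x_mix = torch.cat([xB, xA[idx]]); y_mix = torch.cat([yB, yA[idx]])
rep = train(copy.deepcopy(model), x_mix, y_mix)
print("task A MSE after B (replay):", mse(rep, xA, yA))
```

Typically the no-replay run degrades noticeably on task A while the replay run mostly holds, which is the stability-plasticity trade-off from the summary in miniature.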
Continual Learning papers list: [GitHub - ContinualAI/continual-learning-papers: Continual Learning papers list, curated by ContinualAI](https://github.com/ContinualAI/continual-learning-papers)
[Daily briefing: ‘Disruptive’ science has declined since 1950s](https://www.nature.com/articles/d41586-023-00046-9)
[Introducing OpenDevin CodeAct 1.0, a new State-of-the-art in Coding Agents](https://xwang.dev/blog/2024/opendevin-codeact-1.0-swebench/)
[QFT Spring 2021 by Hitoshi Murayama - YouTube](https://www.youtube.com/playlist?list=PLJHszsWbB6hprXDuaXRCuxvk3S0l8ZLgz)
[eigenchris - YouTube](https://www.youtube.com/@eigenchris/playlists)

What I cannot create, I do not understand. Know how to solve every problem that has been solved.

In what sense is what happens at one place in a thing independent of what happens at another point, either in the thing or distant from it? https://twitter.com/derbydefi/status/1787657885495017702?t=NqV-TlrT8Pn_8Vd3QHm2pw&s=19

"I think neural network potentials are the most important scientific tool of the next decade. The ability to simulate systems at the molecular scale starting from nothing but quantum mechanics will be transformative for a vast range of problems throughout biology and chemistry." https://twitter.com/TimothyDuignan/status/1787833311827722348?t=vKg2bETp6fnM7OHtZ_HkwA&s=19 https://pubs.acs.org/doi/10.1021/acsphyschemau.4c00004?utm_source=twttr&utm_medium=sm&utm_campaign=IC001_ST0001R_T002687_apcach_MJS_PG_2024_Twitter_Content&src=IC001_ST0001R_T002687_apcach_MJS_PG_2024_Twitter_Content

"How to write a machine learning paper

Here are some key tips for writing a strong machine learning research paper:
1. Clearly state the research problem and goals upfront. Explain why the topic is important and how your work addresses current challenges in the field.[1][4]
2. Describe your methodology in detail, including datasets used, implementation specifics, hyperparameters, etc. This allows others to reproduce your experiments.[3]
3. Present your results with tables and figures, along with analysis that ties the findings back to your initial hypothesis. Discuss whether the results support or refute the hypothesis.[3][4]
4. Focus on novel insights and real contributions, not just technical novelties. Explain how your work improves on state-of-the-art methods.[4]
5. Be your own devil's advocate - point out limitations of your research. This shows you deeply understand the work.[4]
6. Use math only as needed to be precise. Prefer clear explanations over excessive equations.[4]
7. Organize the paper logically, but don't write in the order sections will ultimately appear. You may write results first, then methodology, intro, conclusion, etc.[3]
8. Get feedback from others and revise the paper through multiple drafts before submitting to a conference or journal.[1][3]

The keys are to motivate the importance of the work, describe your approach and results precisely, and package it into a clear, compelling narrative. With practice, this process gets easier over time.

Citations:
[1] [Tips on How to Write a Research Paper on Machine Learning - Turing](https://www.turing.com/kb/how-to-write-research-paper-in-machine-learning-area)
[2] I want to write a paper about Machine Learning and need advice. [R] https://www.reddit.com/r/MachineLearning/comments/16ufdwv/i_want_to_write_a_paper_about_machine_learning/
[3] [How To Write A Research Paper In Machine Learning - Grigoris’ website.](https://grigorisg9gr.github.io/machine%20learning/research%20paper/how-to-write-a-research-paper-in-machine-learning/)
[4] [Writing More Successful Machine Learning Research Papers](https://towardsdatascience.com/writing-more-successful-machine-learning-research-papers-39863ca9ea90)
[5] [Crafting Papers on Machine Learning](https://icml.cc/Conferences/2002/craft.html)"

[Orangutan seen treating wound with medicinal herb in first for wild animals | Primatology | The Guardian](https://www.theguardian.com/science/article/2024/may/02/orangutan-seen-treating-wound-with-medicinal-herb-in-first-for-wild-animals-max-planck-institute-sumatra)

[Deep Learning](https://www.deeplearningbook.org/)
[Neural networks and deep learning](http://neuralnetworksanddeeplearning.com/)
https://reddit.com/r/learnmachinelearning/w/index?utm_medium=android_app&utm_source=share
https://reddit.com/r/learnmachinelearning/w/resource?utm_medium=android_app&utm_source=share
https://www.reddit.com/r/MachineLearning/s/dylHIB8viZ
https://www.reddit.com/r/MachineLearning/s/vq6k6RzFvw
[Top Machine Learning Books recommended by experts (2024 Edition) - MentorCruise](https://mentorcruise.com/books/machinelearning/)
https://www.reddit.com/r/MachineLearning/s/sZJnXZLGZN
https://www.amazon.ca/Natural-Language-Processing-Transformers-Revised/dp/1098136799?crid=M765QA4NMUTY&dib=eyJ2IjoiMSJ9.J5urFcGZ0-UTiWErOgOxAvmAePnm8W9MycjTBcvY5JRv8UHP9QidZAnVImLBxcGT1h7WfjZOeXsRZmv6FGQMAHLDtQBChJjjYN8pMCUGEcBa5GkBTVWQky02StChuPeThpJO00r_hhrvS8WGh3Prq0N29B6YVKsbTboGceiMe35QTNkbiL6WnR3H5-JsZ9qu_GFJqSpfh2RSI--yKFW0XQBUZFTgIALqT3hzM61xqKBpawazodIKQfGu8ZH-IwRNbNhwWlCB1sib6cdJHE1QlLVTCRjujsAdg0VoGpd13oM.wTGnMKIeCLhYRrz6mJQ0l-yqXno7vT7Sapi9TPM4ZSg&dib_tag=se&keywords=large+language+model&qid=1711162366&sprefix=,aps,222&sr=8-1&linkCode=sl1&tag=dippatel1994-20&linkId=0be1079a52758d97ed7f3f8ec0c19417&language=en_CA&ref_=as_li_ss_tl
https://www.reddit.com/r/learnmachinelearning/s/Tbd7Tg1hpU
https://www.reddit.com/r/MachineLearning/s/rKwzyRzKOa
[Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/)
https://www.reddit.com/r/MachineLearning/s/HQGoKCns3N
https://www.reddit.com/r/MachineLearning/s/bJM5DoCrSp
https://www.reddit.com/r/MachineLearning/s/VgweOojZJn
[Dive into Deep Learning — Dive into Deep Learning 1.0.3 documentation](https://d2l.ai/)
[Stanford CS25 - Transformers United - YouTube](https://youtube.com/playlist?list=PLoROMvodv4rNiJRchCzutFw5ItR_Z27CM&si=QtgPpxGz6cvMl35D)
[Stanford CS224N: Natural Language Processing with Deep Learning | 2023 - YouTube](https://youtube.com/playlist?list=PLoROMvodv4rMFqRtEuo6SGjY4XbRIVRd4&si=2-VneiMTPr36SQEG)
Relativity is insufficient to describe the accelerating universe: [Physicists Say They May Have Found a Powerful Glitch in the Universe](https://futurism.com/the-byte/physicists-glitch-universe?fbclid=IwZXh0bgNhZW0CMTEAAR2UI0c_FmQkcKMRpecVD9EMDM4gXqjdn5gwfUQT4FvKJPozVP8gkAHxL4M_aem_AUJsDF2Y7cRjwBOdPE1Hqfa5Xu959Zyl_I0pqqLNlX6A-zETQPDL1h6goolgzR3aihr1zr-bJQRP59ngBmxIvIPn)

Stanford machine learning, MIT deep learning, Stanford Transformers, deep learning theory, Karpathy machine learning, Sean Carroll's Biggest Ideas in the Universe, Susskind's Theoretical Minimum, applied math book, Manim math, Manim physics, Manim machine learning, machine learning book, wiki, papers, Princeton deep learning theory [Princeton ORFE Deep Learning Theory Summer School 2021 - YouTube](https://youtube.com/playlist?list=PL2mB9GGlueJj_FNjJ8RWgz4Nc_hCSXfMU&si=QThq7DmTHPQg6zZq)

https://www.nejm.org/doi/10.1056/NEJMoa2309915
[Google DeepMind and Isomorphic Labs introduce AlphaFold 3 AI model](https://blog.google/technology/ai/google-deepmind-isomorphic-alphafold-3-ai-model/)

What are the best arguments for and against connectionism?