## Tags

- Part of: [[Artificial Intelligence x Everything]]
- Related:
- Includes:
- Additional:

## Definitions

- [[Science]] x [[Natural science]] x [[Artificial Intelligence]] x [[Machine learning]] x [[Data science]] x [[Statistics]]

## Landscape

- [[Artificial Intelligence x Mathematics]]
- [[Artificial Intelligence x Physics]]
- [[Artificial Intelligence x Chemistry]]
- [[Artificial Intelligence x Biology]]
- [[Artificial Intelligence x Neuroscience]]
- [[Artificial Intelligence x Quantum computing]]
- [[Artificial intelligence x Psychology]]
- [The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery](https://sakana.ai/ai-scientist/)
- [Autonomous chemical research with large language models | Nature](https://www.nature.com/articles/s41586-023-06792-0)
- [AlphaFold - Google DeepMind](https://deepmind.google/technologies/alphafold/)
- [AlphaProteo generates novel proteins for biology and health research - Google DeepMind](https://deepmind.google/discover/blog/alphaproteo-generates-novel-proteins-for-biology-and-health-research/)
- [Millions of new materials discovered with deep learning - Google DeepMind](https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/)

## Brainstorming

I want machine scientists developing theories and experiments about the universe that transcend human limitations.

AI will model the world in ways completely incomprehensible to how humans model the world. And it will do it in much more optimal ways: it will grok physics far more optimally, in alien ways compared to how human brains evolved to model it in our evolutionary environment. The space of all possible modelling systems is vast, and we, and nature, have only scratched its surface so far. The current architectures are just the beginning of all of this: deep learning models, transformer models, diffusion models, RL CoT models, neurosymbolics with MCTS (AlphaZero), statistical models, etc.
Are current AI approaches, in the current paradigm, enough for radically new scientific discoveries and paradigm shifts? AlphaFold technically isn't an LLM: AlphaFold 2 is built on the attention-based Evoformer, and AlphaFold 3 replaces it with the Pairformer plus a diffusion module, and it has driven major progress in protein structure prediction. But I think for leaps in physics we might need to go beyond deep learning.

Or maybe some kind of self-play could bootstrap more optimal models? Something like AlphaGo's move 37? Or could you give future AIs for predicting physics an RL reward signal in the form of empirical predictive results from experiments? Could that bootstrap novel results? Would that eventually be feasible if you spent enough infrastructure and compute on these experiments? Or could physics simulations find shortcuts in training, similarly to how we now train robotics in simulation using RL? Or do we need a fundamentally different architecture, based more on biology, physics, or the mathematics of information processing?

How to: actually grok the currently known equations of physics, and fundamental physics, as circuits? Generalize them more strongly, in a non-brittle way? Possibly go beyond them, further out of distribution? So many unanswered questions...

AI x physics is an endless rabbit hole: you can study AI using methods from physics, you can study physics using AI models, you can try to make AI systems model physics as accurately as possible through physics biases, you can design better AI architectures using physics, many AI architectures are applied physics, etc.

Nerds are fighting over whether AI can solve the Riemann hypothesis or invent quantum mechanics from scratch, while normies are happy that it can help them solve the simple algebraic equations they struggled with in school.

I don't think AI will fully replace scientists.
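The "RL reward signal from empirical predictive results" idea can be illustrated with a toy sketch. This is purely my illustrative assumption, not any real system: a simulated "experiment" (free-fall drop times with sensor noise) stands in for the lab, the model is a single free parameter `g_hat` of a candidate physical law, and the reward is the negative mean squared error between the model's predictions and fresh experimental measurements. A naive hill-climbing search then improves the model using only that reward.

```python
import random

random.seed(0)

TRUE_G = 9.81  # ground truth, hidden inside the "experiment"

def run_experiment(t):
    """Simulated lab measurement: distance fallen after time t, with sensor noise."""
    return 0.5 * TRUE_G * t * t + random.gauss(0, 0.05)

def reward(g_hat, trials=50):
    """Reward = negative mean squared prediction error against empirical data."""
    err = 0.0
    for _ in range(trials):
        t = random.uniform(0.1, 2.0)
        pred = 0.5 * g_hat * t * t  # model's prediction under candidate law
        err += (run_experiment(t) - pred) ** 2
    return -err / trials

# Naive hill-climbing over the model parameter, driven only by the reward signal.
g_hat, best = 5.0, reward(5.0)
step = 1.0
for _ in range(200):
    cand = g_hat + random.gauss(0, step)
    r = reward(cand)
    if r > best:
        g_hat, best = cand, r
    step *= 0.99  # anneal the search radius

print(round(g_hat, 2))  # recovered value should be close to 9.81
```

In a real version, "run the experiment" is the expensive step, which is exactly where the infrastructure-and-compute question above bites, and where simulation shortcuts (as in sim-to-real robotics) would substitute for most reward evaluations.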
I think human intelligence will always have a place in science, and adding more diverse intelligences into the mix acts more as a multiplier of our capabilities and as an upgrade in the places where our brain's evolution-made architecture is too limited and constrained. That seems to have been the case so far: each type of intelligence excels in different ways, and they are even stronger together. And if it leads to, for example, breakthroughs in physics or curing diseases faster, then I think that's amazing. Maybe we will somehow create systems that can replicate basically everything humans do in science, but I don't think that will be soon.

My ideal sci-fi would be about a benevolent superintelligence that cures all diseases, makes all beings happy, figures out how biology, fundamental physics, consciousness, intelligence, etc. work through countless scientific breakthroughs, understands all math, understands everything in philosophy, creates post-scarcity abundance for all, creates infinitely fascinating complex art, and in the process grows ever more intelligent and creative, maximizes morphological freedom, and does no harm. A benevolent superintelligence explosion. [[Artificial intelligence x Science]]

Yeah, it's a bit of an unrealistic superutopia that I like dreaming about, which is why it's science fiction. My current biggest fear in the real world is tech companies centralizing too much power for themselves via AI and other technology and other means (economic, political, ...), so that's partially why I want open source to win and try to support it, while trying to reverse engineer the moat of the tech companies. To democratize the power.
The issue I started to have with the AI safety community is that a big part of them basically wants something like government surveillance of GPUs and training runs to prevent unsafe AI, which can so easily turn into a surveillance dystopia and destroy open source completely. Plus, big tech is merging with government as well, to get the fewest restrictions for itself while wanting to restrict others, including open source. It feels like that will make power dynamics even more concentrated instead. A lot of luddites also joined the AI safety movement, I think.

When I look at the current world and at history, a lot of the time when too much power in any form was concentrated in some centralized entity, it started killing freedom for everyone else. And I view AI as a technology with the potential to give ultimate power: centralized power if it's in the hands of a few, or decentralized power if it's in the hands of the people.

I also no longer really believe the assumption that increasing intelligence automatically leads to rogueness. I think intelligence is independent of that, and also independent of power seeking. For example, we have galaxy-brain scientists who are not at all rogue or power seeking, and they are controlled by, IMO, less intelligent managers and politicians. It depends so much. My favorite definitions of intelligence include things like modelling capability, predictive capability, generalization capability, etc., over some data, which to me are decoupled from agency and from goals of changing the world.
[[Thoughts AI x physics]]
[[Thoughts AI science]]
[[Thoughts (computational) neuroscience brain]]

## Resources

[[Links AI science]]
[[Links AI x quantum computing]]
[[Links AI x psychology]]
[[Links AI programming]]
[[Links AI physics]]
[[Links AI neuroscience]]
[[Links AI math]]
[[Links AI healthcare biology]]
[[Links AI biology]]

- [GitHub - yuzhimanhua/Awesome-Scientific-Language-Models: A Curated List of Language Models in Scientific Domains](https://github.com/yuzhimanhua/Awesome-Scientific-Language-Models)

Machine learning in science:

- [Machine Learning for Science](https://ml4sci.org/)

Machine learning in physics:

- [Machine Learning for Physicists – Neural Networks and their Applications (Slides and Videos for the Lectures by Florian Marquardt)](https://machine-learning-for-physicists.org/)
- [Machine Learning for Physicists 2023 - HedgeDoc](https://pad.gwdg.de/s/Machine_Learning_For_Physicists_2023)
- [Machine Learning for Physicists (lecture series) - YouTube](https://www.youtube.com/playlist?list=PLemsnf33Vij4eFWwtoQCrt9AHjLe3uo9_)