I often wonder how close to the brain we actually have to get to create machines that can merge with our biological neural networks, or machines that can be considered conscious. Do we need to match the "hardware architecture" deeply, or just the "software architecture" lightly? ([Theories of consciousness | Nature Reviews Neuroscience](https://www.nature.com/articles/s41583-022-00587-4) , [Consciousness - Wikipedia](https://en.wikipedia.org/wiki/Consciousness#Models) , https://en.wikipedia.org/wiki/Models_of_consciousness) The Qualia Research Institute ([On Connectome and Geometric Eigenmodes of Brain Activity: The Eigenbasis of the Mind?](https://qri.org/blog/eigenbasis-of-the-mind) , https://philarchive.org/rec/GMEDFT) and some neuromorphic computing groups ([Joscha Bach, Yulia Sandamirskaya: "The Third Age of AI: Understanding Machines that Understand" - YouTube](https://www.youtube.com/watch?v=6xHVtgwNBcY)) sit very close to the "we need to replicate the hardware deeply" end of this spectrum. IIT, GWT ([What a Contest of Consciousness Theories Really Proved | Quanta Magazine](https://www.quantamagazine.org/what-a-contest-of-consciousness-theories-really-proved-20230824/) , https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9916582/ ), Active Inference (https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind) and Joscha Bach ([Joscha Bach - Consciousness as a coherence-inducing operator - YouTube](https://www.youtube.com/watch?v=qoHCQ1ozswA)) are more on the functional level. Does the free energy principle (https://www.frontiersin.org/articles/10.3389/frai.2020.00030/full) try to merge these levels?
([Karl Friston on Unifying The Cognitive Sciences - YouTube](https://www.youtube.com/watch?v=Q9hOPiSHbwo) https://www.sciencedirect.com/science/article/pii/S037015732300203X [Inner screen model of consciousness: applying free energy principle to study of conscious experience - YouTube](https://www.youtube.com/watch?v=yZWjjDT5rGU) [Physics as Information Processing ~ Chris Fields ~ AII 2023](https://coda.io/@active-inference-institute/fields-physics-2023) [#67 Prof. KARL FRISTON 2.0 [Unplugged] - YouTube](https://www.youtube.com/watch?v=xKQ-F2-o8uM) ) But I'm open to anything on this spectrum really.
And I wonder how much intelligence is intrinsic to us: whether we could build totally alien hardware and alien software that is a gazillion times more intelligent, and more efficient in energy consumption and memory, than human systems; whether humans are just a tiny subspace in the vast space of all possible intelligences. Are we actually general, or hyper-specialized in doing human things? AGI in the sense of maximized generality could be a gazillion times more general than us, and combined with ASI it could be a gazillion times more effectively specialized for different problem domains than us.
[Generalist AI beyond Deep Learning - YouTube](https://www.youtube.com/watch?v=p-OYPRhqRCg)
Foundational perspectives on causality in large-scale brain networks - causes change the probability of occurrence of their effects https://www.sciencedirect.com/science/article/pii/S157106451500161X
I wonder if this is enough, or not enough, for the possibility of getting the brain's (approximate) machine code that you can read and edit via electrical or other signals: compressing EEG or other signals using ML (which today somewhat maps to thoughts) [UTS HAI Research - BrainGPT - YouTube](https://www.youtube.com/watch?v=crJst7Yfzj4) and then reverse-engineering the ML models (RASP is an assembly-like language for Transformers; RASP-L is a human-readable programming language which defines programs that can be compiled into Transformer weights) [[2310.16028] What Algorithms can Transformers Learn? A Study in Length Generalization](https://arxiv.org/abs/2310.16028) , or in some way applying the methods we use to reverse-engineer ML models directly to brain dynamics.
[Pareto efficiency - Wikipedia](https://en.wikipedia.org/wiki/Pareto_efficiency) In game theory, Pareto efficiency or Pareto optimality is a situation where no action or allocation is available that makes one individual better off without making another worse off
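The definition above can be sketched directly in code. A minimal sketch, assuming higher values are better in every objective; `pareto_front` is a hypothetical helper name:

```python
def pareto_front(points):
    """Return the points not dominated by any other point.

    A point dominates another if it is at least as good in every
    objective and strictly better in at least one (higher = better).
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] >= p[i] for i in range(len(p))) and
            any(q[i] > p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Two-objective allocations: (utility for A, utility for B).
allocations = [(3, 1), (2, 2), (1, 3), (1, 1), (2, 1)]
print(pareto_front(allocations))  # [(3, 1), (2, 2), (1, 3)]
```

Here (1, 1) and (2, 1) are dominated (e.g. (2, 2) makes both individuals at least as well off and one strictly better), so only the three frontier points survive.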
“...the brain becomes less reliant on its pre-existing expectations or beliefs and pays more attention to the incoming sensory information. This mechanism also aligns with the description provided within the free energy principle framework (50) of the deconstructive meditation family (51) as cultivated in OM meditation and non-dual meditative states such as OP. Hence, our study provides some support with these current theories, which should guide the future empirical studies of these non-dual meditations.” https://twitter.com/RubenLaukkonen/status/1747404675719299511
Scientists destroy cancer using nanobots
https://bnnbreaking.com/breaking-news/health/urease-powered-nanobots-a-potential-game-changer-in-bladder-cancer-treatment/
[Urease-powered nanobots for radionuclide bladder cancer therapy | Nature Nanotechnology](https://www.nature.com/articles/s41565-023-01577-y)
Fast quantized open source LLMs https://twitter.com/ivanfioravanti/status/1747296097570046068?s=61&t=p40fYfzsXQ3N7R0SWIId7Q
There are many types of hardware for quantum computing, I discovered (Superconducting Circuits, Trapped Ions, Quantum Dots, Topological Qubits, Photonics, Silicon Quantum Dots, Nitrogen-Vacancy Centers in Diamond,...)
[Topological quantum computer - Wikipedia](https://en.wikipedia.org/wiki/Topological_quantum_computer)
"It employs quasiparticles in two-dimensional systems, called anyons, whose world lines pass around one another to form braids in a three-dimensional spacetime (i.e., one temporal plus two spatial dimensions). These braids form the logic gates that make up the computer. The advantage of a quantum computer based on quantum braids over using trapped quantum particles is that the former is much more stable. Small, cumulative perturbations can cause quantum states to decohere and introduce errors in the computation, but such small perturbations do not change the braids' topological properties. This is like the effort required to cut a string and reattach the ends to form a different braid, as opposed to a ball (representing an ordinary quantum particle in four-dimensional spacetime) bumping into a wall."
Does AI implement symmetries and redundancy?
[Resting in the Flow of the Mind - YouTube](https://www.youtube.com/live/G1Rd516lX0U?si=iC_YqF3A7oFRsJb-) Michael Taft guided meditation. Wow, the end mantra teleported me to my 5-MeO-DMT trip: long simple tones associated with dissolving cosmic love and kindness
Extremely high valence
[The low-rank hypothesis of complex systems | Nature Physics](https://www.nature.com/articles/s41567-023-02303-0)
Everything minimizes free energy... in the limit... Maybe biological organisms have their own unique loss functions [Joscha Bach Λ Karl Friston: Ai, Death, Self, God, Consciousness - YouTube](https://youtu.be/CcQMYNi9a2w?si=sgxMOIfY_4nrpOpW)
More money is still going into making Hollywood movies about AI than into actual AI research, which can now more and more automate the whole creation process, among everything else
[[2112.10510] Transformers Can Do Bayesian Inference](https://arxiv.org/abs/2112.10510)
[[2206.00826] BayesFormer: Transformer with Uncertainty Estimation](https://arxiv.org/abs/2206.00826)
[End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes | OpenReview](https://openreview.net/forum?id=kfWzpZvEUh)
Consciousness is noticing that you're noticing: self-stabilization leading to putting language on your substrate and minimizing inconsistencies [Joscha Bach Λ Karl Friston: Ai, Death, Self, God, Consciousness - YouTube](https://youtu.be/CcQMYNi9a2w?si=-OtVXUgvoXCkfnr7) 1:11:00
Artists fell in love with their loss function
https://basalanalytics.com/blog/meta-learning-an-overview-and-applications/
[Metalearning: a survey of trends and technologies | Artificial Intelligence Review](https://link.springer.com/article/10.1007/s10462-013-9406-y)
[Explainable AI Methods - A Brief Overview | SpringerLink](https://link.springer.com/chapter/10.1007/978-3-031-04083-2_2) LIME, Anchors, GraphLIME, LRP, DTD, PDA, TCAV, XGNN, SHAP, ASV, Break-Down, Shapley Flow, Textual Explanations of Visual Models, Integrated Gradients, Causal Models, Meaningful Perturbations, and X-NeSyL
[Machine learning in physics - Wikipedia](https://en.m.wikipedia.org/wiki/Machine_learning_in_physics)
[[1701.06806] A Survey of Quantum Learning Theory](https://arxiv.org/abs/1701.06806)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6508868/
I like hybrid methods that involve both classical and quantum processing, where computationally difficult subroutines are outsourced to a quantum device
https://twitter.com/joao_gante/status/1747322413006643259?t=ZOUT8InauKyQWhdaxzenQA&s=19 LLM inference speedups Up to 3x faster LLM generation with no extra resources/requirements - ngram speculation has landed in 🤗 transformers! 🏎️💨 All you need to do is to add `prompt_lookup_num_tokens=10` to your `generate` call and you'll get faster LLMs🔥
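A minimal sketch of the idea behind that n-gram speculation, not the actual `transformers` implementation (`prompt_lookup_candidates` is a hypothetical name): if the most recent n-gram of the sequence already appeared earlier, the tokens that followed it are proposed as cheap draft candidates, which the model then verifies in a single forward pass.

```python
def prompt_lookup_candidates(tokens, ngram_size=3, num_draft=10):
    """Propose draft tokens by matching the trailing n-gram against
    an earlier occurrence in the sequence (prompt lookup)."""
    if len(tokens) < ngram_size:
        return []
    tail = tokens[-ngram_size:]
    # Search backwards for the most recent earlier occurrence of the tail.
    for start in range(len(tokens) - ngram_size - 1, -1, -1):
        if tokens[start:start + ngram_size] == tail:
            # Return the tokens that followed that occurrence as drafts.
            return tokens[start + ngram_size:start + ngram_size + num_draft]
    return []

# Character-level toy example: "cat " occurred before, followed by "sat o".
seq = list("the cat sat on the mat and the cat ")
print(prompt_lookup_candidates(seq, ngram_size=4, num_draft=5))
# ['s', 'a', 't', ' ', 'o']
```

This is why the speedup needs no extra resources: the "draft model" is just a string match against text already in context, which works especially well for tasks that copy from the prompt (summarization, code editing, RAG).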
DeepMind AI math [AlphaGeometry: An Olympiad-level AI system for geometry - Google DeepMind](https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/?utm_source=twitter&utm_medium=social)
there will be such vast amounts of artificial consciousness that biological-based sentience will be revered as special and rare
https://openai.com/blog/democratic-inputs-to-ai-grant-program-update
“The first principle is that you must not fool yourself, and you are the easiest person to fool.”
— Richard Feynman
"Perpetual optimism is a force multiplier."
— Colin Powell
Meditate on the interplay between these two insights and you will achieve agency-enlightenment
Linear programming [The Art of Linear Programming - YouTube](https://youtu.be/E72DWgKP_1Y?si=YMYUWg3EDUxPcjoj)
Statistical learning
[#5: Quintin Pope - AI alignment, machine learning, failure modes, and reasons for optimism - YouTube](https://youtu.be/f9Msoqvlla4?si=r7tb65cH6D66cBwJ) Quintin Pope, ML researcher
https://twitter.com/johnschulman2/status/1741339801985630295?t=BFLoCWX1wVY96GAq7LbMCw&s=19
https://twitter.com/dbeagleholeCS/status/1741354208568254600?t=BCrEcoXUFUdsG9WDh-CK5g&s=19 how neural networks learn patterns from the data. We show this mechanism can be implemented with a kernel to learn the same or similar patterns
https://twitter.com/bindureddy/status/1747477033234649474?t=Nbb-CIwH9wrWGe5mbPqD0w&s=19 LLM issues
https://onlinelibrary.wiley.com/doi/10.1002/advs.202303575
"Microsoft released a new method to speed up LLM inference, boost performance, while making them 20x smaller." https://twitter.com/AlphaSignalAI/status/1747698333358186758?t=Azi7iEsXtaN8XPXa_UzlKA&s=19
Text to speech SotA tracker A one-stop shop to track all open access/ source TTS models! ML https://twitter.com/reach_vb/status/1747371141486801035?t=mp0N6cmI6D54GkOs0FHX3Q&s=19
[Logical quantum processor based on reconfigurable atom arrays | Nature](https://www.nature.com/articles/s41586-023-06927-3)
Psychonautwiki
[Information geometry of dynamics on graphs and hypergraphs | Information Geometry](https://link.springer.com/article/10.1007/s41884-023-00125-w)
Theoretical and applied science
Control theory systems theory
Qualia social space https://twitter.com/algekalipso/status/1747790045220942328?t=PqI4grJA1Nk_3B5en9W4WA&s=19
Information geometry
Principle of least action physics
Robotics hand dexterity https://fxtwitter.com/DrJimFan/status/1747370196128628815?t=IXX9KmLevac0cnXKfib9Uw&s=19
Computational neuroscience
[[2401.08406] RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture](https://arxiv.org/abs/2401.08406)
List of LLM subfields https://twitter.com/srush_nlp/status/1747673238434365805?t=5ee2LsRH_xHKNw5Tn-QPnQ&s=19
[[2203.05482] Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time](https://arxiv.org/abs/2203.05482)
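The core trick of a uniform "model soup" can be sketched with plain dicts standing in for real weight tensors (a toy sketch, not the paper's code): average several fine-tuned checkpoints of the same architecture parameter-by-parameter, and serve only the averaged model, so inference cost stays that of a single model.

```python
def uniform_soup(state_dicts):
    """Average the weights of several fine-tuned models of the same
    architecture, parameter by parameter (uniform model soup)."""
    n = len(state_dicts)
    return {k: sum(sd[k] for sd in state_dicts) / n
            for k in state_dicts[0].keys()}

# Toy 'weights' for three fine-tuned variants of the same architecture.
models = [
    {"w": 1.0, "b": 0.0},
    {"w": 2.0, "b": 0.5},
    {"w": 3.0, "b": 1.0},
]
print(uniform_soup(models))  # {'w': 2.0, 'b': 0.5}
```

The paper also describes a "greedy soup" variant that only adds a checkpoint to the average if it improves held-out accuracy.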
[[1610.02424] Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models](https://arxiv.org/abs/1610.02424)
[[2302.14045] Language Is Not All You Need: Aligning Perception with Language Models](https://arxiv.org/abs/2302.14045)
https://twitter.com/srush_nlp/status/1747673268494991594?t=7aT7nY-quVn1PlZ0A8zMLQ&s=19 What language reveals about perception (arxiv.org/pdf/2302.01308…) Cognitive Science Society.2023
Best local open source LLMs https://twitter.com/ivanfioravanti/status/1747772354762072318?t=cFD_jQzxFyo7DFNWc4o9Sg&s=19
Table of contents for pages
Dharma wiki
Put the videos of maps into words
Is there an AI/ML wiki? Find more maps
Write how I think about the topic on each topic page
Additional resources, additional topics
Add Authors and time to every link
[Dissolve All Thought in Space - YouTube](https://www.youtube.com/live/sbovTGvb_qU?si=9apR00E-o_HkenPS) boundaryless centerless timeless love and kindness Michael taft
Gpt plugins
[AlphaGeometry - YouTube](https://youtu.be/TuZhU1CiC0k?si=2sW_BfVfdH6_tNx6)
Paper connecting grokking to the polytope lens for MI https://arxiv.org/pdf/2310.12977.pdf mechanistic interpretability
Faster LLM inference [x.com](https://twitter.com/lmsysorg/status/1747675649412854230?t=frt-jpXrgNPHLeNH8JO5SQ&s=19)
[CES 2024: This AI-powered exoskeleton can help you trek further, run faster and carry more](https://www.yahoo.com/tech/ces-2024-ai-powered-exoskeleton-091445096.html?guccounter=1)
[[2401.09417] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model](https://arxiv.org/abs/2401.09417)
Will we jump from the current unsustainable, maladaptive local minimum of our civilizational cybernetic architecture to a more optimal local minimum, or to an even more maladaptive local minimum such as dystopia or extinction? To what degree is our evolutionary civilizational experiment chaotic: uncoordinated, clueless monkeys pressing random buttons, not really knowing what we're doing, hoping it will collectively work, without the ability to actually make the future go as we want? [x.com](https://twitter.com/burny_tech/status/1747953441584935250)
[[2401.06104] Transformers are Multi-State RNNs](https://arxiv.org/abs/2401.06104)
“OpenAI's next big model "will be able to do a lot, lot more" than the existing models can, CEO Sam Altman told Axios in an exclusive interview at Davos on Wednesday. Altman says AI advances will "help vastly accelerate the rate of scientific discovery." He doesn't expect that to happen in 2024, "but when it happens it's a big, big deal." Altman said his top priority right now is launching the new model, likely to be called GPT-5.” https://www.axios.com/2024/01/17/sam-altman-davos-ai-future-interview
[TrustLLM-Benchmark](https://trustllmbenchmark.github.io/TrustLLM-Website/)
https://undark.org/2024/01/03/brain-computer-neurorights/
[[2310.10632] BioPlanner: Automatic Evaluation of LLMs on Protocol Planning in Biology](https://arxiv.org/abs/2310.10632)
[Multiple AI models help robots execute complex plans more transparently | MIT News | Massachusetts Institute of Technology](https://news.mit.edu/2024/multiple-ai-models-help-robots-execute-complex-plans-more-transparently-0108)
[Psychedelics Rapidly Fight Depression—a New Study Offers a First Hint at Why](https://singularityhub.com/2024/01/16/psychedelics-rapidly-fight-depression-a-new-study-offers-a-first-hint-at-why/) https://www.science.org/doi/10.1126/scitranslmed.adi2403
[New technology enables drones to navigate around obstacles autonomously](https://interestingengineering.com/innovation/technology-enables-drones-to-autonomously-navigate)
AI solving PDEs Technique could efficiently solve partial differential equations for numerous applications — “PEDS does the opposite by choosing its parameters smartly. It leverages the technology of automatic differentiation to train a neural network that makes a model with few parameters accurate.” [Technique could efficiently solve partial differential equations for numerous applications | MIT News | Massachusetts Institute of Technology](https://news.mit.edu/2024/peds-technique-could-efficiently-solve-partial-differential-equations-0108) [Physics-enhanced deep surrogates for PDEs](https://dspace.mit.edu/handle/1721.1/153164)
[[2401.03003] AST-T5: Structure-Aware Pretraining for Code Generation and Understanding](https://arxiv.org/abs/2401.03003)
Gödel universe: a solution to general theory of relativity that has many unusual properties—in particular, the existence of closed time-like curves that would allow time travel. [Gödel's Solution to Einstein's Field Equations (1949)](https://www.privatdozent.co/p/godels-solution-to-einsteins-field) [x.com](https://twitter.com/XinYaanZyoy/status/1748063349575704759)
https://www.sciencedirect.com/science/article/pii/S0010945223002897
"Autistic adults exhibit a diminished neural response to their own faces compared to neurotypical adults, suggesting unique differences in self-referential processing."
https://www.psypost.org/2024/01/autistic-adults-show-unique-neural-responses-to-self-images-study-finds-220536
[Secure, Governable Chips | Center for a New American Security (en-US)](https://www.cnas.org/publications/reports/secure-governable-chips)
hamas ai weapons [Bloomberg - Are you a robot?](https://www.bloomberg.com/news/articles/2024-01-10/palantir-supplying-israel-with-new-tools-since-hamas-war-started)
LMU researchers have developed new high-performance nanostructures to obtain Hydrogen (H2) with the help of solar energy.
[Harvesting more solar energy with supercrystals - LMU Munich](https://www.lmu.de/en/about-lmu/structure/central-university-administration/communications-and-media-relations/press-room/press-release/harvesting-more-solar-energy-with-supercrystals-2.html)
"The twenty-first century has seen a breathtaking expansion of statistical methodology, both in scope and in influence. “Big data,” “data science,” and “machine learning” have become familiar terms in the news, as statistical methods are brought to bear upon the enormous data sets of modern science and commerce. How did we get here? And where are we going? This book takes us on a journey through the revolution in data analysis following the introduction of electronic computation in the 1950s. Beginning with classical inferential theories – Bayesian, frequentist, Fisherian – individual chapters take up a series of influential topics: survival analysis, logistic regression, empirical Bayes, the jackknife and bootstrap, random forests, neural networks, Markov chain Monte Carlo, inference after model selection, and dozens more. The book integrates methodology and algorithms with statistical inference, and ends with speculation on the future direction of statistics and data science."
[Computer Age Statistical Inference: Algorithms, Evidence and Data Science](https://hastie.su.domains/CASI/)
MultiON_AI web agent: "We just solved the long-horizon planning & execution issue with Agents" [x.com](https://twitter.com/DivGarg9/status/1747683043446579416) I want this, but as a generic multimodal agent in your operating system, combined with Open Interpreter. Open Interpreter lets LLMs run code on your computer to complete tasks. [The Open Interpreter Project](https://openinterpreter.com/)
When will AGI happen?
Depends how you define AGI!
I think artificial human level intelligence capable of doing most economically valuable tasks will be there by 2026.
Everyone keeps using the word AGI differently.
For some it's synonymous with human-level intelligence, for some it's literally ASI (however you define that), for some it's human-level generality, for some it's human-level specialization (I don't think humans are that general when it comes to the space of all possible intelligences; I think we're just a tiny subspace), for some it's more-than-human or ultimate generality, for some human-level or beyond-human robotics matters a lot, for some it means basically God (an all-knowing, all-powerful system),...
How do you define AGI, ASI and when do you think it will arrive?
Photorealism is now possible from, for example, Midjourney. My Facebook is full of deepfakes that the majority believes are real, even if to us, who look at the details, they obviously look AI-generated because of style or uncorrected errors. I'm curious about the US elections.
"It is now highly feasible to take care of everybody on Earth at a higher standard of living than any have ever known. It no longer has to be you or me. Selfishness is unnecessary. War is obsolete. It is a matter of converting our high technology from WEAPONRY to LIVINGRY."
- Buckminster Fuller
The original Elliptic Curve Cryptography and RSA will be breakable once quantum computers scale up.
There are many methods trying to be quantum safe [Post-quantum cryptography - Wikipedia](https://en.wikipedia.org/wiki/Post-quantum_cryptography)
[Study reveals a universal pattern of brain wave frequencies | Picower Institute](https://picower.mit.edu/news/study-reveals-universal-pattern-brain-wave-frequencies)
[GitHub - smweis/methods_in_neuro: A list of links and resources for my Methods in Neuroimaging Course](https://github.com/smweis/methods_in_neuro)
information theory summary [x.com](https://twitter.com/francoisfleuret/status/1747974813472186760)
[[2401.09350] Foundations of Vector Retrieval](https://arxiv.org/abs/2401.09350)
[Neuroscience for machine learners | A freely available short course on neuroscience for people with a machine learning background. Designed by Dan Goodman and Marcus Ghosh.](https://neuro4ml.github.io/)
[x.com](https://twitter.com/goodside/status/1730410630673244654)
"Is prompt engineering dead?
No, it’s SoTA.
GPT-4 with good prompts (dynamic k-shot + self-generated CoT + choice-shuffled ensembles) beats Med-PaLM 2 on all nine of the MultiMedQA benchmarks it was fine-tuned for, without fine-tuning:"
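The choice-shuffled ensembling part of that prompting recipe can be sketched as follows. A toy sketch, assuming a stand-in `ask_model` callable in place of a real LLM call: present the same multiple-choice question several times with the options in different orders, map each answer back to the original choice, and majority-vote, which cancels out the model's position bias.

```python
import random
from collections import Counter

def choice_shuffled_ensemble(question, choices, ask_model, n=5, seed=0):
    """Majority-vote over n answers to the same question with the
    answer choices shuffled differently each time.
    ask_model(question, shuffled_choices) returns an index into
    the shuffled list; we map it back to the original index."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n):
        order = list(range(len(choices)))
        rng.shuffle(order)
        shuffled = [choices[i] for i in order]
        picked = ask_model(question, shuffled)
        votes[order[picked]] += 1  # map back to the original index
    return votes.most_common(1)[0][0]

# Toy 'model': always picks the lexicographically smallest option,
# regardless of position, so every vote maps back to "apple".
fake_model = lambda q, opts: opts.index(min(opts))
best = choice_shuffled_ensemble(
    "Which is first?", ["banana", "apple", "cherry"], fake_model)
print(best)  # 1 (index of "apple" in the original list)
```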
https://www.lesswrong.com/posts/LNA8mubrByG7SFacm/against-almost-every-theory-of-impact-of-interpretability-1
coding LLm SotA [x.com](https://twitter.com/svpino/status/1747971746047627682?t=4M2SsIDpEiBl1ETX_fI4jQ&s=19)
[[2401.08500] Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering](https://arxiv.org/abs/2401.08500)
Instead of using a single prompt to solve problems, AlphaCodium relies on an iterative process that repeatedly runs and fixes the generated code using the testing data.
1. The first step is to have the model reason about the problem. They describe it using bullet points and focus on the goal, inputs, outputs, rules, constraints, and any other relevant details.
2. Then, they make the model reason about the public tests and come up with an explanation of why the input leads to that particular output.
3. The model generates two to three potential solutions in text and ranks them in terms of correctness, simplicity, and robustness.
4. Then, it generates more diverse tests for the problem, covering cases not part of the original public tests.
5. Iteratively, pick a solution, generate the code, and run it on a few test cases. If the tests fail, improve the code and repeat the process until the code passes every test.
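The core of the iterative loop in step 5 above might be sketched like this, with plain callables standing in for the generated code and the LLM fixer (the names are hypothetical, not AlphaCodium's actual code):

```python
def iterate_until_tests_pass(generate_fix, initial_code, tests, max_iters=5):
    """Run candidate code on the tests; if any fail, ask the 'model'
    for a fix and repeat. generate_fix(code, failures) stands in for
    an LLM call that returns revised code."""
    code = initial_code
    for _ in range(max_iters):
        failures = [(inp, out) for inp, out in tests if code(inp) != out]
        if not failures:
            return code  # all public + generated tests pass
        code = generate_fix(code, failures)
    return code  # best effort after max_iters

# Toy example: the candidate has an off-by-one bug; the 'fixer' repairs it.
buggy = lambda x: x + 2
fixer = lambda code, failures: (lambda x: x + 1)
tests = [(0, 1), (5, 6)]
fixed = iterate_until_tests_pass(fixer, buggy, tests)
print(fixed(10))  # 11
```

The point of steps 2-4 is that the loop gets to run against a richer test set than the public examples alone, so a wrong-but-plausible solution is more likely to be caught and repaired.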
[Agency Swarm Can Now Create Your Agent Swarms for You - YouTube](https://www.youtube.com/watch?v=qXxO7SvbGs8)
[x.com](https://twitter.com/marktenenholtz/status/1748050083046736203)
AutoAct is a method to train agents to solve multi-step tasks with little data.
It outperforms methods that use synthetic data from GPT-4 with as small as 13B models.
1. Start with a small dataset of tasks, simply questions mapped to outcomes
2. Use Self-Instruct to generate additional samples
3. From a large pool of tools, ask the model to pare down the tool library to only the ones it will need
4. Use repeated chain-of-thought to solve each task
5. Throw out the chain-of-thought trajectories that didn't accurately solve the problem
6. Use those generated trajectories to fine-tune a new agent for this problem.
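Steps 4-6 above can be sketched as follows. A toy sketch, where the hypothetical `solve_with_cot(question, attempt)` stands in for a sampled chain-of-thought rollout returning a trajectory and a final answer; only trajectories whose answer matches the known outcome are kept as fine-tuning data.

```python
def build_finetune_data(tasks, solve_with_cot, n_attempts=4):
    """Attempt each task several times with chain-of-thought, discard
    trajectories whose final answer is wrong, and keep the rest as
    fine-tuning examples for a new agent."""
    data = []
    for question, outcome in tasks:
        for i in range(n_attempts):
            trajectory, answer = solve_with_cot(question, i)
            if answer == outcome:  # throw out failed trajectories
                data.append({"question": question, "trajectory": trajectory})
                break
    return data

# Toy solver: only the third sampled attempt gets the answer right.
toy = lambda q, i: (f"attempt {i}", 4 if i == 2 else 0)
print(build_finetune_data([("2+2", 4)], toy))
# [{'question': '2+2', 'trajectory': 'attempt 2'}]
```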
[This is your brain on love: the beautiful neuroscience behind all romance - BBC Science Focus Magazine](https://www.sciencefocus.com/the-human-body/how-love-changes-your-brain)
stability is an illusion
[Rethinking Humanity - a Film by RethinkX - YouTube](https://www.youtube.com/watch?v=r71yNnfY6ss)