"People are currently debating if AI revolution will transform and automate everything in existence in one nanosecond and no jobs will exist or/and eats the whole universe in one nanosecond, or if AI is only just hype and nothing else.
I think reality is in the middle. It's a very transformative technology that already underpins so much of our infrastructure, and the generative AI wave adds even more cool tools and systems to the toolbox for experts, engineers, non-technical people, and so on. And I don't see the advancing (in some ways exponential, in some ways linear) progress of this technology stopping anytime soon.
The extreme hype drives funding, which creates better technology in a self-reinforcing, self-fulfilling-prophecy way, but at the same time it creates overly inflated expectations disconnected from actual reality that then crash into disappointment. At the opposite extreme, too little hype fails to realize the genuinely transformative potential of the technology. And AI is an extremely amazing technology that already sits under so many things, and I use it a lot for pragmatic tasks, with so many recent breakthroughs turning into applications and products! It's amazing! But I think we have to be aware of the realistic dangers and mitigate them, while also staying aware of the practical limitations of current and near-term systems, and giving some attention to possible long-term future technologies as well, though IMO not too much.
There seems to be a general pattern that repeats: a hype wave about something grows and grows too big if unchecked by reality checks from real-world implementations and the people working on them, until it dominates almost all the memetic transmission channels and starts getting (sometimes more, sometimes less) out of touch with reality. As a result, a completely polar-opposite movement emerges, and a culture war starts, with the two extremes fighting each other and fueling each other's validation and sense of realness. Eventually the two sides usually merge into one, or stay at war for a long time, or split into even more nuanced tribes fighting each other for dominance, though often with at least some degree of cooperative mechanisms.
It's like the initial metaverse hype, quantum computing hype, the initial internet hype, crypto hype, open source vs. closed source, AI caution/fundamental safety vs. pure AI accelerationism, technology accelerationism vs. decelerationism in general, AI scepticism vs. AI super-optimistic believism, political left vs. right, ...
One can argue whether these various hype cycles had more advantages or disadvantages, whether they were more constructive or destructive, in good or bad or neutral directions, and how connected to actual reality they were. In the case of AI, I think the weaker version of the hype is actually more connected to reality than many AI sceptics believe.
The Hegelian pattern feels similar: a new thesis emerges, then a countercultural antithesis emerges in response and battles the initial thesis, and finally the dynamics calm down through their synthesis into one unified thesis.
It's like two dissonant attractors reinforcing each other with ever more dissonance (@algekalipso thinks a similar formalism holds for bad 5-MeO-DMT trips) in the physics of cultural memetics. These mutually dissonant (statistical) attractors then merge into one attractor when their individual metastability becomes too unstable and their overlap gets too big, or keep fighting if they're individually metastable enough, or split into even more dissonant attractors, forming even more clusters of coherence with competing dissonance on some levels of analysis but unifying cooperative coherence on others."
Moore's law for AI: The same quality of generative AI output is getting exponentially cheaper every x months
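A minimal sketch of that claim as exponential cost decay. The halving period ("x months") and the starting cost are hypothetical free parameters, not figures from the note:

```python
def inference_cost(initial_cost: float, months: float, halving_period: float) -> float:
    """Cost of producing the same quality of output after `months`,
    assuming cost halves every `halving_period` months (hypothetical rate)."""
    return initial_cost * 0.5 ** (months / halving_period)

# Example: $10 per unit of output today, halving every 6 months (assumed)
print(inference_cost(10.0, 24, 6))  # after two years: 10 * 0.5**4 = 0.625
```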
Meditation is like watering and fertilizing all subagents in the mind
All models are right. Some of them apply to our consensual reality.
[Ultra-detailed brain map shows neurons that encode words’ meaning](https://www.nature.com/articles/d41586-024-02146-6)
Universal basic love for every being
https://www.lesswrong.com/posts/gcpNuEZnxAPayaKBY/othellogpt-learned-a-bag-of-heuristics-1
https://x.com/teortaxesTex/status/1810151860067619015?t=oGZiR8oGoieiHVfHEj127A&s=19
Links for 2024-07-07
AI:
1. AI Mathematical Olympiad: It appears that the winning program correctly answered 29/50 of the private test questions. — "Maybe what's even more impressive about this competition, besides the level of math these models are already capable of, is how resource-constrained the participants actually were, having to run inference in a short amount of time on T4s, which only lets us imagine how powerful these models will become in the coming months." https://x.com/Thom_Wolf/status/1809895886899585164
2. Learning Formal Mathematics From Intrinsic Motivation [[2407.00695] Learning Formal Mathematics From Intrinsic Motivation](https://arxiv.org/abs/2407.00695)
3. “This means the relationship between changes in underlying model capabilities and changes in real world impact can be unintuitive. If stepwise accuracy goes from 99% to 99.99%, a 200 step task goes from failing most of the time to succeeding almost always” https://x.com/RatOrthodox/status/1809055334536786130 (Paper: Rethinking AI agent benchmarking and evaluation https://www.aisnakeoil.com/p/new-paper-ai-agents-that-matter)
4. Gradually, then Suddenly: What often matters is when technologies pass certain thresholds of capability. [Gradually, then Suddenly: Upon the Threshold](https://www.oneusefulthing.org/p/gradually-then-suddenly-upon-the)
5. OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents [OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents](https://omnijarvis.github.io/)
6. Introducing ReSearch: An iterative self-reflection algorithm that enhances LLM's self-restraint abilities. Encouraging abstention when uncertain. Producing accurate, informative content when confident. [[2405.13022] LLMs can learn self-restraint through iterative self-reflection](https://arxiv.org/abs/2405.13022)
7. Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning [[2309.10814] Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning](https://arxiv.org/abs/2309.10814)
8. Diffusion Forcing combines the strength of full-sequence diffusion models and next-token models, acting as either or a mix at sampling time for different applications without retraining. [Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion](https://boyuan.space/diffusion-forcing/)
9. Improving retrieval with LLM-as-a-judge [Improving retrieval with LLM-as-a-judge | Vespa Blog](https://blog.vespa.ai/improving-retrieval-with-llm-as-a-judge/)
10. “This is an interim report on reverse-engineering Othello-GPT, an 8-layer transformer trained to take sequences of Othello moves and predict legal moves. We find evidence that Othello-GPT learns to compute the board state using many independent decision rules that are localized to small parts of the board.” https://www.lesswrong.com/posts/gcpNuEZnxAPayaKBY/othellogpt-learned-a-bag-of-heuristics-1
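The compounding-reliability point in item 3 above is quick to verify: if every one of n independent steps must succeed, the task success probability is the per-step accuracy raised to the n-th power. A minimal check:

```python
def task_success_prob(step_accuracy: float, n_steps: int) -> float:
    """Probability that a task succeeds when all n_steps
    independent steps must each succeed."""
    return step_accuracy ** n_steps

# 200-step task: a tiny per-step gain compounds dramatically
print(f"{task_success_prob(0.99, 200):.3f}")    # ~0.134 (fails most of the time)
print(f"{task_success_prob(0.9999, 200):.3f}")  # ~0.980 (succeeds almost always)
```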
Engineering:
1. New Multi-Material “Laser” 3D Printer Can Create Complex Devices With Just a Single Machine [No assembly required // Mizzou Engineering](https://engineering.missouri.edu/2024/no-assembly-required/)
2. Desalinating Water Is Becoming “Absurdly Cheap” https://humanprogress.org/desalinating-water-is-becoming-absurdly-cheap/
3. Open-TeleVision: Teleoperation with Immersive Active Visual Feedback [TeleVision](https://robot-tv.github.io/)
4. “Britain should reclaim an area the size of Wales from Dogger Bank, the area of the North Sea where the sea is only 15-40m deep. We could do it for less than £100bn.” [A New Atlantis - by Duncan McClements and Jason Hausenloy](https://model-thinking.com/p/a-new-atlantis)
Miscellaneous:
1. BB(5) is now known to equal 47176870, thanks to a collaboratively-made Coq proof that decides the halting problem for all 5-state Turing machines by case analysis of ~180 million equivalence classes, which `coqc` can check in ~10 hours of wall-clock time. [Amateur Mathematicians Find Fifth ‘Busy Beaver’ Turing Machine | Quanta Magazine](https://www.quantamagazine.org/amateur-mathematicians-find-fifth-busy-beaver-turing-machine-20240702/)
2. “Our results imply that being genetically predisposed to be smarter causes left-wing beliefs.” https://www.sciencedirect.com/science/article/abs/pii/S0160289624000254
3. “…we show that inattentionally blind participants can successfully report the location, color and shape of the stimuli they deny noticing.” [Sensitivity to visual features in inattentional blindness | bioRxiv](https://www.biorxiv.org/content/10.1101/2024.05.18.593967v1)
Artificial Intelligence Math Olympiad (AIMO) with LLMs
[AIMO Prize](https://aimoprize.com/)
https://x.com/Thom_Wolf/status/1809895886899585164?t=57Zl4N1dg0MYFZbR2JDjyg&s=19
Automated mathematics is approaching
[Entropy | Free Full-Text | Towards a Theory of Quantum Gravity from Neural Networks](https://www.mdpi.com/1099-4300/24/1/7)
"My argument is that younger healthy people don't need a jab and are putting themselves at albeit tiny risk of death from having it."
Statistically speaking, you're putting yourself at a comparatively higher risk by not having it.
"I've had COVID twice. I am alive and well thanks and also naturally immunised."
This is just n=1 anecdotal evidence, not rigorous scientific statistical proof. Many other people also had the vaccine and are as alive and well as you. And statistically speaking, the proportion of serious outcomes is lower among the vaccinated. Check the actual statistics in the various studies if you want to use them as arguments.
[[2407.02678] Reasoning in Large Language Models: A Geometric Perspective](https://arxiv.org/abs/2407.02678)