Intelligence isn't being a stochastic parrot, but being a generalizing circuit grokking agent
"Intelligence isn't the ability to remember and repeat, like they teach you in school. It is the ability to learn from experience, solve problems, and use our knowledge to adapt to new situations."
https://x.com/ProfFeynman/status/1815772030270075304
One of the prompts I often use is "explain step by step how data is transformed through this code/mathematical set of equations" and I am basically always satisfied with the result :D (though I can't really speak for ChatGPT, because for some time now I've mostly been using Claude/Perplexity/Phind/CursorSh etc.)
Sometimes it happens that it's thinking faster than me and solves a problem before I do (thanks to CursorSh with Claude 3.5 Sonnet)
I sometimes see these use cases too, and this one mentions an old model without any extra wrapper like CursorSh adding RAG over the codebase, best practices, etc.
https://x.com/mufeedvh/status/1767590697618547015?t=eIczEBOa8VQTAVGvPkuzWg&s=19
I've shrunk my code using LLMs a few times
I usually took advantage of CursorSh's whole-codebase access for that
Another thing I do often is inject it with documentation (easy in CursorSh) and tons of guides and other potential pointers it can few-shot learn from
LLMs have their limits but I think most people haven't tasted their full potential yet as:
- we're so early
- using it is a skill on its own that gets developed with practice and collecting tricks from others
- most people aren't using SOTA systems for their use case, as they're using the free version of ChatGPT instead of paid ChatGPT, Copilot, Claude, Perplexity, Phind, CursorSh, ConsensusAI, specialized models and specialized wrappers etc.
- the tools are getting better very fast
- some good stuff is paid and people don't pay, which I understand
I may also be biased, because most of the stuff I do is AI stuff, and I'm very sure that since machine learners created these models and wrapper systems, and want to automate themselves out of existence as much as possible, the models are finetuned much more on knowledge of and solving machine learning than on any other subset of coding and math
Models are the data
https://x.com/burny_tech/status/1816149294950539434?t=TwnoF7r0fqcAPZO2iJgMrw&s=19
If you overfit on the entire world, you are basically done.
Memorization is the first step towards generalization
LLMs are semantic vector search engines with weak generalization. They are a different, more advanced type of search engine that works with vector representations. They retrieve compressed knowledge and (sometimes less, sometimes more fuzzy) vector programs, more concrete or more abstract, with weak generalization and composition capabilities. Technically, they can memorize compressed vector representations of various concrete and abstract programs, aka heuristics and knowledge, to some level of granularity, with weak generalization. But they can also encode almost arbitrary generalizing circuits once we enhance our reverse engineering knowledge and our techniques for steering the training and inference process.
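A minimal sketch of the "semantic vector search" framing in Python (the `embed` function here is a hypothetical stand-in for a learned embedding model, not any particular library):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in: hash characters into a fixed-size vector.
    # A real system would use a learned text-embedding model here.
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    # Rank documents by cosine similarity to the query; the vectors are
    # already L2-normalized, so a dot product is enough.
    q = embed(query)
    scored = sorted(((float(q @ embed(doc)), doc) for doc in corpus), reverse=True)
    return [doc for _, doc in scored[:k]]

corpus = ["gradient descent updates weights", "attention mixes token vectors", "pasta recipe"]
print(retrieve("how do neural networks learn?", corpus, k=2))
```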
LLMs are such extremely fascinating systems relative to all the things they are capable of doing when they approximate the training data manifold by curve fitting with attention and interpolate on top of it with all sorts of vector program combinations. And it still boggles my mind how the models can sometimes generalize far out of distribution with just curve fitting, by settling into a generalizing short-program circuit, which often lies in a flat local minimum, when they grok!
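A minimal sketch of the classic grokking setup (modular addition, a small network, heavy weight decay, small training fraction); the hyperparameters are illustrative, and real runs typically need many more steps before validation accuracy suddenly jumps:

```python
import torch
import torch.nn as nn

P = 97  # learn (a + b) mod P from examples
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
n_train = int(0.3 * len(pairs))  # small training fraction: memorize first, grok later
train_idx, val_idx = perm[:n_train], perm[n_train:]

model = nn.Sequential(
    nn.Embedding(P, 128),  # shared embedding for both operands
    nn.Flatten(),          # (a_emb, b_emb) -> one vector
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, P),
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)  # strong regularization
loss_fn = nn.CrossEntropyLoss()

for step in range(20001):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            val_acc = (model(pairs[val_idx]).argmax(-1) == labels[val_idx]).float().mean()
        # Watch for training loss saturating long before val accuracy jumps.
        print(f"step {step}: train loss {loss.item():.3f}, val acc {val_acc:.3f}")
```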
A major part of my suffering comes from wanting a world where everyone has their basic needs met and can self-actualize, as long as they don't harm others, and seeing how far from it we are. But I have hope that, through proper actions towards it, we will get to a state of the world that is at least relatively better than today.
Sweetness acceleration
Mainstream LLM benchmarks suck and are full of contamination. This is a private, non-contaminated reasoning benchmark. You can see how the models are actually getting better, and that we're not really "stuck at GPT-4 level intelligence for over a year now".
https://x.com/burny_tech/status/1816240293143773303?t=ZYWUlyuGj7RJeBfRoinnZg&s=19
https://www.reddit.com/r/singularity/s/PQAm8XsURP
[Llama 405b: Full 92 page Analysis, and Uncontaminated SIMPLE Benchmark Results - YouTube](https://youtu.be/Tf1nooXtUHE?si=S28juCrMj9eoWIq4)
Philosophy is defining words with words that you define with words that you define with words that you define with words that you define with words, sometimes in a circular manner
We should speak in mathematics only to minimize ambiguity
Cautious TESCREAL
I am totally normal and I can be trusted with language
If you wanna understand a codebase, make Claude (in Artifacts) or CursorSh explain it to you line by line, step by step, recursively explaining each part you don't understand, and fact-check it with Perplexity
@CursorSh Your AI IDE would be so much more powerful if it had access to the web. Perplexity with Claude in an IDE?
The AI boom will crash because of overly inflated expectations, and then AI will basically quickly boom again. A lot of the current inflated expectations will turn out to be true within a few years anyway. This will happen again and again. Booms and crashes will come closer and closer to each other: faster and faster, more compressed, Gartner hype cycles squeezed together over time. A global exponential made of closer and closer local sigmoids. That is the technological singularity since the digital revolution, or since the industrial revolution, or the agricultural revolution.
You have near unlimited intelligence in your pocket and 99% of people choose to doomscroll TikTok
bloomscroll free online Stanford lectures instead
"Is language the first mind virus that spawned its ecosystem, or is consciousness a mind virus, too?"
Universal darwinism across all spatiotemporal scales
Universal virusimism across all spatiotemporal scales
https://x.com/Plinz/status/1817144890205311470?t=avnRYFLohkwxFZzExy7otg&s=19
"First 8 hours of the day no social media, only productivity" is powerful
god is the quantum harmonic oscillator
LLMs are just the beginning of AI
"
I think superintelligence will be controllable.
But! We have to develop that steerability technology. Which is a hard technical problem.
Like:
Politicians and CEOs control galaxy brain scientists and engineers.
Cats aligned humans.
LLMs or neurosymbolic systems are already smarter in various domains than humans I would argue (under certain definitions of intelligence).
Etc.
Under the real-world constraints of our system and physics.
But it's true that highly agentic autonomous systems can be deadly.
Future might include offensive superintelligent swarms of superdrones fighting defensive superintelligent swarms of drones.
But I hope for a future where it's not humans vs AI, but humans cooperating with AI, where superintelligence helps humanity solve physics, biology, science, and technology, and play the longest games together, like populating the galaxy and beating the heat death of the universe.
Let's build that fork of the multiverse together, humans with AIs, other (synthetic) life forms, and those that merge!
"
https://x.com/AISafetyMemes/status/1817116199878295580?t=0UJ7gQcupgsh95cxs77VgQ&s=19
Under my most used definitions, consciousness (having an experience) is completely unrelated to the ability to reason and to intelligence (narrow and general; the ability to solve problems in diverse environments, compress, adapt, generalize, exercise agency, etc.)
Let's evaluate and minimize the Kullback-Leibler (KL) divergence between our latent neural parameters to minimize the distance between us and get closer to each other <3
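For the record, a minimal sketch of the quantity being flirted with here, the discrete KL divergence between two distributions (the example vectors are made up):

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    # D_KL(P || Q) = sum_x P(x) * log(P(x) / Q(x))
    # Asymmetric and >= 0; equals 0 only when the distributions match exactly.
    p, q = p / p.sum(), q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

me = np.array([0.7, 0.2, 0.1])
you = np.array([0.6, 0.3, 0.1])
print(kl_divergence(me, you))  # small number: already pretty close <3
```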
UBI paid by automation!
https://www.reddit.com/r/singularity/s/pE7hLEnc44
Reverse engineering and steering AI systems.
Political advocacy for giving fruits of technology to everyone's hands.
AI for science and engineering.
"
I sometimes feel this 😭.
It's not practical. But it feels it's coming from the very core!
I had to accept that I have to traverse this world while minimally hurting other sentient systems (not hurting at all is impossible), otherwise I couldn't do anything at all!
At least I don't smash bugs and politely bring them out of my room.
And I try to be as kind as I can to other humans... While still respecting my own values, which I had to learn too, and which sometimes means not pleasing everyone! And poor social skills and other neurodivergences also sometimes accidentally hurt others...
World is harder for us with hyperactive processing of possible negative direct or indirect consequences in current and all possible negative counterfactual worlds!
For a better world, one must upskill agency and act aligned with one's values, and not get paralyzed by "I'm causing harm" analysis paralysis! Acting is often net positive when it comes to minimizing the total amount of badness in the world and maximizing the total amount of goodness; even if there are risky possible counterfactuals, the probabilities are often good enough to be worth it!
"
https://x.com/Kat__Woods/status/1817472130675793938?t=9xUKrmrUnHNtkt3-2E48gA&s=19
My favorite gender
https://x.com/burny_tech/status/1817929621314351386?t=1w9po7hMDmfmN77dm2fQmg&s=19
[Calabi–Yau manifold - Wikipedia](https://en.wikipedia.org/wiki/Calabi–Yau_manifold)
[String Theory - YouTube](https://youtu.be/n7cOlBxtKSo?si=i2Eh-bZVdPAwukwY&t=609)
I'm thinking about why it resonates with me so much.
Many dimensions folded up such that only few are visible feels so relatable.
Red pills? Black pills? Blue pills? White pills?
Gimme some omniperspectival rainbow pills!
"
Feeling extreme care for all beings being well feels core to me. All the current and future beings in this universe. Even tho I sometimes get paralyzed by how many beings are currently unwell and how unwell they could potentially be in the future if certain bad events unfold.
And by how little agency I have over it, but I'm trying my best anyway as a single being with its limited skills, resources, and opportunities. I try anyway to help steer the world towards the outcomes that benefit all more.
I just wish pure wellness for all. And I want that to actually happen through actions, not just meditating on it. I wish I had more opportunities to help steer the world towards the types of outcomes where all benefit. Even tho I'm still probably in a kind of incubation stage.
I am currently trying to learn relevant skills and take as many actions as I can to help the collective technical project of steering AI systems. I think that as these systems get better and more powerful, we will need to scale the research and development of steering wheels for them much more, so that they're safe, robust, and reliable, ideally helping, not harming, sentient beings in all sorts of ways. I also think that superintelligence alignment is solvable.
Automation. Post-scarcity abundance. Eradicating poverty. Eradicating diseases. Accelerating Healthcare. Biology. Physics. Science. Mathematics. Technologies. Education. Social support. Environmental protection. Transportation. Stabilization and upgrading of food and energy supply chains and other systems. Entertainment. Wellness. Existential risk and suffering risk mitigation. Progress. Sustainable growth. Cognitive enhancement. Benevolent AIs. Exploration of knowledge. Exploration of the universe. Universal basic services. Fulfillment for as many as possible. Etc.
Am I doing it correctly? Am I going towards that goal optimally? Are my current actions helping, or the contrary? What other actions do I need to take to get into this full-time, such that it's the main thing I spend time on? Am I doing these other actions optimally?
Are there better, more effective methods to reach similar goals? With what actions is my mind actually the most compatible? Is this really a needed cause, or should I put my energy into more pressing causes? Is fighting power concentration by a few misaligned with others more neglected? Is limiting the misuse of technology for less beneficial and more destructive use cases more neglected? Is developing beneficial use cases of technologies more neglected? Is attempting to help minimize wars a more neglected cause? Is governance towards these goals more neglected? Is sending messages like this more effectively more neglected? Are other technical, social, political, or other issues more neglected?
Constantly overthinking and reevaluating everything results in doing nothing, and often it's not even evaluable. One needs to get grounded and see what works for them the most while being aligned with one's higher level goals and values. Focusing on a concrete thing is important to have some progress, even if it might not have directly observable effects in the world but through nth order effects.
Or is a better mind needed for doing that? Are better skills needed: technical skills, verbal skills, other skills? Better opportunities? Better resources? Better incentives? More wealth to use? Better communication skills? Better x? Constantly reevaluating can give hints to make it better, but also create analysis paralysis.
I try the most with what I have and what I can do under constraints of my organism, the environment, physics, etc. No one is ever perfect. The world is never perfect. We have to work with what we have in the world that we have. We have to work with what is available to steer the world towards a better world for all. A better world for all is possible.
The path towards that is action, not giving up. If enough people act, the world will be better. The fewer of the people who want a better world give up, the better. The world will be better than current scenarios and possible bad future scenarios. We will build that better world together!
"
"AI is just a fad" says while he uses tools that use machine learning algorithms everywhere he steps without even realizing it
"I wonder when social media companies will have sophisticated tools to trace, predict and plan the genealogy, incentive structure, causal power and distribution of memes for politics and consumption at meaningfully high resolution
As a social media user, I also want to see a real time map of the meme galaxy, my own location in it, and the local and global meme weather report. And who gives what marching order to whom on this national middle school yard? Who are the main hubs and jet streams?"
https://x.com/Plinz/status/1818146356474843501
Which architecture was my neural network collective intelligence psyopped into?
I'm still not empirically convinced that AIs and superintelligences by default develop an emergent desire to dominate, and even if so, I think it's a much more steerable property
"As the whole tech world bashed them for releasing products and research slowly to the public and having no moat anymore, they secretly cooperated with the biggest players on the planet and accelerated harder than ever before in the largest scale military defence AI Manhattan project ever to this date as this is how the most impactful technologies ever were built in the past."
https://x.com/sama/status/1540227243368058880?t=rp8o9FkmwBN_TTtMiL1yBA&s=19
"
Is the most likely outcome that some (coalition of) SuperGigaMegaUltraTechCorporations builds some centralized, mostly-for-themselves SuperGigaMegaUltraArtificialGeneralSuperIntelligence(s), and that way either:
a) the corporations use the massively powerful technology to create a top-down monarchy surveillance dystopia (possibly in coalition with some government(s)) where everyone else becomes a peasant without any agency or power over the whole system?
or,
b) the AGSI goes uncontrollably crazy, creating massive damage, as the market doesn't really incentivize steerability research and development?
How do we not get any of this!? Open source is interesting, but guess who will be able to actually run these gigantic SuperGigaMegaUltraArtificialGeneralSuperIntelligences, and other risks exist. If building such SuperGigaMegaUltraArtificialGeneralSuperIntelligence(s) is inevitable, how do we make sure it's steered by benevolent people who don't get selfishly drunk on power, and make sure that it benefits all of current and future sentience instead?
Am I too distrustful of authority, and are there actually enough people in power wanting good for all, given that we're not in such a strongly unfree dystopia already by default, even tho the current freedom could be wayyyy better? Will good players keep such strength and get stronger in the future, or will bad selfish players overtake them in power games and exploit peasants to oblivion? Will power get even more centralized in the hands of a few misaligned (or aligned) with all of sentience, or will it get much more decentralized? Will current power structures destabilize or fully fall apart soon, possibly thanks to open source AI? So much uncertainty!
"
Xitter is the ultimate hard drug
I wish LLMs told me more often that I'm mistaken when I believe an incorrect thing, and corrected me more
I think the current AI boom will crash because of way too early, too big, overly inflated expectations, but then AI will basically quickly boom again in a few years when new systems get released that are scaled by orders of magnitude, or algorithmically improved, or built with smarter data engineering, or all of that, or something else. A lot of the current inflated expectations will turn out to be true within a few years anyway, but so many of them are so early. And some exponentials are sampled too discretely. I think this will happen again and again. Booms and crashes will come closer and closer to each other. Faster and faster, more compressed, Gartner hype cycles squeezed closer together over time. A global exponential made of closer and closer local sigmoids. This is how I see the current technological singularity. *pulls out Kurzweil's straight lines on exponential graphs :D*
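A toy numeric sketch of the "global exponential made of local sigmoids" picture (all constants are illustrative): stack logistic S-curves whose plateaus grow geometrically and the envelope of the sum is exponential; shrinking the gaps between them as well pushes it toward super-exponential, singularity-flavored growth.

```python
import numpy as np

def sigmoid(t: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-t))

# One logistic S-curve per hype cycle: midpoints evenly spaced, each cycle's
# plateau 2x the previous one (every wave adds more capability than the last).
t = np.linspace(0.0, 12.0, 7)
midpoints = np.arange(1, 12)
heights = 2.0 ** np.arange(1, 12)
total = sum(h * sigmoid(4 * (t - m)) for h, m in zip(heights, midpoints))

# log2(total) rises by roughly 1 per unit of time => exponential envelope.
for ti, yi in zip(t, total):
    print(f"t={ti:5.2f}  log2(total)={np.log2(yi):6.2f}")
```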
https://fxtwitter.com/dwarkesh_sp/status/1739654775816462796
[Will scaling work? - by Dwarkesh Patel - Dwarkesh Podcast](https://www.dwarkeshpatel.com/p/will-scaling-work)
Sam Altman 420D Chess
step 420: partnering with US gov for state sponsored, fully accelerated, against China, superintelligence defense Manhattan project, concentrating power even more, laying grounds for universal basic compute for the peasants
https://x.com/BasedDaedalus/status/1700602905219174824
[Introduction - SITUATIONAL AWARENESS: The Decade Ahead](https://situational-awareness.ai/)
[OpenAI's Sam Altman Has a New Idea for a Universal Basic Income - Business Insider](https://www.businessinsider.com/openai-sam-altman-universal-basic-income-idea-compute-gpt-7-2024-5)
https://openai.com/index/openai-appoints-retired-us-army-general/
[Trump allies draft AI order to launch ‘Manhattan Projects’ for defense - The Washington Post](https://www.washingtonpost.com/technology/2024/07/16/trump-ai-executive-order-regulations-military/)
[UBI Study Backed by OpenAI's Sam Altman Bolsters Support for Basic Income - Bloomberg](https://www.bloomberg.com/news/articles/2024-07-22/ubi-study-backed-by-openai-s-sam-altman-bolsters-support-for-basic-income)
"I sense I want team consciousness to be the globally dominant memeplex so much...
By minimizing the dissonant asymmetrical dukkha and tanha of wanting? 8)"
https://x.com/algekalipso/status/1818777437393830008
Technology is the unit of power
AI: Wins a silver medal at the International Math Olympiad, something that has long been considered an absolute AI win
People, desensitized by the recent AI hype: Nothing ever happens *Yawn*
Inside of you are a million dynamically, on-the-fly constructed experts forming higher-order experts
[1 Million Tiny Experts in an AI? Fine-Grained MoE Explained - YouTube](https://youtu.be/F70KlwO4wP0?si=F9oy_U7PBHz_hlLc)
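A minimal sketch of a fine-grained mixture-of-experts layer in the spirit of the video: many tiny expert MLPs, a learned router scoring them per token, and the top-k experts' outputs combined with the router's renormalized weights (all sizes here are toy values, not the paper's):

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    # Fine-grained MoE layer: many small experts, top-k routing per token.
    def __init__(self, dim: int = 64, n_experts: int = 128, k: int = 4):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts)  # scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim // 4), nn.ReLU(), nn.Linear(dim // 4, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Each token activates only k tiny experts.
        scores = self.router(x).softmax(dim=-1)
        topv, topi = scores.topk(self.k, dim=-1)
        topv = topv / topv.sum(dim=-1, keepdim=True)  # renormalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in topi[:, slot].unique():
                rows = topi[:, slot] == e  # tokens routed to expert e in this slot
                out[rows] += topv[rows, slot:slot + 1] * self.experts[int(e)](x[rows])
        return out

tokens = torch.randn(10, 64)
print(TinyMoE()(tokens).shape)  # torch.Size([10, 64])
```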
Every time I learn a new machine learning architecture or algorithm I'm like: Hmm, this can be applied to my experience in these and these ways, very interesting 🤔