But I like to think of it from an evolutionary perspective: these kinds of fundamentalist memeplexes helped create social coherence in local communities, reduced uncertainty through tons of simplification that was good enough for its purpose, and spread efficiently (and people who wanted to think for themselves, or think in relativistic, less dogmatic terms, had a harder time)
And as to why it survived evolutionary selection, I like this lens: the most successful cultures sit at the intersection of internal and external positive/zero/negative-sum dynamics and do coordinated violence effectively at scale (before it was mostly physical, now it is mostly economic/diplomatic/technological, ...)
Christianity was OP with its crusades: convert or die, cultural genocide through God's love
If you see the stability of your deeply installed rigid dogma about norms (a network of stable, interconnected assumptions) threatened, you will protest, and sex can evolutionarily be seen as core to us in the sense of reproduction. When you don't see sex as a relativistic thing (socializing/pleasure/exploration/experience/etc.) but as the fundamental process that brings new life, your mind will send dissonance pain signals when it sees otherwise and will try to prevent it
I see it like you do because my mind doesn't hold many such rigid constructs, but I try to empathize with people who have them.
Though when their existence interferes with me or with the overall environment, with my wants, goals, or values, or destabilizes society's longtermist global goals, or causes more suffering than it reduces, then I accept it only partially and work toward changing it. I wouldn't be able to live in a dogmatic, closed-minded environment that doesn't solve complex problems with the corresponding complexity they need.
Enlightenment is understanding how experience is implemented
The ability to change qualia via strengthening and expanding agency over priors
A nondual state is the collapse of the distinction between the self-model and the rest of experience, the cortical hierarchy of priors dissolved into raw sensory data
A mind unbound from social restrictions is freer to think on its own, which might be more or less effective depending on the situation
Minds resonate with patterns in the environment that are relevance-realized within their niche
I suppose neurotypicals have fewer problems with this, as they think in vaguer (less overfitted) terms instead of hypertechnical, hyperspecific Asperger's language.
I liked one description of LLMs: an autistic, dumb, non-confrontational intern in terms of problem solving, but with roughly Wikipedia-scale amounts of data.
Though another distinction between LLMs and humans, I would say, is that people learn less by massive brute-forcing; instead, lots of heuristics and metaheuristics preinstalled by evolution help with successful relevance realization, without needing a bazillion data points at the beginning to get a structured start in inference instead of swimming in chaos and getting overwhelmed by data (see how this is also partially "broken" in autists).
I see LLMs as more general than humans in terms of data, but I would say less general in terms of deeper problem solving or what makes humans human (vaguer heuristics, emotional reasoning), and they lack meta-awareness, embodiment, multimodality, and planning, since they are trained more by brute force instead of by the evolution-preinstalled heuristics and metaheuristics, or the other structures in the cortical hierarchy of priors, that humans learn with.
Going from classic neural nets to transformers with an attention mechanism was a good step, though there's still not enough structure similar to human reasoning. I think Active Inference is closer to that, but probably still too autistic.
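For reference, a minimal sketch of the scaled dot-product attention at the core of that step, in plain NumPy; names and shapes here are illustrative, not any particular library's API:

```python
# Minimal self-attention sketch: each token weights every other token's
# representation by learned relevance, which is the extra structure
# transformers added over classic feedforward nets.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relevance scores
    weights = softmax(scores, axis=-1)        # each token attends over all tokens
    return weights @ V                        # context-weighted mixture of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)
```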
Then one can think about how the fact that the brain, at the hardware level, operates as a neuronal resonance network governed by neural field theory with holistic field computing (or other implementational models of consciousness) plays a role in qualia computing, and how much it helps reduce computational costs in terms of data, energy, time, ...
Stochastic = probabilistic
Parrot = repeating
If our resonance network couples with patterns in the environment in terms of learned probabilistic heuristics, that framing seems to work for me.
One structure LLMs might also be missing is the hard boundary between self, other, and raw sensory data, which you don't yet have as a baby and have to learn: a unifying neural activity attractor that is broken on psychedelics, in meditation, in some autists, or in other natural and non-natural ways, or across Kegan stages of development. Or the different, more or less communicating functional networks we have; or that we're a self-organizing system of parts; or the real-time updating of priors according to sensory data with every interaction; or that we learn from change in dynamic qualia streams instead of static snapshots; or that we also act in an embodied way. All of that has certain computational advantages and disadvantages.
But one can ask to what extent those structures can emerge in any intelligence.
Or how automatically/universally emergent they are, versus whether we need to hardcode better types of evolutionary pressures and whether we need different hardware.
Or do we really want to mimic humans? What if we need other direct hardcoded or indirect evolutionary pressures (architecture, priors, training data, or their consequences) that are more efficient, general, compressing, context-switching, fit, etc. than what evolution recruited in the human information-processing architecture to maximize fitness for our niche in our environment?
Also, everything in biology is constructed from scratch: our organism's structure, including the brain, is fuzzy, environment-adaptable, duct-taped spaghetti that emerges from an embryo through the dynamic interaction of the genome (where the genome was constructed by stochastic natural selection under tons of fuzzy evolutionary pressures) and the existing developing structure with the environment, while the structure of LLMs is more hardcoded and given from the beginning; even though their weights are flexible to a degree while learning, we have more variable metaparameters. [x.com](https://twitter.com/TrendsCognSci/status/1697729082312794376?t=t3kE6hsINeTOILKlKmlf4g&s=19) [Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans | Lex Fridman Podcast #392 - YouTube](https://youtu.be/e8qJsk1j2zE?si=xG7eqr5uINJKRyfH)
[What are Cognitive Light Cones? (Michael Levin Interview) - YouTube](https://youtu.be/YnObwxJZpZc?si=j41SoK_Z_MrWqJe2) [Unveiling the Mind-Blowing Biotech of Regeneration: Michael Levin - YouTube](https://youtu.be/Z0TNfysTazc?si=XnQG7yL5tnilZOUd)
Yeah, I suspect understanding and empathizing will lead to change and cooperation more than further angry, polarizing division.
I want a culture of connection and compassion among diverse values.
But some values seem to be inherently incompatible. Shifting the value distribution toward tolerant, compassionate, longtermist values should, IMO, be done more smartly.
Though sometimes peaceful dialogue, or changing the external incentives that those values emerge from, isn't a strong enough pressure, so one has to use more radical (ideally nonviolent, if possible) approaches.
Though sometimes the Czech tradition might be the only way to get systemic change: throwing leaders out of the window (defenestration). Or maybe not, as it started a big war. Or maybe that is sometimes needed to stop an oppressive dictatorship or something similar.
Should we, instead of a loss function predicting the next token in transformers, optimize for short-/long-term quasi-coherence, to model metastable adaptivity in this changing world with some regularities across time, like humans do?
And fundamentally embed some of the ground truths that we live in.
The needs of a being surviving in the universe with others, instead of imitating text on the internet.
[Variational ecology and the physics of sentient systems - PubMed](https://pubmed.ncbi.nlm.nih.gov/30655223/)
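For contrast with the "needs of a being" idea above, a toy sketch (my own hypothetical stand-in, not any specific system) of the standard next-token objective this question is pushing against; any quasi-coherence or homeostatic objective would have to replace or augment this single cross-entropy term:

```python
# Hedged sketch of the standard next-token objective: cross-entropy between
# the model's predicted distribution and the actual next token, shifted by one.
# PyTorch calls are real; the tiny "model" is a stand-in, not a real architecture.
import torch
import torch.nn.functional as F

vocab_size, d_model, seq_len = 100, 32, 10
tokens = torch.randint(0, vocab_size, (seq_len,))   # a toy token sequence
embed = torch.nn.Embedding(vocab_size, d_model)
head = torch.nn.Linear(d_model, vocab_size)

hidden = embed(tokens)                               # stand-in for a transformer stack
logits = head(hidden)                                # (seq_len, vocab_size)

# Predict token t+1 from position t: shift targets left by one.
loss = F.cross_entropy(logits[:-1], tokens[1:])
print(loss.item())
```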
We are directed by hundreds of physiological, dozens of social, and a few cognitive needs that are implemented as reflexes, until we can partially transcend them and replace them with instrumental behavior that relates to our higher goals.
My favorite lens on wellbeing, where mental illnesses are different failure modes or non-functioning mechanisms in this framework, leading to not being fit for the current environment (though one can argue how good the current environment is on its own), or just a hardware-based inability to feel good: [ActInf GuestStream #008.1 ~ "Exploring the Predictive Dynamics of Happiness and Well-Being" - YouTube](https://www.youtube.com/live/m8gUev5aVFs?si=sn8yZJcFWHKRp0xX)
We will see something as true if it's in our language: injecting thoughts in our language via specialized ingroup-tuned or fully individualized finetuned AIs, Cambridge Analytica, political narrative warfare; the most successful polarizing replicators will win the culture war.
Collective Putin versus the maybe slightly unbound capitalism of China
Nonclassical logic
Action against Moloch: join a project, donate, propagate, spread, educate, recruit
Doubling compute power is getting harder over time, but it more than doubles technology anyway because AI technology is being supercharged, leading to a singularity attractor
In terms of automation in general, I see the issue as this: if you automate cognitively/physically low-effort, repetitive jobs without creating something like universal basic income in a system where people need income to survive, people can end up working harder jobs, competing with AIs on the job market, instead of AIs making our lives easier. And if art is automated as well and there is lower economic incentive to give artists money, that makes it harder to do creative jobs too. Possibly the same for research. For many jobs it's not replacement but a copilot, though usually for the harder ones. And there are lots of new jobs created related to maintaining the AI. Many useless jobs will be cut, but without UBI, people who aren't that bright have to make money somehow.
But when I look at my family, where my mom or aunt isn't at all able to learn new stuff, doesn't really know any new technology, doing the same corporate decision tree 24/7, and at how many people in such corporations are very incompetent when it comes to technology, one can't really blame them, as they didn't grow up in a rapidly changing environment like today's, and intelligence, neuroplasticity, and the will to learn are limited. Usually they can't even automate what can already be automated today, lol. I fear what's going to happen there if UBI isn't implemented for them. And I really don't want creative jobs to die without UBI either.
Automation is probably inevitable; here I'm thinking that too-fast accelerationism will cause lots of social issues, and that to maximize its advantages and minimize its disadvantages we should take our adaptive capacity into account more, and help it along as well.
If we want a theoretical utopia: if AGI can solve all tasks better than us and we become outdated, then, technically, if wellbeing is stored in the process of doing better than expected at resolving various error gradients, including finding new ones related to our old ones, it could generate an infinite, perfect series of tasks forming a whole for us (a feeling of perfect growth), a version of wireheading, which videogames kind of already exploit. [ActInf GuestStream #008.1 ~ "Exploring the Predictive Dynamics of Happiness and Well-Being" - YouTube](https://www.youtube.com/live/m8gUev5aVFs?si=JKPMcAMmVywpSvQO)
Game-theoretically stable consciousness wireheading infrastructure! Or if wellbeing is stored in symmetries in neurophenomenology, then that can be a kind of wireheading as well, which drugs tend to exploit. :D [The Future of Consciousness – Andrés Gómez Emilsson - YouTube](https://youtu.be/SeTE8vtJufA)
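One hedged way to write the "better than expected at resolving error gradients" intuition down, as an assumption roughly in the spirit of the predictive-processing literature rather than the linked talk's exact formalism: valence tracks how much faster prediction error is falling than anticipated,

```latex
% epsilon(t): prediction error; dot = time derivative.
% Valence is positive when error shrinks faster than the expected rate.
v(t) \;\propto\; \dot{\varepsilon}_{\mathrm{expected}}(t) \;-\; \dot{\varepsilon}(t)
```

So a task generator that keeps actual error reduction just slightly ahead of expectation keeps v(t) positive indefinitely, which is exactly the wireheading loop described above.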
Alternatively, instead of giving all agency to this utopian AGI, we could upgrade our biological machinery, by biological/physical upgrades or by merging with machines, such that we never become deprecated, and we could overcome our limited agency, information-processing ability (the ability to navigate various problem-solving spaces, to get from point A to point B in some problem-solving space), and hedonium (fixing suboptimality, or hacking our objective function or the substrate of wellbeing, etc.) all at once, working together with other purely biological, purely non-biological, or merged intelligent systems.
I would argue days where you do as little as possible (for me, ideally with do-nothing meditation) are crucial if one has a brain like mine that gets easily overwhelmed with everything when days are too intense, to let it recharge and clean itself as much as it needs, to prevent fatigue, burnout, motivation loss, chaos, etc.
A liberal democracy of educated, not insanely polarized people, on a regenerative economy, with the smallest possible conditions for the emergence and persistence of sociopathic manipulative mafias
— is it realistic and fast enough globally before we crash down the shitter into something like overly amplified dystopia, chaos, crises, catastrophes, risks, violence, ...?
Food scarcity, food security
Breakdown of essential infrastructure, supply chains
Nuclear winter
Global risks and crises watch with a GitHub backend
At Effective Altruism Global Berlin I had a great conversation with riesgoscatastroficosglobales.com about the landscape of existential risks and crises and concrete practical solutions for mitigating and preventing them, solutions for causes or symptoms, what to do when things turn out catastrophically, and communicating them to relevant policymakers, politicians, wealthy impactful people (generating economic incentives), or the general public.
Common ground between academia, policymakers, stakeholders, and the general public (automating the mapping out of their landscape in their niche through scraping)
[The Policy Playbook | Center for Security and Emerging Technology](https://cset.georgetown.edu/publication/the-policy-playbook/)
EA cause prioritization groups and charities (GiveWell)
Association causality graph between risks and interventions
[Convergence Analysis](https://www.convergenceanalysis.org/)
Epoch
Cserc existential risks
[Observatorio de Riesgos Catastróficos Globales (ORCG)](https://riesgoscatastroficosglobales.com/)
Swiss existential risk initiative
List of AI risks https://arxiv.org/abs/2306.12001
UN
PauseAI
StopAI
Moratorium
EA global risks
Focusing on low-income, less educated countries
Preventing panic, but also preventing inertia
Resilient foods (Seaweed)
Breakdown of trade
[Observatorio de Riesgos Catastróficos Globales (ORCG)](https://riesgoscatastroficosglobales.com/) collaborators, ALLFED (protein)
UNESCO
ITN + nth order effects framework + other EA additions
Rename qualia research to burny wiki
Democratize AI paper https://arxiv.org/abs/2303.12642
UN Secretary-General on AI risk [UN Secretary General embraces calls for a new UN agency on AI in the face of ‘potentially catastrophic and existential risks’ | CNN Business](https://www.google.com/amp/s/amp.cnn.com/cnn/2023/07/18/tech/un-ai-agency/index.html)
https://www.gcrpolicy.com/home
[Home | High Impact Professionals](https://www.highimpactprofessionals.org/)
[Mindmap with overview of EA organisations via tinyurl.com/eamindmap (and many other lists of orgs) — EA Forum](https://forum.effectivealtruism.org/posts/Avi9XgSikH5BdHzKu/mindmap-with-overview-of-ea-organisations-via-tinyurl-com)
https://media.discordapp.net/attachments/992217349971263539/1150051438467239947/IMG_20230909_145313.jpg
All the combinatorial intersections
Centre for the Study of Existential Risk at Cambridge
Sort by EA effectiveness
Autism is high precision assigned to raw sensory data
Meditation is about learning to rest in a semi-hibernation state
Psychedelics are multi-use: therapy, learning, intellectual exploration, experiential exploration, finding or creating meaning, rest, freeing dissolution, getting into better local minima
The more shared prior context a robot has
Active Inference is also an embodied robot architecture
Verses is putting tons of compute into Active Inference as an AI architecture with a knowledge graph [Autopoietic Enactivism and the Free Energy Principle - Prof. Friston, Prof Buckley, Dr. Ramstead - YouTube](https://youtu.be/bL00-jtRrMA?si=0w57Wk0HmJ-QbxET)
Self-reflexive observer definition of consciousness [Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans | Lex Fridman Podcast #392 - YouTube](https://youtu.be/e8qJsk1j2zE?si=5V6Emt7a86kmA59C)
Is cessation / neither-perception-nor-non-perception / anesthesia a state with conscious experience but disabled memory saving?
Asymptotic gravity
When I look at Czech elections, I'm not sure education is a strong enough force for a democracy of the educated, when populists are efficient at creating polarizing ingroup narratives that hate the outgroup and are disconnected from the actual problems we face globally.
Can people even think in a more nonlinear, complex-systems, holistic, technical, rational way, or is most human hardware optimized to think in linear, narrow, concrete yet vague narratives that are not enough to solve big interconnected problems? And this culture warfare is accelerated by social media algorithms creating echo chambers, and by Cambridge Analytica-like psychological profile modelling and manipulation by politicians with money and recruited data science agencies. [What's Our Problem? - YouTube](https://youtu.be/KRl5301kQoI)
Operations management
Bayesian causal graphs
[The 25 researchers who have published the largest number of academic articles on existential risk](https://existentialcrunch.substack.com/p/the-top-25-existential-risk-researchers)
Digital gaia alignment
USA, EU, China AI regulations
Post-AGI governance as the most neglected
Simon Institute for Longterm Governance
Gov.ai
AI governance interventions https://media.discordapp.net/attachments/992217349971263539/1150525397004460062/IMG_20230910_153151.jpg
The stickiest narratives are those that efficiently simplify everything and reduce uncertainty about goals, and that support values, ideology, and culture, or that overall relate to, support, and reinforce the stability of fundamental Bayesian priors with the deepest intellectual-symbolic or intuitive-fuzzy objective functions and homeostats; our moral circle size influences what we want a ton.
Is the currently most effective intervention to create some Cambridge Analytica-like social engineering with AI-designed, ingroup/individually custom-tailored sticky narratives that are left-leaning, longtermist, alarming about existential risks/crises, and come with solutions, i.e., to fight with the same weapons that populists and right-wingers are using when they spread polarizing insults of outgroups disconnected from real problems? But do we really want to feed the Moloch, or should we instead try to create cooperation-incentivizing evolutionary forces: a culture of connection, compassion (training theory of mind), wellbeing, steelmanning perspectives and synthesizing them via double-cruxed common shared ground at a higher order of complexity in dialogue, meaning, media and social media aligned with that, rationalist systemic holistic interconnected nonlinear complex thinking, open-mindedness, epistemic humility, unified languages, togetherness in this world full of problems we face together, a shared good-future utopian globalist vision, cooperative regulations (UN, etc.), education, etc.? Or are these evolutionary forces just not strong enough, so that we have to use the weapons that the destabilizing, polarizing, Moloch-inducing, competing, short-termist, non-altruistic, sociopathic, self-centered, uneducated agents are using, as the only way to beat them? Just as with dynamics like China eating Tibet, Christianity spreading by crusades, Genghis Khan's empire spreading everywhere, ...: the most effective cultures and thoughts, agents made of subagents, are those that effectively coordinate violence at scale in terms of physical, economic, status, diplomatic, technological, etc. power, if not restricted strongly enough by altruistic internal and external incentives for cooperation.
[Center for AI Safety (CAIS)](https://www.safe.ai/)
[Centre for the Governance of AI | Home](https://www.governance.ai/)
https://www.conjecture.dev/coem/
Future of Life Institute
Horizon institute
Center for the study of emerging technologies
Existential risk consortium
Impact academy
Dynamically updated existential risk organizations map
Mapping organizations that transfer knowledge between research groups, policymakers, people, and politicians
*Has a degree* is a positive signal for filtering signal from noise in recruitment
[Metaheuristic - Wikipedia](https://en.wikipedia.org/wiki/Metaheuristic?wprov=sfla1)
[Metaheuristics - Scholarpedia](http://www.scholarpedia.org/article/Metaheuristics)
[Global AI Law and Policy Tracker](https://iapp.org/resources/article/global-ai-legislation-tracker/)
[x.com](https://twitter.com/InferenceActive/status/1701663186523677032?t=ZJMDOcySvWJEzo1lGLzcCw&s=19)
Rational Animations on Bayes' rule
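For reference, the rule itself: the posterior on a hypothesis H given evidence E is the prior reweighted by the likelihood,

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}
```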
Active Inference in modelling conflicts [PaperStream #006.0 ~ Active Inference in Modeling Conflict - YouTube](https://www.youtube.com/live/HAujw2_ClCM?si=xktQdX4QxgUzBGqr)
Existence is a useful nonexistent construct
Free will in summary