"Můžeme zkopírovat/teleportovat jednu mysl? To hodně záleží na jaký úrovni abstrakce a jiných implementačních detailech je ve fyzice individuální prožitek implementován.
Potřebuješ zkopírovat high level funkcionální klasický algoritmy a data, nebo skoro všechny low level kvantový informace? Ale jaký algorithmy a informace?
Musí ty informace být jen z mozku, nebo z celkovyho nervovýho systému, celkovyho organismu, nebo i nějaký informace z prostředí jsou důležitý?
Můžeš mít dva v podstatě identický fyzikální systémy tvořící dvě individuální mysli? Co se týče kvantový informace, tam klonování znemožňuje quantum no cloning theorem, ale neznemožňuje to u kvantový teleportace, nebo u klonování klasických informací, to děláme v počítačích denně.
Končetiny bývají amputovány, aniž by se z lidí staly dramaticky odlišné mysli. Platí to podobně pro mozek? Existuje nějaká hranice jako: Tato podmnožina informací o systému je dostatečná informace determinující tuto individuální mysl?
Co by se stalo kdybychom místo toho zkusili pomalý nahrazování částí nebo slučování jednoho systému s tým druhým systémem?
Platí vůbec reduktivní fyzikalismus a uzavřený individualismus ve filozofii mysli?
Reálně nikdo neví, všechno jsou jenom teorie, co potvrdit jinak než empiricky pravděpodobně nejde."
"Can we copy/teleport mind? That depends on what level of abstraction and other implementation details the individual mind is implemented in physics.
Do you need to copy high-level functional classical algorithms and data, or almost all of low-level quantum information in quantum field theory standard model? But what algorithms and which information?
Does the information have to be just from the brain, or from the overall nervous system, the overall organism, or is some information from the environment important as well?
Can you have two basically identical physical systems creating two individual minds? As for quantum information, there cloning is prevented by the quantum no cloning theorem, but it doesn't prevent quantum teleportation, or for cloning classical information, we do that in computers every day.
Limbs get amputated without people becoming dramatically different minds. Does it similarly hold for the brain? Is there some boundary like: This subset of system's information is enough information for determining this individual mind?
What would happen if we instead tried a slow replacement of parts or merging the first system with the second one?
Is reductive physicalism and closed individualism even true in the philosophy of the mind?
Realistically, no one knows, they are all just theories, which probably can't be confirmed in any other way except empirically."
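Side note (mine, not part of the quoted text above): the no-cloning theorem mentioned in the quote follows from a short linearity argument, sketched here.

```latex
% A universal cloner U would have to satisfy U(|\psi\rangle|0\rangle) = |\psi\rangle|\psi\rangle for every |\psi\rangle.
\begin{align*}
U(|0\rangle|0\rangle) &= |0\rangle|0\rangle, \qquad U(|1\rangle|0\rangle) = |1\rangle|1\rangle \\
U\!\left(\tfrac{1}{\sqrt{2}}(|0\rangle+|1\rangle)|0\rangle\right)
  &= \tfrac{1}{\sqrt{2}}\left(|0\rangle|0\rangle + |1\rangle|1\rangle\right) \quad \text{(by linearity)} \\
\text{but cloning would require} \quad
  &\tfrac{1}{2}(|0\rangle+|1\rangle)(|0\rangle+|1\rangle) \;\neq\; \tfrac{1}{\sqrt{2}}(|0\rangle|0\rangle + |1\rangle|1\rangle),
\end{align*}
% so no single unitary can clone arbitrary unknown quantum states; copying classical bits and quantum teleportation are unaffected.
```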
"Functionalism doesn't work as its too causally disconnected from physical substrate?
For example i think i get the argument against functionalism on a high level: the arbitraryness, the gigantic space of possible functional solutions. But i dont think that fundamentally invalidates the whole space of possible solutions there, as i think that argument can be applied to the whole models of consciousness field, with its very limited empirical tools to verify these models most of which havent been even tried yet.
I don't think it has to be disconnected in that way
you can have an algorithm running which is just part of the physics on a more abstract level
identified by something like, or similar to, the mean-field theory approach [Mean-field theory - Wikipedia](https://en.wikipedia.org/wiki/Mean-field_theory)
The idea is that you have the two levels of abstraction fundamentally coupled together, as the more abstract level is just the fundamental dynamics described at a more abstract level
So you get different abstraction levels of analysis instead
There's no causal coupling as it's the same fundamental level, just different parts of it, different patterns in it
For example: emergent patterns in cellular automata are still part of the fundamental cellular automaton (see the toy sketch after this quote)
Electromagnetic field theory of consciousness works?
I also think that excluding all the other quantum fields for forces and matter, which are defined in interrelated ways via various equations in the Standard Model's quantum field theory (plus unsolved gravity), might be an issue, even though the electromagnetic field constitutes the majority of interactions in the universe at macroscopic scales
And on a lot of ontological and metaphysical questions in the philosophy of mind, since there's not really a way to verify them empirically, I'm often agnostic, or I like various solutions in parallel, or it depends on the context, etc.
"
[OpenWorm](https://openworm.org/)
We have fully read a worm's neural structure, down to the single-neuron level, and can now run it in emulators
[Scientists Upload Worm's Mind Into a Lego Robot - YouTube](https://www.youtube.com/watch?v=2_i1NKPzbjM)
By age eight, John von Neumann was familiar with differential and integral calculus, and by twelve he had read Borel's La Théorie des Fonctions. [John von Neumann - Wikipedia](https://en.wikipedia.org/wiki/John_von_Neumann)
Maybe once people someday have the option to edit their mental capacity and knowledge almost arbitrarily at the push of a button, all the "I am nothing" complexes will disappear!
https://www.researchgate.net/publication/343758238_Book_Review_Alice_and_Bob_meet_Banach_The_interface_of_asymptotic_geometric_analysis_and_quantum_information_theory
[Alice and Bob Meet Banach: The Interface of Asymptotic Geometric Analysis and Quantum Information Theory](https://bookstore.ams.org/surv-223)
"The quest to build a quantum computer is arguably one of the major scientific and technological challenges of the twenty-first century, and quantum information theory (QIT) provides the mathematical framework for that quest. Over the last dozen or so years, it has become clear that quantum information theory is closely linked to geometric functional analysis (Banach space theory, operator spaces, high-dimensional probability), a field also known as asymptotic geometric analysis (AGA). In a nutshell, asymptotic geometric analysis investigates quantitative properties of convex sets, or other geometric structures, and their approximate symmetries as the dimension becomes large. This makes it especially relevant to quantum theory, where systems consisting of just a few particles naturally lead to models whose dimension is in the thousands, or even in the billions.
Alice and Bob Meet Banach is aimed at multiple audiences connected through their interest in the interface of QIT and AGA: at quantum information researchers who want to learn AGA or apply its tools; at mathematicians interested in learning QIT, or at least the part of QIT that is relevant to functional analysis/convex geometry/random matrix theory and related areas; and at beginning researchers in either field. Moreover, this user-friendly book contains numerous tables and explicit estimates, with reasonable constants when possible, which make it a useful reference even for established mathematicians generally familiar with the subject."
mathematics vs physics notation https://imgur.com/3Lb4coU
Correspondence between statistical physics and Bayesian statistics
https://arxiv.org/abs/1706.01428
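A rough sketch of the dictionary this correspondence usually refers to (my own summary, not taken from the linked paper):

```latex
% Statistical physics:  p(x) = e^{-\beta E(x)} / Z,  with  Z = \sum_x e^{-\beta E(x)}.
% Bayesian statistics:  p(\theta \mid D) = p(D \mid \theta)\, p(\theta) / p(D).
% Setting \beta = 1 and identifying the energy with the negative log joint,
%   E(\theta) = -\log p(D \mid \theta) - \log p(\theta),
% the posterior is a Boltzmann distribution, the partition function plays the role of the
% evidence p(D), and the free energy -\log Z corresponds to the negative log evidence:
p(\theta \mid D) = \frac{e^{-E(\theta)}}{Z}, \qquad Z = \int e^{-E(\theta)}\, d\theta = p(D)
```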
[x.com](https://twitter.com/burny_tech/status/1783552920664908038)
I think that we should go off and figure out how to give everybody on Earth a great education, cure every disease, have great entertainment, go explore space, and discover new physics … and create more abundance.
[Let's build GPT: from scratch, in code, spelled out. - YouTube](https://www.youtube.com/watch?v=kCc8FmEb1nY)
[Why Do Neural Networks Love the Softmax? - YouTube](https://www.youtube.com/watch?v=p-6wUOXaVqs)
Map of breakthroughs in neural network algorithms and architectures in machine learning
[x.com](https://twitter.com/burny_tech/status/1783567241847435661)
AlexNet (2012) Deepness + GPUs win.
VGG16 (2014) Add more layers.
GoogLeNet (2014) Scale invariance/efficiency.
Seq2Seq Models (2014) NNs can translate.
ResNet models (2015) Learn the residuals.
DenseNet (2017) More direct/skip connections
Transformers (2017) Attention is all you need.
EfficientNet (2019) How to scale up a network
GPT-2 (2019) Scaling gives zero shot learning
Vision Transformers (ViT) (2020) Transformers interpret images.
DALL-E (2021) Creating images from text.
CLIP (2021) Understanding images in context.
Switch Transformers (2021) Scaling with mixture-of-experts.
GPT-4 (2023) Multimodal understanding, more parameters.
LaMDA (2022) Richer dialogues, more fluid.
Diffusion Models (2022) Gradual image generation refinement.
GNNs for Complex Systems (2023) Modeling intricate networks.
Gen-2 Video Models (2023) High-fidelity video synthesis.
Robocat (2023) Adaptive, general-purpose robotics.
GraphCast (2023) Advanced graph-based forecasting.
NeuroCode (20xx) Neural networks that write their own code.
SentienceLab (20xx) Modeling artificial consciousness.
MindMeld (20xx) Direct brain-computer interfaces.
HoloVerse (20xx) Fully immersive AI-generated worlds.
UniversalNet (20xx) A unified architecture to rule them all.
QuantumMind (20xx) Quantum computing meets deep learning.
TESLA (20xx) Thought Embedding, Simulation and Language Architecture - AGI is born.
[What is Jacobian? | The right way of thinking derivatives and integrals - YouTube](https://www.youtube.com/watch?v=wCZ1VEmVjVo)
[What is the difference between negative log likelihood and cross entropy? (in neural networks) - YouTube](https://www.youtube.com/watch?v=ziq967YrSsc)
[Why do we need Cross Entropy Loss? (Visualized) - YouTube](https://www.youtube.com/watch?v=gIx974WtVb4&pp=ygUSY3Jvc3MgZW50cm9weSBsb3Nz)
[Understanding Binary Cross-Entropy / Log Loss in 5 minutes: a visual explanation - YouTube](https://www.youtube.com/watch?v=DPSXVJF5jIs&pp=ygUSY3Jvc3MgZW50cm9weSBsb3Nz)
[Maximum Likelihood Estimate in Machine Learning - YouTube](https://www.youtube.com/watch?v=2PfGO753UHk)
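A tiny numerical check of the point those videos make (my own sketch, assuming a softmax classifier with a one-hot target): cross-entropy loss reduces to the negative log-likelihood of the correct class.

```python
import numpy as np

logits = np.array([2.0, 0.5, -1.0])
target = 0  # index of the correct class

# Softmax turns logits into a probability distribution (shifted by max for stability).
probs = np.exp(logits - logits.max())
probs /= probs.sum()

nll = -np.log(probs[target])                      # negative log-likelihood of the correct class
one_hot = np.eye(len(logits))[target]
cross_entropy = -(one_hot * np.log(probs)).sum()  # cross-entropy against the one-hot target

assert np.isclose(nll, cross_entropy)
print(nll, cross_entropy)
```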
I'd bet with high probability that the carrier of mind is some algorithm from neuroscience, from {global neuronal workspace theory, integrated information theory, recurrent processing theory, predictive processing theory, neurorepresentationalism, dendritic integration theory}, or a combination of them
(An integrative, multiscale view on neural theories of consciousness https://www.cell.com/neuron/fulltext/S0896-6273(24)00088-6 )
or, at the low level, the electromagnetic field
[Electromagnetic theories of consciousness - Wikipedia](https://en.wikipedia.org/wiki/Electromagnetic_theories_of_consciousness)
alternatively from {Attention schema theory, Dynamic core hypothesis, Damasio's theory of consciousness, Higher-order theories of consciousness, Holonomic brain theory, Multiple drafts model, Orchestrated objective reduction}
(they all have a wiki page)
[Models of consciousness - Wikipedia](https://en.wikipedia.org/wiki/Models_of_consciousness)
[Models of consciousness - Scholarpedia](http://www.scholarpedia.org/article/Models_of_consciousness)
but now the question is just... which model is empirically correct, or which combination, or whether it's an entirely different structure we haven't found yet
there have been a few attempts at empirical measurement of some of them [What a Contest of Consciousness Theories Really Proved | Quanta Magazine](https://www.quantamagazine.org/what-a-contest-of-consciousness-theories-really-proved-20230824/)
but so far the results aren't much
if reductive physicalism and closed individualism even hold for the mind 😄
[A new theory of Open Individualism – Opentheory.net](https://opentheory.net/2018/09/a-new-theory-of-open-individualism/)
I'd bet with high probability that they do
[Ship of Theseus - Wikipedia](https://en.wikipedia.org/wiki/Ship_of_Theseus)
[Evolving New Foundation Models: Unleashing the Power of Automating Model Development](https://sakana.ai/evolutionary-model-merge/)
"Intelligence is the ability for an information processing system to adapt to its environment with insuficient knowledge and resources."
Insuficient knowledge and resources means the system works with respect to the following restrictions:
Finite. The system has a constant information processing capacity,
Real-time. All tasks have time requirements attached, and
Open. No constraints are put on the knowledge and tasks that the system can accept, as long as representable in the interface language.
https://cis.temple.edu/~pwang/Publication/intelligence.pdf
Different definitions and predefinitions and models are useful for different purposes in different contexts IMO. It's not that all are equally valid in all contexts. There are different motivations behind each in different ways useful definition of intelligence.
We are forcing sand to think, summoning a machine God out of it
[Set Theory and Foundations of Mathematics](https://settheory.net/)
Let's synthesize the best of both of these https://imgur.com/o2CZHC7
knot theory [The Insane Math Of Knot Theory - YouTube](https://www.youtube.com/watch?v=8DBhTXM_Br4&pp=ygUXdG9wb2xvZ2ljYWwga25vdCB0aGVvcnk%3D)
[Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges - YouTube](https://www.youtube.com/live/GBkdfpNKzrc?si=8IusM-7g4uopNtwu)
There is now a cow-demic. A large number of cow herds in the US appear to be infected with H5N1. [x.com](https://twitter.com/Plinz/status/1783580417188327683)
Your startup is an OpenAI wrapper
OpenAI is a Nvidia wrapper
Nvidia is a TSMC wrapper
TSMC is an ASML wrapper
ASML is a Zeiss & Trumpf wrapper
Zeiss is a glass wrapper
Glass is a sand wrapper
Sand is an erosion wrapper
Erosion is an entropy wrapper
Conclusion: Invest in entropy.
[Endoreversible thermodynamics - Wikipedia](https://en.wikipedia.org/wiki/Endoreversible_thermodynamics)
Agent is controller of future states
[x.com](https://twitter.com/tsarnick/status/1783653775233921136?t=1Q0405nRVdiajFmP0eDCPQ&s=19)
[How Selective Forgetting Can Help AI Learn Better | Quanta Magazine](https://www.quantamagazine.org/how-selective-forgetting-can-help-ai-learn-better-20240228/)
[The Story of Physics ft. Edward Witten - YouTube](https://www.youtube.com/watch?v=UW_M7hotSlk&t=2820s)
[Cognitive architecture - Wikipedia](https://en.wikipedia.org/wiki/Cognitive_architecture)
[Principles of Synthetic Intelligence PSI: An Architecture of Motivated Cognition - Joscha Bach - Knihy Google](https://books.google.cz/books/about/Principles_of_Synthetic_Intelligence_PSI.html?id=YwRKxvnIN4sC&source=kp_book_description&redir_esc=y)
[Feature learning - Wikipedia](https://en.wikipedia.org/wiki/Feature_learning)
https://arxiv.org/abs/2310.02304
[Claude 3 Opus has stunned AI researchers with its intellect and 'self-awareness' — does this mean it can think for itself? | Live Science](https://www.livescience.com/technology/artificial-intelligence/anthropic-claude-3-opus-stunned-ai-researchers-self-awareness-does-this-mean-it-can-think-for-itself?utm_source=facebook.com&utm_medium=social&utm_campaign=socialflow&utm_content=livescience&fbclid=IwZXh0bgNhZW0CMTEAAR2A-pNQ1wRwAM89It9EvSloYON2oOfO0epiqDx6_AJxrSXOQk-0ONsSnR8_aem_AR8MPDlrb4YHCdwrn0v_2pA8ALni3u7h_IageeiJ4neUFyQaozyRcf7EOIjxZ8PT6bwXgTNO1kALO0tLei-LhIut)
[Overview of LLM Control Theory (April 2024) - YouTube](https://www.youtube.com/watch?v=tlAhCuekN6Q)
https://arxiv.org/abs/2310.04444
Cognitive architectures
Symbolic Architectures: These architectures rely on explicit symbolic representations and rule-based processing. Examples include ACT-R, Soar, EPIC, ICARUS, PRODIGY, CAPS, CLARION, NARS, CHREST, and LIDA.
Emergent Architectures: These architectures focus on bottom-up, self-organizing, and neural network-inspired approaches. Examples include Adaptive Resonance Theory (ART), Hierarchical Temporal Memory (HTM), Leabra, Integrated Biologically-based Cognitive Architecture (IBCA), Synthesis of ACT-R and Leabra (SAL), Shruti, Recommendation Architecture, and Semantic Pointer Architecture Unified Network (SPAUN).
Hybrid Architectures: These architectures combine aspects of both symbolic and emergent approaches. Examples include DUAL, CLIP, OpenCog Prime (CogPrime), PolyScheme, Sigma, Novamente, 4CAPS, CHARISMA, OSCAR, Disciple, Companions, and FORR.
Developmental Architectures: These architectures focus on modeling cognitive development and learning over time. Examples include DIARC, ICARUS (which is also listed under Symbolic Architectures), MicroPsi/MLECOG, MACSi, BECCA, and NACS.
Particles in a quantum superposition are irreducible ontological entities
“A superposition of quantum states is, to put it in human terms, an irreducible ontological entity, or, in plain Czech, an un-simplifiable being-ish thingamajig.”
- Pavel Cejnar, Kvantová teorie II, Matfyz
https://arxiv.org/abs/2404.15676
https://onlinelibrary.wiley.com/doi/10.1111/tops.12717
https://arxiv.org/abs/2402.17762
https://arxiv.org/abs/2404.16014
[x.com](https://twitter.com/sen_r/status/1783497788120248431?t=nZeA3AGnb8Dgv6nxHMzm0A&s=19)
[x.com](https://twitter.com/Francis_YAO_/status/1783446286479286700?t=5U6RbQoNPA7SdBTJxKwqNQ&s=19)
https://arxiv.org/abs/2404.15574
"Everything can be formulated as an optimization problem."
- Yann LeCun
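A toy illustration of the quote (my own sketch, not LeCun's example): even plain line fitting can be phrased as "minimize a squared-error loss by following its gradient".

```python
import numpy as np

# Fit y ~ w*x + b by gradient descent on the mean squared error.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 100)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * x + b - y             # residuals
    w -= lr * 2 * np.mean(err * x)  # dL/dw for L = mean(err**2)
    b -= lr * 2 * np.mean(err)      # dL/db
print(round(w, 2), round(b, 2))     # close to the true 3.0 and 0.5
```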
Here are a couple I want to add that I think will be insightful even for active inference
Here’s a taste of my massive archive of information
[David Wolpert: What Can We Really Know About That Which We Cannot Even Imagine? No Free Lunch - YouTube](https://youtu.be/CCmeah2_I_s?si=nMCgfC7pBe0aAJIX)
[Engineering Explained: Bayesian Mechanics - YouTube](https://youtu.be/3VpptCcInjU?si=qbj3uLe7_JmLnlGE)
[Building Blocks of Memory in the Brain - YouTube](https://youtu.be/X5trRLX7PQY?si=aqt9qgxoN05HYIb5)
[Cognitive Gadgets – New Thinking from Old Parts- Celia Heyes - YouTube](https://youtu.be/fTxK2RDotrg?si=Q0myr-vKDiAamDlv)
[How Your Brain Organizes Information - YouTube](https://youtu.be/9qOaII_PzGY?si=O8SK-mqke3xwwLLJ)
[#51 FRANCOIS CHOLLET - Intelligence and Generalisation - YouTube](https://youtu.be/J0p_thJJnoo?si=j_p6yEex2q1ZtNeq)
[Roy Baumeister: Free Will, The Self, Ego, Will Power - YouTube](https://youtu.be/aXoK-C2c2AQ?si=ITuo_4txnvLka_E1)
[Anand Vaidya: Consciousness, Truth, Belief, Time - YouTube](https://www.youtube.com/watch?v=0BPLcuHnS_A&t=0s)
[Edward Gibson: Human Language, Psycholinguistics, Syntax, Grammar & LLMs | Lex Fridman Podcast #426 - YouTube](https://youtu.be/F3Jd9GI6XqE?si=LByUQZSo-kDfW8uR)
["Math Does Not Represent" by Erik Curiel - YouTube](https://www.youtube.com/live/aA_T20HAzyY?si=ntJ1da3aq93OPeti)
[Graham Priest: Logic, Nothingness, Paradoxes, Truth, Eastern Philosophy, Metaphysics - YouTube](https://youtu.be/ZGOMmGK4eeY?si=pea1ni8nfoj-_j47)
[Dr. JEFF BECK - The probability approach to AI - YouTube](https://youtu.be/c4praCiy9qU?si=4DvkQ4b475tpEbtw)
[Dendrites: Why Biological Neurons Are Deep Neural Networks - YouTube](https://youtu.be/hmtQPrH-gC4?si=5SX9Ki10-ur7xxum)
[Building a GENERAL AI agent with reinforcement learning - YouTube](https://youtu.be/s3C0sEwixkQ?si=UGQ6r_di2RGZ1i1L)
[AI AGENCY ISN'T HERE YET... (Dr. Philip Ball) - YouTube](https://youtu.be/n6nxUiqiz9I?si=kOHlFUKe1pU6T-yO)
[THE GHOST IN THE MACHINE - YouTube](https://youtu.be/axuGfh4UR9Q?si=k_bwMui-y0gijWcM&t=1213)
[DANIEL DENNETT - Can we trust AI? - YouTube](https://youtu.be/axJtywd9Tbo?si=IZWQW_kSx4vIOxsy)
These are, in my opinion, the best interviews with him and lectures
[Synthetic Sentience - Can Artificial Intelligence become conscious? by Joscha Bach - YouTube](https://www.youtube.com/watch?v=FZxm810ruz0)
[Joscha Bach Λ Karl Friston: Ai, Death, Self, God, Consciousness - YouTube](https://www.youtube.com/watch?v=CcQMYNi9a2w)
[Cyber Animism by Joscha Bach - YouTube](https://www.youtube.com/watch?v=YZl4zom3q2g)
[Joscha Bach Λ Ben Goertzel: Conscious Ai, LLMs, AGI - YouTube](https://www.youtube.com/watch?v=xw7omaQ8SgA)
[Michael Levin Λ Joscha Bach: Collective Intelligence - YouTube](https://www.youtube.com/watch?v=kgMFnfB5E_A)
I think he is overly confident that his assumptions, definitions, and the models derived from them are correct
but it's great how engineering-focused they are 😄
I'm curious what results will come out of his AGI lab [Liquid AI: A New Generation of Foundation Models from First Principles](https://www.liquid.ai/)
from his interviews I've gathered that his approach to AGI is closer to biology, closer to the paradigm started by Alan Turing [Turing pattern - Wikipedia](https://en.wikipedia.org/wiki/Turing_pattern)
closer to self-organization, to neural cellular automata [The Future of AI is Self-Organizing and Self-Assembling (w/ Prof. Sebastian Risi) - YouTube](https://www.youtube.com/watch?v=_7xpGve9QEE)
I'll happily see this ❤️ https://fxtwitter.com/tsarnick/status/1783042172335563025
https://arxiv.org/abs/2310.17467
[Subtractive Mixture Models via Squaring: Representation and Learning | OpenReview](https://openreview.net/forum?id=xIHi5nxu9P)
[x.com](https://twitter.com/loreloc_/status/1783532892447994057?t=0CKEUSu7ep1LYvA-aAkrXQ&s=19)
Causes of aging
[x.com](https://twitter.com/MarcosArrut/status/1783839110173737088?t=NHvr7LTdJbkP2_pXH0Obkg&s=19)
https://www.sciencedirect.com/science/article/pii/S0006322324011405?via%3Dihub
[Biological neuron model - Wikipedia](https://en.wikipedia.org/wiki/Biological_neuron_model?wprov=sfla1)
[Kuramoto model - Wikipedia](https://en.wikipedia.org/wiki/Kuramoto_model?wprov=sfla1)
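A minimal simulation sketch of the Kuramoto model linked above (my own illustration): N coupled phase oscillators, dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j - θ_i), which synchronize once the coupling K is large enough.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 100, 2.0, 0.01, 2000
omega = rng.normal(0.0, 0.5, N)        # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)   # initial phases

def order_parameter(theta):
    """|r| near 1 means the oscillators' phases are synchronized."""
    return np.abs(np.exp(1j * theta).mean())

print("before:", round(order_parameter(theta), 2))
for _ in range(steps):
    # coupling[i] = sum_j sin(theta_j - theta_i)
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + (K / N) * coupling)
print("after:", round(order_parameter(theta), 2))  # typically much closer to 1
```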
The only moral action is short-term maximization of entropy for long-term minimization of entropy
But not universally, keeping certain pockets of negentropy.
https://imgur.com/5sYRlKe
Explaining reality from first principles: our history, present, all possible futures, and how to steer towards them
[Tim Palmer: Non-Locality, Universe on a Fractal, Quantum Mechanics - YouTube](https://www.youtube.com/watch?v=vlklA6jsS8A)
https://www.sciencedirect.com/science/article/abs/pii/S1568163724001284?dgcid=author
[Friendly Superintelligent AI: All You Need Is Love | SpringerLink](https://link.springer.com/chapter/10.1007/978-3-319-96448-5_31?fbclid=IwZXh0bgNhZW0CMTEAAR3Ly3zabuEo2nemlex9qQxjTvrszNhqMlVtusaDTK_gQfmdMrnwE7mjCi4_aem_AR-U973tMwU2TKAkF8V0sgUChgfGcdvN5lIJPsgTbkXW16K0pkBCiceINv_2HPGOPHA0tmEjelUdWLpHu1CmaO67#Sec3)
https://arxiv.org/abs/2404.15758
[Warning numbers: The whole world is arming at a record pace. The biggest increase is in neither Russia nor Ukraine - Aktuálně.cz](https://zpravy.aktualne.cz/zahranici/sipri-2023/r~640a682200a211ef801c0cc47ab5f122/)
We should create a new university curriculum that teaches all of math applied to AI systems, combined with engineering them
Feed me all equations describing artificial, biological, collective etc. intelligence
Energy based models
[Ontology Of Psychiatric Conditions: Taxometrics](https://www.astralcodexten.com/p/ontology-of-psychiatric-conditions)
Some mental health disorders are a spectrum, some are discrete: some are just the extreme ends of continuously distributed traits, some are more like manifestations of some underlying thing you either have or don't have
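A toy contrast of the two cases (my own illustration, not from the post): in the dimensional case a diagnosis is just a cutoff on one continuous trait, while in the taxonic case scores come from a latent class you either have or don't.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# 'Dimensional' trait: one continuous distribution; a diagnosis is a cutoff on its tail.
dimensional = rng.normal(0, 1, n)

# 'Taxonic' trait: a latent class you either have or don't; scores are a mixture of two groups.
has_taxon = rng.random(n) < 0.1
taxonic = np.where(has_taxon, rng.normal(3, 1, n), rng.normal(0, 1, n))

threshold = 2.0
print("diagnosed, dimensional:", (dimensional > threshold).mean())
print("diagnosed, taxonic:    ", (taxonic > threshold).mean())
```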
We shouldn't be underestimating China in the AI race, they have more patents
[x.com](https://twitter.com/pmddomingos/status/1783970487271645216?t=kjD9crg_VctRHSDtD59GTQ&s=19)
https://www.extremetech.com/computing/intel-completes-assembly-of-worlds-first-high-na-lithography-machine?utm_campaign=trueAnthem%3A+Trending+Content&utm_medium=trueAnthem&utm_source=facebook&fbclid=IwZXh0bgNhZW0CMTEAAR0uiSz2cfRz7Yb7ulCT1naGIxxZrxek2RlCmiKwoGLCtaP-45T6oDYuOug_aem_ARiOMVv68Ack43C9OC_dZl9U7wFBZQZn-aHgjZwHlg9jWIXL-U2p8H3jw-hdeCy1uoWuWMRs5qTp93mvGNjkBnp_