Don't look up! Yet another step towards artificial general intelligence. How many steps away is the abyss? Gato, a scalable generalist agent that uses a single transformer with exactly the same weights to play Atari, follow text instructions, caption images, chat with people, control a real robot arm, and more: https://dpmd.ai/Gato
"Chain of thought reasoning allows models to decompose complex problems into intermediate steps that are solved individually...the benefits of chain of thought prompting only materialize with a sufficient number of model parameters (around 100B)." — Again, ANNs are not just getting predictably better with more training, data, and parameters. Sometimes properties emerge in a jumpy and unpredictable way. There exist sudden phase transitions. https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html
Here's what reinforcement learning can do in the real world right now https://mighty-melody-f4b.notion.site/RL-for-real-world-problems-0114c270e5d94894b3c4f227e24401db
AI risk from first principles, by Open Philanthropy's Joseph Carlsmith https://forum.effectivealtruism.org/posts/ChuABPEXmRumcJY57/video-and-transcript-of-presentation-on-existential-risk
"Light Ripples in SPACE. A bunch of us astro nerds spent 5 months battling the crappy weather to bag a timelapse of Hubble's Variable Nebula." https://www.astrobiscuit.com/post/we-bagged-ripples-in-space
"When researchers gave a genetic molecule the ability to replicate, it evolved over time into a complex network of 'hosts' and 'parasites' that both competed and cooperated to survive." https://www.quantamagazine.org/in-test-tubes-rna-molecules-evolve-into-a-tiny-ecosystem-20220505/
Analogy in Terms of Identity, Equivalence, Similarity, and Their Cryptomorphs https://www.researchgate.net/publication/333733105_Analogy_in_Terms_of_Identity_Equivalence_Similarity_and_Their_Cryptomorphs
Introduction to Linear Programming in Python https://mlabonne.github.io/blog/linearoptimization/
Covid is treatable, with Paxlovid. Too few people know that https://maximumtruth.substack.com/p/covid-is-treatable-too-few-people?s=w
Russian army "lose entire battalion" trying to cross Ukraine bridge https://www.independent.co.uk/news/world/europe/russia-putin-soldiers-siverskyi-donets-b2077244.html
Russia pushed back from Kharkiv https://www.bbc.com/news/world-europe-61378196
U.S.-led sanctions are forcing Russia to use computer chips from dishwashers and refrigerators in some military equipment https://www.washingtonpost.com/technology/2022/05/11/russia-sanctions-effect-military/
Russian soldiers seen shooting dead unarmed civilians https://www.bbc.com/news/world-europe-61425025
Many people seem to be confused about winning.
From a rational perspective, both physical survival and having offspring can be *worse* than death if you fail to keep your goals time-consistent and to prevent your values from drifting:
1. If a theist becomes an atheist, an atheist becomes a theist, a liberal becomes a conservative, or a conservative becomes a liberal, what has happened is that there are two versions of yourself at different points in time that are working against each other.
2. If you submit to female idiosyncrasies you will have more success at reproduction, but eventually they will shape the psychology of your descendants according to their whims, as they shaped the peacock's tail. You will have descendants, but you might despise them.
What's especially perfidious about this is that it doesn't feel like turning into your own enemy. From the perspective of your future self or your descendants, it can feel like a rational update, or simply natural. But goals and values are neither rational nor irrational; goals and values cannot be wrong. Rationality is concerned with achieving goals, satisfying values, and obtaining accurate beliefs about the world. But if you do not have stable goals, or if you learn something about the world that completely undermines your goals, then all your efforts might have been worse than futile; they might have been actively harmful to your new goals.
Re stability of goals: we can be very confident our descendants will view their own goals as superior to those of our time, since that has always been the case.
I'm not sure there is a solution to this problem, unless you believe our current goals are the clear apex of perfection, or accept permanent unchanging stagnation as a goal unto itself. If we freeze our goals, the implication is that we must freeze our society into a zero-sum, unchanging one. Be careful what you ask our AI overlords to do. Asking them to freeze our goals may be akin to asking the genie for a wish: you unfortunately get precisely the boot stomping on your face that you asked for. Forever.
Hi Alexander! We used to interact during good ol' G+ times. Boris Borcic
I'll not dispute having means-ends analysis provide the yardstick by which to measure intelligence; however, I feel the portrait you draw of it over-emphasizes the ideal picture of complete control. Away from that ideal, a model is what I'd call "physician's causation" -- that is, action limited to the tentative prevention and management of pathologies in another system whose normal operation lies outside the sphere of control under consideration. In this framework, intentionality is structured not by goals to which means are applied, but, so to speak, by "antigoals" to whose prevention means are applied. This does make a difference, in particular when one means to define identity and alienation by consistency of will over time.
Likewise, your speaking of "female idiosyncrasies" makes me suspect you don't heed a natural split in the exercise of distributed control: on the one hand, "male" top-down, depth-first, active implementations of authoritative plans that promote short-term (sub)goals for immediate implementation; on the other hand, "female" bottom-up, breadth-first, "passive" defense of long-term intentions, whose bias resides in choosing among means of achieving the "male command's orders" according to what are, to the latter, but indifferent side-effects.
Cheers.