Links for 2022-11-01
A human-like reasoning architecture for language models to solve math word problems. The technique achieves stronger reasoning than comparable language models, outperforming models with several tens of times more parameters. https://arxiv.org/abs/2210.16257
“...we show that transformers can improve themselves autonomously through trial and error without ever updating their weights. No prompting, no finetuning. A single transformer collects its own data and maximizes rewards on new tasks.” https://arxiv.org/abs/2210.14215
Fabulous post with animations on how sound works. https://ciechanow.ski/sound/
There’s Hope for Life on Europa, a Distant Moon [The Atlantic] https://archive.ph/vAiF1
48% of AI researchers think AI has a significant (>10%) chance of making humans extinct. 58% believe AI alignment (by Russell's definition) is "very important". Most think human-level AI is likely within our lifetime. https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/
When people have children, they become less likely to commit crimes. https://www.nber.org/papers/w30385
Overview of why to use softmax (pick good options often, worse options less often) rather than argmax (always pick the single best option) in many situations. https://forum.effectivealtruism.org/posts/8Ban7AnoqwdzQphsK/we-can-do-better-than-argmax
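The distinction can be sketched in a few lines of Python (the scores and temperature below are made-up illustrative values, not from the linked post):

```python
import math
import random

def softmax(scores, temperature=1.0):
    """Convert raw scores into a probability distribution.
    Subtracting the max first keeps exp() numerically stable."""
    m = max(scores)
    exps = [math.exp((s - m) / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]  # hypothetical option values

# argmax: deterministically pick the single best-scoring option
argmax_choice = scores.index(max(scores))

# softmax: sample, so good options are picked often and worse ones occasionally
probs = softmax(scores)
softmax_choice = random.choices(range(len(scores)), weights=probs)[0]
```

Lowering the temperature sharpens the distribution; in the limit of temperature → 0, softmax sampling behaves like argmax.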
“We’ve identified the Mind-Body Interface, a novel distributed network within human primary motor cortex that disrupts the famous—but incorrect—motor homunculus, and that exhibits strong connections to high-level control networks.” https://www.biorxiv.org/content/10.1101/2022.10.26.513940v1
“In the GCSE exam given to almost all 16-year-olds in England, the black-white IQ gap has plummeted since ~2005. Now the gap is a trivial 2 IQ points…Strangely, whatever has caused the gap to shrink on GCSEs has not affected attainment on any other test.” https://georgefrancis.substack.com/p/solving-the-gcse-mystery
Redditor acquires decommissioned Netflix cache server with 262TB of storage https://arstechnica.com/information-technology/2022/10/redditor-acquires-decommissioned-netflix-cache-server-with-262tb-of-storage/
There is now a book on suffering risks: https://forum.effectivealtruism.org/posts/XyCLLYkBCPw44jpmQ/new-book-on-s-risks
Friendly reminder: suffering risks >> existential risks >> catastrophic risks
1. Suffering risks: Events that produce astronomical suffering.
Example: A misaligned artificial intelligence with the prime directive to prevent any harm to humans. All humans are kept alive until the heat death of the universe but locked away and prevented from committing suicide.
2. Existential risks: Events that eliminate all of humanity and thereby forever prevent the existence of future generations.
Example: A doomsday cult creates a bioweapon targeting phytoplankton in the oceans, triggering a collapse of the entire ecosystem.
3. Catastrophic risks: Events that could damage human well-being on a global scale, even endangering or destroying modern civilization.
Example: Advances in machine learning combined with cheap, microscopic, and self-powered sensors enable a perpetual totalitarian world government.