Links for 2023-04-02
Letting a large language model (LLM) recursively criticize and improve its output significantly outperforms existing LLM methods on computer tasks and surpasses supervised learning and reinforcement learning approaches:
In this work, we show that a pre-trained large language model (LLM) agent can execute computer tasks guided by natural language using a simple prompting scheme where the agent recursively criticizes and improves its output (RCI).
Paper: https://arxiv.org/abs/2303.17491
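The RCI idea from the paper can be sketched as a simple loop. Everything below is illustrative: `llm` is a placeholder for any text-completion function, and the prompt wording is my own paraphrase, not the paper's exact prompts.

```python
# Minimal sketch of an RCI (Recursively Criticize and Improve) loop,
# after https://arxiv.org/abs/2303.17491. The prompts are illustrative.

def rci(llm, task, rounds=2):
    """Generate an answer, then repeatedly critique and revise it."""
    answer = llm(f"Task: {task}\nAnswer:")
    for _ in range(rounds):
        # Step 1: ask the model to criticize its own output.
        critique = llm(f"Task: {task}\nAnswer: {answer}\n"
                       "Review the answer above and list any mistakes:")
        # Step 2: ask it to revise the answer in light of the critique.
        answer = llm(f"Task: {task}\nAnswer: {answer}\n"
                     f"Critique: {critique}\n"
                     "Based on the critique, write an improved answer:")
    return answer

# Toy stand-in model so the loop's plumbing can be exercised without a
# real LLM: it returns a canned string depending on the prompt type.
def toy_llm(prompt):
    if "improved answer" in prompt:
        return "revised"
    if "list any mistakes" in prompt:
        return "too vague"
    return "draft"

print(rci(toy_llm, "example task", rounds=1))  # → revised
```

In the paper this critique-and-revise step is applied to computer-control actions grounded in natural-language task descriptions; the loop above only shows the control flow.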
“Strong problem-solving systems can be built from AI systems that play diverse roles, LLMs can readily play diverse roles in role architectures, and AI systems based on role architectures can be practical, safe, and effective in undertaking complex and consequential tasks.” https://www.lesswrong.com/posts/AKaf8zN2neXQEvLit/role-architectures-applying-llms-to-consequential-tasks
Five years of progress in GPTs: A summary of the progression of the SOTA in language models https://finbarrtimbers.substack.com/p/five-years-of-progress-in-gpts
Learning from human-written natural language feedback is both more effective and sample-efficient than training exclusively on demonstrations of code generation tasks. https://arxiv.org/abs/2303.16749
ChatGPT for threat analysis of your code: “In just 2 days, we confirmed 227 vulnerable and malware packages, all discovered with the help of ChatGPT” https://twitter.com/feross/status/1641548124366987264
GPT is becoming a Turing machine: Here are some ways to program it https://arxiv.org/abs/2303.14310
“Our model achieves the SOTA on image-text and text-image retrieval, video question answering and open-vocabulary detection tasks, outperforming much larger and more extensively trained foundational models.” https://arxiv.org/abs/2303.16839
“We observe that scaling Vision Transformers increases [out-of-distribution] performance: even though ImageNet accuracy saturates, we see a significant increase on ObjectNet from ViT-e to ViT-22B…” https://ai.googleblog.com/2023/03/scaling-vision-transformers-to-22.html
Detecting novel systemic biomarkers in external eye photos https://ai.googleblog.com/2023/03/detecting-novel-systemic-biomarkers-in.html
“Micro-reactors are quite the popular topic right now, so let's talk about how you make a REALLY micro-reactor using the best (thermal) nuclear fuel we know of, Americium! Specifically, the isotope Am-242m.” https://twitter.com/GBruhaug/status/1638998500770992130
How your brain data could be used against you [MIT Technology Review] https://archive.is/xb7Pc (Related science fiction short stories (highly recommended): 1. https://qntm.org/mmacevedo 2. https://zerohplovecraft.substack.com/p/key-performance-indicators)
ZeFrank on predatory mussels and their mimicry: “This is one of my all-time favorite examples of the power of natural selection, and one I taught in my evolution class as an example of mimicry.” https://whyevolutionistrue.com/2023/03/23/zefrank-on-predatory-mussels/
Don’t panic about social media harming your child’s mental health – the evidence is weak https://inews.co.uk/news/technology/dont-panic-about-social-media-harming-your-childs-mental-health-the-evidence-is-weak-2230571
Tankers of 54th Mechanized Brigade attacking Russians near Verkhnokamyanske, March 31 https://www.youtube.com/watch?v=0EvE6KMmJ70
(The following is based on something I originally wrote in 2013.)
A rarely mentioned side effect of superhuman artificial general intelligence is that, even if it doesn't kill us, it will remove almost all meaning. Everything anyone cares about will be literally one prayer away from being fulfilled.
For example, what if you came up with a philosophical conundrum? Well, just ask God to solve it for you. And if you're not smart enough to understand the solution, just ask God to make you smart enough.
Or what if you wanted to do mathematics? You could trivially integrate the resources of a specialized Matrioshka brain into your consciousness and implement and run an ideal mathematician.
Everything you could do has either already been done or can be done better by God.
But surely, you wonder, there must be fantastic virtual environments to explore. And what about sex? Well, God thoroughly understands what it is that makes exploration and sex fun for humans. It knows how to implement the ideal adventure in which you save people of maximal sexual attractiveness and could instantly integrate the memory of such an adventure for you, or simulate it a billion times in a few nanoseconds. And the same is true for all possible permutations that are less desirable.
But the consequences are even deeper than that. Concepts such as creativity or fun will be perfectly understood as mechanical procedures that God can easily implement and maximize. For God, human happiness is conceptually no more interesting than an involuntary muscle contraction. For God, the incomprehensible complexity of your human values is a conceptually simplistic and transparent set of rules, barely more intriguing than a rat pressing a lever to receive a short electric stimulation of its reward center.
In summary, artificial general intelligence is literally the last discovery we have to make. At that point, the universe has understood itself.
The movie has been watched.
The game has been won.
The end.