Links for 2022-12-11
Here is a reminder that machine learning applied to robotics is progressing quickly as well. This robot learned its skills from watching internet videos of humans using their hands, plus a few teleoperated demonstrations to help bridge the gap between internet data and the robot embodiment. It outperforms several state-of-the-art methods on a range of manipulation tasks. https://video-dex.github.io/
***See your cells without a microscope*** 👀 Introducing #UnclearingMicroscopy, where cells are expanded 8,000-fold in volume and rendered opaque with a high density of light-scattering molecules to reveal cell microstructure with the naked eye https://biorxiv.org/content/10.1101/2022.11.29.518361v1
Anthropic discussion: "Cost on biggest model today is $10M. Expect $100M in a few years. $1B by 2030." https://twitter.com/RhysLindmark/status/1600666878963171329
“How does text-davinci-003 do on agent-like tasks? TLDR: Displays superior understanding and ability to take multi-step actions towards original goal” https://threadreaderapp.com/thread/1600162023589163008.html
Using GPT-Eliezer against ChatGPT Jailbreaking https://www.lesswrong.com/posts/pNcFYZnPdXyL2RfgA/using-gpt-eliezer-against-chatgpt-jailbreaking
“I played chess against ChatGPT” https://villekuosmanen.medium.com/i-played-chess-against-chatgpt-4c2cc78b5acf
“Introducing AuDrA: An Automated Drawing Assessment platform for evaluating creativity! AuDrA is a neural network trained to judge the creativity of drawings like a human.” https://threadreaderapp.com/thread/1600508898699579395.html
“A ball on a spinning turntable won't fly off as you might expect. In fact the ball will have its own little orbit that is exactly 2/7th the angular speed of the table. Here's why.” https://www.youtube.com/watch?v=3oM7hX3UUEU
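The 2/7 factor comes from eliminating the friction force via the rolling-without-slipping constraint: for a solid sphere (moment of inertia 2/5·m·a²) the dynamics reduce to dv/dt = (2/7)·Ω·(ẑ × v), so the velocity vector precesses at (2/7)Ω and the ball traces a closed circle. A minimal numeric sketch of that reduced equation (my own check, not code from the video; the constants follow the derivation above):

```python
import math

OMEGA = 1.0            # turntable angular speed (rad/s)
K = 2.0 / 7.0 * OMEGA  # predicted orbital angular speed of the ball

def deriv(state):
    """state = (x, y, vx, vy); dv/dt = K * (z-hat x v), dr/dt = v."""
    x, y, vx, vy = state
    # z-hat x v = (-vy, vx)
    return (vx, vy, -K * vy, K * vx)

def rk4_step(state, dt):
    """One classical Runge-Kutta step."""
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Start the ball off-center with some velocity; integrate one predicted
# orbital period T = 2*pi/K, i.e. 7/2 turntable rotations.
state = (1.0, 0.0, 0.0, 0.5)
T = 2 * math.pi / K
n = 10000
for _ in range(n):
    state = rk4_step(state, T / n)

x, y, vx, vy = state
# After exactly one predicted period the orbit closes: the ball is back
# where it started, confirming the (2/7)*Omega orbital rate.
print(abs(x - 1.0) < 1e-6 and abs(y) < 1e-6)
```

Note the orbit closes regardless of the initial position and velocity, since integrating a uniformly rotating velocity vector over one full turn averages to zero displacement.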
Domesticated cattle have "brains about 25% smaller than their wild forebears." [Science.org] https://archive.vn/wip/ktgtc
"If we want to make progress in biology, we need a high-status way for top biology researchers to keep doing research, in the same way that DeepMind, OpenAI, FAIR, etc. have created a new high-status way for AI researchers to keep doing research" https://www.sam-rodriques.com/post/why-is-progress-in-biology-so-slow
Inner and outer alignment decompose one hard problem into two extremely hard problems https://www.lesswrong.com/posts/gHefoxiznGfsbiAu9/inner-and-outer-alignment-decompose-one-hard-problem-into
“The biggest status-enhancing and status-decreasing behaviors. Evolutionary psychology of status and status sex differences. Cross-cultural findings across 14 nations.” https://threadreaderapp.com/thread/1598286056927162368.html
Google is imposing a penalty on AI-generated content in its rankings. https://medium.com/geekculture/google-destroys-ai-generated-content-rankings-59589da095ab
Google employees explain why we haven’t seen ChatGPT like functionality in their products https://news.ycombinator.com/item?id=33817682
Even without any further progress, ChatGPT could be improved A LOT with already existing solutions. For most of its shortcomings, there is already a system or technique that demonstrates how to do much better.
For example, AlphaCode can do competitive programming. Cicero shows how to use language models in a goal-oriented manner to achieve well-defined objectives. Google's Minerva can solve 80% of GCSE Higher Mathematics problems and a third of STEM undergraduate problems from MIT. OpenAI's neural theorem prover and Meta's HyperTree Proof Search show how to solve Math Olympiad problems. And these are just a few among many other such systems, like DeepMind's Flamingo.
If one were to take all these cutting-edge systems and stitch them together, I'm confident the resulting system could be called a weakly general AI.
So I don't quite get the “Singularity is canceled” takes from some people in response to ChatGPT's failures. What already exists right now and is publicly known is much more impressive than ChatGPT alone. And that ignores everything on the horizon that hasn't yet been released or tried.