Links for 2023-02-21
“Do neural networks learn 'universal' solutions or idiosyncratic ones? We find inherent randomness. Models consistently learn group composition via an interpretable, representation theory (!) based algorithm. Yet they even skip simple reps for complex ones!” https://threadreaderapp.com/thread/1625948104121024516.html
Learning Performance-Improving Code Edits: PIE enables CODEGEN, an open-source model 10x smaller than CODEX, to match CODEX's performance https://pie4perf.com/
LEVER: Learning to Verify Language-to-Code Generation with Execution https://arxiv.org/abs/2302.08468
“You too can not follow all the new papers out every day proposing methods to Augment Language Models e.g. CoT, Toolformer, and more? We thought the same and tried to present and discuss the big lines in this survey” https://arxiv.org/abs/2302.07842
A proposed method for forecasting transformative AI https://www.lesswrong.com/posts/4ufbirCCLsFiscWuY/a-proposed-method-for-forecasting-transformative-ai
“5 years is possible. 10 years is pretty plausible. 30 years would be surprising. The world is just starting to take an extremely wild ride. We’re really unprepared for it, in terms of technical safety and in terms of our society’s ability to adapt gracefully to the shock.” https://musingsandroughdrafts.com/2023/02/17/my-current-summary-of-the-state-of-ai-risk/
“46% of respondents think that AI development will do about the same amount of good and harm, and 41% of people... believe that the technology will ultimately do harm to society overall... 55% are very or somewhat worried that AI could one day pose a risk to the human race.” https://www.cnbc.com/2023/02/15/only-9percent-of-americans-think-ai-development-will-do-more-good-than-harm-.html
Strange New Form of Ice Discovered – “Raises Many Questions on the Very Nature of Liquid Water” https://scitechdaily.com/strange-new-form-of-ice-discovered-raises-many-questions-on-the-very-nature-of-liquid-water/
“European countries spend a third less on research and development than America or Japan, as a share of gdp, and are out-invested even by China nowadays.” [The Economist] https://archive.is/XAyF2
The ocean science community must put science before stigma with anomalous phenomena https://thehill.com/opinion/technology/3853227-the-ocean-science-community-must-put-science-before-stigma-with-anomalous-phenomena/
A Black Professor Trapped in Anti-Racist Hell https://compactmag.com/article/a-black-professor-trapped-in-anti-racist-hell
Among white Americans, extreme liberals are the smartest group with a mean IQ of 107, while extreme conservatives have a mean IQ of 98.5. https://kirkegaard.substack.com/p/conservatives-arent-stupid
Grooming Gangs: Britain’s Shame https://edwest.substack.com/p/our-modern-babylon
This post could have cited better examples of progress in robotics than Boston Dynamics [edit: they updated the post with the examples below]:
1. “The result is a state-of-the-art Robotics Transformer model, or RT-1, that can perform over 700 tasks at a 97% success rate, and even generalize its learnings to new tasks, objects and environments. This is an early step towards robot learning systems that may be able to handle the near-infinite variability of human-centered environments.” https://blog.google/technology/ai/helping-robots-learn-from-each-other/
2. This robot learned from watching videos of humans using their hands, plus a few teleoperated demonstrations to help bridge the gap between data and the robot embodiment. https://video-dex.github.io/
3. Researchers have used reinforcement learning to build a robotic dog that learns to walk on its own in the real world (i.e., without prior training in a simulator). [Technology Review] https://archive.ph/ZIr5S
Read the original post: https://www.lesswrong.com/posts/PE22QJSww8mpwh7bt/agi-in-sight-our-look-at-the-game-board