Links for 2023-03-13
This Changes Everything — The New York Times quoting Sundar Pichai, the chief executive of Google: “A.I. is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.” And another quote by a former key member of OpenAI: “The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world.’ This is more likely to be years than decades, and there’s a real chance that it’s months.” https://archive.is/h5rkU
Planning with Large Language Models for Code Generation: “…1) it can generate programs that consistently achieve higher performance compared with competing baseline methods; 2) it enables controllable code generation, such as concise codes and highly-commented codes by optimizing modified objectives.” https://codeaimcts.github.io/
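For intuition on how the planning works: the approach couples an LLM with Monte Carlo tree search, where the model proposes next tokens and finished programs are scored, e.g. by executing test cases. Below is a toy, self-contained sketch of that loop; `lm_next_token_probs` and `evaluate_program` are invented stubs standing in for the real model and test harness, not the paper’s API.

```python
import math

VOCAB = ["a", "b", "<eos>"]
MAX_LEN = 6

def lm_next_token_probs(seq):
    # Stub: a real system would query the LLM for next-token probabilities.
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

def evaluate_program(tokens):
    # Stub reward: a real system would run the candidate program against
    # test cases and return, e.g., the fraction passed.
    return tokens.count("a") / MAX_LEN

class Node:
    def __init__(self, seq):
        self.seq, self.children = seq, {}
        self.visits, self.total = 0, 0.0
    def terminal(self):
        return (self.seq and self.seq[-1] == "<eos>") or len(self.seq) >= MAX_LEN

def puct(parent, child, prior, c=1.0):
    # Exploit the child's mean reward; explore in proportion to the LM prior.
    q = child.total / child.visits if child.visits else 0.0
    return q + c * prior * math.sqrt(parent.visits + 1) / (1 + child.visits)

def search(n_sims=300):
    root, best = Node([]), ([], -1.0)
    for _ in range(n_sims):
        node, path = root, [root]
        # Simplified tree policy: follow PUCT down to a finished program.
        while not node.terminal():
            priors = lm_next_token_probs(node.seq)
            for tok in priors:
                node.children.setdefault(tok, Node(node.seq + [tok]))
            tok = max(priors, key=lambda t: puct(node, node.children[t], priors[t]))
            node = node.children[tok]
            path.append(node)
        reward = evaluate_program(node.seq)
        if reward > best[1]:
            best = (node.seq, reward)
        for n in path:  # back up the reward along the visited path
            n.visits += 1
            n.total += reward
    return best

print(search())  # e.g. (['a', 'a', 'a', 'a', 'a', 'a'], 1.0)
```

The point of the tree search is that the evaluator’s signal (tests passed) steers decoding toward objectives the raw LM would not optimize on its own, which is also how the paper gets controllability (e.g. rewarding comment density).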
“One of GPT's most under-appreciated features is one-shot structuring of unstructured data. The possibilities of this are far reaching and can unlock massive productivity boosts in many domains. This will be a thread of some experiments applying this to clinical note-taking” https://twitter.com/petepetrash/status/1619578203143798791
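A minimal sketch of what that looks like in practice, using the OpenAI Python library’s chat API as it existed in early 2023; the JSON schema and the clinical note here are invented for illustration.

```python
# One-shot structuring of unstructured text with GPT: ask for a fixed JSON
# schema, then parse the reply. Schema and note are invented examples.
import json
import openai

openai.api_key = "sk-..."  # your API key

note = ("Pt is a 58yo M c/o chest tightness x2 days, worse on exertion. "
        "Hx of HTN, on lisinopril 10mg. BP 152/94, HR 88. Plan: EKG, troponin.")

prompt = f"""Convert the clinical note below into JSON with exactly these keys:
age (int), sex (str), chief_complaint (str), history (list of str),
medications (list of str), vitals (object), plan (list of str).
Return only valid JSON.

Note: {note}"""

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # keep the extraction as repeatable as possible
    messages=[{"role": "user", "content": prompt}],
)
# json.loads raises if the model wraps the JSON in prose, so production
# use needs schema validation and a retry path.
structured = json.loads(resp["choices"][0]["message"]["content"])
print(structured["chief_complaint"])
```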
Scaling up GANs for Text-to-Image Synthesis: orders of magnitude faster at inference time; can synthesize high-resolution images, for example, a 16-megapixel image in 3.66 seconds. https://mingukkang.github.io/GigaGAN/
“Artificial neural networks that incorporate both feedforward and feedback connections are generically called recurrent neural networks (RNNs). Such networks (unlike feedforward LLMs) can discern patterns in data that change over time...RNNs with spiking neurons outperform those with standard neurons, and, in theory, are three orders of magnitude more computationally efficient...Such computing elements will need to be built into hardware, on neuromorphic chips, to realize their benefits...DeepMind’s researchers showed that a 7.5-billion parameter LLM, coupled with a database of 2 trillion tokens, outperforms LLMs with 25 times more parameters...During inference, GLaM used half the computing resources that GPT-3 needed. And it outperformed GPT-3 when trained on the same amount of data.” https://www.nature.com/articles/d41586-023-00641-w
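To make “spiking neurons” concrete: unlike a standard unit, which emits a continuous activation at every step, a spiking neuron integrates its input over time and fires a discrete event only when a threshold is crossed; that sparse, event-driven signaling is what neuromorphic hardware exploits. A toy leaky integrate-and-fire simulation (the textbook model, with invented parameters, not the networks from the article):

```python
import numpy as np

dt, T = 1e-3, 0.5                      # 1 ms steps, 0.5 s simulated
tau, v_rest, v_thresh, v_reset = 0.02, 0.0, 1.0, 0.0

rng = np.random.default_rng(0)
steps = int(T / dt)
current = 1.2 + 0.5 * rng.standard_normal(steps)   # noisy input drive

v, spike_times = v_rest, []
for t in range(steps):
    # Leak toward rest and integrate input; fire and reset at threshold.
    v += (dt / tau) * (-(v - v_rest) + current[t])
    if v >= v_thresh:
        spike_times.append(t * dt)
        v = v_reset

print(f"{len(spike_times)} spikes -> mean rate {len(spike_times) / T:.0f} Hz")
```

Between spikes the neuron is silent, so downstream computation (and, on neuromorphic chips, power draw) happens only on events rather than at every tick.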
“PAC-NeRF is a novel approach to estimating both the unknown geometry and physical parameters of highly dynamic objects from multi-view videos.” https://sites.google.com/view/PAC-NeRF
“Having thought a bunch about acausal trade — and proven some theorems relevant to its feasibility — I believe there do not exist powerful information hazards about it that stand up to clear and circumspect reasoning about the topic. I say this to be comforting rather than dismissive; if it sounds dismissive, I apologize.” https://www.lesswrong.com/posts/3RSq3bfnzuL3sp46J/acausal-normalcy
"[We hypothesize] that fragility may not only be a possible risk, but could be inevitable, and would therefore be a subclass or example of Bostrom’s vulnerable worlds." https://philpapers.org/rec/MANSFA-3
A new longitudinal study of 107 4-year-olds identifies key cognitive and memory processes that support the development of creative thinking https://www.tandfonline.com/doi/full/10.1080/10400419.2023.2182492
OpenAI Founder Sam Altman Invests $180 Million in Life-Extension Startup https://gizmodo.com/open-ai-life-extension-tech-sam-altman-retro-1850203114
Obesity could cost the world over $4 trillion a year by 2035 https://www.statnews.com/2023/03/02/obesity-costs-4-trillion-2035/
We've learned to fear plutonium as one of the most dangerous substances known to man. Except it isn't: unless you inhale it as fine particles, it's relatively harmless. Decades of scaremongering have poisoned us against one of the elements of nuclear power, writes Jack Devanney. https://worksinprogress.co/issue/the-most-dangerous-substance-known-to-man
Germany. University life seen through American eyes. Tupper, 1900-1901 https://www.irwincollier.com/germany-university-life-seen-through-american-eyes-tupper-1900-1901/
Here is your frequent reminder that equity is one of the most destructive ideas ever conceived of.
Even a society with advanced genetic engineering, in which everyone can be a good-looking and healthy genius, won't be enough to achieve equity.
Through sheer luck, people will eventually accumulate more wealth, connections, and thereby opportunities than others. Outcomes will then begin to diverge.
The only way to ensure equity is by force; by continually stripping people of wealth and status and redistributing it to strangers.
The Khmer Rouge is what happens when you take the concept of 'equity' to its logical conclusion.