“Tested on the PISA benchmark, Magnushammer achieves 59.5% proof rate compared to a 38.3% proof rate of Sledgehammer, the most mature and popular symbolic-based solver. Furthermore, by combining Magnushammer with a neural formal prover based on a language model, we significantly improve the previous state-of-the-art proof rate from 57.0% to 71.0%.” https://arxiv.org/abs/2303.04488
MathPrompter: Mathematical Reasoning using Large Language Models -- improves over state-of-the-art on the MultiArith dataset (78.7% → 92.5%) evaluated using 175B parameter GPT-based LLM https://arxiv.org/abs/2303.05398
A Neural Network Solves, Explains, and Generates University Math Problems by Program Synthesis and Few-Shot Learning at Human Level https://arxiv.org/abs/2112.15594
Stable Diffusion generates beautiful images, but can it be used for open-world recognition? The pre-trained diffusion model indeed is a good image parser, and allows for open-vocabulary segmentation and detection. — “Our approach outperforms the previous state of the art by significant margins on both open-vocabulary panoptic and semantic segmentation tasks.” https://jerryxu.net/ODISE/
Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models https://arxiv.org/abs/2303.04671
Scott Aaronson: "like the Jesuit astronomers declining to look through Galileo’s telescope, what Chomsky and his followers are ultimately angry at is reality itself, for having the temerity to offer something up that they didn’t predict and that doesn’t fit their worldview." https://scottaaronson.blog/?p=7094
The Parable of the King and the Random Process https://www.lesswrong.com/posts/LzQtrHSYDafXynofq/the-parable-of-the-king-and-the-random-process
There are more hedge funds than Burger Kings (A lot more) [Financial Times] https://archive.is/pM2ml
"books … 1850 to 2019 … words associated with rationality, such as “determine” & “conclusion,” rose systematically after 1850, while words related to human experience such as “feel” & “believe” declined. This pattern reversed over the past decades" https://www.pnas.org/doi/10.1073/pnas.2107848118
There is no IQ threshold effect, also not for income https://kirkegaard.substack.com/p/there-is-no-iq-threshold-effect-also (see also: The Mensa fallacy https://kirkegaard.substack.com/p/the-mensa-fallacy)
Review of 1,039 studies indicates that exercise can be more effective than counselling or medication for depression https://bjsm.bmj.com/content/early/2023/03/02/bjsports-2022-106195
“Researchers found that when they turned cancer cells into immune cells, they were able to teach other immune cells how to attack cancer… ‘This approach could open up an entirely new therapeutic approach to treating cancer.’” https://med.stanford.edu/news/all-news/2023/03/cancer-hematology.html
“The EU’s “chat control” legislation is the most alarming proposal I’ve ever read. Taken in context, it is essentially a design for the most powerful text and image-based mass surveillance system the free world has ever seen...The idea that we can deploy AI systems to read your private conversations and report crimes is frankly dystopian. Even if such systems existed, no reasonable democracy would vote for this. But this is what the EU is proposing to mandate and *build* in the next couple of years.” https://threadreaderapp.com/thread/1634252397919739921.html
Remember that if AI is an existential risk, then the worlds in which you survive will be increasingly marked by crazy coincidences that slow down or prevent the creation of artificial superintelligence.
What might such coincidences look like? Think of crypto miners causing a GPU shortage, a pandemic that keeps researchers from meeting at AI conferences and disrupts supply chains, semiconductor sanctions against China, or the collapse of a well-capitalized bank brought down by the very venture capital community it served.
China attacking Taiwan and destroying TSMC's chip fabs would be another event you would expect to see in the surviving world lines.
This concept is called an anthropic shadow. The circumstances that prevent our demise don't usually arise at the last moment: the world lines in which we survive branch off well before the point of extinction. For example, if a conference of physicists lobbying for a new particle accelerator were blown up, that would be evidence that the accelerator constitutes an extinction risk.
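The conditioning at work here can be made concrete with a toy Monte Carlo sketch. All the probabilities below are made-up illustration values, not estimates from the text: if disruptive events lower the chance of extinction, then among the surviving worlds such events appear far more often than their base rate.

```python
# Toy survivorship-conditioning sketch (my illustration, with assumed
# probabilities): worlds that experience a disruptive coincidence are
# more likely to survive, so conditioning on survival inflates how
# often surviving observers see such coincidences.
import random

random.seed(0)

P_DISRUPTION = 0.3         # assumed base rate of a disruptive event
P_DOOM_IF_CALM = 0.9       # assumed extinction probability with no disruption
P_DOOM_IF_DISRUPTED = 0.2  # assumed extinction probability after a disruption

surviving = 0
disrupted_and_surviving = 0
for _ in range(100_000):
    disrupted = random.random() < P_DISRUPTION
    doom_p = P_DOOM_IF_DISRUPTED if disrupted else P_DOOM_IF_CALM
    if random.random() >= doom_p:  # this world line survives
        surviving += 1
        disrupted_and_surviving += disrupted

print(f"Base rate of disruption: {P_DISRUPTION:.0%}")
print(f"Rate of disruption among surviving worlds: "
      f"{disrupted_and_surviving / surviving:.0%}")
```

With these numbers, Bayes' rule gives P(disruption | survival) = 0.3·0.8 / (0.3·0.8 + 0.7·0.1) ≈ 77%, far above the 30% base rate, which the simulation reproduces.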
The idea of an anthropic shadow is interesting, but I don't find these examples very convincing. The tech industry is the most dynamic and profitable sector of the American economy, dependent on highly technical logistics and top-tier talent, so it will inevitably be affected one way or another by almost any major world event.