Links for 2022-06-11
Super Study Guide: Algorithms and Data Structures — A resource to study data structures and algorithms. https://superstudy.guide/algorithms-data-structures/foundations/algorithmic-concepts
“I don’t want to definitively assert that a brain-sized GPT will definitely be just as good at reasoning as the brain. But I hardly think GPT’s performance provides strong evidence to the contrary.” https://astralcodexten.substack.com/p/somewhat-contra-marcus-on-ai-scaling?s=r
AGI Safety FAQ / all-dumb-questions-allowed thread https://www.lesswrong.com/posts/8c8AZq5hgifmnHKSN/agi-safety-faq-all-dumb-questions-allowed-thread
A descriptive, not prescriptive, overview of current AI Alignment Research https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai
Borealis photonic QPU built by Xanadu now accessible via Amazon Braket. It's "the first publicly accessible device with a peer-reviewed quantum advantage claim, giving customers access ... to quantum computations that cannot be simulated [classically]." https://aws.amazon.com/de/blogs/quantum-computing/explore-quantum-computational-advantage-with-xanadus-borealis-device-on-amazon-braket/
“Physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe’s complex physical behaviors. …McMahon views his devices as striking, if modest, proof that you don’t need a brain or computer chip to think. ‘Any physical system can be a neural network,’ he said.” https://www.quantamagazine.org/how-to-make-the-universe-think-for-us-20220531/
"Genetic paparazzi are right around the corner, and courts aren’t ready to confront the legal quagmire of DNA theft" https://theconversation.com/genetic-paparazzi-are-right-around-the-corner-and-courts-arent-ready-to-confront-the-legal-quagmire-of-dna-theft-178866
Science is getting harder. Evidence that discoveries are getting smaller on average. https://mattsclancy.substack.com/p/science-is-getting-harder?s=r
What Is It About the Human Brain That Makes Us Smarter Than Other Animals? New Research https://singularityhub.com/2022/06/03/what-is-it-about-the-human-brain-that-makes-us-smarter-than-other-animals-new-research/
The First Privately Funded Killer Asteroid Spotter Is Here https://www.wired.com/story/the-first-privately-funded-killer-asteroid-spotter-is-here/
Wikipedia page on the price of every chemical element. https://en.wikipedia.org/wiki/Prices_of_chemical_elements
This isn't happening because it wouldn't work, and you don't need to be a 140+ IQ alignment researcher to see why:
1. AI researchers are much more replaceable than AI alignment researchers. Far fewer people are both interested in and capable of working on AI alignment. The few alignment researchers who exist might be locked up as a result, or be unable to work freely; almost nobody would want to work in that field anymore.
2. It would be incredibly hard to track down and take out Chinese and Israeli government researchers. You could end up strengthening more ruthless people who recognize the strategic importance of AI and start a black project as a result.
3. It would become impossible to have a rational discussion about the issue after the mainstream media "experts" finished thoroughly ridiculing it as the unfounded idea of a doomsday cult.
P.S. Here are several reasons to be extremely risk-averse about extreme, irreversible actions, even when they appear rational in the moment:
Ontological crisis: you may learn something about reality that completely undermines your goals or makes them self-contradictory (e.g. a theist learning about evolution).
Moral uncertainty: even if we know each and every consequence of our actions, we would still need to know which is the right ethical perspective for analyzing these consequences.
Humans don’t have stable values: since humans change their goals over time, it is better to follow an approximate set of values that satisfies a broad range of terminal goals you might eventually end up with.
Bounded rationality and long-term consequences: the sign of the value of the impact of one’s actions becomes less predictable the farther one looks into the future.
See also: Terrorism Is Not Effective https://www.gwern.net/Terrorism-is-not-Effective
Paul Krugman's column about the state of Mars colonization in 2050 will age as well as his prediction that the Internet's impact on the economy will be no greater than the fax machine's.
Even under very pessimistic assumptions, by 2050 we will likely have autonomous and self-replicating factories thanks to progress in artificial intelligence. This will make space colonization vastly easier because the necessary infrastructure will build itself.
Any long-term plan or prediction that ignores technological progress is as useless as your 1998 browser estimating that your download will take 40 years to complete.
Our mammalian ancestors adapted their way out of an asteroid impact that wiped out 99.9999 percent of all living organisms on Earth, halted photosynthesis, and extinguished the phytoplankton in the oceans. And they did not have rapidly advancing technology!
See also: Climate change: Many think it's the world's top problem because it threatens humanity's survival. Is this right? https://80000hours.org/problem-profiles/climate-change/