"It often feels like electronics will continue to get faster forever, but at some point the laws of physics will put a stop to that. Now scientists have calculated the ultimate speed limit – the point at which quantum mechanics prevents microchips from getting any faster." https://newatlas.com/electronics/absolute-quantum-speed-limit-electronics/
"Mercedes has said it will accept full responsibility for accidents caused by faults with the technology, though not by a driver’s failure to comply with their duty of care." https://www.driving.co.uk/news/technology/mercedes-to-accept-legal-responsibility-for-accidents-involving-self-driving-cars/
Language models can dramatically improve their reasoning by learning from chains of thought that they generate. https://arxiv.org/abs/2203.14465
Risks from Learned Optimization in Advanced Machine Learning Systems https://arxiv.org/abs/1906.01820
“We believe in patronage to help kickstart your dreams…1517 Medici Project works with high school, college students, and dropouts to launch projects to make humanity better.” https://www.1517fund.com/post/1517-medici-project
Gene editing and elimination of latent herpes simplex virus in vivo https://www.nature.com/articles/s41467-020-17936-5
No increased risk of brain tumours for mobile phone users, new study finds https://www.ox.ac.uk/news/2022-03-30-no-increased-risk-brain-tumours-mobile-phone-users-new-study-finds
“Preclinical studies show that replenishing NAD by supplementation with nicotinamide riboside (NR), a biosynthetic precursor to NAD, can promote health span and neuroprotection.” https://mindblog.dericbownds.net/2022/03/re-energizing-aged-brain.html
Unravelling the mystery of parrot longevity https://www.mpg.de/18488364/0329-ornr-unravelling-the-mystery-of-parrot-longevity-987453-x
Reshaping Human Body Types With AI https://www.unite.ai/reshaping-human-body-types-with-ai/
"In a Dutch factory, there’s a revolutionary chipmaking machine the whole world has come to rely on. It takes months to assemble, and only one company in the world knows how: Advanced Semiconductor Materials Lithography." https://www.youtube.com/watch?v=iSVHp6CAyQ8
“Russia cannot afford to lose, so we need a kind of a victory”: Sergey Karaganov on what Putin wants. A former adviser to the Kremlin explains how Russia views the war in Ukraine, fears over Nato and China, and the fate of liberalism. https://www.newstatesman.com/world/europe/ukraine/2022/04/russia-cannot-afford-to-lose-so-we-need-a-kind-of-a-victory-sergey-karaganov-on-what-putin-wants
I learned that Eliezer Yudkowsky's April 1st post was probably not a joke. He seems to believe that alignment research will fail and that we are DOOMED (human extinction).
I was a vocal critic of AI-risk arguments for years, but I changed my mind after witnessing unexpected and accelerating progress, and a complete lack of good counterarguments to AI risk from experts.
Do I believe that survival is unattainable? I don't know. The subject is way too complex and fuzzy for me to be confident. But machine learning breakthroughs keep coming. China has created a roadmap for researching extremely large neural network models. Several people involved with AI now frequently mention artificial general intelligence. Many chipmakers are focusing on specialized AI processors. Algorithmic advances, foundational research, and synergies between neuroscience and machine learning will accelerate progress even further. All of this and much more leads me to estimate that there is a significant chance that artificial intelligence will *radically* transform everything forever, and that this could possibly happen before the end of the decade.
But even if I shared Yudkowsky's pessimism, I don't think we should give up. I'd rather start praying and building a magical tent than give up.
What should happen now is getting the smartest people to work on the problem of AI alignment. We need to convince people like Terence Tao and Peter Scholze that this is by far the most important and pressing problem in the world. It's not just their intelligence that might make a difference but their unique perspective. People currently working on AI and AI alignment have passed through many filters: they are the kind of people naturally drawn to such work. If we can get true outsiders to work on it, they might notice something nobody else has.
Don't give up!
Most of the news about AI that I see is about its latest learning or 'cognitive' achievements. I don't see anything about developing independent motivation or motivational complexity. But the two appear to be at least roughly correlated in the biological world: chimps have more kinds of desires than slugs, and there is a (choppy) continuum from one to the other. Is anything like that happening in AI? It doesn't seem so. If not, does that tell us anything?