Links for 2023-03-14
Yet another layer of complexity in neurons: they exchange mRNA with one another, with the mRNA translated by ribosomes located far from the nucleus, as in bacteria. This allows them to rapidly alter protein synthesis in response to oxidative stress and other, still unknown, factors. https://www.nature.com/articles/s41467-021-26365-x
The first wiring map of an insect's brain hints at incredible complexity https://www.npr.org/sections/health-shots/2023/03/09/1161645378/scientists-first-wiring-map-fruit-fly-brain-connectome-human-learning
Bioinspired Neural Network Model Can Store Significantly More Memories https://scitechdaily.com/bioinspired-neural-network-model-can-store-significantly-more-memories/
“Human brains respond to semantic features of presented stimuli with different neurons. It is then curious whether modern deep neural networks admit a similar behavior pattern. Specifically, this paper finds a small cluster of neurons in a diffusion model corresponding to a particular subject. We call those neurons the concept neurons.” https://arxiv.org/abs/2303.05125
A Morgan Stanley note on GPT-4/5 training demands, inference savings, Nvidia revenue, and LLM economics: “We think that GPT 5 is currently being trained on 25k GPUs - $225 mm or so of NVIDIA hardware…” https://www.reddit.com/r/mlscaling/comments/11pnhpf/morgan_stanley_note_on_gpt45_training_demands/
LLMs and SQL: “…what if you could just interact with a SQL database in natural language? With LLMs today, that is possible. LLMs have an understanding of SQL and are able to write it pretty well. However, there are several issues that make this a non-trivial task.” https://blog.langchain.dev/llms-and-sql/
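The loop the LangChain post describes (give the model the schema, let it write SQL, run the result) can be sketched in plain Python. This is a minimal, illustrative sketch, not LangChain's actual API: the prompt template is made up, and the model call is replaced by a hardcoded stand-in response.

```python
import sqlite3

# Toy database to query in natural language.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ada", 120_000), ("Grace", 140_000)])

# 1. Extract the schema so the model knows the table layout.
schema = "\n".join(row[0] for row in conn.execute(
    "SELECT sql FROM sqlite_master WHERE type = 'table'"))

# 2. Build the prompt (hypothetical template, not LangChain's).
question = "Who has the highest salary?"
prompt = f"Given this schema:\n{schema}\nWrite a SQL query answering: {question}"

# 3. Stand-in for the LLM's response; in practice you would send
#    `prompt` to a model API here.
generated_sql = "SELECT name FROM employees ORDER BY salary DESC LIMIT 1"

# 4. Execute the generated SQL against the database.
result = conn.execute(generated_sql).fetchall()
print(result)  # [('Grace',)]
```

The non-trivial issues the post alludes to live in steps 3–4: the model can hallucinate table or column names, and executing model-written SQL against a live database needs sandboxing and read-only permissions.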
"Beyond the Pass Mark: the Accuracy of ChatGPT and Bing in the National Medical Licensure Examination in Japan", Kataoka 2023 (ChatGPT answers 38% of Japanese-language medical exam questions correctly; Bing: 78%) https://osf.io/5uxra/
Why do most animals, but not most plants, have males? https://www.overcomingbias.com/p/men-are-animalshtml
"Is Cultivated Meat For Real? Cultivated meat faces a wall of scientific skepticism, but investors haven’t been deterred. A decade in, how close are we to seeing it on our plates?", Robert Yaman (long-term optimism) https://asteriskmag.com/issues/2/is-cultivated-meat-for-real
The dogs of Chernobyl: Demographic insights into populations inhabiting the nuclear exclusion zone https://www.science.org/doi/10.1126/sciadv.ade2537
France moving to repeal a misguided law capping nuclear at 50% of the country’s energy mix. https://www.euractiv.com/section/politics/news/french-mps-pave-way-to-dropping-legal-limit-on-nuclear-in-energy-mix/
“The feminization of the American university is all but complete.” https://www.city-journal.org/the-great-feminization-of-the-american-university
AGI labs equivocate when they say 'AGI': most people hear 'hyper-advanced chatbot', while the lab means the godlike galaxy-eater thing. When labs say they see a 5% probability of things going badly, they mean '100% chance someone builds a god, and 95% that this goes well'; most people instead think they mean there's only a 5% chance it's possible to build a god at all.
— @Gabe_cc
If you doubt that they actually want to build AGI, just look at Twitter:
1. Sam Altman (OpenAI): https://twitter.com/search?q=AGI%20from%3Asama&src=typed_query
2. Demis Hassabis (DeepMind) https://twitter.com/search?q=AGI%20from%3Ademishassabis&src=typed_query
What people are missing about AI doomsayers like Eliezer Yudkowsky is that they are incredibly optimistic about the outcome of creating an artificial superintelligence: they believe AI is merely an existential risk, rather than a suffering risk. In other words, they believe we will all be dead, rather than kept alive by a psychologically abnormal god that doesn't sufficiently care about some aspect of our values, such as boredom.
For example, a misaligned AI with the prime directive to prevent any harm to humans could decide to keep us alive until the heat death of the universe but locked away and unable to commit suicide.