Precise atom manipulation through deep reinforcement learning: “We believe this study is a milestone in adopting artificial intelligence to solve automation problems in nanofabrication.” https://www.nature.com/articles/s41467-022-35149-w
We can now do photosynthesis in mammalian cells https://www.nature.com/articles/s41586-022-05499-y
"GPT can use a web browser to answer questions. When embedded in a REPL environment and prompted to strategize and monologue, agent-like behavior emerges. The agent can solve multi-step problems that involve going to pages, following links, reading the next page, etc. The process here involves several different prompts pipelined together in a recursive fashion to result in agent-like behavior." https://threadreaderapp.com/thread/1600890243452137472.html
“I Taught ChatGPT to Invent a Language” https://maximumeffort.substack.com/p/i-taught-chatgpt-to-invent-a-language
Researchers have developed an AI system that learns to identify objects by using a natural language interface to ask humans what they’re seeing. https://techxplore.com/news/2022-11-deer-socially-aware-ai-humans.html
“If you believe like Eliezer Yudkowsky that superintelligent AI is threatening to kill us all, why aren't you evangelizing harder than Christians, why isn't it the main topic talked about in this subreddit or in Scott's blog, why aren't you focusing working only on it?” https://www.reddit.com/r/slatestarcodex/comments/zdi0p3/if_you_believe_like_eliezer_yudkowsky_that/
“A new paper suggesting that Dualism arises naturally, from theory of mind. This predicts that autistic people (whose ToM is weaker) ought to show weaker Dualism, and Dualism should correlate with ToM. That's what we observe!” https://www.pnas.org/doi/10.1073/pnas.2211628119
The Inconvenient Truth Behind the Black-White Income and Mobility Gap https://humanvarieties.org/2022/11/30/the-inconvenient-truth-behind-the-black-white-income-and-mobility-gap/
Mapping the connectome of an insect brain. 3,013 neurons and 544,000 synapses. 37% of neurons displayed contralateral branches that link the two hemispheres. https://biorxiv.org/content/10.1101/2022.11.28.516756v1.full.pdf+html
Ancient skull uncovered in China could be million-year-old Homo erectus https://www.nature.com/articles/d41586-022-04142-0
"surprise 2016 election of Trump [caused] … Republican-leaning counties [to] experience a sharp & persistent increase in fertility relative to Democratic counties, a shift amounting to 1.2–2.2% of the national fertility rate" https://www.aeaweb.org/articles?id=10.1257/aeri.20210485
Podcast with the CEO of Loyal – a startup with a mission to extend dogs' lifespans. Are humans next? https://www.youtube.com/watch?v=IBAHiFmx-io
The Lovelace Effect – AI generated texts should lead us to re-value creativity in academic writing https://blogs.lse.ac.uk/impactofsocialsciences/2022/12/06/the-lovelace-effect-ai-generated-texts-should-lead-us-to-re-value-creativity-in-academic-writing/
“Another day, another tweet trying to spread disinformation about generative model” https://www.reddit.com/r/StableDiffusion/comments/zczq3k/another_day_another_tweet_trying_to_spread/
Is ChatGPT useless because it is a midwit? I don't think so. Here are some use cases I came across:
1. You can use it to learn to code: It is pretty good at simple coding tasks, and its failures can serve as good debugging practice. In particular, its ability to explain simple code snippets and suggest corrections makes it a great coding companion for beginners.
2. Even more experienced programmers who know how to spot mistakes can use it instead of Google for basic searches of the type "how to do X in bash". It can also be used as a high-level interface to libraries such as Matplotlib.
3. You can use it to rewrite clunky sentences (prompt: "rewrite this sentence: 'sentence here'") or to improve rough text in technical documents (prompt: "please rewrite this for brevity and clarity").
4. You can paste dense texts and ask it for a TL;DR or to explain __xyz__ like you're five. If you are capable of verifying the accuracy of the summary or explanation, then this can save a lot of time.
5. It works great for drafting boilerplate letters such as Christmas greetings and translating them into multiple languages. For example: "My company does X, Y, Z; please prepare a Christmas newsletter", followed by interactions to tweak the tone and focus, and finally "Perfect, translate this into these languages: [...]."
6. You can use it to learn languages by asking it to correct you, and explain and translate words or sentences. For example, you can ask if the following German sentence "Ich Brot habe gekauft." is grammatically correct, and it will reply with the following explanation: "No, the sentence is not grammatically correct. In German, the verb typically comes at the end of the sentence, so a correct version of the sentence would be "Ich habe Brot gekauft." Here, "habe" is the conjugated form of the verb "haben," which means "to have," and "gekauft" is the past participle of the verb "kaufen," which means "to buy." In the original sentence, "Brot" is the direct object, and "habe" and "gekauft" are the conjugated verb and past participle, respectively, but their order is incorrect."
7. You can ask it to provide a one-sentence detailed visual description of something using a lot of adjectives, and then use the resulting sentence as a prompt for a text-to-image model.
8. You can use it to brainstorm creative ideas for writing fictional stories where correctness isn't a big problem.
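Several of these use cases (most explicitly the newsletter workflow in point 5) boil down to chaining prompts, where each instruction is applied to the previous output. A minimal sketch of that pattern, with a hypothetical `complete` stub standing in for a real chat-model call (in practice each step would simply be one message in a ChatGPT conversation):

```python
# Sketch of a draft -> tweak -> translate prompt chain.
# `complete` is a hypothetical stub, not a real model API.

def complete(prompt: str) -> str:
    """Stub model: echoes which instruction it received (offline placeholder)."""
    return f"[model output for: {prompt.splitlines()[0]}]"

def pipeline(text: str, steps: list[str]) -> str:
    """Apply each instruction to the running text, feeding output forward."""
    for step in steps:
        text = complete(f"{step}\n\n{text}")
    return text

draft = pipeline(
    "My company does X, Y, Z.",
    [
        "Please prepare a Christmas newsletter based on this.",
        "Make the tone warmer and less formal.",
        "Translate this into German and French.",
    ],
)
print(draft)
```

With a real model behind `complete`, each intermediate output would be the actual revised draft rather than a placeholder string.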
There are probably A LOT more useful applications.
It's peculiar to see some people *only* interested in exploring what ChatGPT does poorly
Like being handed a hammer for the first time, and complaining that it's a bad way to plant roses, do the dishes, sew clothing, [...]. Maybe try it out with some nails, see what it can do?
— Michael Nielsen
I'm a total layman, so take this with a big grain of salt: All the problems ChatGPT has will be overcome either through scaling[1][2][3] or by means of various techniques like inner monologue[4]. And if this isn't enough, there are many attack vectors that go beyond language models[5] or transformer-based architectures.[6][7]
I can't judge the relevant research in detail, but I have a rough bird's-eye view of the progress from years of watching new state-of-the-art results across various benchmarks pop up in my feed. At the same time, an increasing amount of human capital is being allocated to coming up with further improvements.[8]
Skeptics say that deep learning models are just regurgitating their training data. But this seems unlikely to me. Those models compress terabytes of data into a few gigabytes of weights. How could one possibly achieve such compression without extracting deep abstractions and fundamental relationships from data?
Can you really create images of famous personalities as beggars or a Shakespeare poem about sorting algorithms without having gained something we would call comprehension? To me, it seems that these models have truly gained some form of understanding of the data they have been trained on. The most basic interpretation of this is hard to deny, since compression equals prediction[9], which in turn is an important component of what we mean when we say that something understands something.
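To put rough numbers on the compression claim (illustrative, assumed round figures in the ballpark of published descriptions of large language models; and note this is lossy "compression", not bit-exact reconstruction):

```python
# Rough arithmetic for the "terabytes into gigabytes" claim.
# All numbers are assumed round figures for illustration, not exact.

raw_corpus_tb = 45          # assumed raw-crawl corpus size, in terabytes
params_billions = 175       # assumed parameter count of a large model
bytes_per_param = 2         # 16-bit weights

weights_gb = params_billions * 1e9 * bytes_per_param / 1e9   # = 350 GB
ratio = raw_corpus_tb * 1000 / weights_gb                    # corpus / weights

print(f"{weights_gb:.0f} GB of weights, ~{ratio:.0f}x compression")
```

Even under these crude assumptions the weights are two orders of magnitude smaller than the data, which is the gap that abstraction has to bridge.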
Now, why would understanding be limited to tone, rhyme, and what it means to be a beggar? What makes you think that an even larger neural network won't be able to understand qualitatively more, just like a human brain can understand concepts that a chimpanzee cannot?[10]
I'm skeptical of the skepticism. But maybe that's just due to my poor comprehension of the data I've been fed.
[1] https://www.gwern.net/Scaling-hypothesis
[2] https://www.jasonwei.net/blog/emergence
[3] https://ai.googleblog.com/2022/11/characterizing-emergent-phenomena-in.html
[4] https://www.gwern.net/docs/ai/nn/transformer/gpt/inner-monologue/index
[5] https://en.wikipedia.org/wiki/Gato_(DeepMind)
[6] https://arxiv.org/abs/2106.01345
[7] https://arxiv.org/abs/2202.05780
[8] https://twitter.com/Suhail/status/1598708439966179328
[9] https://www.lesswrong.com/posts/hAvGi9YAPZAnnjZNY/prediction-compression-transcript-1
[10] https://royalsocietypublishing.org/doi/10.1098/rsos.211621
This prediction will be difficult to assess without a clear definition of "GPT-type sys". For example, GPT-5 might be an agentic and embodied multimodal system (https://en.wikipedia.org/wiki/Gato_(DeepMind)).
In other words, it will have access to robot bodies in the real world and in simulations. It will be trained on a wide variety of data, not just text. And it will be inclined to take actions such as running scripts, simulations, and experiments in order to corroborate its hunches.
I doubt this is what he has in mind here.
After playing with ChatGPT casually, the two most immediate use-cases that jump out are:
1. Advanced search.
I didn't realize it before, but there is a useful niche between Google search and question-and-answer forums (e.g. StackExchange, Quora, Reddit). Google is good for factual questions with definitive answers, but it can be hard to use for more complex, subjective questions, or for questions where you don't understand enough about the topic to put your confusion into words.
Forums like StackExchange can often fill the void for questions too advanced and specific for Google. But forums have drawbacks of their own. One is feedback time. Even if an expert gets around to your question, it might take a couple of days or even months. And sometimes, no one bothers to answer your question at all. And it can be hard to ask follow-up questions without coming across as overbearing.
I have found ChatGPT useful for exactly the type of questions that could slip through the cracks before.
2. Quickly producing boilerplate.
I don't want to contribute to the railing against "email jobs", but it really is incredible how much cliched boilerplate BS you are expected to produce as a knowledge worker. GPT reduces the time it takes to produce a first draft by something like 90% (though you still need to check carefully during editing that there are no factual inaccuracies). ChatGPT will obviously be useful for stuff like high school and college essays. But it is also useful for BS like cover letters and statements of purpose.
And what's nice is that you can create customized boilerplate quite easily now. Like let's say you are applying to graduate school, and you want your statement of purpose for each school to mention specific aspects of that school so that you come across as enthusiastic. That would be really time-consuming before. But that's exactly the kind of task that GPT excels at.