Self-Instruct: Aligning Language Models with Self-Generated Instructions — Tuning GPT-3 with SELF-INSTRUCT outperforms using existing public instruction datasets by a large margin. https://arxiv.org/abs/2212.10560
“…unsupervised methods are more scalable than supervised methods, deep learning has special structure that we can exploit for alignment, and we may be able to recover superhuman beliefs from deep learning representations in a totally unsupervised way.” https://www.lesswrong.com/posts/L4anhrxjv8j2yRKKp/how-discovering-latent-knowledge-in-language-models-without
Searching data stored in DNA in a massively parallel and scalable manner with resource usage almost independent of the data size. [Boston Globe] https://archive.vn/4qifq
You can’t take it with you: straight talk about epigenetics and intergenerational trauma https://razib.substack.com/p/you-cant-take-it-with-you-straight
One Million Times Faster Than Current Technology: New Optical Computing Approach Offers Ultrafast Processing https://www.science.org/doi/10.1126/sciadv.abq8246
“For exploring nearby stars, let us consider the challenges of a picogram- to nanogram-scale probe to land, replicate, and produce a communications module based on biominerals at the destination. A billion such probes could be launched for similar cost as a single gram-scale probe.” https://www.liebertpub.com/doi/10.1089/ast.2022.0008
ULIP: Learning Unified Representation of Language, Image and Point Cloud for 3D Understanding https://tycho-xue.github.io/ULIP/
"AI composer bias: Listeners like music less when they think it was composed by an AI", Shank et al 2022 https://www.gwern.net/docs/ai/music/2022-shank.pdf
VULCAN Forges New Science for the Future of 3D-Printed Metal https://neutrons.ornl.gov/content/vulcan-forges-new-science-future-3d-printed-metal
“Francisco Macias Nguema, the insane and brutal dictator who reduced the population of his country by 75% in 10 years...he turned off the power plants and said he would use magic to power the country...He kept all the skulls of the people he killed and would use the skulls to beat more people to death...he changed the national motto to "There is no other God than Macías Nguema"...He had the chief of the Central Bank murdered and moved the country's entire reserves to his house, where he burned most of it” https://threadreaderapp.com/thread/1602398207908188174.html
PhD student solves 2,500-year-old Sanskrit problem https://www.bbc.com/news/articles/cg3gw9v7jnvo
Over the past few weeks, I encountered a bunch of GPT-4 rumors. I didn't share any of them because they seemed exaggerated and came from people whose trustworthiness was unknown to me.
I have noticed a general increase in noise around AI. This is probably the inevitable consequence when a topic enters the mainstream. People start making up stuff for clicks.
I'm not saying that the rumors are necessarily wrong. I believe they might very well come true before the end of this decade, though maybe not with GPT-4 itself.
The same is true for many AI startups. Lots of people are now jumping onto the AI bandwagon. I expect many of them to falter, and most of the rest to be replaced by the next big model coming out of one of the major AI labs.
This isn't to say that AI won't make lots of people very rich. By the 2030s, I expect the 10 largest companies by revenue all to be AI-based. But progress right now is just too fast and unpredictable to pick a successful startup without insider information and lots of luck.
P.S. As far as I can tell, this seems to be a well-founded attempt at forecasting GPT-4 features:
...I estimate that the total training compute for GPT-4 will be between 2.54 billion and 130 billion petaFLOP, with a central estimate of 18 billion petaFLOP. For comparison, that's roughly 1-50 times more compute than PaLM...Most likely, GPT-4 will be comparable in size to GPT-3, which had 175 billion parameters.
...GPT-4 will probably include some algorithmic advances that Chinchilla lacks. The most salient possibility is that it will employ an explicit retrieval mechanism, similar to DeepMind's Retro model...correspond to non-retrieval models with 10× more parameters on certain datasets...
With the algorithmic adjustment, the qualitative improvement from GPT-3 (vanilla) to GPT-4 is comparable to the improvement from GPT-2 to GPT-3. Since that was a rather big jump, I expect many will be stunned by GPT-4, especially those who expected strong diminishing returns.
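The "1-50 times more compute than PaLM" comparison in the quoted forecast can be sanity-checked with a quick conversion. This is a rough sketch assuming PaLM's training compute of roughly 2.5e24 FLOP, estimated via the standard 6 × parameters × tokens approximation (540B parameters, 780B tokens); the forecast figures themselves are taken from the quote above.

```python
# Sanity check of the quoted GPT-4 training-compute forecast against PaLM.
# Assumption: PaLM's training compute estimated as 6 * params * tokens.
PETA = 1e15  # 1 petaFLOP = 1e15 FLOP

palm_flop = 6 * 540e9 * 780e9  # ~2.5e24 FLOP for PaLM (540B params, 780B tokens)

# Forecast range from the quote, converted from "billions of petaFLOP" to FLOP.
gpt4_low = 2.54e9 * PETA
gpt4_central = 18e9 * PETA
gpt4_high = 130e9 * PETA

for label, flop in [("low", gpt4_low), ("central", gpt4_central), ("high", gpt4_high)]:
    print(f"{label}: {flop / palm_flop:.1f}x PaLM")
```

The low and high ends of the range come out to roughly 1× and 50× PaLM's compute, consistent with the forecast's own comparison.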
I have also seen a lot of wild speculation flying around. Overall, I feel mixed. On the one hand, I feel vindicated that the mainstream is *finally* taking AI seriously when the impending transformation has been obvious for years (or even decades). On the other hand, I must confess that I already miss the times when AI discourse was a bit more elitist, a bit more IQ-discriminating. There are a lot of midwit takes floating around, unfortunately.
As for GPT-4: I am leaning towards it being wild. While a lot of people are baselessly speculating, my understanding is that a select group of people have been given sneak previews of it. One of those people, I believe, is Tyler Cowen of Marginal Revolution. Based on Cowen's posting behavior over the past couple of months, GPT-4 must be something alright. Cowen has always been mildly skeptical of AI, though never outright disparaging. But recently, his tenor has done a complete 180 and he's now one of the biggest AI proponents out there. Whatever he saw must have spooked him.