9 Comments
Mar 31, 2023 · Liked by Casey Newton

Good essay. A few extra points, though. One, is six months long enough? Two, how are you really gonna stop these guys from experimenting during that period? And three, what about everybody else in the world? Like, I don't know, China?

author

The basic idea isn't that they shouldn't experiment during that time; just that they shouldn't start building *even larger models* than the ones they just released. Which seems sensible to me!

Mar 31, 2023 · Liked by Casey Newton

Here is the possibly immediate risk that seems most concerning -- can you determine how real it is?

Are AI chatbots already polluting the only well we have?

Has the horse already left the barn, and what controls are currently in place? Carl Bergstrom raised this (https://fediscience.org/@ct_bergstrom/110071929312312906), asking "what happens when AI chatbots pollute our information environment and then start feeding on this pollution. As is so often the case, we didn't have to wait long to get some hint of the kind of mess we could be looking at. https://www.theverge.com/2023/3/22/23651564/google-microsoft-bard-bing-chatbots-misinformation."

Are we already inhaling our own hallucinating AI fumes, and what is to stop this from becoming an irreversible "tragedy of the information commons" due to poisons we cannot filter out?

author

Definitely already happening and definitely a concern!


Do you now regret berating Google for holding back Bard? I recall you saying "ship it or zip it." 🤔

author

I think Bard is fine! Probably the safest of all the models. Just don't want them training anything bigger than GPT-5 before October.


I'm not sure that the pace of AI has been that fast. Consider that AI researchers have largely looked down on the innovation of ChatGPT. Their condescension is unwarranted (having a good product actually matters, academics) ... but the AI field hasn't moved at the breakneck pace that the MBA field has.

Put another way, Sam Altman is an executive, not an engineer or researcher. So why would we expect Sam Altman to know much about the speed of AI progress?

author

The MBA field? As in master's of business administration?


Yes. In more general terms, those who are making business cases using AI are moving with speed, while those who are engineering AI seem to be making the same rate of progress they made last year and the year before that. So all the projections about what AI can do seem to rely on the recently increasing volume of use cases rather than on improvement in the underlying technology.

One spot where the engineering has gotten better is resource-usage optimization, but the work there seems to be more proof-of-concept (GPT on a Raspberry Pi) than production-ready. It's also a key focus based on other articles here and elsewhere, so that improvement will probably continue.
