The AI Letter Calling for a 'Pause on Giant AI Experiments': A Quick Rundown
The open letter signed by many AI researchers calling for a 'Pause on Giant AI Experiments' is dripping with AI hype. Here's a quick rundown, with context on its source: the Future of Life Institute, a longtermist operation focused on maximizing the happiness of future beings.
@emilymbender@dair-community.social on Mastodon
Professor, Linguistics, UW // Faculty Director, Professional MS Program in Computational Linguistics (CLMS) // she/her
Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with #AIhype. Here's a quick rundown.
— @emilymbender@dair-community.social on Mastodon (@emilymbender) March 29, 2023
First, for context, note that URL? The Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations. https://t.co/351Tws5rav
For some context, see: https://t.co/61G6JrG43J
So that already tells you something about where this is coming from. This is gonna be a hot mess.
There are a few things in the letter that I do agree with; I'll try to pull them out of the dreck as I go along.
So, into the #AIhype. It starts with "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]".
[Screenshot: pic.twitter.com/dQiet3EGnz]
Footnote 1 there points to a lot of papers, starting with Stochastic Parrots. But we are not talking about hypothetical "AI systems with human-competitive intelligence" in that paper. We're talking about large language models. https://t.co/QrrBwXIlQi
And the rest of that paragraph. Yes, AI labs are locked in an out-of-control race, but no one has developed a "digital mind" and they aren't in the process of doing that.
And could the creators "reliably control" #ChatGPT et al.? Yes, they could --- by simply not setting them up as easily accessible sources of non-information poisoning our information ecosystem.
And could folks "understand" these systems? There are plenty of open questions about how deep neural nets map inputs to outputs, but we'd be much better positioned to study them if the AI labs provided transparency about training data, model architecture, and training regimes.
Next paragraph. Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella known as the "Sparks paper" and OpenAI's non-technical ad copy for GPT-4. ROFLMAO.
[Screenshot: pic.twitter.com/R0Ci9j3zWu]
On the "sparks" paper:https://t.co/5jvyk1qocE
On the GPT-4 ad copy: https://t.co/OcWAuEtWAZ
I mean, I'm glad that the letter authors & signatories are asking "Should we let machines flood our information channels with propaganda and untruth?" but the questions after that are just unhinged #AIhype, helping those building this stuff sell it.
Okay, calling for a pause, something like a truce amongst the AI labs. Maybe the folks who think they're really building AI will consider it framed like this?
[Screenshot: pic.twitter.com/DSGmJysyYD]
Just sayin': We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about "too powerful AI".
Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).
They then say: "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."
Uh, accurate, transparent and interpretable make sense. "Safe", depending on what they imagine is "unsafe". "Aligned" is a codeword for weird AGI fantasies. And "loyal" conjures up autonomous, sentient entities. #AIhype
Some of these policy goals make sense:
[Screenshot: pic.twitter.com/1gV5d7hLs4]
Yes, we should have regulation that requires provenance and watermarking systems. (And it should ALWAYS be obvious when you've encountered synthetic text, images, voices, etc.)
Yes, there should be liability --- but that liability should clearly rest with people & corporations. "AI-caused harm" already makes it sound like there aren't *people* deciding to deploy these things.
Yes, there should be robust public funding, but I'd prioritize non-CS fields that look at the impacts of these things over "technical AI safety research".
Also "the dramatic economic and political disruptions that AI will cause". Uh, we don't have AI. We do have corporations and VCs looking to make the most $$ possible with little care for what it does to democracy (and the environment).
Policymakers: Don't waste your time on the fantasies of the techbros saying "Oh noes, we're building something TOO powerful." Listen instead to those who are studying how corporations (and govt) are using technology (and the narratives of "AI") to concentrate and wield power.
Start with the work of brilliant scholars like Ruha Benjamin, Meredith Broussard, Safiya Noble, Timnit Gebru, Sasha Constanza-Chock and journalists like Karen Hao and Billy Perrigo.
Two corrections:
1) Sorry @schock for misspelling your name!!
2) I meant to add, on "general tasks", see: https://t.co/kR4ZA1k7uz
Broke the threading: https://t.co/nquBe2nzMY
Now as a blog post: https://t.co/zuE2A39W5F
— @emilymbender@dair-community.social on Mastodon (@emilymbender) March 30, 2023