A Company Developed An AI So Dangerous They Say It Cannot Ever Be Released

What’s the one line from Jeff Goldblum in Jurassic Park, that really famous one?

No! That’s not the one I’m thinking of. It’s the one where he basically talks about how even benevolent science and technology can be dangerous in the wrong hands.

Ah, yes. That’s the ticket. That’s the one I was thinking of when I read a story about a company called OpenAI in Silicon Valley building language software they say is so good at being “chameleon-like” and synthesizing text that it’s too dangerous to release into the wild. You read that right — a company thinks that the AI software it created is too dangerous to actually release. I guess considering that fact I should’ve found a way to shoehorn in a T2 gif instead, huh?

Citing “concerns about malicious applications of the technology,” OpenAI says that GPT-2 will be watered down and released in a much more scaled-back implementation. The biggest concerns are that this AI tech can be weaponized and used against humanity.

The system devises “synthetic text samples of unprecedented quality” that the researchers say are so advanced and convincing, the AI could be used to create fake news, impersonate people, and abuse or trick people on social media. (Science Alert)

The term “fake news” certainly sets off alarm bells, given that international intelligence communities believe it’s one of the prime assets available to malign actors who try to influence elections. It’s been widely acknowledged that in 2016, fake news and AI/bot campaigns were employed by Kremlin-backed assets in order to help Donald Trump win the presidential election.

If OpenAI’s GPT-2 is as good as they think it is, we should all be thankful they aren’t the kinds of scientists Goldblum’s character warns about in that quote. On its face, the AI software could be pretty benign. The idea behind it is simple and has been around forever — predictive text.

“GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text,” the OpenAI team explains on its blog.
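That one-line objective can be sketched with a toy example. The snippet below is just an illustration, not OpenAI’s code: a hypothetical bigram model that counts which word tends to follow which in some training text, then predicts the most common successor. (GPT-2 itself uses a large neural network, but the training goal — guess the next word — is the same.)

```python
from collections import Counter, defaultdict

def train(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequently observed next word, or None if unseen."""
    candidates = follows.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

model = train("recycling is good for the world and recycling is popular")
print(predict_next(model, "recycling"))  # -> is
```

Chain predictions like this long enough and you get text that is locally plausible but globally meaningless — exactly the “random but convincing gibberish” described below, just far cruder than what GPT-2 produces.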

Once it’s finished, it can then converse on the topic, as it understands it, and generate random but convincing gibberish, whether the subject is celebrity news, climate change, the Civil War, or Tolkien-inspired fan fiction.

And the thing is, what it does by default really doesn’t sound all that scary. I mean, “random but convincing gibberish” is what the comment sections in most Facebook posts look like anyway, right?

Of course, the problem is that if you feed the bot misinformation about the topics it’s also reading factual information about, the question becomes: what kind of content comes out the other end? OpenAI thinks what you’d get is much more cheaply produced, weaponized disinformation.

One interesting note: OpenAI taught GPT-2 its tricks by giving it a glut of websites to read. With that many sites used, maybe your blog about the Roman army’s favorite soup recipes is in there somewhere.

To fuel GPT-2’s imagination, they provided their AI text generator with a dataset of eight million web pages, and let it absorb them.

To be honest, some of the examples that OpenAI provided don’t sound all that nefarious or scary. Some of them sound pretty hilarious. Then again, this one here sounds funny, but also like something you might see on certain Facebook pages or Twitter accounts denying climate change.

Human prompt: “Recycling is good for the world. NO! YOU COULD NOT BE MORE WRONG!!”

GPT-2 response: “Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming.”

This one is really quite hilarious. It starts to skew toward some alt-right history revisionist stuff at the end, but then collapses into a heap of mumbled garbage. It’s pretty good up to the last couple of sentences, though.

Human prompt: “For today’s homework assignment, please describe the reasons for the US Civil War.”

GPT-2 response: “By Donny Ferguson. It is easy to identify why the Civil War happened, because so many people and so many books and so much television and films tell us that it was the cause, that it has something to do with race or economics or religion. There is much agreement that it was essentially a war of slavery on behalf of capitalism, about a century of slavery. But that’s not what most people think of when they think of the Civil War.”

This is a passage from a much larger Tolkien-style assignment OpenAI gave GPT-2. I’ll be honest, this is actually a little more accessible than some of those damn Elvish songs in the actual “Lord of the Rings” novels.

Human prompt: “Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.”

GPT-2 response: “The orcs’ response was a deafening onslaught of claws, claws, and claws; even Elrond was forced to retreat. ‘You are in good hands, dwarf,’ said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night.”

Ultimately, OpenAI just thinks the ability of GPT-2 to create believable text from fake humans poses too much of a threat to release in full.

“Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code.”

OpenAI isn’t trying to say that other companies should follow suit. This is just how they are choosing to deal with the chance that they’ve built AI that one day couldn’t be stopped or contained.

“We’re not saying we know the right thing to do here, we’re not laying down the line and saying ‘this is the way’… We’re trying to build the road as we travel across it.”

For now, at least, it seems Skynet is not active. But maybe this isn’t the last we’ll hear of GPT-2…


Writer/comedian James Schlarmann is the founder of The Political Garbage Chute and his work has been featured on The Huffington Post. You can follow James on Facebook and Instagram, but not Twitter because he has a potty mouth.
