Regulating Deep Fakes

There is an ocean of data on the Internet, growing every second from both humans and bots. Most of this data is meaningless and harmless. It’s noise. But some of it is harmful, intended to influence and misdirect public opinion. This harmful data appears authentic (fake news), which is why it’s so effective. Deep fakes are about to make the problem much worse.

For a politician, deep fakes might be one of the most terrifying technologies to emerge since the nuclear bomb. The AI-based technology can create illusions that seem as real as the nightly news. In fact, it could recreate the news with a completely fabricated narrative. It is no surprise that Congress has introduced a bill to regulate deep fakes.

The bill is called the Malicious Deep Fake Prohibition Act of 2018.

The bill defines a deep fake as “an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual.”

The bill makes it illegal to create a deep fake that you intend to distribute to “facilitate criminal or tortious conduct under Federal, State, local, or Tribal law,” or to distribute a deep fake that facilitates criminal conduct. Violating the law could carry up to a 10-year prison sentence.

Under this bill, networks are obligated to restrict access to, or the availability of, deep fakes, and to restrict access to information about them. That creates a burden on networks like YouTube and Facebook to police deep fakes. I am curious how this compares to DMCA Title II, which created the safe harbor for online service providers, reducing a network’s liability when a user uploads copyrighted material.

Laws often have unintended consequences. Social networks have long automated DMCA takedowns, for instance. Automated takedowns have left content creators whose rightfully owned content was erroneously removed without recourse, while companies with large legal teams have issued takedowns for content that isn’t theirs.

The Steve Buscemi/Jennifer Lawrence deep fake, removed from YouTube

Now imagine the unintended consequences of deep fake regulation. Your identity could be incorrectly removed if a deep-fake-spotting algorithm decides you aren’t real. Conversely, without regulation, imagine someone deep faking you and having ‘you’ spread harmful information. What if the AI running your synthetic influencer uploads content deemed illegal, and you don’t realize it?

Regulating deep fakes is logical, but likely not sufficient. Instead, legislation will need to be matched with technology innovation. Identity ownership and control is still an unsolved problem, 15 years after the launch of Facebook. Deep fakes will be harmless fun, like social networks were, up until they aren’t. While the Malicious Deep Fake Prohibition Act of 2018 is unlikely to be the final bill, the fact that the conversation has started at a Congressional level is a good sign.