Departments

Teens & Tech: Issues of Privacy and Consent: AI and the world of fake porn

There is little doubt that the future has arrived with the advent of artificial intelligence.

Facial recognition tools and virtual reality are now with us and advancing at breakneck speed. What was once science fiction is here to stay. This developing technology has found uses in astronomy, aviation, and even the dairy industry, where it can track the health of cows. Unfortunately, as with any breakthrough technology, someone will find illegal and immoral ways to exploit it. Fake sex videos, known as “parasite porn” or “morph porn,” are now being created with artificial intelligence, which produces significantly more realistic videos than earlier attempts. Pornographic videos are increasingly being produced and posted online using the faces of popular celebrities.

The technology uses an AI method known as “deep learning,” in which a computer is fed data and uses that information to make decisions. The videos are created with a machine-learning algorithm using easily accessible images and open source code. In the case of fake porn, the computer assesses candidate images and selects the facial image that most closely resembles the target; the celebrity’s face is then swapped onto a pornographic video. Celebrities are a favourite target today, but anyone who posts photographs on social media can lose control of their image and identity. The results of this unethical use of AI are frighteningly realistic.
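To make the face-selection step concrete, here is a toy sketch — not the code of any real tool — of how software might pick the stored image that "most closely resembles the target." It assumes faces have already been converted to numeric embedding vectors (as face recognition models do); the dimensions and data are invented for illustration.

```python
import numpy as np

def closest_face(target: np.ndarray, candidates: np.ndarray) -> int:
    """Return the index of the candidate embedding most similar to target."""
    # Cosine similarity: normalize each vector, then take dot products.
    t = target / np.linalg.norm(target)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return int(np.argmax(c @ t))

rng = np.random.default_rng(1)
library = rng.normal(size=(100, 128))              # 100 stored face embeddings
frame = library[42] + 0.05 * rng.normal(size=128)  # a noisy view of face 42
print(closest_face(frame, library))                # → 42
```

The matching image found this way is what gets pasted onto the video frame, which is why the results look so convincing: the software never has to invent a face, only to find and blend one.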

A recent story in Motherboard reported that a Redditor with the online name ‘deepfakes’ is leading the way in creating AI celebrity porn. So far he has posted hardcore porn videos on Reddit featuring the faces of Maisie Williams, Aubrey Plaza, Taylor Swift, Scarlett Johansson, and Gal Gadot. “I just found a clever way to face swap,” he told Motherboard. “With hundreds of face images, I can easily generate millions of distorted images to train the network. After that, if I feed the network someone else’s face, it will think it is just another distorted image and try to make it look like the training face.” The reality of our vulnerability is illustrated by the fact that in 2015 people uploaded 24 billion selfies to Google Photos. Many of us are creating sprawling databases of our own faces.
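The trick described in that quote can be sketched in a few lines of numpy. This is emphatically not deepfakes’ actual code — his networks are deep and operate on images — but a single linear layer trained on invented toy vectors shows the same behaviour: a model trained only to undo distortions of one “training face” will pull any other input toward that face.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 16                                    # stand-in for pixel features
target_face = rng.normal(size=FACE_DIM)          # the "training face"

# "Millions of distorted images": here, 1,000 noisy copies of the target.
distorted = target_face + rng.normal(size=(1000, FACE_DIM))

# Train a linear map (W, b) by gradient descent to undo the distortion.
W = np.zeros((FACE_DIM, FACE_DIM))
b = np.zeros(FACE_DIM)
lr = 0.02
for _ in range(3000):
    pred = distorted @ W.T + b
    err = pred - target_face                     # every row should match target
    W -= lr * (err.T @ distorted) / len(distorted)
    b -= lr * err.mean(axis=0)

# Feed the network "someone else's face": it treats it as just another
# distorted image, and the output lands near the training face.
other_face = rng.normal(size=FACE_DIM)
restored = other_face @ W.T + b
print(np.linalg.norm(restored - target_face),
      np.linalg.norm(other_face - target_face))
```

The second printed distance is much larger than the first: the stranger’s face goes in, and something close to the training face comes out. Scaled up to real images and deep networks, that is the face swap.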

Deepfakes uses open source machine learning tools such as TensorFlow, which Google makes freely available to graduate students, researchers, and anyone with an interest in machine learning. “This is no longer rocket science,” he asserts. Another Reddit user has since jumped on the bandwagon and created a desktop application called FakeApp, which lets novices easily swap a face in a video with one lifted from another, much like Snapchat’s face-swap feature. The ease with which anyone can create fake videos is frightening. Celebrity porn videos made with machine-learning software are spreading online, and the law can do little about it; internet companies such as Reddit are attempting to stop such material from being posted. It will soon become dangerously easy to create realistic videos of people doing things that can threaten their careers, their reputations, and their mental health.

This story naturally raises issues of consent and of the violation of privacy. In Canada the law is struggling, so far unsuccessfully, to keep up with this burgeoning problem and the damage done to unwitting victims. Creating and distributing fake pornography without the consent of the person whose face appears in the video is a form of image-based sexual abuse, also known as “non-consensual pornography” or “revenge porn.” Technology now allows videos and photographs to be disseminated at the touch of a button, and problems arise when the images are of a personal and sensitive nature and are shared without consent.

On January 21, 2016 the Ontario Superior Court of Justice recognized a new tort of “breach of confidence.” This ground-breaking decision attempted to bridge a gap in the law, giving victims legal and financial recourse. In this case an action for damages was brought by a woman against her ex-boyfriend after he posted an explicit video of her on an internet pornography website. It took three weeks before she realized it had been published, and she was horrified. She suffered from depression and panic attacks brought on by the experience, but found the strength to fight back. The court decided in her favour, finding that the harm she suffered should be appropriately compensated. The new tort requires three elements to be proven: the information posted must be of a private and personal nature; it must have been intended to remain confidential; and its unauthorized use must have caused economic losses and emotional harm.

The court’s decision was positive news for victims of non-consensual pornography, and a clear warning to those who think there is no prohibition against posting another’s intimate images online. Other provinces have also addressed these legal concerns with varying degrees of success. Manitoba introduced The Intimate Image Protection Act, which came into force in January 2016, specifically to deal with the non-consensual sharing of intimate images. These new laws are in a state of flux across Canada, and there has been a recent challenge to the newly minted Ontario decision: the original default judgment was overturned and the matter is still before the courts, so its precedential value remains to be determined. The effectiveness and longevity of Manitoba’s law is also uncertain, given that cyberbullying laws in other provinces have been struck down as unconstitutional. An example is Nova Scotia’s Cyber-safety Act, enacted in response to the sexual humiliation and cyberbullying of Rehtaeh Parsons, who committed suicide in 2013; the Nova Scotia Supreme Court struck it down because it violated the Canadian Charter of Rights and Freedoms.

Mary Anne Franks, a law professor with experience in this field, helped draft America’s Intimate Privacy Protection Act, which was introduced in Congress in July 2016 and re-introduced as the ENOUGH Act in November 2017. According to Franks, these online images are indeed “non-consensual pornography,” and she believes the practice will be tough to stamp out. Franks is also uneasy about Bill C-13, Canada’s anti-cyberbullying legislation, which passed in 2015. “It seems like a way to get Canadians to accept a greater intrusion on the part of government and police into their personal lives and using revenge porn as a pretext for doing that, which is really upsetting…. We don’t want to use a legitimate recognition of harmful behaviour as a pretext for violating people’s civil rights. I don’t think it’s ever going to work to try to protect privacy by invading privacy.” Franks is not alone in her concerns; legal experts and civil liberties advocates are also alarmed. “Reasonable grounds for suspicion” is a low bar, but under Bill C-13 that is all an officer needs to obtain a court order.

Ethically, the implications of the malicious use of this software are significant. Alex Champandard, an artificial intelligence researcher, believes we need a very loud public debate: “Everyone needs to know just how easy it is to fake images and videos.” He thinks researchers will develop technology to detect fake videos, and that internet policy can be improved to regulate forgeries when they pop up. Ion Stoica, a professor at the University of California, Berkeley, has identified security and safety as major topics for research and concern in artificial intelligence. Ethics is a central issue in discussions about machine learning, triggered in part by the use of algorithms to make decisions that affect citizens. Advances in AI have created new risks with the advent of Big Data, where large amounts of information are collected about each of us and fed to algorithms to make predictions. We have no way of knowing when that data is being collected, or of ensuring that it is correct, up to date, or assessed in its proper context. Tesla CEO Elon Musk’s recent words at a meeting of the National Governors Association were chilling: “I have access to the very most cutting-edge AI, and I think people should really be concerned about it.” He described it as “the biggest risk to civilization…. AI is a rare case where I think we need to be proactive in regulation, instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.” He believes AI “could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information.” These are words we would do well to heed, for ourselves and for our students. Celebrity porn may be just the tip of the iceberg.

By: Alison Zenisek
