The last general election was altered by the rise of fake news spread on social media. We’re entering the first general election that will be influenced by fake video—altered, dubbed, even fully fabricated video.

Despite the omnipresence of warnings like the one above, we often dismiss them as alarmist, or assume we know how to spot a fake. In recent weeks it has become increasingly evident that turning a blind eye to this shifting media landscape is not enough.

In the era of AI and facial recognition, ever more convincing doctored videos have emerged that falsely depict well-known public figures. With this technology, people can quite literally "put words in someone's mouth," making the quest for truth even murkier.

These videos remain a shade removed from the real thing: for the time being, close scrutiny reveals that a video has been altered. The technology, however, has made leaps and bounds in a very short period of time, and in the years to come it will undoubtedly blur even further into realism.

“Today’s AI-generated faces are full-color, detailed images. They are expressive. They’re not an average of all human faces, they resemble people of specific ages and ethnicities,” said Kelsey Piper of Vox.

In 2016, Oxford Dictionaries named “post-truth” its word of the year; needless to say, the trend has only gained momentum since. An urgent need has arisen to increase K-12 students’ digital literacy, whether by building capacity within individual departments (such as library sciences, English, and social studies) or through integrated, cross-curricular instruction that investigates authority, perspective, and copyright. But teaching users how to best navigate content, while necessary, is hardly preventative; technology must also continue to adapt and grow through developments in blockchain and cryptography.

In “Amusing Ourselves to Death,” Neil Postman traces the trajectory of ‘truth’ in relation to the medium that carries it: from oration in the classical era through the printing press, photography, and television. That assessment of presumed objectivity factored in who has the authority to publish, which is rarely part of the equation today. Despite the decentralization of publishing, we continue to regard video as a stalwart medium of truth and authenticity—particularly when the speaker is on camera—because, well, it has been easy to spot a fake.

To help combat this trend toward misinformation, many organizations are recognizing the potential harm and hoping to arm learners everywhere with the tools they need to spot fakes and understand author bias. KQED Teach now offers a course on critically consuming media. In addition to such involved courses, thought leaders are compiling quick tips and best practices.

George Fox University professor John Spencer (@spencerideas) offers five C’s of critical consumption:

  • Context: Look at the context of the article. When was it written? Where does it come from? Have the events changed since then?
  • Credibility: Check the credibility of the source. Does the site have journalistic integrity? Does the author cite credible sources? Is it satirical? Is it on a list of fake news sites? Is it actually an advertisement posing as a real news story?
  • Construction: Analyze the construction of the article. What is the bias? Are there any loaded words? Any propaganda techniques? Any omissions that you should look out for? Can you distinguish between the facts and opinions?
  • Corroboration: Corroborate the information with other credible news sources. Make sure it’s not the only source making the claim. If it is, there’s a good chance it’s actually not true.
  • Compare: Compare it to other news sources to get different perspectives. Find other credible sources from other areas of the ideological or political spectrum to provide nuance and get a bigger picture of what’s actually happening.
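Parts of the checklist above can even be automated as a first pass. The sketch below, a minimal illustration only, checks one question from the Credibility step ("Is it on a list of fake news sites? Is it satirical?") against hand-coded example lists; the domain lists and function name here are hypothetical, not part of Spencer's framework or any real curated database.

```python
from urllib.parse import urlparse

# Hypothetical, hand-coded lists for illustration only. Real media-literacy
# work relies on curated, maintained resources, not a hard-coded set.
KNOWN_UNRELIABLE = {"example-fakenews.com", "totally-real-news.net"}
SATIRE_SITES = {"theonion.com"}

def quick_credibility_flags(url: str) -> list[str]:
    """Return rough warning flags for a story URL (Credibility step only)."""
    # Normalize the domain: strip scheme, path, and a leading "www."
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    flags = []
    if domain in KNOWN_UNRELIABLE:
        flags.append("on a list of known fake-news sites")
    if domain in SATIRE_SITES:
        flags.append("satirical source")
    return flags

print(quick_credibility_flags("https://www.theonion.com/some-story"))
```

A script like this can only flag the easy cases; the other four C's — context, construction, corroboration, and comparison — still require human judgment.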

Within the last month, online platforms decided not to remove the recent deepfake video of Mark Zuckerberg, stating that they will, instead, “treat this content the same way we treat all misinformation on Instagram […] If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like explore and hashtag pages.”

According to Hany Farid, a computer science professor at the University of California, Berkeley, “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.” Despite the disparity between those working for and against this deep-learning video AI, there have been developments and stopgap measures meant to alleviate the more immediate threats of deepfakes; still, with an article title like “A new deepfake detection tool should keep world leaders safe—for now,” one can only hope that a more permanent solution is around the corner.

While deepfakes currently take the form of very high-quality video editing (i.e., altering a pre-existing video), the technology for fully synthetic video is rapidly improving. It will eventually enable hyper-realistic video of anyone, anywhere, doing or saying anything. The political and social implications of such a development are evident and frightening. It has never been more important to understand bias, check facts, and familiarize yourself with sources.

This is the first blog in a series on digital discernment, a #FutureofWork series on teaching and leading in the age of AI. Stay tuned for additional articles later this month.


Stay in-the-know with all things EdTech and innovations in learning by signing up to receive the weekly Smart Update.
