Imagine using a video editing program in which you can modify talking-head videos (footage from the shoulders up) by extracting patterns from real clips. The system uses machine learning to create a realistic video of a person saying something they never said. With tools like these, seeing is no longer believing.
You can type a word into the program, or drag and drop a new phrase, and the program makes the person in the video repeat the new script.
Critical evaluation challenges
This has big implications for video editing and content creation, but deepfake videos pose even bigger challenges as we think about critical evaluation of online information.
Think about how quickly people are fooled by textual content and links shared via social media. Phony video evidence of people saying things they never said will spread like wildfire.
Furthermore, the “checklist approaches” to critically evaluating online information that we teach in our schools and libraries will never be able to prepare individuals to consume and critique these sources of information.
Thanks to Bryan Alexander