For 20 years, privacy advocates have been sounding the alarm about commercial online surveillance.
Privacy advocates tried to explain that persuasion was just the tip of the iceberg. Commercial databases were juicy targets for spies and identity thieves, to say nothing of blackmail for people whose data-trails revealed socially risky sexual practices, religious beliefs, or political views.
I like this framing of persuasion vs. targeting.
We’re confusing automated persuasion with automated targeting.
A reminder of the work happening behind the scenes with big data sets, computers, and computer scientists.
Cambridge Analytica are like stage mentalists: they’re doing something labor-intensive and pretending that it’s something supernatural. A stage mentalist will train for years to learn to quickly memorize a deck of cards and then claim that they can name your card thanks to their psychic powers. You never see the unglamorous, unimpressive memorization practice.
This persuasion/targeting dynamic again.
Facebook isn’t a mind-control ray. It’s a tool for finding people who possess uncommon, hard-to-locate traits, whether that’s “person thinking of buying a new refrigerator,” “person with the same rare disease as you,” or “person who might participate in a genocidal pogrom.” It then pitches them on a nice side-by-side or some tiki torches, while showing them social proof of the desirability of their course of action, in the form of other people (or bots) doing the same thing, so they feel like they’re part of a crowd.
Facebook doesn’t have a mind-control problem, it has a corruption problem. Cambridge Analytica didn’t convince decent people to become racists; they convinced racists to become voters.