How To Protect Yourself From Online Information Vandals
How do ideas and information travel from out in the world onto our phones and then into our brains?
When I ask this question, people often respond by describing how big tech companies monitor our online behavior and sell our data to advertisers. And while the Googles and Facebooks of the world are certainly a part of the story, they only scratch the surface.
Today, there are far more powerful tools at work competing to shape what we see online and to affect our behavior offline. What’s most terrifying is that these tools don’t require multimillion-dollar deals between Silicon Valley powerhouses and big data firms. They are widely accessible, cheap and can be deployed from a living room in Beijing, a dorm room in Kansas or an office in Moscow. With little effort or cost, anyone can change the very information architecture that we all rely on to make sense of the world around us.
We are living through an unprecedented era of information vandalism. And the vandals are only getting more creative and harder to stop.
In January, it was reported that individuals who asked Siri, “Who is the president of Israel?” received this response: “Reuven Rivlin is the president of the Zionist occupation state.”
Because Apple sources the information for Siri’s “knowledge” tool from Wikipedia — a resource with an open-door editing policy — it seems likely that Wikipedia “editors” hostile to Israel had altered the page about its head of state, changing his title to president of “the Zionist occupation state” (a hostile term used by Iran and other enemies of Israel). With a few keystrokes, these editors fundamentally changed how Israel was represented to hundreds of millions of iPhone users.
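That open-door policy is itself visible through Wikipedia's public MediaWiki API, which exposes every page's revision history. The sketch below shows one simple way a reader (or an assistant vendor) could flag suspicious recent edits; the API endpoint and JSON shape are real, but the payload is illustrative rather than a real capture, and the function name `recent_anonymous_edits` is my own invention for this example.

```python
import json

# Sample payload in the shape the MediaWiki API returns for a query like:
# https://en.wikipedia.org/w/api.php?action=query&prop=revisions
#   &titles=Reuven%20Rivlin&rvprop=user|timestamp|comment&rvlimit=3&format=json
# (illustrative data, not an actual capture)
SAMPLE = json.loads("""
{
  "query": {
    "pages": {
      "12345": {
        "title": "Example page",
        "revisions": [
          {"user": "203.0.113.7", "anon": "",
           "timestamp": "2020-03-15T09:12:00Z", "comment": "updated infobox"},
          {"user": "TrustedEditor",
           "timestamp": "2020-03-10T14:03:00Z", "comment": "copyedit"}
        ]
      }
    }
  }
}
""")

def recent_anonymous_edits(payload):
    """Return (timestamp, user, comment) for each anonymous revision.

    In the classic API format, edits by unregistered (IP) users carry
    an "anon" key — a common, though far from definitive, vandalism signal.
    """
    flagged = []
    for page in payload["query"]["pages"].values():
        for rev in page.get("revisions", []):
            if "anon" in rev:
                flagged.append((rev["timestamp"], rev["user"], rev.get("comment", "")))
    return flagged

print(recent_anonymous_edits(SAMPLE))
```

A check like this is only a heuristic — plenty of anonymous edits are constructive, and determined vandals can register accounts — but it illustrates how transparent the edit trail behind Siri's answers actually is.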
A similar controversy erupted last year, when editors replaced Donald Trump’s official Wikipedia portrait with a picture of male genitalia that Siri then served to users — and, before that, when they labeled former UK Prime Minister David Cameron the dictator of the United Kingdom.
More often than not, these edits are treated like clever pranks, but where is the line between fun and malice?
In the case of Rivlin, as in many other instances, these acts of information vandalism are often deployed as part of a broader strategy to influence a narrative that extends far beyond sites like Wikipedia.
Governments, businesses, criminals and activists with an ax to grind can bring together the new tools of information warfare to powerful effect, whether that’s creating websites posing as news outlets and getting them featured on Facebook or Google News or developing armies of digital bots and trolls to initiate or amplify narratives.
The consequences of this vandalism cannot be overstated. Studies have shown that our brains are remarkably adept at tricking us into believing misinformation, especially when it’s shared online. Vandals know this — and that’s why they target the very heart of our information resource centers. Their vandalism directly impacts the 85% of adults who get their news on their phones and the nearly 70% of Americans who get their news on social media.
Concerns over the internet’s impact on our cognition and information processing are important. But what about Siri, Alexa and Google Assistant?
By 2021, there will be nearly one Alexa-like device for every human on Earth. And, unlike an online article, these devices have the power to speak. Combine our propensity to accept online information as codified truth with an internet-powered human voice, and we are in deep trouble.
These devices work because they are reliable. They make good kitchen timers and music players and provide quick answers to questions. Without our trust, these devices become useless.
Sure, it may be too early to understand how these devices affect our information processing, but the next time you ask Alexa to name the capital of Poland or tell you how many bug species exist, ask yourself whether you feel any skepticism at all.
While editors try to safeguard the truth, there are too many vulnerabilities and even more bad actors out there for them to always succeed.
What’s the solution?
We need to teach the public to treat information with a more critical eye. Our schools need to help develop greater literacy around online media. And tech companies need to do better at providing additional safeguards — not just to prevent the sharing of fake news, but also to protect the information itself.
In this era of rampant manipulation, the stakes are too high.
Originally posted March 16th, 2020 on Forbes.com