What’s that app doing?

Fake news is prevalent in our society today and we find it on almost every platform, from social media, to television news, to presidential campaigns, and most of the time it seems harmless. But what if it is not? What if a text message you have received has the potential to be dangerous, or even lethal? What would you do with that message? How would you react? Would you kill?

The Trust Issue

When information originates from trusted sources such as a family member, a friend, or even a government official, it becomes hard to question its validity, because you trust its source. This is especially true when the information we receive validates our opinions, prejudices, or even the fears we may hold about a particular group or aspect of our society. We seek to confirm our biases, and it comforts us to know that we are not alone in what we are thinking or noticing in our communities.

This trust, this satisfaction we find in the information we choose to consume, makes the dissemination of information, and even misinformation, an easy task across social media platforms, where an immense amount of information congregates. Fake news ranges from the relatively harmless, such as a video of a commercial airliner doing a barrel roll during a typhoon, or the shark that seems to appear in almost every hurricane, to the more disturbing, such as the allegation that a Washington, D.C. pizza restaurant was a front for a child sex ring run by former Secretary of State Hillary Clinton and other prominent Democrats. Fake news, however, can also be far more dangerous, even deadly.

The Rumors

Over the past few months, mobs driven by misinformation spread on the popular messaging service WhatsApp have killed multiple people in states across the Republic of India. The messages in question target fears that people in any community hold: suspicion of outsiders and the dread of having their children taken. The rumors spreading on WhatsApp warned people of a fabricated threat: outsiders entering their area with the intention of abducting children, killing people, or even harvesting organs.

The popular messaging app, with over two hundred million users in India alone, allows messages and video or audio clips to be shared without any indication of their authenticity or origin. India is WhatsApp’s largest market, and with the price of smartphones and internet data decreasing annually, that market is only going to expand. The rumors even spread from WhatsApp to local media stations, where they took on a life of their own. This, together with the ease and speed at which information can be shared and a general lack of education about the dangers of sharing false information, contributes to the problem India finds itself in today.

So What Happened?

In May, a family of five was driving through the southern Indian state of Tamil Nadu to visit a temple; they got lost, however, and stopped to ask for directions. This raised the suspicion of locals, who took them for child traffickers, and eventually the family was confronted by a mob that dragged them out of their car, stripped them naked, and beat them. Soon, sixty-five-year-old Rukmani was dead, the others were close to it, and forty-six people were arrested.

This story is not unique. Nearly two dozen people have been killed, and many more injured, in this manner across India over the past few months. These killings are not limited to villages; attacks have also taken place in major cities and tech hubs, such as Bangalore, India’s third-largest city.

Some of the messages attributed to the killings were videos, one of which appeared to show a man on a scooter kidnapping a child. On its face, the video appears to be exactly what the rumors have been describing: a man abducting a child in a public street. However, the clip originates from a Pakistani public service announcement about the dangers of child abduction. The video ends with a message stating that three thousand children go missing annually in Karachi, Pakistan, and urging parents to be vigilant to ensure the safety of their children.

The version found on WhatsApp, however, was edited to remove the concluding message, leaving only the footage of the mock kidnapping. Without the clarifying message at the end, it is easy to mistake the depicted event for an actual kidnapping, and when the video is shared with a caption claiming that this is happening nearby, it can be persuasive. The messages and clips people receive may or may not reflect reality, and with no safeguards within the app to assess their credibility, there is little a recipient can do to verify their authenticity.

What Can be Done?

Discovering the origin of these false and potentially dangerous messages is no simple task. The end-to-end encryption that WhatsApp was built on makes it difficult for authorities to determine where the messages originate. When authorities join WhatsApp groups, as was the case in Balaghat, a district in the state of Madhya Pradesh, they may be able to determine who broadcasts such messages and make arrests. But this is the exception rather than the norm.

In an attempt to counteract the spread of dangerous misinformation, WhatsApp has taken a number of steps aimed at educating the public and limiting the spread of false information. Two blog posts from July on WhatsApp’s website describe two tools the company has implemented to help curb the issue: limits on message forwarding and labels marking forwarded messages as forwarded. Both measures are meant to urge users to consider the validity of a message before sharing it with others. WhatsApp also removed the “quick-forward” button that used to appear next to messages containing media. This is expected to help curb the spread of fake news, as users in India forward more messages than users in any other country.

Authorities in India are doing their best to educate the public about how to distinguish what is false from what is true. Police have taken to the streets, handing out flyers, holding town meetings, and even speaking to students, all in an effort to urge the public to be skeptical of what they come across online. WhatsApp itself recently took out informational ads in leading Indian newspapers, in both Hindi and English. In some areas, the Indian government has even shut down the internet at times in an attempt to quell the flow of misinformation, and WhatsApp has offered a $50,000 reward to anyone who can come up with a solution to the problem.

However, these are all just temporary fixes to a larger problem. While much can be done to improve the app, it is not fair to place all the blame for these events on WhatsApp, for on the other side of these messages are real people who choose to create and distribute this information. This points to a larger societal problem that cannot be solved by alterations to the app’s policies or code. Change needs to occur that dissuades people from wanting to produce misinformation in the first place, before it is spread. If those people are reached, then WhatsApp, Indian officials, and the general public have a chance at preventing its spread. In the meantime, think before you share a post.


By Ryan Campbell

Photo Credits

WhatsApp, Senado Federal (CC BY 2.0)

Angry Mob, Dalibor Levíček (CC BY-NC-SA 2.0)

Screenshot WhatsApp, Ryan Campbell
