How Facebook Is Fighting Terrorism And Changing The Way We View News Through Artificial Intelligence
Interested in how Facebook is fighting terrorism?
Have you ever thought about how Facebook uses its enormous pool of seemingly private user data?
Facebook has long maintained that it’s purely a technology platform, using its user data to develop artificial intelligence that better serves its users – a tweak to the News Feed algorithm here, some targeted advertising there. After all, the best way to retain users is to know what they want.
There are very few ways in which Facebook has ever admitted to using its data to intervene in its users’ lives (aside from maybe convincing us to buy something we don’t need on Amazon because of its incredibly on-point ads).
In fact, Facebook even battled the FBI’s attempt to expand surveillance last year, urging Congress to halt the rewrite of a US surveillance law that would have allowed the FBI to gather web history the way it gathers phone records.
Despite their hard-and-fast views on privacy, it seems like Facebook has changed its tune, and it may be a result of our current political climate.
In an unprecedented and sweeping letter, Facebook CEO Mark Zuckerberg announced he would finally use his platform’s power to truly intervene in people’s lives – from changing the way we view news to detecting terrorism through artificial intelligence.
Changing the Way We See The News
Recently, it seems like Donald Trump’s biggest grievance about his presidency is so-called “fake news.” Trump may be totally off-base in insisting that esteemed news sources with decades-long legacies of excellence, like The New York Times and CNN, fabricate their reporting simply because he dislikes it. But he’s not entirely wrong that fake news on social media exists and sways (or at least momentarily confuses) the public.
A whopping 44 percent of all US adults get their news from Facebook, according to the Pew Research Center, and 62 percent of all Americans get their news from social media in general. Yet for every article posted from a highly esteemed publication like Vice, Vox, or Mic, regardless of political leaning, there’s another posted from a completely fabricated news site.
The scariest part is that according to Vox, fake news on Facebook is more viral than the actual, real news. It’s more likely to be shared and more likely to show up in your News Feed than what’s actually happening in our world.
Maybe it’s an article from The Onion or a similar satire site that a user didn’t really pick up on (hey, it happens to the best of us, even if we wouldn’t like to admit it) or a heavily-slanted op-ed being passed off as fact.
Maybe it’s just a story that’s completely made up and posted to a blog that cites zero sources and only wants to get a rise out of people. Either way, if over half of all Americans get their news through social media, yet so much of the news that ends up being widely shared is false, that’s a huge, huge problem.
Facebook’s plan to vet fake news using artificial intelligence has raised major controversy. The tech giant has its own text-understanding engine, DeepText, which extracts meaning from the words we post and analyzes them in context.
Facebook will use this to down-rank fake news. Articles the AI deems highly sensationalized, biased, or fabricated will be shown to users far less often than articles that are well-sourced, even-handed, and published by well-respected outlets.
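Facebook hasn’t published how its ranking actually works, but the basic idea – score a piece of content, then scale its feed rank by that score – can be sketched in a few lines. This is a purely illustrative toy: the keyword list, penalty factor, and function names are all hypothetical, and real systems like DeepText use learned models rather than keyword matching.

```python
# Toy sketch of score-based down-ranking. Real systems use learned models;
# this keyword heuristic and its threshold values are purely illustrative.

SENSATIONAL_MARKERS = {"shocking", "you won't believe", "miracle", "exposed"}

def sensationalism_score(headline: str) -> float:
    """Fraction of known sensational markers found in the headline (0.0 to 1.0)."""
    text = headline.lower()
    hits = sum(1 for marker in SENSATIONAL_MARKERS if marker in text)
    return hits / len(SENSATIONAL_MARKERS)

def feed_rank(base_rank: float, headline: str, penalty: float = 0.5) -> float:
    """Down-rank an article in proportion to how sensationalized it looks."""
    return base_rank * (1.0 - penalty * sensationalism_score(headline))

print(feed_rank(1.0, "Senate passes budget bill"))      # → 1.0 (no markers)
print(feed_rank(1.0, "SHOCKING miracle cure EXPOSED"))  # → 0.625 (down-ranked)
```

The point of the sketch is that nothing is ever censored outright: a high sensationalism score only shrinks how often the article surfaces, which matches Facebook’s stated approach of showing dubious content less rather than deleting it.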
For some people, this raises a concern. If the freedom of the press is a constitutional right, shouldn’t Facebook give every article an equal chance of being seen by users? Sure, everyone has a right to create whatever kind of stories they want, regardless of the truth, but just because you can write it, doesn’t mean it should be shared.
Even as a publicly traded company, Facebook has a responsibility to uphold the integrity of the news when almost half of all Americans use its service the way people once used cable news networks.
No one likes spam, and sensationalized, inaccurate news spread on such a widely used platform, even one that never intended to be a news source, is just as bad as a cable news network inventing a study claiming that eating junk food 24/7 will help you lose weight, or that the Illuminati is behind Donald Trump’s presidency.
It may be interesting, sure, but it’s just not helpful.
Fighting Terror Through Artificial Intelligence
Another way in which Facebook plans to utilize its droves of user data is to fight terrorism. Now, this may seem a bit hypocritical, since Facebook tried to block Congress from forcing major tech companies to hand over web history, but despite what users may think, Facebook doesn’t actually need to read users’ private messages at all.
Facebook plans to use artificial intelligence to screen users for terrorism – they’re not planning on having the FBI comb through the average user’s data.
For example, the service has previously used metadata alone to identify spammers on WhatsApp without ever looking at the actual messages. When asked about the balance between safety and privacy, Zuckerberg said, “You can have two goals on things, even if they’re a little bit in conflict, and make progress on both.”
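Metadata-only detection is easier to picture with a concrete example. The sketch below flags an account purely from sending patterns – rate, fan-out, and replies – without ever touching message content. WhatsApp’s real signals and thresholds are not public, so every field name and cutoff here is an assumption made up for illustration.

```python
# Toy metadata-only spam check: no message content is read, only counts.
# All signals and thresholds are hypothetical; the real ones aren't public.

from dataclasses import dataclass

@dataclass
class AccountMetadata:
    messages_per_hour: int    # sending rate
    distinct_recipients: int  # fan-out across contacts
    replies_received: int     # real conversations usually get replies

def looks_like_spammer(meta: AccountMetadata) -> bool:
    """Flag accounts that blast many recipients at high rates with no replies."""
    high_volume = meta.messages_per_hour > 200
    broad_fanout = meta.distinct_recipients > 100
    one_way = meta.replies_received == 0
    return high_volume and broad_fanout and one_way

print(looks_like_spammer(AccountMetadata(500, 300, 0)))  # → True
print(looks_like_spammer(AccountMetadata(12, 5, 9)))     # → False
```

Notice that the classifier never sees a single word anyone wrote, which is exactly the privacy argument Zuckerberg is making: behavioral metadata alone can separate a spam blaster from someone planning brunch.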
It’s unrealistic to think that all 1.86 billion Facebook users are using the platform for good. If you and I can seamlessly use the service to organize our brunch plans, think about how easy it is for terrorists to organize widespread recruitment efforts and events.
Even if every FBI agent or police officer made a fake account to go undercover (though frowned upon, this does happen), they still wouldn’t be able to comb through almost two billion users and all of the data they post. Artificial intelligence can.
Facebook plans to build an AI that can detect terrorist recruitment and “quickly remove anyone trying to use [the service] to recruit for a terrorist organization.” It’s still a bit far off, because the system needs to distinguish between news about terrorist attacks and the actual planning of attacks, but within the next couple of years we could see major changes.
Facebook is still in its infancy. If the service was a human being, it’d probably be just about getting its braces off and finally learning how to accurately cover acne with powder concealer. In fact, it probably wouldn’t have even had its first kiss yet.
In its short 13 years, the service has gone from a dorm-room project to one of the most influential social networks on the planet. It’s still figuring things out, but the new changes that Mark Zuckerberg has proposed could truly change the world for the better.
Frequently Asked Questions
AI is being coupled with big-data analytics to help identify patterns of behavior that might indicate terrorism. Much of the technology doing this is secret; however, revelations by Edward Snowden showed just how extensive this surveillance is.
AI is currently being used to improve everything from the accuracy of virus scanners at identifying threats to helping large corporations prevent system breaches by hackers.
Artificial intelligence certainly has the potential to pose a huge risk to humanity in the distant future. To prevent this, companies will need to act responsibly with the AI systems they develop, and governments will need to move quickly to enact laws that curtail any dangerous new technologies.