How Facebook got addicted to spreading misinformation
Synopsis
Facebook’s algorithms weren’t created to filter out what was false or inflammatory; they were designed to make people share and engage with content that was most likely to outrage or titillate them. The work of its Responsible AI team, tasked with tackling AI bias, is irrelevant to fixing problems of misinformation, extremism, and political polarization.
By Karen Hao

Joaquin Quiñonero Candela, a director of AI at Facebook, was apologizing to his audience.

It was March 23, 2018, just days after the revelation that Cambridge Analytica, a consultancy that worked on Donald Trump’s 2016 presidential election campaign, had surreptitiously siphoned the personal data of tens of millions of Americans from their Facebook accounts in an attempt to influence how they voted. It was the biggest privacy breach