Google Seeks to Break Vicious Cycle of Online Slander

For many years, the vicious cycle has spun: Websites solicit lurid, unverified complaints about supposed cheaters, sexual predators, deadbeats and scammers. People slander their enemies. The anonymous posts appear high in Google results for the names of victims. Then the websites charge the victims thousands of dollars to take the posts down.

This circle of slander has been lucrative for the websites and associated middlemen — and devastating for victims. Now Google is trying to break the loop.

The company plans to change its search algorithm to prevent websites, which operate under domains like BadGirlReport.date and PredatorsAlert.us, from appearing in the list of results when someone searches for a person’s name.

Google also recently created a new concept it calls “known victims.” When people report to the company that they have been attacked on sites that charge to remove posts, Google will automatically suppress similar content when their names are searched for. “Known victims” also includes people whose nude photos have been published online without their consent, allowing them to request suppression of explicit results for their names.

The changes — some already made by Google and others planned for the coming months — are a response to recent New York Times articles documenting how the slander industry preys on victims with Google’s unwitting help.


“I doubt it will be a perfect solution, certainly not right off the bat. But I think it really should have a significant and positive impact,” said David Graff, Google’s vice president for global policy and standards and trust and safety. “We can’t police the web, but we can be responsible citizens.”

That represents a momentous shift for victims of online slander. Google, which fields an estimated 90 percent of global online search, historically resisted having human judgment play a role in its search engine, although it has bowed to mounting pressure in recent years to fight misinformation and abuse appearing at the top of its results.

At first, Google’s founders saw its algorithm as an unbiased reflection of the internet itself. It used an analysis called PageRank, named after the co-founder Larry Page, to determine the worthiness of a website by evaluating how many other sites linked to it, as well as the quality of those other sites, based on how many sites linked to them.
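To make that idea concrete, here is a minimal sketch of the basic power-iteration form of PageRank: each page's score is built up from the scores of the pages that link to it. This is an illustration only; the tiny example graph, damping factor, and iteration count are assumptions for demonstration, not details of Google's actual system.

    # Minimal PageRank sketch: a page's score depends on how many pages
    # link to it and on those linking pages' own scores.
    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}  # start with equal scores

        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:
                    # A page with no outgoing links spreads its score evenly.
                    share = damping * rank[page] / n
                    for p in pages:
                        new_rank[p] += share
                else:
                    # Each outgoing link passes along an equal share of the score.
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # Hypothetical example: page "a" is linked to by both "b" and "c",
    # so it ends up with the highest score.
    print(pagerank({"a": ["b"], "b": ["a"], "c": ["a", "b"]}))

In this simplified form, the scores converge after repeated passes, so a page cited by many well-cited pages rises to the top, which is the behavior the article describes.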

The philosophy was, “We never touch search, no way no how. If we start touching search results, it’s a one-way ratchet to a curated internet and we’re no longer neutral,” said Danielle Citron, a law professor at the University of Virginia. A decade ago, Professor Citron pressured Google to block so-called revenge porn from coming up in a search of someone’s name. The company initially resisted.

Google articulated its hands-off view in a 2004 statement about why its search engine was surfacing anti-Semitic websites in response to searches for “Jew.”

“Our search results are generated completely objectively and are independent of the beliefs and preferences of those who work at Google,” the company said in the statement, which it deleted a decade later. “The only sites we omit are those we are legally compelled to remove or those maliciously attempting to manipulate our results.”

Google’s early interventions in its search results were limited to things like web spam and pirated movies and music, as required by copyright laws, as well as financially compromising information, such as Social Security numbers. Only recently has the company grudgingly played a more active role in cleaning up people’s search results.

The most notable instance came in 2014, when European courts established the “right to be forgotten.” Residents of the European Union can request that what they regard as inaccurate and irrelevant information about them be removed from search engines.

Google unsuccessfully fought the court ruling. The company said that its role was to make existing information accessible and that it wanted no part in regulating content that appeared in search results. Since the right was established, Google has been forced to remove millions of links from the search results of people’s names.

More pressure to change came after Donald J. Trump was elected president. After the election, one of the top Google search results for “final election vote count 2016” was a link to an article that wrongly stated that Mr. Trump, who won in the Electoral College, had also won the popular vote.

A few months later, Google announced an initiative to provide “algorithmic updates to surface more authoritative content” in an effort to prevent intentionally misleading, false or offensive information from showing up in search results.

Around that time, Google’s antipathy toward engineering harassment out of its results began to soften.

The Wayback Machine’s archive of Google’s policies on removing items from search results captures the company’s evolution. First, Google was willing to disappear nude photos put online without the subject’s consent. Then it began delisting medical information. Next came fake pornography, followed by sites with “exploitative removal” policies and then so-called doxxing content, which Google defined as “exposing contact information with an intent to harm.”

The removal-request forms get millions of visits each year, according to Google, but many victims are unaware of their existence. That has allowed “reputation managers” and others to charge people for content removals they could have requested from Google free of charge.

Pandu Nayak, the head of Google’s search quality team, said the company began fighting websites that charge people to remove slanderous content a few years ago, in response to the rise of a thriving industry that surfaced people’s mug shots and then charged for deletion.

Google started ranking such exploitative sites lower in its results, but the change didn’t help people who don’t have much information online. Because Google’s algorithm abhors a vacuum, posts accusing such people of being drug abusers or pedophiles could still appear prominently in their results.

Slander-peddling websites have relied on this feature. They wouldn’t be able to charge thousands of dollars to remove content if the posts weren’t damaging people’s reputations.

Mr. Nayak and Mr. Graff said Google was unaware of this problem until it was highlighted in The Times articles this year. They said that changes to Google’s algorithm and the creation of its “known victims” classification would help solve the problem. In particular, it will make it harder for sites to get traction on Google through one of their preferred methods: copying and reposting defamatory content from other sites.

Google has recently been testing the changes, with contractors doing side-by-side comparisons of the new and old search results.

The Times had previously compiled a list of 47,000 people who have been written about on the slander sites. In a search of a handful of people whose results were previously littered with slanderous posts, the changes Google has made were already detectable. For some, the posts had disappeared from their first page of results and their image results. For others, posts had mostly disappeared — save for one from a newly launched slander site called CheaterArchives.com.

CheaterArchives.com may illustrate the limits of Google’s new protections. Since it is fairly new, it is unlikely to have generated complaints from victims, and those complaints are one way Google finds slander sites. Also, CheaterArchives.com does not explicitly advertise the removal of posts as a service, potentially making it harder for victims to get its posts suppressed in their results.

The Google executives said the company was not motivated solely by sympathy for victims of online slander. Instead, it is part of Google’s longstanding efforts to combat sites that are trying to appear higher in the search engine’s results than they deserve.

“These sites are, frankly, gaming our system,” Mr. Graff said.

Still, Google’s move is likely to add to questions about the company’s effective monopoly over what information is and is not in the public domain. Indeed, that is part of the reason that Google has historically been so reluctant to intervene in individual search results.

“You should be able to find anything that’s legal to find,” said Daphne Keller, who was a lawyer at Google from 2004 to 2015, working on the search product team for part of that time, and is now at Stanford studying how platforms should be regulated. Google, she said, “is just flexing its own muscle and deciding what information should disappear.”

Ms. Keller wasn’t criticizing her former employer, but rather lamenting the fact that lawmakers and law enforcement authorities have largely ignored the slander industry and its extortionary practices, leaving Google to clean up the mess.

That Google can potentially solve this problem with a policy change and tweaks to its algorithm is “the upside of centralization,” said Professor Citron, the University of Virginia professor who has argued that technology platforms have more power than governments to fight online abuse.

Professor Citron was impressed by Google’s changes, particularly the creation of the “known victims” designation. She said such victims are often posted about repeatedly, and sites compound the damage by scraping one another.

“I applaud their efforts,” she said. “Can they do better? Yes, they can.”

Aaron Krolik contributed reporting.
