Google Turmoil Exposes Cracks Long in the Making for Top AI Watchdog

For more than three years, Google held up its Ethical AI research team as a shining example of a concerted effort to address thorny issues raised by its innovations. Created in 2017, the group assembled researchers from underrepresented communities and varied areas of expertise to examine the moral implications of futuristic technology and illuminate Silicon Valley’s blind spots. It was led by a pair of star scientists, who burnished Google’s reputation as a hub for a burgeoning field of study.

In December 2020, the division’s leadership began to collapse after the contentious exit of prominent Black researcher Timnit Gebru over a paper the company saw as critical of its own artificial intelligence technology. To outsiders, the decision undermined the very ideals the group was trying to uphold. To insiders, this promising ethical AI effort had already been running aground for at least two years, mired in previously unreported disputes over the way Google handles allegations of harassment, racism, and sexism, according to more than a dozen current and former employees and AI academic researchers.

One researcher in Google’s AI division was accused by colleagues of sexually harassing people at another organisation, yet Google’s top AI executive gave him a significant new role even after learning of the allegations, only dismissing him later on different grounds, several of the people said. Gebru and her co-lead Margaret Mitchell blamed a sexist and racist culture for their exclusion from meetings and emails related to AI ethics, and several others in the department were accused of bullying by their subordinates, with little consequence, several people said. In the months before Gebru was let go, there was a protracted conflict with Google sister company Waymo over the Ethical AI group’s plan to study whether its autonomous-driving system effectively detects pedestrians of varying skin tones.

The collapse of the group’s leadership has provoked debate in the artificial-intelligence community over how serious the company is about supporting the work of the Ethical AI group—and ultimately whether the tech industry can reliably hold itself in check while developing technologies that touch virtually every area of people’s lives. The discord is also the latest example of a generational shift at Google, where more demographically diverse newcomers have stood up to a powerful old guard that helped build the company into a behemoth. Some members of the research group say they believe that Google AI chief Jeff Dean and other leaders have racial and gender blind spots, despite progressive bona fides—and that the technology they’ve developed sometimes mirrors those gaps in understanding the lived experiences of people unlike themselves.

“It’s so shocking that Google would sabotage its efforts to become a credible centre of research,” said Ali Alkhatib, a research fellow at the University of San Francisco’s Center for Applied Data Ethics. “It was almost unthinkable, until it happened.”

The fallout continues several months after Gebru’s ouster. On April 6, Google Research manager Samy Bengio, who Ethical AI team members came to regard as a key ally, resigned. Other researchers say they are interviewing for jobs outside the search giant.

Through a spokesman, Dean declined a request to be interviewed for this story. Bengio, who at Google had managed hundreds of people in Ethical AI and other research groups, didn’t respond to multiple requests for comment.

“We have hundreds of people working on responsible AI, with 200+ publications in the last year alone,” a Google spokesman said. “This research is incredibly important and we’re continuing to expand our work in this area in keeping with our AI principles.”

Before Google caused an uproar over its handling of a research paper in the waning weeks of 2020, Mitchell and Gebru had been co-leads of a diverse crew that pressed the technology industry to innovate without harming the marginalised groups many of them personally represented.

Under Dean, the two women had developed reputations as valued experts, protective leaders, and inclusion advocates, but also as internal critics and agitators who weren’t afraid to make waves when challenged.

Mitchell arrived at Google first, in 2016, from Microsoft. Her inaugural project, during her first six months at the company, was ethical AI research: finding ways to make Google’s development methods more inclusive and to produce results that don’t disproportionately harm particular groups. She found a groundswell of support for this kind of work; individual Googlers had begun to care about the subject and had formed various working groups dedicated to the responsible use of AI.

Around this time, more people in the technology industry started realising the importance of having employees focused on the ethical use of AI, as algorithms became deeply woven into their products and questions of bias and fairness abounded. The prevailing concern was that biases in both the data used to train AI models and the people doing the programming were encoding inequalities into the DNA of products already being used for mainstream decision-making around parole and sentencing, loans and mortgages, and facial recognition. Homogenous teams were also ill-equipped to see the impact of these systems on marginalised populations.

Mitchell’s project to bring fairness to Google’s products and development methods drew support within the company, but also skepticism. She held many meetings to describe her work and explore collaborations, and some Google colleagues complained about her personality to human resources, Mitchell said. An HR representative told her that, based on that feedback, she was seen as unlikable, aggressive, and self-promotional. Google said it found no evidence that an HR employee used those words.

“I chose to go to Google knowing that I would face discrimination,” Mitchell said in an interview. “It was just part of my calculus: if I really want to make a substantive and meaningful difference in AI that stretches towards the future, I need to be in it with people. And I used to say, I’m trying to pave a path forward using myself as the pavement.”

She had made enough of an impact that in 2017, two colleagues interested in making Google’s AI products more ethical asked Mitchell to become their manager. That shift marked the founding of the Ethical AI team.

“This team wasn’t started because Google was feeling particularly magnanimous,” said Alex Hanna, a researcher on the team. “It was started because Meg Mitchell pushed for it to be a team and to build it out.”

Google executives began recruiting Gebru later that year, although from the beginning she harbored reservations.

In December 2017, Dean, then head of the Google Brain AI research group, and his colleague Samy Bengio attended a dinner in Long Beach, California, hosted by Black in AI, a group co-founded by Gebru. There, Dean pitched Gebru on coming to work for Google.

Even before she began the interview process, Gebru had heard allegations of employee harassment and discrimination from friends at the company, and during negotiations, she said, Google wanted her to enter at a lower level than she thought her work experience dictated. But Mitchell had asked if Gebru would join her as co-lead of the Ethical AI team, and that was enough of a draw.

“I did not go into it thinking this is a great place,” Gebru said in an interview. “There were a number of women who sat me down and talked to me about their experiences with people, their experiences with harassment, their experiences with bullying, their experiences with trying to talk about it and how they were dismissed.”

Gebru had emerged as one of a handful of artificial intelligence researchers well known outside scientific circles, bolstered by landmark work in 2018 showing that some facial recognition products fared poorly at categorising people with darker skin, as well as earlier research using Google Street View to estimate race, education, and income. When Gebru accepted her job offer, Dean sent her an email expressing how happy that made him. On her first day on Google’s campus as an employee in September 2018, he gave her a high five, she said.

The relationship between Gebru, a Black Eritrean woman whose family emigrated from Ethiopia, and Dean, a White man born in Hawaii, began with mutual respect. He and Gebru have discussed his childhood years spent in Africa. He has donated to organisations supporting diversity in computer science, including Black Girls Code, StreetCode Academy, and Gebru’s group, Black in AI. He has also worked to combat HIV/AIDS through his work with the World Health Organization.

That relationship started to fray almost immediately, according to people familiar with the group. Later that fall, Gebru and another Google researcher, Katherine Heller, informed their bosses that a colleague in Dean’s AI group had been accused of sexually harassing others at another institution, according to four people familiar with the situation. Bloomberg isn’t naming the researcher because his accusers, who haven’t spoken about it publicly before, are concerned about possible retribution.

The researchers had learned of multiple complaints that the male employee had touched women inappropriately at another institution where he also worked, according to the people. Later on, they were told the individual asked personal questions about Google co-workers’ sexual orientations and dating lives, and verbally assailed colleagues. Google said it had begun an investigation immediately after receiving reports about the researcher’s misconduct at the other institution.

Around the same time, a week after an October New York Times report that former executive Andy Rubin had been given a $90 million (roughly Rs. 680 crores) exit package despite employee claims of sexual misconduct, thousands of Google employees walked off the job to protest the company’s handling of such abuses by executives.

In the aftermath of that report, tensions over allegations of discrimination within the research division came to a head at a meeting in late 2018, according to people familiar with the situation.

As Dean ate lunch in a Google conference room, Gebru and Mitchell outlined a litany of concerns: the alleged sexual harasser in the research group; disparities in the organisation, including women being given lower roles and titles than less-qualified men; and a perceived pattern among managers of excluding and undermining women. Mitchell also enumerated ways she believed she had been subjected to sexism, including being left out of email chains and meeting invitations. Mitchell said she told Dean she’d been prevented from getting a promotion because of nebulous complaints to HR about her personality.

Dean struck a skeptical and cautious note about the allegations of harassment, according to people familiar with the conversation. He said he hadn’t heard the claims and would look into the matter. He also disputed the notion that women were being systematically put in lower positions than they deserved and pushed back on the idea that Mitchell’s treatment was related to her gender. Dean and the women discussed how to create a more inclusive environment, and he said he would follow up on the other topics.

In the succeeding months, Gebru said she and her colleagues went to Dean and other managers and outlined several additional allegations of harassment, intimidation, and bullying within the larger Google Research division that encompassed the Ethical AI group. Gebru accompanied some women to meetings with Dean and Bengio and also shared written accounts of other women in the organisation who said, more broadly, that they experienced unwanted touches, verbal intimidation, and perceived sabotage from their bosses.

About a month after Gebru and Heller reported the claims of sexual harassment and after the lunch meeting with Gebru and Mitchell, Dean announced a significant new research initiative, and put the accused individual in charge of it, according to several people familiar with the situation. That rankled the internal whistleblowers, who feared the influence their newly empowered colleague could have on women under his tutelage.

Nine months later, in July 2019, Dean fired the accused researcher for “leadership issues,” people familiar with the matter said. At the time, higher-ups said they were waiting to hear back from the researcher’s other employer about the investigation into his behavior there. His departure from Google came a month after the company received allegations of the researcher’s misconduct on its own premises, but before that investigation was complete, Google said. He later also exited his job at the other institution.

The former employee then threatened to sue Google, and the company’s legal department informed the whistleblowers they might hear from his lawyers, according to several people familiar with the situation. The company was also vague about whether it would defend its employees who reported the alleged misconduct to Google, saying it would depend on the nature of the suit, and company attorneys suggested the women hire their own counsel, some of the people said.

“We investigate any allegations and take firm action against employees who violate our workplace policies,” a Google spokesman said in a statement. “Many of these accounts are inaccurate and don’t reflect the thoroughness of our processes and the consequences for any violations.”

While the Ethical AI team was privately having a hard time fitting into Google’s culture, there was little outward indication of trouble, and the company continued touting the group and its work. It gave Gebru a diversity award in the fall of 2019 and asked her to represent the company at the Afrotech conference that November. The company also showcased Mitchell’s work in a blog post in January 2020.

The Ethical AI team continued to pursue independent research and to advise Google on the use of its technology, and some of its suggestions were heeded, but several recommendations were rebuffed or resisted, according to team members.

Mitchell was examining Google’s use of facial-analysis software, and she implored Google staffers to use the term “facial analysis” rather than “facial recognition”: the former was more accurate, and the latter refers to a biometric technology that may soon be regulated. But her colleagues were reluctant to budge.

“We had to pull in people who were two levels higher than us to say what we were saying in order to have it taken seriously,” Mitchell said. “We were like, ‘let us help you, please let us help you.’”

But several researchers in the group said it was clear from the responses they were getting internally that Google was becoming more sensitive about the team’s interests. In the spring of 2020, Gebru wanted to look into a dataset publicly released by Waymo, the self-driving car unit of Google parent Alphabet.

One of the things that interested her was pedestrian-detection data, and whether an individual’s skin tone made any difference in how the technology functioned, Gebru and five other people familiar with the situation said. She was also interested in how the system processed pedestrians of varying abilities, such as people using a wheelchair or cane, among other questions.

“At Waymo, we use a range of sensors and methodologies to reduce the risk of bias in our AI models,” a spokeswoman said.

The project became bogged down in internal legal haggling. Google’s legal department asked that researchers speak with Waymo before pursuing the research, a Waymo spokeswoman said. Waymo employees peppered the team with inquiries, including why they were interested in skin colour and what they were planning to do with the results. Meetings dragged on for months before Gebru and her group could move ahead, according to people with knowledge of the matter.

The company wanted to make sure the conclusions of any research were “reliable, meaningful, and accurate,” according to the Waymo spokeswoman.

There were other running conflicts. Gebru said she had gotten into disputes with executives because of her criticism that Google doesn’t accommodate enough languages spoken around the world, including ones spoken by millions of people in her native region of East Africa. The company said it has multiple research teams collaborating on language model work for the translation of 100 languages, and active work on extending to 1,000 languages. Both efforts include many languages from East Africa.

“We were not criticising Google products,” Mitchell said. “We were working very hard internally to help Google make good decisions in areas that can affect lots of people and can disproportionately harm folks that are already marginalised. I did not want an adversarial relationship with Google and really wanted to stay there for years more.”

Despite the friction, during a performance review in spring 2020, Dean helped Gebru get a promotion. “He had one comment for improvement, and that’s to help those interested in developing large language models, to work with them in a way that’s consistent with our AI principles,” Gebru said.

Gebru took his advice – language models would later be the topic of her final paper at Google, the one that proved fatal to her employment at the company.

Though tensions in the Research division had been building for months, they began to boil over just before the US Thanksgiving holiday last year. Gebru, about to head out on vacation, saw a meeting with Megan Kacholia, a vice president in Google Research, appear on her calendar mid-afternoon, scheduled for just before the end of the day.

In the meeting, held by videoconference, Kacholia demanded that Gebru retract a paper already submitted to a March AI fairness conference, or remove the names of its five Google co-authors. The paper in question, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, surveyed the risks of large language models, which are essential for AI systems to understand words and generate text. Among the concerns raised was whether these models suck up text from corners of the Internet, like Reddit, where biased and derogatory speech can be prevalent; the risk is that AI dependent on the models regurgitates the prejudiced viewpoints and speech patterns found online. A Google language model that powers many US search results, called BERT, is mentioned throughout the paper.

Gebru responded by email, refusing to retract the research and laying out conditions around the paper, saying that if those conditions couldn’t be met, she’d find a suitable final day at the company.

Then, on December 2, as Gebru was driving across the US to visit her mother on the East Coast, she said she received a text message from one of her direct reports, informing her that Kacholia sent an email saying Google had accepted Gebru’s resignation. It was the first Gebru heard of what she has come to call Google “resignating” her—accepting a resignation she says she didn’t offer.

Her corporate email turned off, Gebru said she received a note to her personal account from Kacholia, saying Google couldn’t meet her conditions, had accepted her resignation, and thought it best that it take immediate effect. Members of Gebru’s team say they were shocked, and that they rushed to meet with any leaders who would listen to their concerns and demanded that she be reinstated in a more senior role. The team still thought they could get Google to fix things; Mitchell said she even composed an apology script on Dean’s behalf.

Rapprochement with Gebru never came.

“I am basically bewildered at how many unforced errors Google is making here,” said Emily M. Bender, the University of Washington linguistics professor who co-authored the controversial 2021 paper along with Gebru, Mitchell and their Google co-workers. “Google could have said yes to this wonderful work that we’re doing, and promoted it, or just been quiet about it. With every step, they seem to be making the worst possible choice and then doubling down on it.”

The handling of Gebru’s exit from Ethical AI marks a rare public misstep for Dean, who has accrued accolades in computer science circles over his career at Google. He developed foundational technology to support Google’s search engine in the early days and continues to work as an engineer. He now oversees more than 3,000 employees, but he still codes two days a week, people familiar with the situation said.

Dean, a longtime vocal supporter of efforts to expand diversity in tech, dedicated an all-hands meeting to discussing Black Lives Matter after the police murder of George Floyd. Dean and his wife have given $4 million (roughly Rs. 30 crores) to Howard University, a historically Black institution. They have also donated to his alma mater, the University of Washington, as well as to Cornell University and several other schools, to improve diversity in computer science. He presided over the addition of headcount for the Ethical AI team to build a more diverse group of scientists.

“Jeff Dean is a good human being – across the board and on these issues,” said Ed Lazowska, a computer science professor at the University of Washington, who has known Dean for 30 years, since he enrolled there, and has worked with him on various donations to the school. Lazowska said Dean, for his part, is “distressed” about the way things have played out. “It’s not just about his reputation being damaged, it’s about the company, and I’m sure it’s about what’s happening to the group – this is something that’s very important to him.”

Gebru said that in the absence of being listened to as an individual, she would sometimes use her papers to get Google to take fairness or diversity issues seriously.

“If you can’t influence things internally at Google, sometimes our strategy was to get papers out externally, then that gets traction,” Gebru said. “And then that goes back and changes things internally.”

Google said that Dean has emailed the entire research organisation multiple times and hosted large meetings addressing these issues, including laying out a new organisation structure and policies to bolster responsible AI research and processes.

At Google, managers are evaluated by their reports in a variety of areas, and the data is made visible to people within their organisations. Dean’s job-performance ratings among his reports have taken a hit in recent internal employee poll data seen by Bloomberg, particularly in the area of diversity and inclusion, where approval fell 27 percentage points from a year earlier, to 62 percent.

In February, Google elevated Marian Croak, a prominent Black vice president who managed site reliability, to become the lead for the Responsible AI Research and Engineering Center of Expertise, under Dean. In her new role, Croak was put in charge of most teams and individuals focused on the responsible use of AI.

The reorganisation was meant to give the team a fresh start, but the day after Google announced the move, Mitchell was fired, reopening a wound for many team members.

Five weeks earlier, Mitchell had been locked out of her email and other corporate systems. Google later said Mitchell had “exfiltrated” business-sensitive documents and private data of other employees. A person familiar with the situation said she was sending emails previously exchanged with Gebru about their experiences with discrimination to a personal Google Drive account and other email addresses.

The fractures at Google’s Research division have raised broader questions about whether tech companies can be trusted to self-regulate their algorithms and products to ensure there aren’t unintended, or ignored, consequences.

Some researchers say Google’s legal department now figures into their work in an unhealthy way. One of Dean’s memos in February outlined a permanent, formal role for legal in sensitive research, giving the lawyers a prominent position in informing the decisions of research leaders. He has also taken a more pragmatic approach with the AI ethics group, telling them that when they raise issues, they should also offer solutions rather than just focusing on benefits and harms, multiple people said. Google said Dean believes the researchers have a responsibility to discuss already-developed techniques or research that seeks to address those harms, but doesn’t expect them to develop new techniques.

With no team leads and little direction, several Ethical AI team members say they don’t know what will come next under Croak’s tenure. Their leaders have told them that replacements for Mitchell and Gebru will be found at some point. Gebru’s ouster interrupted the Waymo research effort, but the company said it has since proceeded. Waymo will review the results and decide whether to give the researchers approval to publish a paper on the study; otherwise, the conclusions may remain private.

While continuing to work on ongoing projects, the AI ethics researchers are wading through the doldrums of defeat. Their experiment of existing on an island within Google, protected by Gebru and Mitchell while doing their work, has failed, some researchers said. Other researchers at the company focused on the responsible use of AI are more sanguine about their prospects but declined to be quoted for this story.

“It still feels like we’re in a holding pattern,” said team member Alex Hanna.

© 2021 Bloomberg LP

