Twitter: Elon Musk dismantles the ethical AI team

Last March, shortly after announcing his plan to acquire Twitter, Elon Musk began floating the idea of open-sourcing the algorithm that determines how tweets are displayed in users' feeds, so that it could be inspected for bias, to the delight of the entrepreneur's fans and of those who believe the platform leans to the left.

On November 5, however, as part of an aggressive cost-cutting plan that involves laying off thousands of the company's employees, Musk's executive team cut a group of artificial intelligence (AI) researchers who were working to make Twitter's algorithms more transparent and fair.

Twitter gets rid of "META"

Rumman Chowdhury, director of Twitter's ML Ethics, Transparency, and Accountability team (META, which has no ties to the company that controls Facebook), wrote in a tweet that she had been caught up in the mass layoffs carried out by the new management (Chowdhury did not seem enthusiastic about working for Musk: in the tweet she spoke of feeling "relieved").


The former Twitter director told this publication that the group's work had been frozen ahead of Musk's acquisition: "We were told, in no uncertain terms, not to shake things up," she said. Chowdhury also reported that her working group had carried out important new research on political bias, which could have helped Twitter and other social networks avoid unfairly penalizing certain viewpoints. Joan Deitchman, a senior manager of Twitter's META unit, confirmed that the entire team had been laid off. Kristian Lum, a machine learning researcher and another member of the division, said "the entire META team, except one person" had been let go. It was not possible to reach Twitter or the team's members for comment.


As problems related to artificial intelligence have multiplied, including biases around ethnicity, gender, and age, many technology companies have set up dedicated "AI ethics" teams.

Unprecedented openness

Twitter's META unit has been more forthcoming than most in publishing details of the company's AI problems and in allowing outside researchers to examine its algorithms.

Twitter took an unusual step last year by publishing details of bias found in the group's AI, after some users noticed that an algorithm for cropping photos appeared to favor white faces. The group also launched one of the first contests allowing external researchers to test the platform's algorithm for other biases.

Last October, Chowdhury's team also published information about unintentional political bias on Twitter, showing how right-leaning news sources were in fact promoted more than left-leaning ones.

The reactions

Many outside researchers see the dismissal of the META team as a blow not only to Twitter but also to broader efforts to improve artificial intelligence. "What a tragedy," wrote Kate Starbird, an associate professor at the University of Washington who studies online disinformation, on Twitter.


"The Meta team was one of the few good case studies in which a tech company ran an AI ethics group that interacted with the public and academia with substantial credibility, "explains Ali Alkhatib, director of the Unive's Center for Applied Data Ethics rsity of San Francisco.

Alkhatib points out that Chowdhury is highly regarded within the AI ethics community and that her team did genuinely valuable work holding big tech companies to account. "There aren't many corporate ethics teams worth taking seriously," says Alkhatib.

According to Mark Riedl, a professor of artificial intelligence at Georgia Tech, the algorithms used by Twitter and other social media giants have enormous consequences for people's lives and need to be studied. "It is difficult to tell from the outside whether META had any impact inside Twitter, but the potential was there," says the professor.

According to Riedl, allowing outsiders to study Twitter's algorithms was an important step towards greater transparency and understanding of AI issues.

Avoiding oversimplification

In practice, Musk's idea of making Twitter's algorithm open source would be very complicated. Many different algorithms affect how information is surfaced on the platform, and they are hard to understand without access to the real-time data they are fed in the form of tweets, views, and likes.

The idea that there is a single algorithm with an explicit political bias risks oversimplifying a system that can harbor more insidious forms of bias. Uncovering those biases is exactly the work the Twitter META team was doing. "There aren't many groups that rigorously study the biases and errors of their own algorithms," Alkhatib points out. "META did." Until a few days ago, at least.

This content originally appeared on sportsgaming.win US.
