Machine learning applications can have a vast impact on the algorithms and the tweets that surface on the platform. Recognizing a need to do more on this front, Twitter has introduced the Responsible Machine Learning Initiative.
Because ML systems often behave differently from how they were designed, these changes can also alter the way people use Twitter, and the platform will be analyzing those effects as well. To address such issues, Twitter has built the Responsible Machine Learning Initiative on a few pillars:
- Taking responsibility for algorithmic decisions
- Equity and fairness of outcomes
- Transparency about decisions and how Twitter arrived at them
- Enabling agency and algorithmic choice
The Responsible ML working group is interdisciplinary, made up of people from across the company, including technical, research, trust and safety, and product teams. It is led by the ML Ethics, Transparency, and Accountability (META) team, a dedicated group of engineers, researchers, and data scientists.
The group will research, analyze, and assess the impact of ML decisions and the potential harms of Twitter's algorithms. Analyses to be rolled out in the coming months include a gender and racial bias analysis of the platform's image-cropping algorithm, a fairness assessment of Home timeline recommendations across racial subgroups, and an analysis of content recommendations for different political ideologies across seven countries.
Twitter is also exploring explainable ML solutions to help users understand its algorithms, what informs them, and how they shape what users see on the platform. The Responsible ML Initiative is open for feedback and questions as Twitter assesses the fairness and equity of its automated systems.