Twitter's feature prompts users to review replies that may be harmful or contain hate speech before posting. It is currently being tested on iOS.
Twitter first tested this feature in May 2020 and has now relaunched it after making a few revisions, in an attempt to limit hate speech on the platform.
In another version of the experiment, launched in August 2020, the prompt also explained why the user was seeing it, and the platform began considering the context in which the language was used.
In the latest version, the warning prompt urges users to review a reply that may be harmful or offensive to others, giving them a chance to rephrase before resorting to abusive language.
Replies identified as harmful or offensive display a prompt with three options: ‘Tweet’, ‘Edit’, and ‘Delete’. If users believe they received the prompt by mistake, they can also share feedback with the platform.
From celebrities to the general public, Twitter has seen major fights erupt that have often turned into controversies. The feature is a step towards curbing the growing level of hate speech on the platform.
Users often have heated arguments on the platform and end up saying things that might be considered mean. The prompt is designed to encourage healthier conversations by nudging users to look back and review what they have written before posting a reply.