The social network is testing a new label that clearly signals to users when they are dealing with “good robots,” that is, when the content they see comes from an automated but authentic source.
Tagging the Good Robots on Twitter
Bots flooding social media have become more prominent in recent years, and rarely because of any positive effect of their presence. Increasingly, these programs are used to spread political propaganda and disinformation, influencing public opinion and democratic processes; the malicious role of software bots during the pandemic, amplifying false news about COVID-19 vaccinations, has also been striking.
Despite this “image problem,” however, bots are tools that can serve genuinely useful purposes: in emergencies ranging from natural disasters to armed rampages, they can spread information quickly, and they can also be used effectively in news, education, or entertainment. Of course, distinguishing useful applications from harmful ones is not an easy task, especially for everyday online users.
Twitter recently announced that it is testing a new feature on its microblogging service to identify and label “good” robots. In practice, this is a label that lets users know that a specific account operates automatically but posts legitimate content, putting interactions with non-human actors on the platform into context. According to the company, the move is based on Twitter’s own research, which found that users increasingly want this kind of signal.
There is nothing wrong with good robots
The company also cites research from Carnegie Mellon University, published last year, which found that roughly half of the Twitter accounts tweeting about the pandemic may have been driven by some form of automation. While Twitter once again emphasizes that it constantly weeds out deceptive and rule-breaking accounts from its network, it is now taking a different approach as well: much like the Verified Accounts program, it will begin authenticating software bots.
As with personal accounts on Twitter, bots would display the well-known blue badge. By default, that badge indicates that the person in question really is who they claim to be, and not some fraudster misusing their name. In the case of robots, however, it is not the person but the content that matters: while verification of a live user says nothing about the content they publish, for “good robots” the label would vouch for the authenticity of the information provided.
At first glance, Twitter seems to be taking some risk by rating accounts this way, assuming the ratings are to remain accurate at all times. Jack Dorsey, the company’s chief executive, explained at a Senate committee hearing years ago that he believes users have a right to know whether they are communicating with a human or a robot, but identification is an increasingly difficult task: not all bots use the Twitter API, and their behavior can also be deceptive.
It is not yet known how long the test period, which began a few days ago, will last, or when the provider will make the Automated Account labels widely available. In any case, Twitter’s development programs to date have given it enough experience to begin testing the new system.