
Yoel Roth, formerly head of trust and safety at Twitter and now at Match, is sharing his concerns about the future of the open social web and its ability to combat misinformation, spam, and illegal content such as child sexual abuse material (CSAM). In a recent interview, Roth worried about the lack of moderation tools available to the fediverse — the open social web that includes applications like Mastodon, Threads, Pixelfed, and others — as well as other open platforms like Bluesky.
He also reminisced about key trust and safety moments at Twitter, such as his decision to ban President Trump from the platform, the misinformation spread by Russian bot farms, and how Twitter users, including CEO Jack Dorsey, fell prey to bots.
On the revolution podcast, Roth noted that the efforts to build more democratic online communities across the open web are also those with the fewest resources when it comes to moderation tools.
“Looking at Mastodon, looking at other services based on ActivityPub (the protocol), looking at Bluesky in its early days, and then looking at Threads as Meta began developing it, what we saw was that a lot of the services leaning hardest into community-based control gave their communities the fewest technical tools to administer their policies,” he said.
He also sees a significant backslide across the open web when it comes to the transparency and decision legitimacy that Twitter once had. While many at the time arguably disagreed with Twitter’s decision to ban Trump, the company at least explained its rationale for doing so. Now, social media providers are so worried about bad actors gaming their systems that they rarely explain themselves at all.
Meanwhile, on many open social platforms, users won’t even receive a notification when a post of theirs is removed; the post simply disappears, with no indication to others that it ever existed.
“I don’t fault startups for being startups, or new pieces of software for lacking all the bells and whistles, but if the whole point of the project was increasing the democratic legitimacy of governance, and what we’ve done is take a step backwards on governance, then has this really worked at all?” Roth wonders.
He also raised the economics of moderation, and how the federated approach has yet to prove sustainable on that front.
For example, an organization called IFTAS (Independent Federated Trust & Safety) had been working to build moderation tools for the fediverse, including giving fediverse moderators access to CSAM-detection tooling, but it ran out of money and had to shut down many of its projects earlier in 2025.
“We saw it coming two years out. IFTAS saw it coming. Everyone who’s been working in this space is largely volunteering their time and efforts, and that only goes so far, because at some point people have families and need to pay bills, and compute costs stack up if you need to run ML models to detect certain types of bad content,” he explained. “It all just gets expensive, and the economics of this federated approach to trust and safety never quite added up. And, in my opinion, they still don’t.”
Bluesky, meanwhile, has chosen to employ moderators and hire for trust and safety, but it limits that moderation to its own app. In addition, it provides tools that let people customize their own moderation preferences.
“They’re doing this work at scale. Obviously, there’s room for improvement. I’d love to see them be a bit more transparent. But, fundamentally, they’re doing the right stuff,” Roth said. However, as the service decentralizes further, Bluesky will face questions about when its responsibility to protect the individual outweighs the needs of any one community, he notes.
With doxxing, for example, someone might not see that their personal information was being spread online because of how their moderation tools are configured. But it should still be someone’s responsibility to enforce those protections, even if the user isn’t on Bluesky’s main app.
Another issue facing the fediverse is that decisions made in favor of privacy can thwart moderation efforts. While Twitter tried not to store personal data it didn’t need, it still collected things like a user’s IP address, when they accessed the service, device identifiers, and more. That helped the company when it needed to do a forensic analysis of something like a Russian troll farm.
Fediverse administrators, by contrast, may not collect the necessary logs, or may decline to look at them if they see doing so as a violation of user privacy.
But the reality is that without data, it’s much harder to determine who is actually a bot.
Roth offered several examples of this from his Twitter days, noting how users had taken to replying “bot” to anyone they disagreed with. He says he initially set up alerts and reviewed all of these posts manually, examining hundreds of “bot” accusations, and not one of them was ever right. Even Twitter co-founder and former CEO Jack Dorsey fell victim, retweeting posts from a Russian actor who claimed to be Crystal Johnson, a Black woman from New York.
“The CEO of the company liked this content, amplified it, and had no way of knowing as a user that Crystal Johnson was actually a Russian troll,” Roth said.
The conversation also turned, in timely fashion, to how AI is changing the landscape. Roth pointed to recent research from Stanford that found that, in a political context, large language models (LLMs) can be even more convincing than humans when properly tuned.
That means a solution that relies solely on analyzing the content itself is no longer enough.
Instead, companies need to track other behavioral signals, such as whether an entity is creating multiple accounts, using automation to post, or posting at odd times of day that correspond to different time zones, he suggested.
“These are behavioral signals that are latent even in really convincing content. And I think that’s where you have to start,” Roth said. “If you start with the content, you’re in an arms race against leading AI models and you’ve already lost.”
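To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of behavioral signals Roth describes — not anything Twitter, Bluesky, or any fediverse tool actually runs. Given an account's posting timestamps, it measures how evenly spaced the posts are (automation tends toward machine-like regularity) and how many distinct hours of the day the account is active in (round-the-clock posting can conflict with a claimed time zone). The function name and thresholds are assumptions for the example.

```python
from collections import Counter
from datetime import datetime, timedelta, timezone
from statistics import pstdev

def automation_signals(timestamps, min_posts=5):
    """Compute two illustrative behavioral signals from post timestamps.

    interval_stdev: standard deviation (seconds) of gaps between consecutive
        posts. Very low values suggest machine-like, evenly spaced posting.
    active_hours: number of distinct UTC hours the account posted in.
        Round-the-clock activity can hint at automation, or at a mismatch
        with the account's claimed location.
    """
    ts = sorted(timestamps)
    if len(ts) < min_posts:
        return None  # too little behavior to judge
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    return {
        "interval_stdev": pstdev(gaps),
        "active_hours": len(Counter(t.hour for t in ts)),
    }

# A bot-like account posting exactly every 10 minutes:
base = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(automation_signals([base + timedelta(minutes=10 * i) for i in range(12)]))
```

The point of a sketch like this is that neither signal looks at the content of any post, which is what makes the approach robust against AI-generated text.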