As X, formerly Twitter, notoriously struggles to get advertisers to spend anywhere near what they did before Elon Musk's laxer approach to moderation, creating a space that's safe for brands has become increasingly important for social platforms.

It’s even more important for a company incurring the high costs of live-streaming, like Twitch, which relies on advertisers to get as close to profitability as possible.

But live-streaming comes with its own set of moderation challenges. New content floods onto Twitch nonstop, and users are able to send messages to streamers immediately.

Campaign US and PRWeek chatted with Twitch executives at TwitchCon over the weekend to find out how the platform approaches brand safety.

Twitch said it relies on a mix of artificial intelligence (AI) that automatically detects violations of its community guidelines and humans who manually moderate chat rooms.

But these defenses can’t take care of every safety concern live-streaming generates. Hate raids, in which users flood a stream with abusive messages, have plagued LGBTQ+ and Black streamers for years.

And while Twitch offers a list of suggestions for combating targeted attacks, such as increasing the intensity of automated moderation or only allowing followers to chat, it acknowledges that there isn’t a perfect solution.

“Unfortunately, there is no single solution to most safety-related issues, particularly when it comes to hate raid attacks from malicious and highly-motivated actors,” its guide on targeted attacks reads.

Instead, Twitch offers a wide set of tools that streamers and their moderators can use to address issues as they come up, such as automated moderation or chat delays that let moderators review messages before they appear publicly.

“Safety is never an end state,” said Angela Hession, SVP of customer trust at Twitch. “We are constantly updating and evolving and I think we do that fairly quickly because live-streaming is ephemeral.”

While Twitch makes the tools, it doesn’t do much manual enforcement itself, largely relying on streamers and their moderators to police their own channels. Since neither group is made up of direct Twitch employees, and moderators are usually appointed by streamers and rarely paid, the platform is in the unusual position of relying heavily on volunteers to keep it safe.

To keep a pulse on how moderation is enforced, Twitch has two moderators sitting on its safety advisory council, Hession said. When making its brand safety pitch to advertisers, the platform points back to its broader site-wide measures such as AI automation.

“Yes, moderators are part of our community, but what I would say is that we have powerful site-wide safety as well,” Hession said. “Moderators are important, but you have to remember that layer of site-wide safety is the foundation of our community guidelines, and then the extra part is the moderators.

“For advertisers, what is more important is that moderators care so much about the community that they’re more about welcoming people into the community. That’s the role they really see themselves as.”

However, appointing moderators is Twitch’s first suggestion for combating targeted attacks in its online guide, making these volunteers integral to the overall process of keeping chat rooms safe.

In an attempt to keep its safety updates frequent and in line with what the community wants, Twitch communicates with streamers through channels that are easier to engage with than a blog post, such as its Discord server or its own live-streaming channel.

Twitch also finds those channels useful for announcing other policy changes.

“What we’ve come to realize is that to earn trust, we just have to open up the lines of communication,” said Mike Minton, chief monetization officer at Twitch. “We have to give them more understanding of why we make the decisions that we make. We have to give them the opportunity to provide their feedback and be heard.”

At TwitchCon, the platform announced that it will begin punishing Twitch users for doxing and swatting committed off the platform. Doxing is the act of revealing a streamer’s location, and swatting is making a hoax call to emergency services to get a SWAT team sent after them.

It also announced simulcasting, which allows streamers to broadcast on multiple platforms at once, such as YouTube or Kick. The feature comes with a new set of brand safety complexities.

According to talent agents and streamers, Kick has a brand safety problem that stems from its toxic community. Streamer Tubbo has gone as far as to tell gaming publication Polygon that he’d “rather kill [himself] than join Kick” because “it’s a platform for bigotry and hate speech.”

“Kick’s in a weird spot because it would be hard to call them brand safe at the moment,” said Ryan Morrison, CEO of Evolved Talent, which manages popular streamers such as xQc and Amouranth. “And it’s not Kick as a brand, it’s the streamers…as it stands if you look at Kick headlines, it’s all about the controversies on it.”

As some of Kick’s more toxic streamers simulcast on Twitch, the latter will be forced to handle those communities.

Content originating from other platforms that is broadcast on Twitch will be subject to the same community guidelines and enforcement, a Twitch representative said via email.

An increasing number of Twitch streams may start coming from Kick, as the platform has signed major non-exclusive deals with streamers such as xQc and Amouranth, and most recently, Nickmercs.

This story first appeared on Campaign US.