Fighting off spam and abusive or hateful messages is harder than most people imagine. For popular online services especially, the labor force needed to enforce an effective moderation system is practically impossible to assemble. However, Twitter's Periscope might have just found a solution that's more efficient than the ones used by most other companies: the live video streaming service will become community-policed.
Periscope has just introduced a real-time moderation system that will allow users not only to report abusive messages but also to make them disappear from the chat. For those of you who don't know how the service works, Periscope lets people broadcast live video that other users watch and comment on. The comments are overlaid on the stream and eventually float off the screen as newer messages are typed. The new moderation system allows viewers to flag a specific message as spam or abuse, instantly removing the text from their screens. The person who submitted the report will also no longer see any other messages sent from the same source.
Furthermore, to verify the report, the app will select a random group from the other people watching the broadcast and ask them to vote on whether that specific comment is actually abusive. If the other viewers agree, the person who sent the abusive message will be notified and their ability to chat will be temporarily disabled. If the same commenter gets reported a second time, their ability to chat will be disabled for the remainder of the stream. For flexibility, broadcasters can choose whether the real-time moderation feature is enabled on their stream, and users can opt out of being asked to vote.
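The flow described above can be sketched as a small simulation. Note that this is purely illustrative: Periscope has not published its actual algorithm, so the jury size, the majority threshold, and every class and method name here (`ModerationSession`, `report`, and so on) are assumptions, not Periscope's real API.

```python
import random


class ModerationSession:
    """Hypothetical sketch of Periscope-style real-time comment moderation.

    Flow per the article: a viewer flags a comment, a random jury of other
    viewers votes, and on a majority "abusive" verdict the commenter is
    muted -- temporarily on the first offense, for the rest of the
    broadcast on the second.  All parameters are illustrative guesses.
    """

    JURY_SIZE = 5  # assumed size of the randomly selected voter group

    def __init__(self, viewers):
        self.viewers = set(viewers)
        self.strikes = {}            # commenter -> confirmed reports
        self.muted_for_stream = set()

    def report(self, reporter, commenter, votes_abusive):
        """Handle one flagged comment.

        `votes_abusive` stands in for the jury's real-time votes; in a
        real system each juror would respond asynchronously.  Returns
        the action taken: "none", "temp_mute", or "stream_mute".
        """
        # Pick a random jury, excluding the reporter and the commenter.
        pool = list(self.viewers - {reporter, commenter})
        jury = random.sample(pool, min(self.JURY_SIZE, len(pool)))

        # Require a majority of the jury to agree the comment is abusive.
        if votes_abusive <= len(jury) // 2:
            return "none"

        self.strikes[commenter] = self.strikes.get(commenter, 0) + 1
        if self.strikes[commenter] == 1:
            return "temp_mute"        # chat disabled for a short period
        self.muted_for_stream.add(commenter)
        return "stream_mute"          # muted for the rest of the broadcast
```

The key design point the article implies is that no single viewer can silence a commenter alone: the random jury acts as a lightweight, crowd-sourced verification step before any penalty is applied.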
This new feature is already rolling out and should be available to Periscope users worldwide.