Modulate Raises $30 Million To De-Toxify Online Game Chat With AI
Modulate uses AI and machine learning to reduce toxicity in online voice chat. The company has just announced a $30 million Series A funding round.
For decades, aggressive shouting matches and targeted harmful language were staples of the online video game voice chat experience. But as times and attitudes change, and with gaming now well into the mainstream, that kind of toxic online behavior isn't tolerated quite as much.
Always-attentive human moderation isn't feasible for large-scale games, and player reporting is notoriously unreliable, so Modulate built its own solution. With recent advances in machine learning, why not bring an AI into the mix for automated toxicity detection? Modulate's ToxMod application does exactly that, integrating into games to detect and flag toxicity automatically.
It uses machine learning not just for voice transcription but also to detect emotion and context, avoiding false positives when seemingly toxic language is actually acceptable. It learns continuously, too, adapting to the community and game it's placed into.
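To make the idea concrete, here's a minimal sketch of what a transcribe-then-classify moderation pass can look like. This is not Modulate's implementation, and ToxMod's emotion and context modeling goes well beyond it; the Hugging Face model names, file path, and threshold below are stand-ins chosen purely for illustration.

```python
# Illustrative only: a bare-bones transcribe-then-classify moderation pass.
# This is NOT ToxMod's implementation; the models named here
# (openai/whisper-tiny, unitary/toxic-bert) are stand-ins for the sketch.
from transformers import pipeline

# Hypothetical components: an ASR model to turn voice chat audio into text,
# and a text classifier that scores the transcript for toxicity.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def flag_clip(audio_path: str, threshold: float = 0.8) -> dict:
    """Transcribe one audio clip and flag it if the toxicity score is high.

    A production system like ToxMod also weighs emotion and conversational
    context before flagging; this sketch scores the text alone.
    """
    transcript = asr(audio_path)["text"]
    top = toxicity(transcript)[0]  # highest-scoring label for the transcript
    return {
        "transcript": transcript,
        "label": top["label"],
        "score": top["score"],
        "flagged": top["label"] == "toxic" and top["score"] >= threshold,
    }

# Example usage (the clip path is hypothetical):
# print(flag_clip("voice_chat_clip.wav"))
```

Even a toy version like this shows why context matters: a transcript-only classifier can't tell friendly trash talk from a genuine attack, which is exactly the gap ToxMod's emotion and context detection is meant to close.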
Who's funding Modulate and where's the money going?
Modulate has announced a Series A funding round totaling $30 million. The round was led by Lakestar, with additional support from Hyperplane, Everblue, and Galaxy.
Modulate will continue to grow its employee base, which has already nearly doubled since earlier this year. The company is working to improve ToxMod with predictive toxicity detection, aiming to prevent offenses before they happen, and plans to expand beyond the platform's current English-only support to additional languages. It also has a number of game partnerships lined up to be announced soon.