French tech company Mistral AI has launched a new online moderation tool, based on its Ministral 8B AI model, that can automatically detect and remove offensive or illegal posts. (There is still a risk of some misjudgments, however.)

According to TechCrunch, for example, some studies have shown that posts about people with disabilities can be flagged as “negative” or “toxic” even when they are not.

Initially, Mistral’s new moderation tool will support Arabic, English, French, Italian, Japanese, Chinese, Korean, Portuguese, Russian, Spanish, and German, with more languages on the way. In July, Mistral launched a large language model that can generate longer tranches of code faster than other open-source models.
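For developers, the moderation capability is exposed through Mistral’s API. Below is a minimal sketch of how a platform might screen a user post before publishing it, assuming the official `mistralai` Python client and the moderation model name as documented at launch; the exact method, model, and response field names may differ, and the “withhold the post” step is this example’s own policy, not part of Mistral’s tool.

```python
# Sketch: screen a post with Mistral's moderation API before publishing.
# Assumes the `mistralai` Python client; model and field names are as
# documented at the API's launch and may have changed since.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

post = "Example user post to be checked before it goes live."

# Classify the text against the API's moderation categories.
response = client.classifiers.moderate(
    model="mistral-moderation-latest",
    inputs=[post],
)

result = response.results[0]
# `categories` maps each policy category (e.g. hate, violence) to a boolean.
flagged = [name for name, hit in result.categories.items() if hit]

if flagged:
    print(f"Post withheld; flagged categories: {', '.join(flagged)}")
else:
    print("Post published.")
```

In practice, a platform would tune its own thresholds on the per-category scores the API also returns, rather than acting on the boolean flags alone, which is one way to reduce the kind of misjudgments noted above.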


Source: https://www.computerworld.com/article/3601520/mistrals-new-tool-automatically-deletes-offending-content.html