
Microsoft will start ranking artificial intelligence (AI) models based on their safety performance, in a bid to address growing concerns about the protection of data within AI products.
The company will offer the ranking feature to its cloud customers, who can use it to assess AI models from providers such as OpenAI and China’s DeepSeek, according to the Financial Times.
Microsoft will add a “safety” category to its “model leaderboard”, launched earlier this month for developers to evaluate models including China’s DeepSeek and France’s Mistral, said Sarah Bird, Microsoft’s head of Responsible AI.
The leaderboard is currently available to the tens of thousands of Microsoft clients using the company’s Azure Foundry developer platform, and the safety rankings will likely influence which AI models and applications those clients choose to buy.
Bird said the addition of a “safety” category will allow people to “directly shop and understand” which AI models are best suited to their trust and risk requirements.
According to Bird, the safety ratings will build on benchmarks such as Microsoft’s ToxiGen, which detects hate speech, and the Center for AI Safety’s Weapons of Mass Destruction Proxy (WMDP) benchmark, which evaluates whether a model could be misused to help create biochemical weapons.
Microsoft recently offered free cybersecurity support to European governments, aiming to boost intelligence-sharing on AI-based threats and to help prevent and disrupt attacks.