Open-source AI revolution continues: Microsoft adds DeepSeek R1 to Azure cloud, GitHub


Microsoft is offering Western users a secure alternative to running China's breakthrough AI model DeepSeek R1 on Chinese servers. The Redmond giant is integrating the powerful open-source model into its Azure cloud and GitHub.

Chinese AI startup DeepSeek’s R1 shook Silicon Valley. The recently introduced open-source model outperforms some of the most powerful closed-source models and is comparable to OpenAI’s flagship o1. However, concerns quickly arose over censorship, the collection of users’ personal data, and even data exposure on the open web. DeepSeek’s availability was also disrupted by a cyberattack.

Microsoft has added DeepSeek R1 to the model catalog on Azure AI Foundry and GitHub, joining a portfolio of more than 1,800 other AI models.


Users can try the model for free without worrying about their data being sent to China. The Redmond giant says R1 is accessible on “a trusted, scalable, and enterprise-ready platform.”

“DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks,” Microsoft said.

The tech giant offers built-in model evaluation tools to quickly compare outputs, benchmark performance, and scale AI-powered applications.

“One of the key advantages of using DeepSeek R1 or any other model on Azure AI Foundry is the speed at which developers can experiment, iterate, and integrate AI into their workflows,” Microsoft boasts.

According to the provided information, DeepSeek R1 usage is currently priced at $0 but is subject to rate limits, which may change at any time.

“Pricing may change, and your continued use will be subject to the new price. The model is in preview; a new deployment may be required for continued use,” the pricing description reads.

To use R1 on Azure, users need to subscribe and enter credit card details. On GitHub, users can try the model in the playground and also use the API for their projects. Free and Copilot individual users can initiate up to 50 requests per day.
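For developers curious about the GitHub route, a minimal sketch along these lines should work. It assumes the GitHub Models inference endpoint, the azure-ai-inference Python package, and the catalog name “DeepSeek-R1” – details not spelled out in Microsoft’s announcement, so check the model card for the exact values:

```python
# Minimal sketch: calling DeepSeek R1 through GitHub Models with the
# azure-ai-inference SDK. The endpoint URL and model name below are
# assumptions; verify them against the model card before use.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# A GitHub personal access token serves as the API key for GitHub Models.
client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",
    credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
)

response = client.complete(
    model="DeepSeek-R1",  # assumed catalog name
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Explain what a neural processing unit (NPU) is."),
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```

The same client can later be pointed at a paid Azure AI Foundry deployment by swapping the endpoint and credential, which is why the daily request caps above matter mainly for free-tier experimentation.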

Microsoft also plans to introduce “distilled flavors” of the DeepSeek R1 models for users to run locally on their Copilot+ PCs.


“We’re bringing NPU-optimized versions of DeepSeek-R1 directly to Copilot+ PCs, starting with Qualcomm Snapdragon X first, followed by Intel Core Ultra 200V and others,” Microsoft said.

The small 1.5-billion-parameter model will be released in the AI Toolkit first, with 7B and 14B variants arriving soon. Optimized small models let developers deploy AI-powered applications that run directly on-device.

According to the blog post, optimized models can provide the first token in 130 milliseconds and generate 16 tokens per second for short prompts on a variety of neural processing units (NPUs) in the Windows ecosystem.