Microsoft Bans DeepSeek App Over Data Security and Propaganda Risks

Microsoft has banned its employees from using the DeepSeek app, citing concerns over data security and potential Chinese propaganda influence, and does not list the app in its app store. While Microsoft offers a modified version of DeepSeek’s AI model on Azure after safety evaluations, the app itself poses risks related to data storage in China and censorship. The move highlights growing scrutiny of AI tools with geopolitical and privacy implications.

Published May 8, 2025 at 09:06 PM EDT in Artificial Intelligence (AI)

Microsoft has officially banned its employees from using the DeepSeek application, citing serious concerns about data security and the potential for propaganda influence. This announcement was made by Microsoft vice chairman and president Brad Smith during a Senate hearing, marking the first public acknowledgment of such a restriction by the tech giant.

The core of Microsoft’s concern is that DeepSeek stores user data on servers located in China. According to DeepSeek’s privacy policy, that data is subject to Chinese law, which can compel companies to cooperate with the country’s intelligence agencies. DeepSeek’s AI model is also known to censor topics sensitive to the Chinese government, raising fears that the app’s answers are shaped by Chinese propaganda.

Despite these concerns, Microsoft has made a version of DeepSeek’s R1 AI model available on its Azure cloud platform. According to Smith, that version has undergone rigorous safety evaluations and has been modified to remove harmful side effects. The key difference: while the app sends user data back to China, organizations can run the Azure-hosted model under their own security controls.

DeepSeek’s R1 model is open source, so anyone can download it and run it on their own servers, potentially avoiding data transmission to China. Risks remain, however, including the potential for the model to produce biased or insecure outputs. Microsoft’s decision highlights the broader challenge of balancing AI innovation against data privacy and geopolitical risk.

Interestingly, Microsoft does not ban all AI chat competitors from its Windows app store: Perplexity, for example, is available, while apps from Google, Microsoft’s main search rival, do not appear in the store. This selective approach underscores the complex interplay of security, competition, and regulatory considerations in AI deployment.

Broader Implications for AI Security and Trust

Microsoft’s stance on DeepSeek reflects growing global scrutiny of AI technologies that involve cross-border data flows and content moderation influenced by authoritarian regimes. For enterprises and governments, this raises critical questions about how to ensure AI tools are secure, transparent, and free from undue influence.

The ability to modify AI models, as Microsoft did with DeepSeek’s R1, demonstrates a path forward for mitigating risks while harnessing powerful AI capabilities. However, it also requires significant expertise and resources to conduct thorough safety evaluations and red teaming exercises.

As AI adoption accelerates, organizations must weigh the benefits of open-source innovation against the imperatives of data sovereignty, privacy, and ethical use. Microsoft’s public position on DeepSeek serves as a case study in navigating these complex trade-offs.
