Key Points:
• Microsoft Security provides threat protection, posture management, data security, and governance to secure AI applications.
• The DeepSeek R1 model is available on Azure AI Foundry and GitHub, and organizations can use Microsoft Security to secure and govern its use.
• Microsoft Defender for Cloud provides AI security posture management capabilities to detect cyberattack surfaces and vulnerabilities in AI workloads.
A Strong Security Foundation for AI Transformation
A successful AI transformation starts with a strong security foundation. As AI development and adoption accelerate, organizations need visibility into their emerging AI apps and tools. Microsoft Security provides threat protection, posture management, data security, compliance, and governance to secure AI applications built with the DeepSeek R1 model, and to gain visibility into and control over use of the separate DeepSeek consumer app.
Secure and Govern AI Apps on Azure AI Foundry and GitHub
The DeepSeek R1 model is available on Azure AI Foundry and GitHub, joining a diverse portfolio of over 1,800 models. Customers can build production-ready AI applications with Azure AI Foundry while meeting their varying security, safety, and privacy requirements. Like other models provided in Azure AI Foundry, DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks. Microsoft’s hosting safeguards for AI models are designed to keep customer data within Azure’s secure boundaries.
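For developers, a model deployed in Azure AI Foundry is typically reachable through an OpenAI-style chat-completions REST endpoint. The Python sketch below (standard library only) shows the general shape of such a call; the endpoint URL, the model name `DeepSeek-R1`, and the `api-key` header are placeholders based on common Foundry conventions, so check your own deployment's details before relying on them.

```python
import json
import os
import urllib.request

# Placeholder values; your actual Foundry endpoint and key will differ.
ENDPOINT = os.environ.get(
    "AZURE_AI_ENDPOINT", "https://<resource>.services.ai.azure.com/models"
)
API_KEY = os.environ.get("AZURE_AI_API_KEY", "<your-key>")

def build_chat_request(prompt: str, model: str = "DeepSeek-R1") -> dict:
    """Build an OpenAI-style chat-completions payload (model name assumed)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": 256,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the hosted model and return the reply text."""
    req = urllib.request.Request(
        f"{ENDPOINT}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Depending on the deployment type, authentication may instead use an `Authorization: Bearer <key>` header or Microsoft Entra ID tokens; the official `azure-ai-inference` SDK handles these details for you.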
Start with Security Posture Management
AI workloads introduce new cyberattack surfaces and vulnerabilities, especially when developers leverage open-source resources. It is therefore critical to start with security posture management: discovering the full AI inventory (models, orchestrators, grounding data sources) along with the direct and indirect risks around these components. Microsoft Defender for Cloud’s AI security posture management capabilities help security teams gain visibility into AI workloads, detect cyberattack paths that bad actors could exploit, and get recommendations to proactively strengthen their security posture against cyberthreats.
Safeguard DeepSeek AI Workloads with Cyberthreat Protection
While having a strong security posture reduces the risk of cyberattacks, the complex and dynamic nature of AI requires active monitoring in runtime as well. No AI model is exempt from malicious activity, and monitoring the latest models is critical to ensuring AI applications are protected. Integrated with Azure AI Foundry, Defender for Cloud continuously monitors AI workloads for unusual and harmful activity, correlates findings, and enriches security alerts with supporting evidence. This provides security operations center (SOC) analysts with alerts on active cyberthreats such as prompt injection cyberattacks and sensitive data leaks.
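Defender for Cloud’s runtime detections are platform-managed and considerably more sophisticated than anything an application would hand-roll. Purely to illustrate the concept of screening prompts for injection attempts before they reach a model, here is a toy Python sketch; the patterns are hypothetical examples of the kind of signal such screening looks for, not Microsoft’s detection logic.

```python
import re

# Hypothetical example patterns only; real detections use far richer signals.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) (instructions|rules)", re.I),
    re.compile(r"reveal (your|the) (system prompt|hidden instructions)", re.I),
    re.compile(r"you are now in developer mode", re.I),
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for a user prompt.

    allowed is False when any suspicious pattern matches, so the caller
    can block the request or raise an alert before the model sees it.
    """
    hits = [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(prompt)]
    return (len(hits) == 0, hits)
```

In a real deployment this kind of check runs in the serving platform itself, which is why integrating Azure AI Foundry with Defender for Cloud matters: the SOC gets the alert without every application team reimplementing detection.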
Comprehensive Data Security
In addition, Microsoft Purview Data Security Posture Management (DSPM) for AI provides visibility into data security and compliance risks, such as sensitive data in user prompts and non-compliant usage, and recommends controls to mitigate the risks. This enables data security teams to create and fine-tune their data security policies to protect that data and prevent data leaks.
Prevent Sensitive Data Leaks and Exfiltration
The leakage of organizational data is among the top concerns security leaders have about AI usage, making it important for organizations to implement controls that prevent users from sharing sensitive information with external third-party AI applications. Microsoft Purview Data Loss Prevention (DLP) enables organizations to prevent users from pasting sensitive data, or uploading files containing sensitive content, into generative AI apps from supported browsers.
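Purview DLP enforces these controls at the browser and endpoint level, driven by admin-defined sensitive information types. As a rough illustration of the underlying idea, the hedged Python sketch below scans outbound text for two classic sensitive patterns, a US SSN format and candidate payment-card numbers validated with the Luhn checksum; the patterns are illustrative stand-ins, not Purview’s classifiers.

```python
import re

# Illustrative patterns only; Purview ships a large catalog of classifiers.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")        # US SSN format
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")      # candidate card numbers

def luhn_valid(number: str) -> bool:
    """Luhn checksum: weeds out random digit runs that merely look like cards."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def contains_sensitive_data(text: str) -> bool:
    """True if text appears to contain an SSN or a Luhn-valid card number."""
    if SSN_RE.search(text):
        return True
    return any(luhn_valid(m.group()) for m in CARD_RE.finditer(text))
```

A DLP policy would typically block the paste or upload, warn the user, and log the event for the security team rather than simply returning a boolean.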
In this article, we’ve discussed how Microsoft Security solutions can help organizations secure and govern AI apps built with the DeepSeek R1 model and other AI models. By leveraging Microsoft Defender for Cloud, Microsoft Purview Data Security Posture Management (DSPM) for AI, and Microsoft Purview Data Loss Prevention (DLP), organizations can discover, secure, and govern AI apps, and prevent sensitive data leaks and exfiltration.