Site icon Windows Mode

Unhackable AI on Azure: Ensuring the Future of Intelligent Automation

Unhackable ai on azure ensuring the future of intelligent automation.png

Key Points:

• Microsoft does not use customer data to train shared models or share logs/content with model providers.
• AI models are treated as customer content and handled with the same protection as documents and email messages.
• Microsoft’s AI platforms (Azure AI Foundry and Azure OpenAI Service) are 100% hosted on Microsoft’s own servers, with no runtime connections to the model providers.

As the world of generative AI models continues to rapidly evolve, it’s crucial to scrutinize the security and trustworthiness of these models before integrating them into your AI system. Microsoft is committed to providing a secure and trustworthy environment for developers to explore and innovate.

One crucial aspect of ensuring the security of AI models is protecting against malicious activities that could compromise the models and the runtime environment. In this article, we’ll delve into how Microsoft secures the models and the runtime environment.

To set the record straight, Microsoft does not use customer data to train shared models or share logs/content with model providers. Our AI products and platforms are part of our standard product offerings, subject to the same terms and trust boundaries you’ve come to expect from Microsoft. Your model inputs and outputs are treated as customer content and handled with the same protection as your documents and email messages.

Moving on to model security, Microsoft’s approach is built around a "zero-trust" architecture. This means that Azure services do not assume that things running on Azure are safe. Instead, we focus on defending against malicious activities, just as we would with any other software running in a VM.

To further mitigate the risk of concealing malware inside an AI model, we scan and test our models for embedded malicious code, common vulnerabilities, and backdoors before release. For high-visibility models like DeepSeek R1, we go even further, involving teams of experts to analyze the model’s source code and functionality to detect potential issues.

Of course, no scans can detect all malicious activities. As security professionals, it’s essential to recognize that trust in these models should come from trusted intermediaries like Microsoft and an organization’s own trust in their provider.

For a more secure experience, Microsoft’s security products can be used to defend and govern AI models. You can read more about how to do this here. Additionally, it’s crucial to evaluate each model not just for security but for its relevance to your specific use case, by testing it as part of your complete system.

In summary, our approach to securing models on Azure AI Foundry involves scanning and testing high-visibility models before hosting them in the Azure AI Foundry Model Catalogue, monitoring for changes that may impact the trustworthiness of each model for our customers. By understanding the key points outlined above, you can make informed decisions when integrating AI models into your system.

Learn more about Microsoft Security solutions and how to secure your AI models and customer data.

Read the rest: Source Link

You might also like: Why Choose Azure Managed Applications for Your Business & How to download Azure Data Studio.

Remember to like our facebook and our twitter @WindowsMode for a chance to win a free Surface every month.

Exit mobile version