
Best approaches for managing AI security risks
Microsoft is launching an AI security risk assessment framework today as a first step toward empowering enterprises to audit, track, and improve the security of their AI systems. We are also releasing updates to Counterfit, our open-source tool for assessing the security posture of AI systems.
There is a strong need to protect AI systems against adversaries. Counterfit has been widely downloaded and explored by organizations of all kinds, from startups to governments and large corporations, looking to proactively secure their AI systems. At the same time, the Machine Learning Evasion Competition we organized to help security professionals hone their skills in attacking and defending AI systems in a realistic setting saw record participation, more than doubling the number of participants and techniques from the previous year.
SECURITY RISK ASSESSMENT FRAMEWORK
'AI raises new trust, risk, and security management requirements that conventional controls cannot cover.' In filling this gap, Microsoft did not want to design an entirely new process, recognizing that security personnel are already overburdened. Furthermore, we believe that, although attacks on AI systems pose a new security risk, conventional software security approaches remain relevant and can be adapted to address this risk. To that end, we designed our AI security risk assessment in the spirit of existing security risk assessment frameworks.
We believe that to fully assess the security risk of an AI system, we must consider the entire system development and deployment lifecycle. In practice, relying solely on academic adversarial machine learning to secure machine learning models oversimplifies the problem. This means that to effectively protect an AI model, we must account for securing the entire supply chain and its management.
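As an illustrative sketch only (the stage names and controls below are hypothetical examples, not taken from Microsoft's framework), the lifecycle view described above can be modeled as a per-stage checklist of security controls:

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle stages; the actual framework defines its own taxonomy.
LIFECYCLE_STAGES = [
    "data_collection",
    "data_processing",
    "model_training",
    "model_deployment",
]

@dataclass
class StageControls:
    """Security controls applied at one stage of the AI lifecycle."""
    stage: str
    controls: dict = field(default_factory=dict)  # control name -> implemented?

    def coverage(self) -> float:
        """Fraction of controls implemented at this stage."""
        if not self.controls:
            return 0.0
        return sum(self.controls.values()) / len(self.controls)

# Example: track supply-chain controls for the data collection stage.
stage = StageControls("data_collection", {
    "data_source_inventory": True,
    "provenance_logging": False,
    "access_control": True,
})
print(f"{stage.stage}: {stage.coverage():.0%} of controls implemented")
```

Evaluating every stage this way, rather than only the trained model, is what distinguishes a whole-lifecycle assessment from a purely model-centric one.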
We know from our own operational experience building and red-teaming models at Microsoft that safeguarding AI systems is a team sport. AI researchers design model architectures. Machine learning engineers build pipelines for data ingestion, model training, and deployment. Security architects establish security policies appropriate to the context. Security analysts respond to threats. To that end, we envisioned a framework that involves each of these stakeholders.
'At Boston Consulting Group (BCG), designing and building secure AI is a cornerstone of AI product development. As the societal need to secure our AI systems becomes increasingly obvious, assets like Microsoft's AI security risk management framework can be foundational contributions. We already implement best practices from this framework in the AI systems we develop for our clients, and we're excited that Microsoft developed and open-sourced it for the benefit of the entire industry.' —Jack Molloy, Senior Security Engineer, BCG
As a result of our Microsoft-wide collaboration, our framework has the following characteristics:
1] Gives a complete view of AI system security. We examined each stage of the AI system lifecycle in a production environment, from data collection and data processing through model deployment. We also accounted for AI supply chains, as well as the controls and policies related to backup, recovery, and contingency planning for AI systems.
2] Outlines machine learning threats and recommendations for mitigating them. To directly assist engineers and security professionals, we enumerated threat statements at each stage of the AI system development process. We then presented a set of best practices that overlay and reinforce existing software security practices in the context of securing AI systems.
3] Enables organizations to conduct risk assessments. The framework provides a way to gather information about the current state of security of AI systems in an organization, perform a gap analysis, and track improvements to the security posture.
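The gap analysis mentioned above can be sketched as a simple comparison between a target control state and the current state. This is a minimal illustration, assuming a flat checklist of controls; the control names are hypothetical and not drawn from the framework itself:

```python
# Illustrative gap analysis: compare current control status against a target.
# Control names are hypothetical examples, not from Microsoft's framework.

target_state = {
    "model_access_logging": True,
    "training_data_backup": True,
    "incident_response_plan": True,
    "adversarial_testing": True,
}

current_state = {
    "model_access_logging": True,
    "training_data_backup": False,
    "incident_response_plan": True,
    "adversarial_testing": False,
}

def gap_analysis(target, current):
    """Return the controls that are required but not yet implemented."""
    return sorted(c for c, required in target.items()
                  if required and not current.get(c, False))

gaps = gap_analysis(target_state, current_state)
print("Outstanding controls:", gaps)
```

Re-running the same assessment over time, with a shrinking list of outstanding controls, is one simple way to track improvement in security posture.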