As artificial intelligence (AI) and machine learning systems become increasingly important to our lives, it's critical that when they fail, we understand how and why. Many research papers have been dedicated to this topic, but inconsistent vocabulary has limited their usefulness. In collaboration with Harvard University's Berkman Klein Center, Microsoft has published a series of materials that define a common vocabulary for describing both intentional and unintentional failures.
Read "Solving the challenge of securing AI and machine learning systems" to learn more about Microsoft's AI taxonomy papers.
The post Finding a common language to describe AI security threats appeared first on Microsoft Security.