AI innovation is advancing at a rapid pace. The breakthroughs are not coming only from large enterprises; startups and individuals are producing remarkable AI developments as well. Regardless of the source, one fact remains the same: AI systems do not always function as intended and have the potential to cause harm. This is evident in news headlines and growing public scrutiny whenever AI systems fall short of expectations. The question is: are there standards or guidelines to ensure these systems are trustworthy and do not harm society? In response, demand for government regulation of AI is increasing across industries. Common areas of concern are whether AI systems treat people fairly, respect their security and privacy, and provide transparency. During the machine learning lifecycle, critical factors that affect a model's behavior are often not fully assessed during development, which can lead to undesirable outcomes. This affects not just society but also the reputation of the organizations and developers behind the AI systems. That is why Responsible AI is essential.
Microsoft has created Responsible AI principles (fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability) that guide how the company designs, builds, and deploys AI systems.
Implementing a Responsible AI strategy is a challenge many organizations struggle with. As a result, Microsoft has standardized its Responsible AI practices and made them available for other companies and machine learning professionals to adopt when designing, building, testing, and deploying their AI systems. For instance, customers and developers can leverage the Responsible AI impact assessment template to assess the potential impact of an AI system on the people and communities it affects, early in the development process.
Finally, the company has been a key contributor to research and open-source tools that empower developers and organizations to discover and mitigate issues that would cause models to behave irresponsibly. For building machine learning models, data scientists and AI developers can access the Responsible AI dashboard available in Azure Machine Learning, which is built on leading responsible AI OSS tools for debugging machine learning models (a minimal sketch of that workflow appears after this paragraph). The company has also taken measures to ensure that Azure Cognitive Services are not used for harm if they fall into the wrong hands: some of the services have limited or restricted access, and organizations must submit an application and be fully vetted before they can use selected AI services. This is to ensure that developers and organizations use the tools and services in a manner that does not threaten human rights, discriminate against certain groups in access to life opportunities, or risk physical or psychological injury.
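As an illustration, here is a minimal sketch of how those OSS building blocks can be used directly in Python. It assumes the open-source `responsibleai` and `raiwidgets` packages (the libraries the dashboard is built on, installable via `pip install responsibleai raiwidgets`); the dataset and the scikit-learn model are placeholders chosen just for the example.

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Prepare a small tabular classification task (placeholder dataset).
data = load_breast_cancer(as_frame=True)
df = data.frame  # feature columns plus a "target" column
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

# Train a simple model to debug (placeholder model).
model = RandomForestClassifier(random_state=0)
model.fit(train_df.drop(columns=["target"]), train_df["target"])

# Collect responsible-AI insights for the trained model.
rai_insights = RAIInsights(
    model=model,
    train=train_df,
    test=test_df,
    target_column="target",
    task_type="classification",
)
rai_insights.explainer.add()       # model interpretability / feature importances
rai_insights.error_analysis.add()  # surface cohorts where the model errs
rai_insights.compute()

# Launch the interactive dashboard locally to explore the results.
ResponsibleAIDashboard(rai_insights)
```

Running this locally serves an interactive dashboard for exploring error cohorts and feature importances; inside Azure Machine Learning, the equivalent dashboard is generated and hosted for you alongside the registered model.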
This is a syndication of an original post on Medium.