As AI becomes more deeply embedded in our everyday lives, it is incumbent upon all of us to be thoughtful and responsible in how we apply it to benefit people and society. A principled approach to responsible AI will be essential for every organization as this technology matures. As technical and product leaders look to adopt responsible AI practices and tools, they face several challenges, including identifying the approach best suited to their organizations, products, and markets.
Today, at our Azure event, Put Responsible AI into Practice, we are pleased to share new resources and tools to support customers on this journey, including guidelines for product leaders co-developed by Microsoft and Boston Consulting Group (BCG). While these guidelines are separate from Microsoft’s own Responsible AI principles and processes, they are intended to provide guidance for responsible AI development through the product lifecycle. We are also introducing a new Responsible AI dashboard for data scientists and developers and offering a view into how customers like Novartis are putting responsible AI into action.
Introducing Ten Guidelines for Product Leaders to Implement AI Responsibly
Though the vast majority of people believe in the importance of responsible AI, many companies aren’t sure how to cross what is commonly referred to as the “Responsible AI Gap” between principles and tangible actions. In fact, many companies overestimate their responsible AI maturity, in part because they lack clarity on how to make their principles operational.
To help address this need, we partnered with BCG to develop “Ten Guidelines for Product Leaders to Implement AI Responsibly”—a new resource to help provide clear, actionable guidance for technical leaders to guide product teams as they assess, design, and validate responsible AI systems within their organizations.
“Ethical AI principles are necessary but not sufficient. Companies need to go further to create tangible changes in how AI products are designed and built,” says Steve Mills, Chief AI Ethics Officer, BCG GAMMA. “The asset we partnered with Microsoft to create will empower product leaders to guide their teams towards responsible development, proactively identifying and mitigating risks and threats.”
The ten guidelines are grouped into three phases:
- Assess and prepare: Evaluate the product’s benefits, the technology, the potential risks, and the team.
- Design, build, and document: Review the impacts, unique considerations, and the documentation practice.
- Validate and support: Select the testing procedures and the support to ensure products work as intended.
With this new resource, we look forward to seeing more companies across industries embrace responsible AI within their own organizations.
Launching a new Responsible AI dashboard for data scientists and developers
Operationalizing ethical principles such as fairness and transparency within AI systems is one of the biggest hurdles to scaling AI, which is why our engineering teams have infused responsible AI capabilities into Azure AI services, like Azure Machine Learning. These capabilities are designed to help companies build their AI systems with fairness, privacy, security, and other responsible AI priorities.
Today, we’re excited to introduce the Responsible AI (RAI) dashboard to help data scientists and developers more easily understand, protect, and control AI data and models. This dashboard includes a collection of responsible AI capabilities such as interpretability, error analysis, counterfactual analysis, and causal inferencing. Now generally available in open source and running on Azure Machine Learning, the RAI dashboard brings together the most used responsible AI tools into a single workflow and visual canvas that makes it easy to identify, diagnose, and mitigate errors.
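To give a flavor of what one of these capabilities does, here is a minimal, stdlib-only sketch of the idea behind error analysis: slicing a model’s mistakes by feature cohorts to find where the error rate concentrates. The cohort name, feature values, and toy predictions below are invented for illustration; the dashboard itself computes this kind of breakdown automatically over real model outputs.

```python
from collections import defaultdict

def error_rate_by_cohort(records, cohort_key):
    """Compute the error rate per cohort.

    Each record is a tuple (features: dict, y_true, y_pred).
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for features, y_true, y_pred in records:
        cohort = features[cohort_key]
        totals[cohort] += 1
        if y_true != y_pred:
            errors[cohort] += 1
    return {c: errors[c] / totals[c] for c in totals}

# Toy labeled predictions: errors cluster in the "senior" cohort.
records = [
    ({"age_band": "adult"}, 1, 1),
    ({"age_band": "adult"}, 0, 0),
    ({"age_band": "adult"}, 1, 1),
    ({"age_band": "adult"}, 0, 1),   # one adult error
    ({"age_band": "senior"}, 1, 0),  # senior error
    ({"age_band": "senior"}, 0, 1),  # senior error
    ({"age_band": "senior"}, 1, 1),
    ({"age_band": "senior"}, 0, 0),
]
rates = error_rate_by_cohort(records, "age_band")
# rates == {"adult": 0.25, "senior": 0.5}
```

Surfacing that the "senior" cohort errs at twice the rate of the "adult" cohort is exactly the kind of signal that turns a single aggregate accuracy number into an actionable mitigation target.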
Putting responsible AI into action
Organizations across industries are already working with Azure’s AI capabilities, including many of the responsible AI tools that are part of the Responsible AI dashboard.
One example is Novartis, a leading, focused medicines company, which earlier this year announced its eight principles for ethical use of AI. Novartis is already embedding AI into the workflow of its associates and has many instances across the value chain in which AI is used in day-to-day operations. With AI playing such a critical role in enabling its digital strategy, Microsoft’s responsible AI tooling is integral to ensuring its AI models are built and used responsibly.
“This AI dashboard enables our teams to assess AI systems’ accuracy and reliability, aligned with our framework for ethical use of AI, to ensure they’re appropriate for the intended context and purpose, as well as how to best integrate them with our human intelligence.”—Nimit Jain, Head of Data Science, Novartis
Another example is Philips, a leading health technology company, which uses Azure and the Fairlearn toolkit to improve their machine learning models’ overall fairness and mitigate biases, leading to better management of patient wellbeing and care. And Scandinavian Airlines, an Azure Machine Learning customer, relies on interpretability in their fraud detection unit to understand model predictions and improve how they identify patterns of suspicious behavior.
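The core quantity behind the kind of fairness assessment Fairlearn supports can be illustrated without the library itself. The sketch below is a stdlib-only, hypothetical example of demographic parity difference: the largest gap in positive-prediction rates across sensitive groups (Fairlearn exposes a metric of this name in `fairlearn.metrics`). The predictions and group labels are invented for illustration.

```python
def selection_rate(y_pred, groups, group):
    """Fraction of positive predictions within one sensitive group."""
    preds = [p for p, g in zip(y_pred, groups) if g == group]
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rates across sensitive groups."""
    rates = {g: selection_rate(y_pred, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

y_pred = [1, 1, 0, 1, 0, 0, 0, 1]                   # binary decisions
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]   # sensitive feature
gap = demographic_parity_difference(y_pred, groups)
# group "a" is selected at 0.75, group "b" at 0.25, so gap == 0.5
```

A gap of zero would mean both groups receive positive predictions at the same rate; a large gap is a prompt to investigate and, if warranted, apply a mitigation technique.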
Missed the digital event? Download the guidelines and tool
While we are all still navigating this journey, we believe that these new resources will help us take a thoughtful step toward implementing responsible AI. If you missed the event, make sure to watch the recording and download the resources available. Together with Microsoft, let’s put responsible AI into practice.