M365 Copilot will always follow Microsoft’s rigorous approach to Responsible AI. Copilot respects your organization’s security, compliance, and privacy policies for Microsoft 365, so you can trust it to be enterprise ready. Microsoft’s work is informed by decades of research on AI, grounding, and privacy-preserving machine learning, as well as our Responsible AI Standard and a core set of AI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
Where can I learn more about Responsible AI?
- You can visit the Microsoft Responsible AI website, which provides an overview of Microsoft’s approach, principles, practices, tools, and policy work on responsible AI. You can also explore the latest updates on AI policy, research, and engineering from Microsoft experts.
- You can read the Microsoft Responsible AI Standard, which is a framework for building AI systems according to six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The standard includes guidance on how to design, build, and test AI systems in a responsible way.
- You can check out the Responsible AI blog, where you can find reflections and insights from Microsoft leaders and practitioners on responsible AI topics such as governance, policy, research, engineering, and tools.
- You can take online courses on Microsoft Learn, where you can study the concepts and best practices of responsible AI, such as how to assess the fairness, reliability, privacy, security, transparency, and accountability of AI systems using Azure Machine Learning or the Cloud Adoption Framework.
- You can also join the Microsoft Responsible AI Community, where you can connect with other responsible AI practitioners, share your experiences and challenges, and learn from experts and peers.