Making it easier for companies to build and ship AI people can trust

Generative AI is transforming many industries, but businesses often struggle with how to create and deploy safe and secure AI tools as the technology evolves. Leaders worry about the risk of AI generating incorrect or harmful information, leaking sensitive data, being hijacked by attackers or violating privacy laws — and they’re sometimes ill-equipped to handle the risks.

“Organizations care about safety and security along with quality and performance of their AI applications,” says Sarah Bird, chief product officer of Responsible AI at Microsoft. “But many of them don’t understand what they need to do to make their AI trustworthy, or they don’t have the tools to do it.”

To bridge the gap, Microsoft provides tools and services that help developers build and ship trustworthy AI systems, or AI built with security, safety and privacy in mind. The tools have helped many organizations launch technologies in complex and heavily regulated environments, from an AI assistant that summarizes p...