Microsoft has open sourced a key piece of its AI security, offering a toolkit that links data sets to targets and scores results, in the cloud or with small language models. Credit: KanawatTH / Shutterstock At the heart of Microsoft’s AI services is a promise to deliver reliable and secure AI . If you’re using Azure AI Studio to build and run inference, you’re benefiting from a hidden set of tools that checks inputs for known prompt attacks. At the same time, outputs are tested to keep the risk of error to a minimum. Those tools are the public-facing side of a growing internal AI security organization, one that looks to build tools and services that avoid the risks associated with exposing AI applications to the entire world. The […]
Original web page at www.infoworld.com