Microsoft: Why AI sometimes gets it wrong - and big strides to address it



 Microsoft Source:

Around the time GPT-4 was making headlines for acing standardized tests, Microsoft researchers and collaborators were putting other AI models through a different type of test — one designed to make the models fabricate information.

To target this phenomenon, known as “hallucinations,” they created a text-retrieval task that would give most humans a headache and then tracked and improved the models’ responses. The study led to a new way to reduce instances when large language models (LLMs) deviate from the data given to them.

It’s also one example of how Microsoft is creating solutions to measure, detect and mitigate hallucinations and part of the company’s efforts to develop AI in a safe, trustworthy and ethical way.

“Microsoft wants to ensure that every AI system it builds is something you trust and can use effectively,” says Sarah Bird, chief product officer for Responsible AI at the company. “We’re in a position of having many experts and the resources to invest in this space, so we see ourselves as helping to light the way on figuring out how to use new AI technologies responsibly — and then enabling everyone else to do it too.”

Technically, hallucinations are “ungrounded” content, which means a model has changed the data it’s been given or added information not contained in it.

There are times when hallucinations are beneficial, like when users want AI to create a science fiction story or provide unconventional ideas on everything from architecture to coding. But many organizations building AI assistants need them to deliver reliable, grounded information in scenarios like medical summarization and education, where accuracy is critical.

That’s why Microsoft has created a comprehensive array of tools to help address ungroundedness based on expertise from developing its own AI products like Microsoft Copilot.

Company engineers spent months grounding Copilot’s model with Bing search data through retrieval augmented generation, a technique that adds extra knowledge to a model without having to retrain it. Bing’s answers, index and ranking data help Copilot deliver more accurate and relevant responses, along with citations that allow users to look up and verify information.
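For readers who want a concrete picture, here is a minimal sketch of how retrieval augmented generation works in principle: retrieve relevant passages, inject them into the prompt with citations, and let the model answer from that data. The toy documents, keyword-overlap retriever and prompt format below are illustrative assumptions, not Copilot’s or Bing’s actual pipeline.

# Minimal retrieval-augmented generation (RAG) sketch. All names and the
# toy keyword scorer are illustrative; a production system (e.g. Copilot
# over Bing's index) would use a real search index, ranking signals, and
# an actual LLM call.

DOCUMENTS = {
    "doc1": "Contoso's return policy allows refunds within 30 days of purchase.",
    "doc2": "Contoso support hours are 9am to 5pm, Monday through Friday.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str) -> str:
    """Inject retrieved passages so the model answers from data, not memory."""
    sources = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    return (
        "Answer using ONLY the sources below and cite the [doc id] you used.\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The assembled prompt would be sent to an LLM; printing it stands in
    # for that call here.
    print(build_grounded_prompt("When can I get a refund from Contoso?"))

The key point is that the model is asked to reason over supplied passages rather than recall facts from its training data, and the citations give users a way to verify what it says.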

“The model is amazing at reasoning over information, but we don’t think it should be the source of the answer,” says Bird. “We think data should be the source of the answer, so the first step for us in solving the problem was to bring fresh, high-quality, accurate data to the model.”

Microsoft is now helping customers do the same with advanced tools. The On Your Data feature in Azure OpenAI Service helps organizations ground their generative AI applications with their own data in an enterprise-grade secure environment. Other tools available in Azure AI help customers safeguard their apps across the generative AI lifecycle. An evaluation service helps customers measure the groundedness of apps in production against pre-built groundedness metrics. Safety system message templates make it easier for engineers to instruct a model to stay focused on its source data.
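The article doesn’t reproduce one of those templates, but a grounding-focused system message generally reads something like the hypothetical example below; the wording is an illustration, not an official Azure AI template.

# Hypothetical safety system message instructing a model to stay grounded
# in the retrieved data; the wording is illustrative, not an official
# Azure AI template.

GROUNDING_SYSTEM_MESSAGE = (
    "You are an assistant that answers questions using ONLY the documents "
    "provided in the conversation. If the documents do not contain the "
    "answer, say you do not know. Do not add facts from outside the "
    "documents, and cite the document id for every claim you make."
)

# A chat request would then pair this message with the retrieved documents
# and the user's question, e.g.:
messages = [
    {"role": "system", "content": GROUNDING_SYSTEM_MESSAGE},
    {"role": "user", "content": "Sources:\n[doc1] Contoso refunds within 30 days.\n\nQuestion: What is the refund window?"},
]
print(messages[0]["content"])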

The company also announced a real-time tool to detect groundedness at scale in applications that access enterprise data, such as customer service chat assistants and document summarization tools. The Azure AI Studio tool is powered by a language model fine-tuned to evaluate responses against source documents.

Microsoft is also developing a new mitigation feature to block and correct ungrounded instances in real time. When a grounding error is detected, the feature will automatically rewrite the information based on the data.
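Neither tool’s internals are described, but the overall detect-and-correct loop can be pictured roughly as in the sketch below. The token-overlap scorer and rewrite prompt are stand-ins assumed for illustration; the real detector is a fine-tuned language model, as noted above.

# Illustrative detect-and-correct loop for ungrounded content. The naive
# token-overlap "detector" and the rewrite prompt are stand-ins for the
# fine-tuned evaluation model and correction feature described above.

def groundedness_score(response: str, sources: list[str]) -> float:
    """Fraction of response tokens that also appear in the source documents
    (a crude proxy; real detectors use a fine-tuned language model)."""
    source_tokens = set(" ".join(sources).lower().split())
    response_tokens = response.lower().split()
    if not response_tokens:
        return 1.0
    hits = sum(tok in source_tokens for tok in response_tokens)
    return hits / len(response_tokens)

def correct_if_ungrounded(response: str, sources: list[str],
                          threshold: float = 0.7) -> str:
    """If the response looks ungrounded, build a rewrite prompt that forces
    the model to restate the answer using only the sources."""
    if groundedness_score(response, sources) >= threshold:
        return response
    return (
        "Rewrite the answer below so every statement is supported by the "
        "sources, removing anything that is not.\n"
        f"Sources: {sources}\nAnswer: {response}"
    )  # this prompt would go back to the LLM to produce the corrected answer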

“Being on the cutting edge of generative AI means we have a responsibility and an opportunity to make our own products safer and more reliable, and to make our tools available for customers,” says Ken Archer, a Responsible AI principal product manager at Microsoft.

The technologies are supported by research from experts like Ece Kamar, managing director at Microsoft Research’s AI Frontiers lab. Guided by the company’s ethical AI principles, her team published the study that improved models’ responses and, in a separate study examining how models pay attention to user inputs, discovered a new way to predict hallucinations.

“There is a fundamental question: Why do they hallucinate? Are there ways we can open up the model and see when they happen?” she says. “We are looking at this from a scientific lens, because if you understand why they are happening, you can think about new architectures that enable a future generation of models where hallucinations may not be happening.”

Kamar says LLMs tend to hallucinate more around facts that are less available in internet training data, making the attention study an important step in understanding the mechanisms and impact of ungrounded content.

“As AI systems support people with critical tasks and information-sharing, we have to take every risk that these systems generate very seriously, because we are trying to build future AI systems that will do good things in the world,” she says.

Learn more about Microsoft’s Responsible AI work.


In the meantime, I wonder: why does AI sometimes get it right? :think:
I guess every decade has its thing; the 2000s had X-services, XP, XFire.
 
