Helping to make AI more trustworthy with SAS


In July 2024, SAS introduced a new feature of its flagship Viya platform that helps customers monitor the trustworthiness of their AI models. The feature provides a repository that catalogs the AI models within an enterprise, treating them as just another enterprise asset, like applications or databases. The catalog tracks the lineage of each model from development through to deployment, including version control.

A key aspect of the product is the “model card” attached to each AI model. A card includes details such as the training data used to build the model, its intended use and, where available, its accuracy, along with the ability to report if the model drifts significantly from its expected accuracy. Model drift can happen in a number of ways. Imagine a model of traffic flows in cities, set up in 2019. In 2020 the world encountered Covid, and lockdowns were imposed in many cities across the world. Obviously, this had a drastic effect on traffic flow and volume, which dropped massively during lockdowns and recovered only slowly as they were eased. Other models may see less dramatic impacts, but the example shows how real-world deployments of models can be affected by changes in circumstances. Being alerted when a model seems to be drifting is clearly a useful feature; a simple version of such a check is sketched below.
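To make the idea concrete, here is a minimal sketch in Python of what a drift check of this kind might look like. It is illustrative only: the choice of accuracy as the metric, the baseline figure of 0.92 and the five-point tolerance are my own assumptions, not details of SAS's implementation.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_for_drift(predictions, labels, baseline=0.92, tolerance=0.05):
    """Flag drift if accuracy on fresh data falls more than `tolerance`
    below the accuracy recorded at deployment. The baseline and
    tolerance values here are illustrative, not SAS defaults."""
    current = accuracy(predictions, labels)
    if baseline - current > tolerance:
        print(f"ALERT: accuracy fell from {baseline:.2f} to {current:.2f}; "
              "the model may have drifted")
    return current

# Example: a model that scored 92% at deployment now gets 7 of 10 right,
# so the check raises an alert.
check_for_drift([1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
                [1, 0, 1, 0, 0, 1, 1, 1, 1, 1])
```

In practice a monitoring tool would run a check like this on a schedule against freshly labelled data, but the core comparison really is this simple.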

Model cards can be thought of as analogous to the nutrition label on a packet of food bought in a supermarket. The label may list calories, fat content, ingredients, sell-by date and so on. In the same way, a model card gives users background information about each AI model in use within an enterprise.
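Continuing the analogy, the sketch below shows the kind of fields a model card might carry, using the traffic-flow example from earlier. The field names and example values are my own illustration of the idea, not SAS's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelCard:
    """Illustrative model card; these fields are assumptions, not SAS's schema."""
    name: str
    version: str
    training_data: str                          # what the model was trained on
    intended_use: str                           # what the model is meant for
    baseline_accuracy: Optional[float] = None   # "where available"
    known_limitations: list = field(default_factory=list)

# Example card for the traffic-flow model described above.
card = ModelCard(
    name="city-traffic-flow",
    version="1.3",
    training_data="City sensor counts collected during 2019",
    intended_use="Forecasting hourly traffic volumes for route planning",
    baseline_accuracy=0.92,
    known_limitations=["Trained on pre-Covid traffic patterns"],
)
```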

In another use of AI, SAS has an interesting customer in the form of READDI, a non-profit biotech company that aims to identify viruses that could cause future pandemics like Covid, and to develop antiviral treatments for them pro-actively. Since 2021 READDI has been using SAS Viya for machine learning, to help it identify more quickly the molecular compounds that may be suitable for antiviral drug development.

The new feature is fairly clearly a useful step forward, though for the moment it is aimed at machine learning models. It would be useful if the same approach could be applied to generative AI models such as ChatGPT, Perplexity, and Gemini, which have a tendency to “hallucinate” and produce answers that are statistically plausible but contain factually erroneous elements. The ability to test and measure accuracy in such models is still emerging, though various academic studies have already been published in this area. The reliability of these models is a hot topic, and any tools that help here would be very welcome. I am sure that this kind of further development will be on the future roadmap of vendors such as SAS.
