AI in GRC, and GRC for AI
Also posted on: Bloor blogs
My recent attendance at the annual GRC Summit in London illustrated well how AI can be brought to bear on the ever-growing compliance burden that enterprises face, and how it can be used to better serve organizational aims through governance, risk and compliance (GRC) capabilities.
MetricStream, one of the top GRC solution players and a thought leader in the market, has organized these events for almost 10 years, and I’ve attended several in the past. They always feature truly excellent insight from GRC practitioners (many being MetricStream customers), leaders, advisers, and industry partners.
AI was naturally a high-profile topic of discussion. It has become a core feature of many GRC solutions in recent years (‘AI in GRC’), helping customers address tasks for which they would otherwise need to increase staff numbers. With more than one presenter highlighting that regulatory and legislative changes globally can bombard organizations with over 250 individual compliance obligations to consider – daily! – the already well-informed audience knew that automation and AI were essential tools. Compliance would rival healthcare as a sector of mass employment if all that complex work had to be done by human intelligence alone. A few rapidly growing smaller companies specialize in the analysis of regulatory change, and they complement well the expanding range of AI-driven capabilities within the much broader GRC solutions of major players like MetricStream.
Looking more closely at the advantages AI offers, it enables a major step forward in automation because of its ability to ‘decode’ unstructured information. Laws and regulations are typically complex, and couched in formats and language that are closer to Byzantine than to accessible and intelligible. Using AI to break them down into the elements relevant to particular industries and organizations frees human expertise for wider consideration of the implications, and ideally for looking further ahead at the broader business impact.
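To make the idea concrete, here is a minimal Python sketch of that kind of ‘decoding’: prompting a general-purpose language model to turn a fragment of regulatory text into structured obligations. It is not how MetricStream (or any specific vendor) implements this; the choice of the OpenAI SDK, the model name, the prompt and the JSON fields are all illustrative assumptions.

```python
import json
from openai import OpenAI  # assumption: any LLM back end would do; the OpenAI SDK is just an example

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are a compliance analyst. From the regulatory text below,
extract the individual obligations as a JSON list. Each item should have:
"obligation" (plain-English summary), "who_it_applies_to", and "deadline_if_any".

Regulatory text:
{text}
"""

def extract_obligations(regulatory_text: str) -> list[dict]:
    """Ask a language model to decode dense regulatory prose into structured obligations."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: illustrative model choice
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(text=regulatory_text)}],
    )
    raw = response.choices[0].message.content
    try:
        return json.loads(raw)  # expect the JSON list we asked for
    except json.JSONDecodeError:
        return []  # in practice you would retry or route the text to a human reviewer

if __name__ == "__main__":
    sample = ("Operators of essential services shall notify the competent authority "
              "of any incident having a significant impact without undue delay, "
              "and in any event within 24 hours of becoming aware of it.")
    for item in extract_obligations(sample):
        print(item)
```

In a real deployment, the extracted obligations would typically be mapped against an obligation register, deduplicated against existing controls, and routed to the right compliance owners.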
AI-enabled handling of unstructured data can also broaden the scope of data used in risk scoring. In the past, risk scoring has been restricted to data that digital processes can treat as a value, or as a value within a range (i.e. structured data). Using AI to bring some quantification and insight to customer and market data, which typically describes the real world in the many languages that humans use, helps risk scoring deliver richer insight. These AI-driven capabilities also broaden the range of enterprise data that can valuably be integrated into GRC processes from non-GRC domains. Consequently, AI will indirectly accelerate progress towards an objective that MetricStream has expressed for years – to drive relevant GRC processes into the front- and back-office operations of more business areas. Quite simply, the more enterprise data that GRC can consume and analyze, the more value it can generate. With risk and compliance becoming everyday factors in business decisions, more users need access to calculated insight, plus the ability to explore the underlying reasoning via interactive AI tools. Tools like MetricStream’s embedded GRC co-pilots will help here – for example, by explaining how compliance and risk scoring have influenced automated decisions.
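As a toy illustration of the principle of blending structured metrics with a signal derived from unstructured text, here is a short, self-contained Python sketch. The keyword scorer is only a stand-in for a real AI model, and the weights and field names are invented for the example; no vendor’s actual scoring method is implied.

```python
from dataclasses import dataclass

# Stand-in for an AI model: in practice an LLM or classifier would score the text.
NEGATIVE_SIGNALS = ("breach", "fine", "outage", "complaint", "fraud")

def text_severity(notes: str) -> float:
    """Crude keyword scorer (0..1) standing in for an AI model over unstructured text."""
    hits = sum(word in notes.lower() for word in NEGATIVE_SIGNALS)
    return min(hits / len(NEGATIVE_SIGNALS), 1.0)

@dataclass
class RiskInputs:
    overdue_controls: int   # structured: count of overdue control tests
    open_incidents: int     # structured: open operational incidents
    customer_notes: str     # unstructured: e.g. complaint text, market commentary

def risk_score(inputs: RiskInputs) -> float:
    """Blend structured metrics with a text-derived signal into a 0..100 score."""
    structured = min(inputs.overdue_controls * 5 + inputs.open_incidents * 10, 70)
    unstructured = text_severity(inputs.customer_notes) * 30
    return round(structured + unstructured, 1)

print(risk_score(RiskInputs(
    overdue_controls=3,
    open_incidents=1,
    customer_notes="Several customers reported a data breach notification and a service outage.",
)))  # prints 37.0 for this illustrative input
```

Swapping the keyword scorer for the output of an LLM or trained classifier is exactly the step that AI now makes practical.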
With AI already becoming such a key capability, advanced organizations are keen to empower business users to deploy it wherever they see potential benefits. Interestingly, one of MetricStream’s telco customers revealed that its HR policies now give every single employee a key objective to take up AI training, which includes guidance on how AI must be used responsibly. The same customer illustrated how its investment in AI had delivered an unexpected benefit: growing losses from the theft of street-based connection boxes had been significantly reduced. The boxes were being targeted by crime syndicates making sophisticated plans to steal them for their valuable metal. AI was used to predict the areas where the next spate of thefts might strike, and so where protection would be most effective.
As well as these and other areas in which AI can help to power GRC, GRC solutions themselves have a role to play in ensuring that organizations’ AI investments and operations are included in their overall risk management and compliance approach (‘GRC for AI’). General issues such as data privacy gain a new relevance now that enterprise data is being used within AI models. And AI-specific considerations, such as whether there is bias or discrimination in models, become core enterprise concerns. The GRC community is helping to advance formal guidance for organizations, such as the OCEG AI Governance Capability Model and a planned ISO standard relating to AI usage. We also heard insight into how AI capabilities might be managed over the long term – as with many technologies, considerations of responsible management come along after the excitement over potential. Organizations’ IT teams will face the downsides, such as recurring operational costs and the need for training and skills – but with AI holding the promise of potentially widespread benefits, the overall impact on finances may still be positive.
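To show what one small piece of ‘GRC for AI’ can look like in practice, here is a minimal sketch of a widely used fairness check: the demographic parity gap between groups in a model’s decisions. The group labels and decisions are invented sample data; a real program would track several such metrics alongside privacy, documentation and model-inventory controls.

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Largest difference in approval rates between groups (0 means parity)."""
    rates = selection_rates(decisions).values()
    return max(rates) - min(rates)

# Illustrative model outputs only: group label and whether the model approved the case.
sample = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
print(f"Demographic parity gap: {demographic_parity_gap(sample):.2f}")  # 0.33 for this sample
```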
A further blog covers a second major topic from GRC Summit – that of Resilience.