Surviving the AI-pocalypse with the BCS SM-ITAM
Content Copyright © 2023 Bloor. All Rights Reserved.
Also posted on: Bloor blogs
Amongst my responsibilities in the Dept. of the Bleedin’ Obvious, I am empowered to point out that there is a bit of an AI (so-called Artificial Intelligence) storm at present. Everybody is adding AI to their products – or, at least, adding the letters “AI” to their product names. And yet, all a “generative AI” really does is to add the next word to a sentence it is generating based on a superficial knowledge of a large subset of the Internet. It is also using a lot of resources and generating a lot of heat whilst doing it.
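That "add the next word" idea can be illustrated with a deliberately crude sketch. This is not how a real large language model works internally (they use neural networks over vast corpora, not word counts); it is just a toy bigram model, with a made-up corpus, that picks the statistically most common next word, the same basic game played at a vastly smaller scale:

```python
from collections import Counter, defaultdict

# Toy corpus -- entirely made up for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Pick the most common successor seen in the "training" data.
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Generate a short "sentence" one word at a time.
sentence = ["the"]
for _ in range(4):
    w = next_word(sentence[-1])
    if w is None:
        break
    sentence.append(w)
print(" ".join(sentence))  # → "the cat sat on the"
```

Fed enough data, this sort of statistical next-word game starts to look surprisingly coherent; fed too little, it degenerates into parroting its corpus, which is roughly the point being made above.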
What is interesting is that this process, which you’d expect to rapidly degenerate into generating meaningless rubbish, exhibits “emergent behaviour” (if the datasets that it trains on are large enough) and appears to generate coherent and useful natural-language answers to questions. I suppose that this is unsurprising, really: if you ask “what is a cat”, most of the answers (statistically) that a search engine finds are reasonably useful answers to the question.
What is also interesting is that a fertile ground for such tools seems to be identifying solutions to service and asset management issues (most of which have probably been solved before, somewhere) and things like root cause analysis of problems.
I suppose that what concerns me is that these solutions often seem to be validated with hindsight (if you know the answer, you can make the “right” problem formulation to generate that answer) and that the place of people (who are far more “intelligent” than any current AI) in the process is sometimes overlooked. It is fairly easy to show that a generative AI can generate plausible rubbish and even invent supporting citations; what may be difficult is detecting this when you don’t know what the answer really is.
As an aside, I was recently talking with Zoho, which has featured AI since before it was fashionable. It sees a future with “domain-specific AI” – presumably, if you expect a generative AI to write, say, Python code, it will do a better job if it is fed on a diet of Python language manuals, coding patterns and so on, as long as you have a big enough dataset. Fed on the Internet as a whole, it may decide that a widely used “bad practice” is an acceptable answer to your question.
One good place for a discussion of such issues is probably the BCS’ Systems Management and IT Asset Management Specialist Group’s annual conference: “AI-pocalypse – surviving and thriving in the age of intelligent ITSM & ITAM. Will AI make better decisions than humans to manage assets and services? At what cost?” (Tue, 17 Oct 2023 08:30 – 19:00 BST). Register here. More details here.
This promises to be an interesting conference with top speakers. The opening keynote, for example, is “The future of work in the AI Era” by Adrian Dickson, Global Tech Strategist – CTO, at Microsoft.
Yes, AI is being overhyped. But, treated as “augmented intelligence” (that is, with a human in the loop somewhere), I think that there will be many productive AI use cases, especially around managing the complexity and volumes that human support personnel are expected to deal with these days. The trick will be in recognising when AI is useful and not misleading – and this BCS SM-ITAM SG conference should help you get to grips with the issues.