OpenText bets the company on AI
I was at the OpenText World 2023 conference in Las Vegas last week, hearing about the company’s new focus on AI. One’s first reaction might be to question investing in AI at the peak (inflated expectations) of its hype curve, but a moment’s reflection reminds one that OpenText has been involved with AI for years, going back to its foundations in document management, and is well placed to ride the hype curve out to real productivity.
This was confirmed when I talked to people at OpenText. There is enthusiasm for AI generally, but especially for “domain-specific” AI: that is, AI trained on data specific to the area in which it will operate (although, if you expect to get the innovation AI promises, I think you will need to be careful about over-restrictive selection of training data and, of course, about biased selection).
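To make “domain-specific” concrete, one common pattern is retrieval-augmented generation, in which a general-purpose model is only allowed to answer from a curated set of domain documents. The sketch below is purely illustrative (the documents, the crude bag-of-words retriever, and all names are hypothetical assumptions of mine, not a description of how Aviator works):

```python
# Illustrative sketch of "domain-specific" AI via retrieval-augmented
# generation: the model only sees text retrieved from curated domain
# documents. All names and documents here are hypothetical.
from collections import Counter
from math import sqrt

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank domain documents by crude bag-of-words similarity to the query."""
    query_terms = Counter(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: cosine_similarity(query_terms, Counter(doc.lower().split())),
        reverse=True,
    )[:k]

domain_docs = [
    "Retention policy: contracts are kept for seven years after expiry.",
    "Invoices must be approved by the records manager before archiving.",
    "The cafeteria opens at eight in the morning.",
]

context = retrieve("how long do we keep contracts", domain_docs)
# A real system would now send `context` plus the question to an LLM.
# Restricting the prompt to retrieved domain text is what makes the
# answer domain specific, and it leaves an audit trail for explainability.
print(context)
```

The design point is that constraining the model’s inputs to vetted domain material both narrows its answers to the domain and gives you something concrete to point at when someone asks where an answer came from.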
I do think that “domain-specific” is going to be the best approach to the commercial use of AI. People at OpenText see AI as an opportunity, but they also see the potential issues with bias and explainability, which is a good sign. One person also highlighted his concerns with “black AI”: the use of AI to facilitate intrusion attacks and fraud, which threatens the security and resilience of legitimate automation. There is already a “black” version of ChatGPT, WormGPT, which helps novice criminals prepare exploits, produce convincing extortion letters, and so on.
The key launch for OpenText AI is OpenText Aviator, which debuts with OpenText Cloud Editions 23.4. Chief Product Officer Muhi Majzoub explains in his blog how the various AI-based Aviators (for Content, Experience, Business Network, IT Operations, Service Management, Cybersecurity and DevOps) can be used to improve the experience and efficiency of your business automation. Before you jump into using generative AI for your business, I do recommend reading widely about other organisations’ experiences with it. You must also design your AI applications so that you can explain, at least at a high level, how their results were arrived at to those affected by them.
One interesting presentation at OpenText World, by Dr Joy Buolamwini of the Algorithmic Justice League, addressed the issue of bias and accuracy in AI. OpenText, unlike some companies using AI, is very interested in ethics and bias in AI, and in a very practical way. Facial recognition AI has put innocent people at risk of jail: Porcha Woodruff, for example, is the first woman known to have been wrongfully accused as a result of facial recognition technology. In her presentation, Joy put some numbers on the issue. To give just one example, the facial recognition platforms of several major vendors seem pretty accurate (95-99%) on lighter faces; on darker faces, however, accuracy drops to only 78-87%.
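Numbers like these are straightforward to gather for your own systems. As a minimal sketch (with made-up records, not Dr Buolamwini’s data, and assuming you have model predictions and ground truth labelled by subgroup), measuring per-subgroup accuracy is just a matter of counting:

```python
# Minimal sketch: measuring per-subgroup accuracy for a face-recognition
# system. The records below are made up for illustration; they are not
# Dr Buolamwini's data.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, predicted_id, true_id) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += predicted == actual
    return {group: hits[group] / totals[group] for group in totals}

# Hypothetical evaluation run over a labelled test set.
results = subgroup_accuracy([
    ("lighter", "id_1", "id_1"), ("lighter", "id_2", "id_2"),
    ("darker", "id_3", "id_7"), ("darker", "id_4", "id_4"),
])
for group, accuracy in sorted(results.items()):
    print(f"{group}: {accuracy:.0%}")  # a large gap between groups is a red flag
```

The arithmetic is trivial; the hard part is assembling a test set that is honestly labelled by subgroup in the first place.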
Coming home, I came through an e-Gate at the UK border. It seemed to have some trouble with my face but let me through eventually. If I had been black, though, presumably I would be getting used to e-Gates not recognising me. Someone has since told me that UK e-Gates don’t really use AI: a person compares your camera image with the photo in your passport. If true, perhaps the UK Border Force is worried about AI bias too (or perhaps it is simply behind in its use of technology).
Nevertheless, since our government seems to see facial recognition as a key part of controlling immigration, street crime, public disorder and so on, often in ways that only look reasonable in conjunction with AI [see here], I do hope that it intends to collect (and make use of) statistics on possible bias against, say, black faces (even if those affected only experience longer and less comfortable interactions), rather than just relying on regulations.
I haven’t read Joy’s book yet, but on the basis of her lecture, it should be good reading for anyone interested in AI bias and ethics – and, yes, that should be all of us.
So, think AI, think OpenText. Think improved user experiences, natural-language interfaces, and finding new (and useful) patterns in data; AI is a force for good. But, more than that, think about possible unintended consequences, unintended bias, and the “computer says NO” (black box) effect. Use AI to help you identify issues with AI, perhaps. And remember that your enemies may be using AI against you: AI is the new disruptive force in IT, rivalling the Internet.
My next blogs will look at DevOps and mainframe modernisation at OpenText World. I imagine that AI will crop up there too.