Workflow management with Kyvos - Balancing AI with human intelligence
Content Copyright © 2025 Bloor. All Rights Reserved.
Also posted on: Bloor blogs
Artificial Intelligence is at the top of the hype cycle these days – and it isn’t really “intelligence” at all; it is more “advanced computational analytics”. So I was interested when Kyvos, a firm that has been in the BI and Analytics space for a couple of decades, wanted to talk to me about its FLOW WFM product, which is based on Artificial, Human and Business Intelligence.
Kyvos seems to have a global presence, with offices in the USA, India and Australia, and some of the largest global brands as its customers. Flow is a complete workflow management suite, committed to outcome delivery and service level agreements (SLAs). It seems to have just about every connectivity function you’ll need and, of course, plenty of AI, rules and machine learning. What caught my attention is the emphasis it places on balancing artificial intelligence, human intelligence and business intelligence. Everybody says that you must sanity-check AI results before using them, and this is backed up by research; Andy Hayler at Bloor, for example, pointed me at a RAND report [The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed: Avoiding the Anti-Patterns of AI | RAND], which says “By some estimates, more than 80 percent of AI projects fail – twice the rate of failure for information technology projects that do not involve AI”. It goes on to suggest some of the root causes for failure:
- Industry stakeholders often misunderstand — or mis-communicate — what problem needs to be solved using AI.
- Many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model.
- AI projects can fail simply because the organization focuses more on using the latest and greatest technology than on solving real problems for their intended users – personally, I might place this at number one.
- Organizations might not have adequate infrastructure to manage their data and deploy completed AI models, which increases the likelihood of project failure.
- AI projects may fail because the technology is applied to problems that are too difficult for AI to solve – I would emphasize that AI is not a universal panacea that can be applied mindlessly.
I’d oversimplify this a bit and say that an overriding cause of AI failure is treating AI as if it were magic. Sure, you start out checking the results from the AI on your simple, fairly obvious problems, but after a while you get lazy and just accept that “The Computer Knows Best”. The trouble is that the problems you apply AI to get harder and less tractable (note the five RAND points above), and how AI gets its results isn’t particularly transparent to its users. I guess the real danger is not when you get silly results from your AI, but when you get plausible, but WRONG, results. And there is good research showing that AI models degrade over time, so it is easy to build up unwarranted confidence in your models. What is going to happen when most of the data available to build models from is itself AI-generated is anyone’s guess.
Which brings me back to Kyvos. I see it as a low-code way to build automated business outcomes, using AI and machine learning, but only as parts of an overall process, not as standalone “magic bullets”. I like workflow management and automation, and I can see how AI could help, as part of the whole. The governance of AI is becoming of increasing concern (Paul Bevan at Bloor pointed me at this paper), but a good start will be making it part of an overall development process, instead of a fashionable silo. You can read some Kyvos case studies on the web.
I was impressed by Flow’s benchmarking, in particular that it attempts to put metrics around short- and long-term forecast accuracy and so on. Again, this can be seen as part of AI governance. Not only must AI be part of an overall process, not an end in itself, but in a mature organization it should be managed by metrics. AI must be part of a fact-based regime if it is going to be a useful innovation rather than a fashionable – and rather resource-intensive – fad. And I suppose that at least some of the enthusiasts for AI, unlike Kyvos, could be accused of seeing it mainly as a huge generator of their own shareholder value – it is AI that is powering Nvidia Corporation to overtake Intel, for example – so, with AI, it is definitely a case of “caveat emptor”.
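To make the idea of managing AI by metrics concrete, here is a minimal sketch of what benchmarking forecast accuracy across horizons might look like. The metric chosen (mean absolute percentage error) and all of the data and function names are my own illustrative assumptions; this is not Kyvos Flow’s actual methodology or API.

```python
# Illustrative sketch only: comparing forecasts against actuals at
# different horizons, using MAPE as the accuracy metric. Nothing here
# is drawn from the Kyvos Flow product itself.

def mape(actual, forecast):
    """Mean absolute percentage error, as a percentage."""
    return 100.0 * sum(
        abs(a - f) / abs(a) for a, f in zip(actual, forecast)
    ) / len(actual)

def benchmark(horizons):
    """Return MAPE per named horizon, e.g. short- vs. long-term."""
    return {
        name: round(mape(actual, forecast), 1)
        for name, (actual, forecast) in horizons.items()
    }

if __name__ == "__main__":
    # Hypothetical actual vs. forecast series for two horizons.
    results = benchmark({
        "short_term": ([100, 110, 120], [98, 112, 118]),
        "long_term":  ([100, 130, 160], [90, 150, 140]),
    })
    print(results)
```

The point of such a harness is governance rather than data science: tracked over time, a drifting long-term error would flag model degradation before anyone starts trusting plausible-but-wrong output.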