Talking with Johan den Haan of Mendix
Content Copyright © 2015 Bloor. All Rights Reserved.
Also posted on: The Norfolk Punt
Johan den Haan is CTO of Mendix, and an original thinker in the sphere of model-driven development (MDD), which he thinks is “necessary, but not sufficient” (see here). I was impressed enough by his ability to stand apart from his own product to reference his writings in one of my blogs on MDD.
So, I jumped at the chance of a face-to-face interview with Johan, while he was visiting London recently.
A lot of our talk was around development culture. Johan thinks the developer community splits roughly 50/50 between those who really like coding and those who like solving business problems and aren’t very interested in doing any more coding than they really have to (the latter inclining more towards MDD tools like Mendix). So, his discussions with CIOs at potential Mendix customers tend to be around ensuring that there is an appropriate development culture around the use of MDD, if an organisation is going to exploit it successfully. Cultural change needs top-down commitment and resources, and will only be seen as cost-effective by mature companies that take a medium- to long-term view of whole life-cycle costs and benefits.
Much of our further discussion was around the issues that the code-lovers usually raise when talking about model-based development platforms, especially those that interpret models at runtime, as Mendix does: possible lock-in; performance and scalability; and the tendency to “fall off a cliff” (in terms of ease of use and debugging) when you reach a certain point, as business complexity increases. These are issues I met when looking at tools like Uniface way back in the last century (and, given an appropriate culture, they were manageable back then); and they are still raised today about any form of model-driven development, not just Mendix.
Johan clearly recognises them too, because Mendix is doing something about them – so, even if you think you’ve already made your mind up in this area, it’s worth taking a second look. To start with, Mendix has rewritten its proprietary stack to use Cloud Foundry, and it has partnered with companies such as HP and Pivotal. This is an industry endorsement which increases the size of the Mendix ecosystem and lowers its adoption risk. It also means that you don’t have to blindly trust Mendix’s coding if you’re worried about fundamental scalability: Cloud Foundry is open source, widely used and well respected, and a Mendix solution, implemented sensibly, should be as robust and scalable as Cloud Foundry itself.
The other possible Mendix issue is lock-in and, while the potential for this can be overstated (see here for a non-Mendix discussion of the issue), it did concern me. Now, however, Mendix is working on making its repository data structures fully open, with published APIs into its models. This isn’t released yet but, as far as I can see, it should go most or all of the way towards removing any concerns about Mendix lock-in – although I can’t be sure until it is released and people have tried using the APIs, of course.
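As a purely hypothetical illustration of why open repository structures matter, the sketch below (in TypeScript; the types and names are mine, not Mendix’s, since nothing has been released yet) shows the sort of thing published model APIs could enable: walking an exported model tree with your own code and listing its entities. That kind of escape hatch is exactly what takes the sting out of lock-in.

```typescript
// Hypothetical sketch only: the real Mendix model APIs were not yet released
// at the time of writing, so these types and names are illustrative, not real.

interface ModelElement {
  id: string;
  type: string;            // e.g. "Entity", "Attribute", "Microflow"
  name: string;
  children: ModelElement[];
}

// Walk an exported model tree and print every entity, to show how an open
// repository format lets you inspect (or migrate) your models with your own code.
function listEntities(root: ModelElement, path: string[] = []): void {
  const here = [...path, root.name];
  if (root.type === "Entity") {
    console.log(here.join("/"));
  }
  for (const child of root.children) {
    listEntities(child, here);
  }
}

// Example: a tiny model fragment, as it might be read from an open repository export.
const app: ModelElement = {
  id: "1", type: "Module", name: "Orders", children: [
    { id: "2", type: "Entity", name: "Order", children: [] },
    { id: "3", type: "Entity", name: "OrderLine", children: [] },
  ],
};

listEntities(app); // prints "Orders/Order" and "Orders/OrderLine"
```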
That leaves the cliff-edge issue, and this is really a question of implementation. Any model-driven development solution, whether based on model interpretation or code generation, can reach the point where a behaviour that is causing real business-level problems can only be debugged by delving into the fundamentals of the technology underlying the model execution – and programmers used to working at the model level are unlikely to have the skill and experience needed to debug issues effectively at that bare technology level. How often this causes a real problem, and how easy it is to get useful low-level support when you do fall over the cliff, is simply a matter of how well the tool and its associated ecosystem are implemented.
As Johan says, “all we can do is push the cliff back as far away as possible, away from our customer’s daily experience”. Potential customers can find out how well Mendix succeeds by asking existing customers working in similar areas. My feeling is that generated code is still easier to debug in extremis (experts in debugging code written in the common languages are widely available) – but how often is debugging at this level ever needed? Perhaps the answer varies from application to application.
We then talked about future directions. Personally, I think that the important thing in model-driven development, regardless of implementation, is the Model, which must model the business with a wider scope than just the automated systems. Without modelling the manual business processes the automated system is embedded in, how can one validate the inputs and outputs of the automated system, or change the man-machine boundary as the cost of technology changes? In practice, this may involve federating a pure development model with a systems-architecture or business-process-management model in another product, of course.
In model-driven development, hierarchical, linked models let mere humans understand and manage complexity beyond the “5±2” things our brains can deal with at one time; and automated completeness and consistency checks provide important assurance of quality.
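To make the consistency-check idea concrete, here is a minimal, hypothetical sketch (again in TypeScript, with invented names): it walks a hierarchical model and reports any link that points at an element which doesn’t exist, the sort of tedious, mechanical check that tools do reliably and humans don’t.

```typescript
// Hypothetical sketch only: a minimal idea of an automated consistency check
// over hierarchical, linked models; the names and structure are illustrative.

interface ModelNode {
  id: string;
  name: string;
  refs: string[];          // ids of other model nodes this one depends on
  children: ModelNode[];
}

// Collect every node id in the hierarchy, then report any reference that
// points at a node which does not exist.
function checkConsistency(root: ModelNode): string[] {
  const ids = new Set<string>();
  const all: ModelNode[] = [];
  const visit = (n: ModelNode) => {
    ids.add(n.id);
    all.push(n);
    n.children.forEach(visit);
  };
  visit(root);

  const problems: string[] = [];
  for (const node of all) {
    for (const ref of node.refs) {
      if (!ids.has(ref)) {
        problems.push(`${node.name} refers to missing element ${ref}`);
      }
    }
  }
  return problems;
}

// Usage: an order process that links to a customer entity that was never modelled.
const model: ModelNode = {
  id: "m1", name: "Orders", refs: [], children: [
    { id: "m2", name: "PlaceOrder", refs: ["m9" /* missing Customer */], children: [] },
  ],
};
console.log(checkConsistency(model)); // ["PlaceOrder refers to missing element m9"]
```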
However, one could – perhaps should – go far beyond this. Modern semantic analysis tools can, potentially, do a lot more with such models: looking for “actionable insights” in patterns that can be reused at the business level; looking for potential problems, with scalability for example; and looking for areas of business complexity where governance resources might be best focused. Once an organisation has institutionalised MDD, it has a resource in its models – real IP, because “the model is the business” – that could be exploited further, for significant benefits to the business.
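To illustrate what “looking for areas of business complexity” might mean in practice, here is a deliberately crude, hypothetical sketch: it scores each sub-model by its size and by how many links cross its boundary, and ranks the hotspots where governance attention might pay off. A real semantic analysis would weigh far more signals; this just shows the shape of the idea.

```typescript
// Hypothetical sketch only: one simple way a tool might mine a model for
// complexity hotspots. The metric and the example figures are invented.

interface SubModel {
  name: string;
  elementCount: number;     // elements inside this sub-model
  crossLinks: number;       // links to/from other sub-models
}

function governanceHotspots(subModels: SubModel[], top = 3): SubModel[] {
  // A crude complexity score: cross-boundary links weigh more than raw size.
  const score = (m: SubModel) => m.elementCount + 2 * m.crossLinks;
  return [...subModels].sort((a, b) => score(b) - score(a)).slice(0, top);
}

// Usage with made-up figures for three sub-models of a business model.
const hotspots = governanceHotspots([
  { name: "Billing", elementCount: 120, crossLinks: 30 },
  { name: "Onboarding", elementCount: 40, crossLinks: 5 },
  { name: "Fulfilment", elementCount: 90, crossLinks: 50 },
]);
console.log(hotspots.map(m => m.name)); // ["Fulfilment", "Billing", "Onboarding"]
```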