Application Integrity: the Coverity approach.
It seems to us that one of the big problems with security is its “siloisation”: it is often treated as something special, something the specialists bolt on after the fact. The trouble, of course, is that “non-functional” requirements relating to security or integrity are frequently left by developers to the security specialists, who in turn assume that the developers have taken care of them. Real issues fall through the cracks as a result: is a buffer overflow a coding issue, a security issue or both? Who should be testing for it? Is an unaudited “superuser” a design issue, a security issue or both? Is a software crash a development quality issue or the mother of all “denial of service” issues?
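To make that question concrete, here is a minimal, hypothetical C fragment (our own illustration, not taken from Coverity’s materials). The single unchecked strcpy is, at one and the same time, a coding defect that a tester would log as a crash and a security hole that an attacker would treat as a stack-smashing opportunity:

#include <stdio.h>
#include <string.h>

/* Hypothetical fragment: copies a user-supplied name into a fixed buffer.
 * If the input is longer than 15 characters, strcpy writes past the end
 * of buf. To a tester this is a crash bug; to an attacker it is a classic
 * stack-smashing entry point. It is the same line of code either way. */
void greet(const char *user_input)
{
    char buf[16];
    strcpy(buf, user_input);   /* no bounds check: coding bug AND security hole */
    printf("Hello, %s\n", buf);
}

/* The safer version designs the control in from the start rather than
 * bolting it on afterwards. */
void greet_safely(const char *user_input)
{
    char buf[16];
    snprintf(buf, sizeof(buf), "%s", user_input);  /* bounded copy */
    printf("Hello, %s\n", buf);
}

int main(void)
{
    greet_safely("a perfectly reasonable name that is rather too long");
    return 0;
}

Whichever silo you file it in, the defect is found (or missed) in the same place: the source code.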
In reality, we think that development is moving, at the behest of its paymasters in the business, towards “holistic service delivery”; and “holistic” means that application integrity and resilience should be designed in from the start. An automated service should do exactly what the business needs, no more and no less; and, presumably, the business doesn’t need services that can be taken over by criminals and used to divert company resources into their pockets. Controls to prevent dysfunctional behaviour should be designed in: security professionals should be advising on what is needed, not bolting security technology onto the application after it is built (as Microsoft found out when it tried to make earlier versions of Windows more secure after the fact).
So we were interested to hear of the latest offerings from Coverity in this area.