An article on TechRepublic.com caught my eye today. Apparently there’s lobbying in Europe to make software developers legally liable if security vulnerabilities in their software result in tangible losses for end users.
At first glance, such a thing might seem fair and reasonable. Why shouldn’t software developers take responsibility for the software they write? Even if you ignore the utter ubiquity of “no responsibility for consequential damages” clauses in contracts for the sale of goods and services throughout commerce, the gut reaction is easy to sympathise with: “those damn Red Bull-swilling, dirty-t-shirted, late-starting programmer geeks didn’t do their jobs right, and it’s cost me money! Right? I got hacked, right dammit?!?”
Wrong. It’s hard to fathom the degree of disconnection from the real world exhibited by Dr Richard Clayton in pushing such a notion so hard for so long. Even the poisoned-burger analogy doesn’t stand up to a moment’s scrutiny. Making safe hamburgers was a process perfected decades before my birth, and it’s quite reasonable to expect that those engaged in the business of making and selling them should observe all the “best practices” (i.e. health regulations) that exist, and be held accountable if they don’t.
However, writing software without bugs, let alone without security vulnerabilities, is simply not a perfectible process with the tools, programming languages and educational regime currently in place. No one expects you to make safe hamburgers when there’s someone standing 100 metres away on the other side of the tall brick wall that surrounds you, squirting concentrated liquid rat poison from a bottle over your brick wall and hitting your BBQ hotplate with uncanny accuracy. As bizarre as this analogy sounds, it’s pretty much what’s going on all around our digital lives every minute of every day.
Before Microsoft even launched Vista, they touted it as “the most secure Windows ever!”. At the time, any reasonable thinking geek’s response would have been: “Well, maybe it will be, maybe it won’t - we won’t know until it’s been out in the wild for a while, the true test of a product’s security.” Cue surprise: it turned out to be little or no more secure than XP (64-bit versions of Vista suffered slightly fewer exploits). Same with Windows 7 three years later.
Adobe, one of the most targeted software vendors, has been hacking and slashing its way through its code for years trying to stop being the world’s security-vulnerability punching bag. Reader/Acrobat X and Flash v11, with their fancy new sandboxing technology (developed with Microsoft), were supposed to stem the deluge of exploits. Well, they did, a little - but read the news from just the last month to see how less-than-100%-effective that’s turned out to be.
Even some of the oldest, most ‘peer reviewed’, widely used, open-source code has had shocking security vulnerabilities discovered earlier this year.
Experienced programmers can sit and stare at code - their own or someone else’s - for hours with a view to weeding out security vulnerabilities and still not see them. Hackers work from an entirely different starting point and mindset when they go hunting for exploits.
Maybe one day this industry will have matured - its tools, its technologies, its communication stacks, its programming languages, its test suites, its learning/training/certification structures - to a stage where a finger can legitimately be pointed at a specific party when an end user suffers material loss due to a security vulnerability in software, and we can all enjoy the litigious utopia espoused by Dr Richard Clayton. But we are SO FAR from that possible reality now - in fact, further from it than we’ve ever been - that I just shake my head at how naive he appears to be. At a time when the IT industry, as a whole, is losing the battle against “hackers”, wouldn’t it be nice if people like him were trying to help solve the problem, rather than shilling for lawyers?