By Leo Notenboom
My question is perhaps more of an industry one than a personal
computing question. Because malware, viruses, spam, and similar user-beware problems affect just about everyone who uses the ‘net for their daily informational needs, why hasn’t the technology industry tackled these issues head-on? These are the problems that ultimately affect the non-computer-savvy general user most devastatingly.
Perhaps the question can be simplified: On the foreseeable horizon, will there be
a time when users will not have to worry about viruses and malware? And why can’t computer developers simply make a system that is virus-free now?
Are there existing machines, platforms, etc., which can affordably take the
risk out of using the internet? It just seems that no matter how careful one
is or what antivirus software they use, the “bug” eventually gets them and
huge problems ensue. You would think that the profit potential would be so
significant that developers would be jumping all over this
opportunity: the bug-free system.
You’re actually asking two separate questions:
Is it possible to create or write bug-free software?
Is it possible to create a computer system that is impervious to malware?
The practical answer to both is, unfortunately, no.
Bug-Free Software
It sounds really simple: if we just wrote software more carefully, used better tools and techniques, or hired better programmers, we could get rid of every possible bug, right? No mistakes. Ever.
There is no such thing as bug-free software. Period.
Yes, some software is better than others, but as an absolute
measure, no software ever reaches perfection.
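To make that concrete, here’s a small illustrative sketch: a textbook binary search in C that looks correct and works in everyday testing, yet hides a subtle integer-overflow bug. A variant of this very flaw went unnoticed in widely used library code for years.

    /* Illustrative sketch: a textbook binary search.
       It works for everyday inputs, but the midpoint calculation
       (low + high) / 2 can overflow a signed int on very large
       arrays, producing a negative index and a crash. */
    #include <stdio.h>

    int binary_search(const int *a, int n, int key)
    {
        int low = 0;
        int high = n - 1;

        while (low <= high) {
            int mid = (low + high) / 2;   /* latent overflow bug */
            /* safer: int mid = low + (high - low) / 2; */

            if (a[mid] < key)
                low = mid + 1;
            else if (a[mid] > key)
                high = mid - 1;
            else
                return mid;
        }
        return -1;   /* not found */
    }

    int main(void)
    {
        int data[] = { 1, 3, 5, 7, 9, 11 };
        printf("index of 7: %d\n", binary_search(data, 6, 7));
        return 0;
    }

The point isn’t this particular bug; it’s that even a dozen lines of straightforward code can hide a mistake that careful review and testing routinely miss.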
There are three problems at play here: complexity, time, and
functionality.
Complexity
What most people fail to grasp is the incredible complexity behind most of
our computer systems today. It’s truly mind-boggling to think of the thousands, if not
hundreds of thousands, of man-years of effort that have gone into getting your
computer to boot and run effectively. (I’m being OS-agnostic here. I don’t
care if it’s Windows, Mac, or Linux; they’re all incredibly complex
beasts.)
People who understand that complexity are amazed that these systems work at all. I know I am.
Make it less complex? Well, that means making it do less, be capable
of less, and be less functional.
Whatever you decide to cut out is important to someone. I don’t care which
feature you hate the most and would love to see cut completely from the next
version of whatever product you care to name. There’s someone, perhaps lots
of someones, who cares deeply about that feature and would be incredibly upset
to see it removed.
Computers are general purpose devices and people expect computers to be
capable of many things – even many things that haven’t been thought of
yet.
And that leads to incredible complexity.
Time
So why not just take more time to get it right?
There’s a strong argument for that, and you’ll often see difficult decisions
being made throughout the life of a software project, jettisoning features and
functionality so that more time can be spent on getting what remains correct.
Or you’ll see projects take longer than planned because of the extra time
required to meet a minimal quality bar.
But the practical reality is that software that never ships does no one any
good. At some point, a trade-off has to be made between spending more time developing the software and deciding that it’s good enough, knowing that it will never, ever be perfect.
It’s not that the people working on these projects are stupid; far, far from
it. Writing today’s intensely complex systems in a way that meets everyone’s
expectations in a reasonable amount of time is hard. Very hard.
It’s not an excuse; it’s a reality. And the reality is that mistakes will be
made.