February 10, 2003
Phenotropics and Bugs
Jaron Lanier recently pointed out that we should rethink how we build software in order to eliminate bugs and enable ourselves to create applications with greater than 20-30 million lines of code.
I agree with the sentiment about reducing bugs, and also agree with the need to come up with better ways to develop and integrate software, but the proposed approach of using pattern recognition to do this automatically seems to me like it would introduce more chaos rather than less. My experience is that even with a lot of human involvement, it's challenging to integrate systems in sensible ways, and I find it hard to believe that this will happen by the software guessing how it should interconnect.
I disagree with the point that small bugs are not tolerated in software but are in biology. I'm no biologist, but I believe that small bugs such as a single mistake in DNA can certainly have large effects and in some cases be fatal. Also, the reality is that all the software we use today contains bugs that mostly don't result in software fatality, from our operating systems to our apps to our cellphones, and in some cases what one person sees as a bug another sees as a design decision.
I'm not convinced that what we want is bigger applications in any case. I think it would be much more effective to have collections of smaller applications that can be used in combination, and enable people to create these applications with less code.
Reducing the amount of code people need to write can directly reduce the number of bugs. Most professional programmers make 100-150 errors in every 1,000 lines of code they write (a finding from Carnegie Mellon University). High-level languages have helped a lot along these lines, and what are known as 4th generation languages, like CFML in ColdFusion, have dramatically reduced the amount of code one needs to write.
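To make that arithmetic concrete, here's a rough sketch of my own (not from the original post): a word-frequency count takes a handful of lines in a high-level language, where a low-level implementation would need a hand-rolled hash table, manual memory management, and careful string handling, and so many more of those 100-150 chances per 1,000 lines to get something wrong.

```python
from collections import Counter

# A complete word-frequency count in three lines of high-level code.
# A C version of the same task would run to dozens of lines of hash
# table and memory management code; by the error-rate figure above,
# every line not written is a defect opportunity avoided.
text = "the quick brown fox jumps over the lazy dog the end"
counts = Counter(text.split())
print(counts["the"])  # → 3
```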
Software development can continue to evolve through greater levels of abstraction, just as it has evolved from flipping switches on a console, to assembly languages, to languages like Pascal, C and Java, to widely available software components like Microsoft's Windows APIs and J2EE. I believe a more promising direction for improving software development in the next decade will be what's called Intentional Programming, where the intent of the developer is captured at a yet higher level, rather than the pattern recognition approach of phenotropics.
I find that the biggest challenges in software continue to be building things that people actually want, and designing user interfaces that are truly usable and enjoyable. We need to have great strides forward in the interface between the software and the human just as much as we need to improve the interface between software components.
10 Feb 03 09:53 AM
I completely agree that it would be more effective to have many smaller applications. I believe we are already beginning to see this implemented. For example, web services are allowing many small web applications to communicate data with each other. All of these applications together could be considered one application, I believe. This reduces the complexity of each individual application, reducing the probability of errors.
Just my 2 cents.
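As a toy illustration of that idea (my own sketch, not any particular 2003-era toolkit): one small program exposes a single function over HTTP, and a second program talks to it by exchanging JSON. Each piece stays simple on its own, and the "application" is the conversation between them.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny "word count" service: one small application, one function.
class WordCountHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        text = self.rfile.read(length).decode("utf-8")
        body = json.dumps({"words": len(text.split())}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), WordCountHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second small program talks to the service, exchanging data over HTTP.
url = f"http://127.0.0.1:{server.server_port}/"
req = urllib.request.Request(url, data=b"small pieces loosely joined", method="POST")
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
server.shutdown()
print(result["words"])  # → 4
```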
Smaller programs are the way to go; hell, it's the Unix philosophy! Compare the stability of Unix-based systems (particularly ones built on a microkernel architecture like Mach), where the OS is made up of thousands of tiny separate programs all interacting, to more monolithic designs like the Windows platform. The proof is in the pudding.
Today hardware is so cheap it makes sense to "waste" some of its performance on abstraction; this is exactly what OS X does, and why it is so much more advanced than its rivals.
We've been designing programs according to how the computer will be running them for too long. It's time to write programs according to how they will be used.
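That compositional style can be sketched in a few lines (a toy of my own, not anything from the thread): each "tool" does one small job, and a pipeline wires them together, much like Unix pipes.

```python
# Small single-purpose pieces composed into a pipeline,
# in the spirit of: cat log | grep ERROR | wc -l
def lines(text):
    return text.splitlines()

def grep(pattern, stream):
    return (line for line in stream if pattern in line)

def count(stream):
    return sum(1 for _ in stream)

log = "INFO start\nERROR disk full\nINFO retry\nERROR timeout\n"
errors = count(grep("ERROR", lines(log)))
print(errors)  # → 2
```

Each piece can be tested, replaced, or reused on its own, which is exactly the stability argument above.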
Say what you will about Jaron's arguments, you have to admit he can play the saguaro cactus better than any of us!
I'd kinda agree with Jaron that small flaws in biology tend to degrade more gracefully than they do in software, due mostly to the massively redundant structure of biological systems in general. Efficiency isn't as big a deal in biology as in code (I mean, in code you tend to want only one code path to perform a certain function, whereas biological systems are loaded with vestigial code paths; if one part breaks down, others tend to get enabled). Maintenance tends to be done by incremental layered growth rather than by rewriting older pieces.
What this has to do with how I write a dataGrid, though, I dunno. We still need code to be as light as possible, since we're all still dealing with the old serial wire medium to share it...
Kevin said: "I find that the biggest challenges in software continue to be building things that people actually want, and designing user interfaces that are truly usable and enjoyable."
And of course, I agree. However, to come full circle, I find that designing simpler, more intuitive interfaces requires more and more complexity on the back end. A couple years ago, as we did design iterations of HotBot, we would spend hours and hours figuring out the proper "user experience" of things that had no real client-side interface. Results relevance, contextual recommendations, and even spell checking were all more or less invisible to end users. But they were part of the experience, and the responsibility of the design team to get right.
I agree that IP will have a huge impact on programming, saving time in development and debugging. As a visual model, it would be interesting to see Flash allow you to create IP trees that could then be reflected as AS code.
I think simple programming models and "raw" visual interfaces contribute to achieving a quality standard that integrates both error debugging and full usability.
I agree with Jaron that in software there's a chaotic relationship between the source code and the observed effects of programs. And I think that chaos exists.
Why should anyone believe what this Lanier guy says? Where did he come from? I gather he is a high-school dropout with no academic degrees, who has barely even learned to program. Is there any evidence that says otherwise? Consider this brilliant quote from him: "programming language sucks. In mathematics, even on the deepest levels, it's not clear that you can get rid of notation, whereas with programming it's really clear that the language is junk and all you're doing is telling the computer to do something".