When discussing program auralisation with people, many of the same questions come up. On this page I have attempted to give answers to some of the commonest queries. If you have any further questions, please feel free to contact me by email.
Bugs and debugging
A bug is an error in a program that causes the program to malfunction. The malfunction may take the form of the program crashing (terminating without warning), or the program may continue to run but produce wrong answers.
You can think of a program as being like a recipe for baking a cake. A bug would be an error in one or more of the steps of the recipe. For example, the recipe may instruct you to add 10 spoons of salt instead of 1 spoon, or it may tell you to set the oven temperature too high. Either error would cause the cake to come out wrong.
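To make the recipe analogy concrete, here is a small sketch (in Python, purely for illustration) of a classic bug: the program runs without crashing, but one step is subtly wrong, so it produces the wrong answer.

```python
def average(values):
    """Intended to return the arithmetic mean of a list of numbers."""
    total = 0
    for i in range(len(values) - 1):   # BUG: skips the last element;
        total += values[i]             # should be range(len(values))
    return total / len(values)

print(average([2, 4, 6]))  # prints 2.0, but the correct mean is 4.0
```

Like the over-salted cake, nothing warns you that a step was wrong: the program happily finishes and hands you a bad result.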
That's a good question. In fact, no programmer sets out to write bad programs, and most try very hard to avoid putting bugs into their code. Software engineering practices enforce rigorous testing procedures to identify as many bugs as possible. But modern software systems are very large. Microsoft Windows, for example, contains over 40 million lines of code. Can you imagine writing a recipe with 40,000,000 instructions and not making a mistake somewhere along the line?
Writing programs can be a very complex task, and it is very hard to avoid making small mistakes in logic. These logical errors are the bugs that cause programs to malfunction.
Have you ever used a piece of software that didn't work properly? That's because it had bugs in it. Bugs stop programs from doing what they're intended to do. In the worst cases, this may mean that nuclear power plant control systems fail, or that space rockets explode -- a software bug was the cause of the Ariane 5 explosion.
Bugs are easy to introduce but hard and expensive to locate and remove.
Historically, programmers have relied on visual debugging tools (such as probes, animations, diagrams, and code inspectors). The main problem with these is that you can only look at one thing at a time. If you're looking at an animation of the program's behaviour, you can't also look at the program's source code listing at the same time. By using sound, it is possible to listen to one aspect of the program while looking at a visual representation (perhaps of a different aspect).
Sound is also a temporal phenomenon: it unfolds over time. Graphics, by contrast, are spatial. Sound can therefore be used to represent the temporal aspects of a program (e.g. program flow).
Finally, the blind and visually impaired, who may have difficulty using graphical visualisations, can make use of sonifications. This widens access considerably.
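The idea of mapping program flow onto sound can be sketched in a few lines. The example below is a minimal illustration, not the interface of any real auralisation system: each traced event is recorded as a named event with a MIDI-style pitch (a real system would play these as notes), so the program's temporal behaviour maps directly onto sound's temporal nature.

```python
# Hypothetical sketch: auralising program flow by turning traced
# events into pitched notes. Names and pitch choices are illustrative.
events = []

def trace(event_name, pitch):
    """Record an auditory event; a real system would sound the note."""
    events.append((event_name, pitch))

def search(items, target):
    trace("enter search", 60)          # middle C marks function entry
    for item in items:
        trace("loop iteration", 64)    # E marks each pass of the loop
        if item == target:
            trace("found", 72)         # high C marks success
            return True
    trace("not found", 48)             # low C marks failure
    return False

search([3, 1, 4], 4)
print(events)
```

Listening to a run of `search`, you would hear the entry note, a repeated note for each loop iteration, and then a distinctive high or low note for the outcome -- the shape of the program's execution rendered as a melody in time.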
People have an innate ability to listen to and comprehend music (else the pop music industry could not exist). Music can communicate several streams of information in parallel (at the same time) - that's how bands and orchestras work! By using a musical framework to organise the auralisation, we can map several program features to auditory representations at once. Using a musical framework also means that all the program features are playing in the same key, at the same tempo, with the same meter, and so on. That makes it much easier to pick out the individual parts than if an ad-hoc auralisation scheme had been used.
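One way to keep everything "in the same key" is to quantise whatever values the program produces onto the notes of a single musical scale. The sketch below is an illustrative mapping of my own, not a published auralisation scheme: any two program features passed through the same scale will always sound consonant together.

```python
# Sketch: quantising arbitrary data values onto a C-major scale so
# that several auralised program features stay "in key" together.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI notes, C4..C5

def to_scale(value, low, high, scale=C_MAJOR):
    """Map a value in [low, high] to the nearest note of the scale."""
    span = high - low
    index = round((value - low) / span * (len(scale) - 1))
    return scale[index]

# Two features (e.g. loop depth and a variable's value) mapped through
# the same scale can be played simultaneously without clashing.
print(to_scale(0, 0, 10), to_scale(4, 0, 10), to_scale(10, 0, 10))
```

Because every feature is snapped to the same scale, tempo, and meter, the listener's musical intuition does the work of separating the parallel streams, just as it does when picking out instruments in a band.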