Software Systems Design Methods: A Potted History

In Days of Yore

The development of software system design methods has been something of a melting pot. The earliest programmable computers were used by mathematicians and physicists for solving very difficult problems (e.g. at Bletchley Park, which cracked the Enigma code, and on the Manhattan Project, which developed the atom bomb). Later on, other sciences such as chemistry and astronomy made use of the new digital computers, and engineers used them for solving design problems. Out of this emerged the so-called discipline of computer science, which found its own uses for the ever-larger machines.

[Figure 1]

As computers became more powerful and more groups were able to use them, more and more people developed software to run on the machines. The development of the transistor allowed computer memories to grow rapidly; consequently, installed software also grew in size and complexity. By the time large commercial operations such as banks and insurance companies set up their early data processing departments, programs had grown so large that third-generation languages (3GLs) were developed to help manage the growing volume of source code.

Out of all this emerged different styles and practices for designing and writing computer programs. Naturally, some approaches were more methodical and rigorous than others. With the increasing problems of software development and maintenance in the 1960s and '70s (eloquently described by Frederick Brooks) came an acceptance that some "structured" approach was needed. The early '70s saw the emergence of so-called "structured programming", with Jackson's celebrated JSP method coming along in 1975.

We are now in the position of being able to choose from a range of different methods each designed to meet specific needs within the software development industries.

[Figure 2: The SSDM Domain]

A good general purpose design method should aim to address all stages of the system development lifecycle. As we shall see, not all methods attempt to address every aspect.

Early methods

The development of methods of systems design has followed a path which has been much criticised, wrongly in our view. Methods have evolved which have allowed the building of systems of considerable complexity and utility. The major problem is that user requirements for change have occurred at an alarming rate, and hardware technology has progressed at a rate which has exceeded many expectations and has in fact fuelled many of the requests for change. Add to this the growing realisation that change is necessary for rapid business growth and is at least easier with computer technology, and it is easy to see what has put computer systems to the test. The result is that many methods have not facilitated change, at least not at the rate expected of them.

The very earliest methods assumed a fairly simplistic life cycle and a fairly common pattern (sketched in code after the list) of:

- Data capture, followed by
- Data validation, followed by
- Ordering of ‘valid’ data, followed by
- Updating a master file, followed by
- Extraction of required information from the master file.
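
To make the pattern concrete, here is a minimal sketch of one such batch run in Python. This is our own illustration, not part of the original method descriptions; the file name, the field names (account, amount) and the threshold report are all hypothetical.

```python
import csv

def capture(path):
    """Data capture: read raw transaction records from a CSV file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def validate(records):
    """Data validation: keep only records with a key and an integer amount."""
    return [r for r in records
            if r.get("account") and (r.get("amount") or "").lstrip("-").isdigit()]

def update_master(master, transactions):
    """Ordering of 'valid' data, then the master-file update, in one pass."""
    for t in sorted(transactions, key=lambda r: r["account"]):
        master[t["account"]] = master.get(t["account"], 0) + int(t["amount"])
    return master

def extract(master, threshold):
    """Extraction of required information from the master file."""
    return {acct: bal for acct, bal in master.items() if bal > threshold}

# One batch run, each stage feeding the next as in the list above:
# master = update_master({}, validate(capture("transactions.csv")))
# report = extract(master, threshold=1000)
```

Note that each stage runs to completion over the whole data set before the next begins; this sequential, whole-file style is what made such systems simple to build, and, as the discussion below suggests, hard to change.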

Later systems merely combined one or more of the above processes. The problems of design within such a pattern are:

- What data are required to be kept in the master file(s)? (The trick here was to anticipate the information requirements.)
- How, when and where to capture and check the basic systems data?
- How many processes are required in maintaining the systems data and producing the necessary business information?
- Matching hardware and systems software to the anticipated application software.

The problems were not normally very difficult to solve in a static situation, but became enormous in a highly dynamic business environment. Why?

- Because new data requirements tended to necessitate a very large number of system changes, most particularly in the software;
- Because new processes tended to affect existing ones, often in unpredictable ways;
- Because the business environment has a much more complex structure than the simple notion of collecting data and transforming it into information.

Evolution

Very rapidly, systems analysts and designers started to use a variety of diagrammatic aids to describe existing and new systems (not that this was new in itself: in the business world, Organisation and Methods analysts had used a number of different charting methods for many years). These, in the main, clarified both the specification of systems and their implementation, provided they were kept up to date. Modern CASE (Computer Aided Software Engineering) tools using quite sophisticated graphics techniques aim to eliminate the problem of diagrams falling out of date.

While tools were being developed (comparatively slowly), the software design aspects of computer systems concentrated at first on the notions of modularity, top-down development and step-wise refinement, culminating in the almost universally accepted, if ill-defined, ‘structured programming’, which was then extended into various forms of structured systems analysis and design. Although many of the methods overlap and terminology is by no means universally accepted, we can at least recognise three basic categories of method worthy of study:

- Data flow-oriented design;
- Data structure-oriented design; and
- Object-oriented design.

Of these, you will be most familiar with the first two. We shall be reviewing these and developing a deeper understanding of one particular method in the second category (JSD). It is possible to view JSD as having a largely object-oriented flavour, and we shall attempt to bring this out.

Methods/Methodologies

One definition of a methodology is "a collection of procedures, techniques, tools and documentation aids which will help the systems developers in their efforts to implement a new system".

The objectives of methodologies often differ greatly. The following list gives six reasonable objectives.

- To record accurately the requirements for an information system. The users must be able to specify their requirements in a way which both they and the systems developers will understand, otherwise the resultant information system will not meet their needs.
- To provide a systematic method of development in such a way that progress can be effectively monitored. Controlling large-scale projects is not easy, and a project which does not meet its deadlines can have serious cost implications for the organisation. The provision of checkpoints and well-defined stages in a methodology should ensure that project planning techniques can be effectively applied.
- To provide an information system within an appropriate time limit and at an acceptable cost. Unless the time spent using some of the techniques included in some methodologies is limited, it is possible to devote an enormous amount of largely unproductive time attempting to achieve perfection. A methodology reflects pragmatic considerations.
- To produce a system which is well documented and easy to maintain. Future modifications to the information system are inevitable as a result of changes taking place in the organisation, and these modifications should be made with the least effect on the rest of the system. This requires good documentation.
- To provide an indication of any changes which need to be made as early as possible in the development process. As an information system progresses from analysis through design to implementation, the costs associated with making changes increase. Therefore the earlier changes are effected, the better.
- To provide a system which is liked by those people affected by it. The people affected by the information system may include clients, managers, auditors, and users. If a system is liked by them, it is more likely that the system will be used and be successful.

Stages in SSDM development

Just as we can trace the development of programming languages from first-generation languages through to today's fourth-generation environments, so we can map a generational development of systems methods.

First generation methods

In the first-generation methods of the 1960s, developers tended to rely on a single technique and modelling tool, although various techniques and tools existed for different sets of problems (see the melting pot earlier). These early methods made use of the so-called structured techniques associated with program design, which centred on functional decomposition as a way of reducing complexity, a form of divide et impera. Structured design became ingrained, almost innate, and spawned the development of techniques such as data-flow diagramming.
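
As a small illustration of functional decomposition (again our own sketch, with a hypothetical payroll problem and made-up names), the top-level routine states the whole job, and each sub-problem becomes its own routine, which could be refined further in the same stepwise way:

```python
def run_payroll(employees):
    """Top level: the whole job, stated as calls to named sub-problems."""
    for emp in employees:
        gross = calc_gross(emp)
        net = gross - calc_deductions(gross)
        issue_payment(emp["name"], net)

def calc_gross(emp):
    """One refinement step: gross pay from hours and rate."""
    return emp["hours"] * emp["rate"]

def calc_deductions(gross):
    """One refinement step: a flat 20% stands in for real tax rules."""
    return gross * 0.20

def issue_payment(name, net):
    """Leaf step: here we only print; a real system would write records."""
    print(f"Pay {name}: {net:.2f}")

run_payroll([{"name": "Ada", "hours": 37.5, "rate": 18.0}])
```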

Despite rapid technological changes, there is much inertia in the software industry and many organisations did not adopt any form of structured analysis and design until the 1980s.

Second generation methods

If the first-generation models were data- or process-oriented, the more mature second-generation methods placed much more emphasis on the construction and checking of models. The aim was to provide a smoother path from initial requirements gathering and specification through to design and implementation, so we see life-cycle models being integrated much more into the design methods. The models used were seen as sequential, with each individual model addressing a different stage in the life cycle. Thus, the first model constructed would aim to capture system requirements in policy terms (i.e., what the system must do). This description would then be elaborated and refined in later stages to show how these requirements could be realised using available technology. The logical/physical model approach of SSADM is a good example of this.

Where first-generation methods tended to model a system from a single viewpoint, e.g. looking only at data or only at process, second-generation methods recognised that both data and function are equally important aspects of a single system.

Third generation methods

In the second generation of methods, although models are used, the view of them is still rather low-level: a 2G method tends to deal with individual, discrete diagrams, and issues about how analysis and design units fit together and interact tend to be ignored.

In 1983, with the introduction of Jackson System Development, we see the emergence of the third generation of system design methods. In JSD we have a more holistic approach to system design. The second-generation methods, although taking a multi-viewpoint, model-based approach, still compartmentalised and dealt with individual diagrams and models. In the third generation we see more concern with the system as a whole (from policy statement right down to implementation) rather than with its different parts.

We see third generation methods attempting to focus on the 'real world' of the system; much attention is given to the essential policy and purpose of the required system. Any models constructed support the transition from problem statement through to implementation without losing sight of this high-level view.

The Conventional (NCC) Approach

The conventional approach, advocated most lucidly by the NCC, has been described in detail elsewhere. Essentially it consists of the so-called ‘waterfall model’ life-cycle of:

- Feasibility study;
- System investigation;
- Systems analysis;
- Systems design;
- Implementation;
- Review and maintenance;

and was characterised by attempts to produce technical and user documentation using standard forms and checklists. The underlying document categories are shown in Table 1.

1. Background: Terms of reference.
2. Communications: Discussion records, correspondence, manuals, etc.
3. Procedures: System outline, run charts, flow charts, etc.
4. Data: Document specs., file specs., record layouts, etc.
5. Supporting Information: Grid charts, organisation charts, data item definitions, hardware & software facilities.
6. Testing: Test data spec., test plans, test operations, test logs.
7. Costs: Development & operation cost information.
8. Performance: Estimates or reports of timings, volumes, growth, etc.
9. Documentation Control: Copy control, amendments-incorporated list, outstanding amendments.

Table 1: NCC document types

The major criticisms of this approach were:

Failure to meet the needs of management: Although systems developed by this approach often successfully deal with such operational processing as payroll and the various accounting routines, middle and top management have been largely ignored by computer data processing. There is a growing awareness among managers that computers ought to be helping the organisation to meet its corporate objectives.

Unambitious systems design: Producing a computer system that mirrors the current manual system is bound to lead to unambitious systems which may not be as beneficial as more radical systems.

Models of processes are unstable: The conventional methodology attempts to improve the way that the processes in businesses are carried out. However, businesses do change, and processes need to change frequently to adapt to new circumstances in the business environment. Because computer systems model processes, they have to be modified or rewritten frequently. It could be said therefore that computer systems, which are ‘models’ of processes, are unstable because the real world processes themselves are unstable.

Output driven design leads to inflexibility: The outputs that the system is meant to produce are usually decided very early in the development process. Design is ‘output driven’ in that once the output is agreed, the inputs are decided and the processes to convert input to output can be designed. However, changes to required outputs are frequent and because the system has been designed from the outputs backwards, changes in outputs usually necessitate a very large change to the system design.

User dissatisfaction: Sometimes systems are rejected as soon as they are implemented, often because the user requires more flexibility than the computer system has been designed to give. Users have also found it difficult to understand technical matters, which are often given far more attention than the underlying business problems.

Problems with documentation: Although documentation is claimed to be one of the major benefits of the NCC approach, it has been criticised as too technical, too easy to leave until too late, too easy to omit altogether, and too easy to forget to update.

Incomplete systems: Exceptions are frequently left out because they are too expensive to handle, are not diagnosed, or are simply forgotten.

Application backlog: Some users literally have to wait for years for a system to be implemented. Others simply do not bother asking.

Maintenance workload: Keeping operational systems going, whether they have been designed well or badly, will nearly always take first place, sometimes leading to ‘patches’ upon ‘patches’. The maintenance workload is in most cases an increasing one.