In 1967, a NATO study group coined the phrase "software engineering" to express its belief that the ongoing software crisis could be solved by adapting established engineering practices to software development. This crisis was characterized by the consistent delivery of low-quality software that exceeded its cost limits and development deadlines. Twelve years later, the software crisis was still thriving. Consider the following analysis of management information systems software development performed by the General Accounting Office (GAO) in 1979 [Air Force 1996]. Of the 163 contractors and 113 government personnel surveyed,
- 60% of their contracts had schedule overruns,
- 50% of their contracts had cost overruns,
- 45% of the software could not be used,
- 29% of the software was never delivered, and
- 19% of the software had to be reworked to be used.
Even today, the software crisis remains a significant problem that software engineering must address. Dr. John Dalbey of California Polytechnic State University compiled a quiz summarizing the current state of the software crisis [Dalbey 1998].
In the September 1994 issue of Scientific American, W. Wayt Gibbs chronicled the following example of the modern software crisis in his article "Software's Chronic Crisis":
"Denver's new international airport was to be the pride of the Rockies, a wonder of modern engineering. Twice the size of Manhattan, 10 times the breadth of Heathrow, the airport is big enough to land three jets simultaneously in bad weather. Even more impressive than its girth is the airport's subterranean baggage-handling system. Tearing like intelligent coal-mine cars along 21 miles of steel track, 4,000 independent "telecars" route and deliver luggage between the counters, gates and claim areas of 20 different airlines. A central nervous system of some 100 computers networked to one another and to 5,000 electric eyes, 400 radio receivers and 56 bar-code scanners orchestrates the safe and timely arrival of every valise and ski bag.
At least that is the plan. For nine months, this Gulliver has been held captive by Lilliputians – errors in the software that controls its automated baggage system. Scheduled for takeoff by last Halloween, the airport's grand opening was postponed until December to allow BAE Automated Systems time to flush the gremlins out of its $193-million system. December yielded to March. March slipped to May. In June the airport's planners, their bond rating demoted to junk and their budget hemorrhaging red ink at the rate of $1.1 million a day in interest and operating costs, conceded that they could not predict when the baggage system would stabilize enough for the airport to open [Gibbs 1994]."
Eventually the Denver International Airport (DIA) did open, but the advanced baggage system was only partially functioning. The four delayed openings of the airport led many residents to speculate that DIA really stood for "Do It Again," "Doesn't Include Airlines," or "Done In April". In order to finally open the terminal, the city invested $51 million to install a conventional baggage system as a workaround to the high-tech system. Ironically, the conventional system was completed four weeks ahead of schedule and $3.4 million under budget [Cook 1995]. The obvious question is: why was the high-tech system so difficult to implement?
According to Fred Brooks [Brooks 1987], part of the answer is that software is inherently complex. Unlike the products of other engineering disciplines, software systems lack repeated elements. While a building may be constructed of thousands of identical bricks, a software product combines pieces with the same functionality into a single subroutine, so software is composed of thousands of unique parts rather than repeated ones. Software systems also have enormous numbers of operational states, which makes exhaustive testing impossible. A bridge, by contrast, is also a large and complex structure, but only a handful of extreme conditions (e.g., inclement weather, heavy traffic, earthquakes) need to be tested to ensure its reliability. In addition to inherent complexity, Brooks mentions other essential qualities of software, such as changeability and invisibility, that contribute to the software crisis. Changeability refers to the fact that all software eventually gets changed: clients may want to add new functionality, or developers may want to port the program to a new hardware platform. While it is unthinkable that a civil engineer would be asked to move a bridge to a new location, software engineers are regularly expected to make major modifications to existing software. Invisibility refers to the fact that software is not a physical entity; because of this, it is difficult for the human mind to apply some of its most powerful conceptual tools to the development of software.
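Brooks's point about the impossibility of exhaustive testing can be made concrete with a back-of-the-envelope calculation. The following sketch (added here for illustration; the function name and the flag counts are our own, not from the sources) shows how quickly the state space grows even for a program whose behavior depends on only a few dozen independent on/off settings:

```python
def state_count(num_boolean_flags):
    """Number of distinct configurations of independent boolean flags.

    Each flag doubles the number of states, so the total is 2**n.
    """
    return 2 ** num_boolean_flags

# Even modest programs outrun any feasible test suite:
for flags in (10, 20, 50):
    print(f"{flags} flags -> {state_count(flags):,} states")
```

With 50 independent flags there are already more than 10^15 configurations, so a tester who could run a million test cases per second would still need decades to cover them all. Real programs have far richer state than boolean flags, which only strengthens the argument.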
With these difficulties in mind, the need for effective software engineering becomes even more urgent. Software systems play an increasingly central role in our everyday lives, and failure to develop reliable software can cost more than just money and time. Consider the following examples cited by Michael Lyu in the introduction to the Handbook of Software Reliability Engineering:
"Unfortunately, software can also kill people. The massive Therac-25 radiation therapy machine had enjoyed a perfect safety record until software errors in its sophisticated control systems malfunctioned and claimed several patients' lives in 1985 and 1986. On October 26, 1992, the Computer Aided Dispatch system of the London Ambulance Service broke down right after its installation, paralyzing the capability of the world's largest ambulance service to handle 5000 daily requests in carrying patients in emergency situations. In the recent aviation industry, although the real causes for several airliner crashes in the past few years remained mysteries, experts pointed out that software control could be the chief suspect in some of these incidents due to its inappropriate response to the pilots' desperate inquiries during abnormal flight conditions [Lyu 1997]."
In this module, we will examine some of the fundamental concepts of software engineering such as the life cycle of software development and the two major paradigms of developing software. By the end of this section, you should be able to do the following:
- Recognize the current software crisis and the need for software engineering,
- Understand and reproduce the phases of the software life cycle,
- Compare the procedural paradigm with the object-oriented paradigm (OOP), and
- Understand the principles of OOP.
- Air Force (1996), "Guidelines for Successful Acquisition and Management of Software-Intensive Systems: Weapon Systems Command and Control Systems, Management Information Systems," Department of the Air Force, June.
- Brooks, F. (1987), "No Silver Bullet: Essence and Accidents of Software Engineering," IEEE Computer 20, 4, 10-19.
- Cook, B. (1995), "Denver International Worth the Wait," Airport Magazine, May/June.
- Dalbey, J. (1998), "Software's Chronic Crisis: A Quiz," http://www.csc.calpoly.edu/~jdalbey/crisis_quiz.html.
- Gibbs, W. (1994), "Software's Chronic Crisis," Scientific American 271, 3, 72-81.
- Lyu, M. (1997), Handbook of Software Reliability Engineering, IEEE Computer Science Press and McGraw-Hill Publishing Company, New York, NY.