schultz@grebyn.com (Ronald Schultz) (05/21/91)
The management of object-oriented software development projects appears to be significantly different from the management of conventional software development, especially when it comes to developing conventional MIS (purchase order processing, general ledger, ...) kinds of systems. Does anyone have any specific references, experiences, or anecdotes to share on the uniqueness of OO development management? I am particularly interested in the impacts on configuration management, software testing, software quality assurance, and project scheduling and estimating. I will post a summary to the net if enough replies are received.

Thanx.

Ron Schultz
schultz@grebyn.com
alan@tivoli.UUCP (Alan R. Weiss) (05/24/91)
In article <1991May21.130231.1633@grebyn.com> schultz@grebyn.com (Ronald Schultz) writes:
>The management of object-oriented software development projects appears
>significantly different than the management of conventional software
>development, especially when it comes to developing conventional MIS
>(purchase order processing, general ledger,...) kinds of systems.

Brad Cox of Stepstone is an acknowledged (at least by me!) expert in object-oriented systems. Try contacting him at cox@stepstone.com

I'm curious: what makes you think that developing object-oriented software is REALLY very different from functional-procedural software? IMHO, the really important factors and parameters are very similar, but differ in the details.

> Does
>anyone have any specific references, experiences, or anecdotes to share
>in the uniqueness of OO development management? I am particularly
>interested in the impacts on configuration management, software testing,
>software quality assurance, and project scheduling and estimating. I
>will post a summary to the net if enough replies are received.
>
>Thanx.
>
>Ron Schultz
>schultz@grebyn.com

We are using IMAKE and RCS in a distributed networked environment for configuration management, along with a Release Control Engineer as the prime focal point. There is no difference endemic to OOPS in this regard (but it IS different from central-database check-in procedural systems).

Software testing could take up VOLUMES, and we're still learning. I cannot spend much time answering this very important topic, so I'll summarize and you can call me for more details:

0. BASELINE UNDERSTANDING: People are still the most important factor, not the tools or the techniques. Management of and *with* those people is #1.

1. OOPS emphasizes reuse and atomicity. Therefore, building quality into each object and the associated methods is critical. Hence, we spend a lot of effort in what I call "up-front QA", e.g.
specification and code inspections (Fagan/Gilb Methodology). If the basic objects don't work right, the system itself is in trouble.

2. We are using a modified waterfall software development process model, because it's the ONLY model EVERYONE understands. We have noticed that we are a little more iterative than we had envisioned, but that testing is easier because pieces of functionality can be dropped in later without (necessarily) impacting everything else.

3. We do "Functional Testing" (Object-Method Testing) to perform functional verification. By emphasizing the larger class objects and higher-level objects and their methods, we "test-by-extension" the lower-level methods and objects (although we have test cases for those, too).

4. The system is BIG. We created an Automated Test System that included automatic positive and negative assertion generation for Functional (Object-Method) Testing, mostly to prevent us from committing suicide. The problem here is to build in some AI stuff so that we can whittle down the assertion permutations from literally trillions to something a little less disk intensive :-) We are using the technique of equivalence classes to do this, but since the ATS is a 24 hr/day no-hands system and machine cycles are cheap .....

5. We will perform System Testing just as we would with a functional-procedural system.

6. Code coverage is a bitch to try and measure. Absolute pain in the ass. Also probably necessary. :-} We run our ATS on an instrumented system and measure whether or not certain code branches were hit. In a functional-procedural system, this is useful. In an OOPS system, the essence IS the system itself, so we're looking at it as just interesting information that costs very little to obtain. Nice for Functional (Object-Method) Testing, useless for System Test. Brad Cox's views on total black-box-only testing have influenced our thinking to a degree here: we are going to do LOTS of user-level testing.

7.
Testing is going well, and we are measuring quantitative quality, but we are also doing LOTS of Customer Input/Feedback sessions (from the design stage through Alpha and Beta).

Quality Assurance: whoboy! We do formal inspections, we have specifications written to a standard for nearly everything (and those specs were developed and inspected by both QA and Development), we have a process model everyone understands, we have a Problem Reporting and Tracking System, we have a fast-turnaround Build Environment, we do LOTS of measurements with metrics derived from both QA and testing, and we all take Quality seriously. Whadaya wanna know? :-)

Project Scheduling? Er, no different from functional-procedural: a black art, consisting of equal parts alchemy, magic, and science, with lots of attention paid to the process and the people. And still we struggle like everyone else with schedules.

_______________________________________________________________________
Alan R. Weiss                        TIVOLI Systems, Inc.
E-mail: alan@tivoli.com              6034 West Courtyard Drive,
E-mail: alan@whitney.tivoli.com      Suite 210
Voice : (512) 794-9070               Austin, Texas USA 78730
Fax   : (512) 794-0623
Manager, Quality Assurance, Test, and Manufacturing
_______________________________________________________________________
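The equivalence-class idea in point 4 above can be sketched in a few lines of modern code. This is a minimal, hypothetical illustration, not TIVOLI's actual ATS: the `accepts_order` method and its input classes are invented for the example. The point is that instead of generating assertions over every raw input value (a cross-product in the trillions), you pick one representative value per equivalence class and generate assertions only over the cross-product of representatives.

```python
from itertools import product

# Invented example method: a purchase-order check on (quantity, price).
def accepts_order(quantity, price):
    return 0 < quantity <= 1000 and price > 0.0

# One representative value per equivalence class, instead of every value.
QUANTITY_CLASSES = {"negative": -5, "zero": 0, "valid": 10, "too_big": 100000}
PRICE_CLASSES = {"negative": -1.0, "zero": 0.0, "valid": 9.99}

def generate_assertions():
    """Yield (class names, inputs, expected result) for each combination.

    Positive assertions come from the all-valid combination; every other
    combination is a negative assertion.
    """
    for (qname, q), (pname, p) in product(QUANTITY_CLASSES.items(),
                                          PRICE_CLASSES.items()):
        expected = (qname == "valid") and (pname == "valid")
        yield (qname, pname), (q, p), expected

def run():
    """Run every generated assertion; return the list of failing combinations."""
    failures = []
    for names, args, expected in generate_assertions():
        if accepts_order(*args) != expected:
            failures.append(names)
    return failures  # empty list means every class combination passed
```

Here 4 quantity classes times 3 price classes yield 12 generated assertions rather than the full input cross-product, which is why the technique whittles the permutation count down so dramatically.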