ra@is.uu.no (Robert Andersson) (02/07/90)
My company is currently evaluating CASE tools with W.I.M.P.-style user interfaces running under UNIX. I would like to hear success and/or horror stories from people who have been using such tools. I am especially interested in hearing about Software through Pictures from IDE.
--
Robert Andersson, International Systems A/S, Oslo, Norway.
Internet: ra@is.uu.no    UUCP: ...!{uunet,mcvax,ifi}!is.uu.no!ra
wasc@cgch.uucp (Armin Schweizer) (02/09/90)
In an earlier project for embedded realtime software (about 2 person-years of effort) we used the Teamwork-PC tool. This tool supports data flow diagrams only (no realtime extensions according to Hatley/Pirbhai or Ward/Mellor, no structured design according to Yourdon).

As happens so often, the project was late, so we added a third member to the two-person team (remember: adding manpower to a late software project makes it later). Since the structure of this small piece of software (about 15'000 lines) was very clear, the interfaces well defined, and the interaction between the functions minimal (all effects of the use of the SA technique), the additional manpower was very efficient. The maintenance of the software is, compared with earlier developments, much easier and less error-prone (remember: in each major release of OS/360 about 1000 errors were removed, but it was estimated that the total number stayed constant at 5000 errors).

The people reacted somewhat reluctantly at first ("we have done it the good old way for 5 years now, why change?"), but already during the project lifetime they would have shot at anybody who intended to take the tool away.

Now what can the tool add to the technique? The tools are (in most cases) not only drawing aids, but provide a lot of checks, and today they even start to give automatic support when moving from the analysis to the design and coding phases! I am far from promising 10 times higher productivity, as others have done. But the tools have a payback time that is shorter than the first project's development time, not to speak of that project's maintenance time. It will be a good decision to purchase such a tool.

kind regards
arminius

Armin R. Schweizer, CIBA-GEIGY AG, R1045.P.06, WRZ
4002 Basel / Switzerland    phone: -41-61-697'79'46
dav@island.uu.net (David McClure) (02/10/90)
We're using Software through Pictures (StP) for the design of some internal record-keeping and other non-production applications. We've only had StP a few months, so I hesitate to make any major pronouncements. I have seen only a brief demonstration of Cadre Teamwork and have little experience with similar products (so I am not broadly versed in what's available), but my understanding is that StP is reasonably competitive with other major vendors' structured design tools -- other opinions? If this *is* the case, then IMO there is plenty of room for innovation in the industry (see below).

As for StP, we've been primarily using the data flow diagram and entity-relationship diagram editors (DFD & ERD). Alternating between these two editors is how we do most of our design. The DFD editor follows Yourdon/DeMarco and Gane/Sarson pretty closely (StP allows you to start up in either notation). The ERD editor is based on the Chen E-R model, and allows basic notation of entities, relationships, and attributes.

From an aesthetic standpoint, there's a lot of clutter in StP. Menus tend to have excessive depth AND breadth, and context-sensitive help is *noticeably* lacking at almost every level. Much could be gained from a simpler, more streamlined product -- at the very least, more on-line description of options is necessary. It is tempting to think that the creators of StP actually took ALL of their users' suggestions and implemented every last one of them. The documentation also reflects this seemingly haphazard presentation, and could greatly benefit from clearer exposition and explanation of the authors' direction and motivation. This is not to say that you can't get any work done with the product; on the contrary, it's several orders of magnitude beyond pencil and paper. And to be fair, greater familiarity with other products might prompt similar criticisms.
Actually, I have greater concerns about a lack of functionality that, to my knowledge, is endemic to the industry. One of the most basic ideas in structured design and analysis is the ability to "push down" or "pop up" in order to encompass higher or lower levels of detail and complexity. IMO, this is an extremely important concept that mirrors how the human brain thinks about design. Implementation of this idea is currently very primitive: in the DFDs that I have seen, grouping a collection of objects and pushing or popping it to another level is done by cut and paste (if at all), and NOT integrated with an associated data dictionary. This idea of "dynamic levelling" with automatic consistency updating is *essential* IMO, and would represent another order-of-magnitude leap in functionality.

In ERD editors, this concept is even less mature. I admit there is no consensus on the implementation of abstract data types and entity clustering, yet I have seen no attempt even to begin forging into this uncharted territory. I read an excellent article in Communications of the ACM (Aug/89) discussing ER clustering that I would love to see expanded on and eventually automated.

Personally, I find the Chen model both inadequate and misleading for the design of the information model, especially where many-to-many and non-binary relationships are concerned. Until more advanced techniques are developed, I find a simple mapping of foreign key references and expansion of the more complex relationships to be more helpful than Chen notation.

I would like to start a thread on these last two topics in comp.databases if I can get enough feedback. Please post any ideas you have about extended ER models and levelling/consistency checking in DFD editors, or any other topics you feel are relevant. Perhaps if we can reach some sort of consensus from the user community on what is needed, the vendor community could be inspired to respond (but I won't hold my breath)...
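[The "dynamic levelling" operation described above can be sketched in a few lines. The following is purely illustrative -- none of these names (`DFD`, `group`, the example flows) come from StP or any real tool. The key idea is that flows crossing the boundary of a lassoed group automatically become the new parent process's interface, which is exactly what keeps the diagram consistent with the data dictionary:]

```python
# Hypothetical sketch of dynamic levelling in a DFD editor.
# A diagram is just a list of flows (source, target, data_item).

class DFD:
    def __init__(self):
        self.flows = []

    def add_flow(self, src, dst, item):
        self.flows.append((src, dst, item))

    def group(self, members, parent):
        """Push the processes in `members` down one level under a new
        process `parent`. Flows crossing the group boundary become the
        parent's external interface; flows wholly inside it move to the
        returned child-level diagram."""
        child, external = [], []
        for src, dst, item in self.flows:
            if src in members and dst in members:
                child.append((src, dst, item))        # stays inside the group
            elif src in members:
                external.append((parent, dst, item))  # leaves the group
            elif dst in members:
                external.append((src, parent, item))  # enters the group
            else:
                external.append((src, dst, item))     # untouched by the group
        self.flows = external
        sub = DFD()
        sub.flows = child
        return sub

dfd = DFD()
dfd.add_flow("validate", "post", "order")
dfd.add_flow("customer", "validate", "raw-order")
dfd.add_flow("post", "ledger", "entry")
sub = dfd.group({"validate", "post"}, "order-handling")
```

[Because `group` derives the parent's external flows from the crossing flows rather than asking the user to redraw them, the levelling operation and the dictionary cannot drift apart -- the consistency update McClure asks for falls out of the operation itself.]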
--
David McClure
> My opinions are not necessarily those of     || serenity=>acceptance
> my employer, or anyone else in this damnfool || courage=>change
> world of ours; *think* for yourself. Rock on.|| wisdom=>differentiate
UH2@psuvm.psu.edu (Lee Sailer) (02/13/90)
In article <1378@island.uu.net>, dav@island.uu.net (David McClure) says:

>It seems that one of the most basic ideas in structured design and analysis
>is the ability to "push down" or "pop up" in order to encompass higher or
>lower levels of detail and complexity. IMO, this is an extremely important
>concept that mirrors how the human brain thinks about design. Implementation
>of this idea is currently very primitive: in the DFD's that I have seen,
>being able to group and push or pop a collection of objects to another level
>is done by cut and paste (if at all), and NOT integrated to an associated data
>dictionary. This idea of "dynamic levelling" with automatic consistency
>updating is *essential* IMO, and would represent another order-of-magnitude
>leap in functionality.

Yes, yes, yes. I agree. It looks pretty simple, too, though like all things that simplicity will probably evaporate as soon as we think about it awhile.

The idea, to say it a different way, is that sometimes during design it would be useful to brainstorm a radical new rearrangement of all the subsystems. Suppose one has a fairly complete DFD that goes three or four levels deep. (After all, that's the idea: to hide complexity.) Now I'd like to step way back and look at the whole thing from a distance, but still grovel in the complexity. So what I want is to "flatten" the tree so that there is only one level, every single primitive process and data store appears on that level, and then I want to be able to move them around on the screen and lasso new groups of them to create a completely new tree.

>In ERD editors, this concept is even less matured. I admit there is a void
>of consensus on implementation of abstract data types and entity clustering,
>yet I have seen no attempt to even begin forging into this uncharted
>territory.
>I read an excellent article in Communications of the ACM (Aug/89) discussing
>ER clustering that I would love to see expanded on and eventually automated.
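[The flatten-then-regroup operation Lee describes can be sketched directly. This is an illustrative toy, not any tool's API: a nested dict stands in for a real leveled DFD, and a partition map stands in for the on-screen lasso:]

```python
# Toy sketch of "flatten the tree, then lasso new groups".

def flatten(tree):
    """Collapse a leveled DFD to the flat list of its primitive
    processes, discarding all intermediate grouping."""
    if isinstance(tree, dict):          # an intermediate level
        prims = []
        for subtree in tree.values():
            prims.extend(flatten(subtree))
        return prims
    return [tree]                       # a primitive process

def regroup(primitives, partition):
    """Build a brand-new one-deep tree: `partition` maps each
    primitive to the name of its new parent (the lasso)."""
    tree = {}
    for p in primitives:
        tree.setdefault(partition[p], []).append(p)
    return tree

old = {"input":  {"read": "read", "check": "check"},
       "output": {"format": "format", "print": "print"}}
prims = flatten(old)                    # every primitive on one level
new = regroup(prims, {"read": "io", "print": "io",
                      "check": "logic", "format": "logic"})
```

[The interesting part, which this toy omits, is exactly what the thread is asking vendors for: re-deriving the inter-level data flows and dictionary entries after the regroup instead of losing them in the flatten.]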
>Personally, I find the Chen model both inadequate and misleading for design
>of the information model, especially where many-to-many and non-binary
>relationships are concerned. Until more advanced techniques are developed,
>I find a simple mapping of foreign key references and expansion of more
>complex relationships to be more helpful than Chen notation.

Perhaps because I started in the data modeling world, I find there is *more* agreement there, not less. If one is willing to compile all the functional dependencies, the tool ought to be able to do a better job of helping build the right (gasp, dare I say that word?) objects.

lee
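[Lee's point about compiling the functional dependencies can be made concrete with the textbook attribute-closure algorithm: given the FDs, a tool can compute everything a set of attributes determines, and hence which sets are candidate keys -- the skeleton of the "right objects". The example FDs below are invented:]

```python
# Attribute closure over a set of functional dependencies.
# Each FD is a pair of frozensets: lhs -> rhs.

def closure(attrs, fds):
    """Return every attribute determined by `attrs` under `fds`."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If we already have the whole left side, we gain the right side.
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

fds = [(frozenset({"order_no"}), frozenset({"customer", "date"})),
       (frozenset({"customer"}), frozenset({"region"}))]

# order_no determines every attribute, so it is a candidate key here:
key_closure = closure({"order_no"}, fds)
```

[A design tool sitting on top of this can then group attributes around their determinants -- which is essentially what relational synthesis does -- instead of leaving the modeler to eyeball the Chen diagram.]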
csosnu@uta.fi (Ossi Numminen) (02/13/90)
In article <1378@island.uu.net> dav@island.uu.net (David McClure) writes:

>We're using Software through Pictures (StP) for design of some internal
>Personally, I find the Chen model both inadequate and misleading for design
>of the information model, especially where many-to-many and non-binary
>relationships are concerned. Until more advanced techniques are developed,
>I find a simple mapping of foreign key references and expansion of more
>complex relationships to be more helpful than Chen notation.

I also find the Chen model inadequate in the version that is normally supported by CASE tools. One problem is participation/existence constraints: the ability to represent these is missing in many CASE tools. Why not just put the two together and mark the cardinalities from 0 to n? This notation is also more general than cardinalities marked from 1 to n with participation constraints marked as T or P. At least IEW handles this nicely.

Why can't you find weak entity types in the ER editors of CASE tools? They are useful in many cases, and implementing this facility should definitely not be a problem. I also wonder why specialization/generalization hierarchies are not implemented in the ER editors of today's CASE tools; they are extremely useful in everyday modelling situations. It's true that the graphical notation for these extended capabilities is not commonly agreed upon, but their absence still bothers me.

In some CASE tools you can use recursive relationships (an entity type having a relationship to itself), but the situation could be better. Without this facility, how do you model BOM structures or organization hierarchies in the basic ER model in such a way that the idea is conceptually clear at first glance?

>I would like to start a thread on these last two topics in comp.databases
>if I can get enough feedback.
>Please post any ideas you have about extended
>ER models and levelling/consistency checking in DFD editors, or any other
>topics you feel are relevant. Perhaps if we can reach some sort of
>consensus from the user-community on what is needed, the vendor-community
>could be inspired to respond (but I won't hold my breath)...
>--
>David McClure

I'm very much looking forward to the coming discussions.

Ossi Numminen
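[The BOM structure Ossi mentions is exactly a recursive relationship: a part entity related to itself via a contains/contained-in association. A toy parts-explosion sketch (the table of parts is invented for illustration) shows why the self-relationship is the natural model:]

```python
# A bill of materials as a recursive relationship on one entity type.
# Each row is (assembly, component, quantity); a component may itself
# be an assembly, which is what makes the relationship recursive.
bom = [("bike", "wheel", 2),
       ("bike", "frame", 1),
       ("wheel", "spoke", 32),
       ("wheel", "rim", 1)]

def explode(part, qty=1):
    """Recursively expand `part` into its primitive components,
    multiplying quantities down the tree."""
    children = [(c, n) for a, c, n in bom if a == part]
    if not children:
        return {part: qty}              # a primitive part
    total = {}
    for child, n in children:
        for leaf, count in explode(child, qty * n).items():
            total[leaf] = total.get(leaf, 0) + count
    return total

parts = explode("bike")
```

[In an ER editor this is one entity (`part`) with one relationship looping back to itself, annotated with the quantity attribute -- conceptually clear at first glance, which is the point of the complaint above.]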