[comp.arch] Loss of Significance register

PLS@cup.portal.com (Paul L Schauble) (11/05/88)

Looking through some old paperwork brought back to mind a very old machine I
used to work on. This machine had an interesting feature: a register that
recorded the maximum postnormalizing shift performed, updated on every
floating-point add or subtract. There was an instruction to store and clear
this register.
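
For anyone trying to picture the feature, here is a minimal sketch in C of how
such a register might be emulated in software. All the names (tracked_add,
store_and_clear, max_shift) are mine, not the machine's, and the shift is
approximated from operand and result exponents with frexp() rather than taken
from any hardware state.

    #include <math.h>

    static int max_shift = 0;          /* emulated loss-of-significance register */

    double tracked_add(double a, double b)
    {
        double sum = a + b;
        int ea, eb, es, shift;

        if (a != 0.0 && b != 0.0 && sum != 0.0) {
            frexp(a, &ea);
            frexp(b, &eb);
            frexp(sum, &es);
            /* bits shifted left to renormalize the result: roughly the
               exponent drop from the larger operand to the sum */
            shift = (ea > eb ? ea : eb) - es;
            if (shift > max_shift)
                max_shift = shift;
        }
        return sum;
    }

    int store_and_clear(void)          /* analogue of the store-and-clear insn */
    {
        int s = max_shift;
        max_shift = 0;
        return s;
    }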

This seems like a useful feature. You could run a data item through an
algorithm, then store this register and see what precision was actually
maintained through the algorithm.
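
Continuing the sketch above, the usage would be something like this (purely
illustrative):

    double x = 1.0000001, y = 1.0;
    int bits;

    tracked_add(x, -y);          /* nearly equal operands: heavy cancellation */
    bits = store_and_clear();    /* worst case seen so far, roughly 24 bits here */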

My question: I haven't seen this feature, or any equivalent, in any modern
hardware. Why not? Has experience shown it to be useless? Is there some
non-obvious problem with it?

   ++PLS