aiv@euraiv1.UUCP (Eelco van Asperen) (08/03/87)
I think I've found 2 more bugs in Turbo C. The first concerns the use of
hexadecimal character constants. According to the manual (p. 131, User's
Guide): "..., modern C allows you to declare character constants in hex
notation. The general format is '\xDD', where DD represents one or two
hexadecimal digits (0..9, A..F). These escape sequences can be directly
assigned to char variables, or they can be embedded in strings". So, the
following test program should run without problems:

	main()
	{
		char wrong[] = "test:\x40abcd";
		char right[] = "test:\x040abcd";

		printf("wrong: '%s'\n\nright: '%s'\n", wrong, right);
	}

Alas, the wrong-string is printed as 'test:\nbcd', so that 'bcd' is printed
on the next line. Apparently a 3-digit hex number is expected, of which the
first digit is not used; hence the '\n' (= 0x0A).

The second bug concerns interrupt procedures. Turbo C lets you prefix a
procedure with the interrupt keyword, and the result is that all registers
are saved upon entry and restored at exit. Now, if you compile such a
procedure with 'Test Stack Overflow On', then Turbo C will happily add a
check for stack overflow just after saving the registers; naturally, this
will usually crash your program and most likely your PC as well. Example C
program:

	void interrupt handler(void)
	{
		/* nop */
	}

	main()
	{
		printf("demo demo demo\n");
	}

Assembler code produced by 'tcc -c -S -N bug.c':

	; Line 3
	_handler proc far
		push ax
		push bx
		push cx
		push dx
		push es
		push ds
		push si
		push di
		push bp
		mov  bp,dgroup
		mov  ds,bp
		cmp  word ptr dgroup:___brklvl,sp	;!! ERROR !!
		jb   @2
		call far ptr overflow@
	@2:
	; Line 4
	; Line 5
	@1:
		pop  bp
		pop  di
		pop  si
		pop  ds
		pop  es
		pop  dx
		pop  cx
		pop  bx
		pop  ax
		iret
	_handler endp

Of course, you can avoid this by putting the interrupt handler in a
separate file that you compile without the check; however, that approach
will not work with the integrated environment (tc) I like to use for
development.
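One way to sidestep the greedy hex escape, whatever number of digits the
compiler consumes, is to avoid the escape altogether and spell the byte out
in an explicit initializer. This is only a sketch of a workaround, not a
fix from Borland:

	#include <stdio.h>

	int main(void)
	{
		/* Keep '\x40' from swallowing the hex digits 'a'..'d' that
		 * follow it by writing the byte 0x40 ('@') directly in the
		 * initializer. (Hypothetical workaround, not from the post.) */
		char safe[] = { 't', 'e', 's', 't', ':', 0x40,
				'a', 'b', 'c', 'd', '\0' };

		printf("safe: '%s'\n", safe);	/* prints: safe: 'test:@abcd' */
		return 0;
	}
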
I'd like to emphasize that I like Turbo C a *lot*, and that any product
that sells like Turbo C (>100K copies) is bound to generate a lot more bug
reports than other, less successful, ones.

Eelco van Asperen.
-----------------------------------------+-------------------------------------
Erasmus University Rotterdam             |uucp:mcvax!{eurifb,olnl1}!euraiv1!aiv
Fac. of Economics, Computer Science Dept.|earn:asperen@hroeur5
PO.box 1738 / 3000 DR Rotterdam          |
T H E   N E T H E R L A N D S            |(this space intentionally left blank)
-----------------------------------------+-------------------------------------
sam@gt-eedsp.UUCP (Sam Smith) (12/08/87)
I found the following bug in Turbo C v1.0. The same program works
correctly under MSC 4.0 and Unix C.
#include <stdio.h>
#include <math.h>
main()
{
long longvalue;
float floatvalue;
floatvalue = -100.0;
printf("%f\n", floatvalue);
longvalue = (long)floatvalue;
printf("%ld\n", longvalue);
}
output:
-100.0
65436
The second number should be -100 not 65436. This program works for
positive floats and doubles.
From what I can figure out the call to _ftol should return the long
value in DX:AX. AX is correct, DX should be 0xffff but is actually
0x0000. DX is not getting sign extended.
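Until the library's _ftol is fixed, one portable stopgap is to convert the
magnitude and restore the sign by hand, so the broken sign extension never
comes into play. This is a hypothetical workaround sketch, not Borland's
fix:

	#include <stdio.h>

	/* Work around a float->long conversion that fails to sign-extend:
	 * convert the absolute value, then negate the result ourselves. */
	long float_to_long(float f)
	{
		if (f < 0.0)
			return -(long)(-f);	/* convert magnitude, negate */
		return (long)f;
	}

	int main(void)
	{
		printf("%ld\n", float_to_long(-100.0));	/* prints: -100 */
		return 0;
	}
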
Sam Smith
Digital Signal Processing Lab, Georgia Tech, Atlanta GA 30332
Internet: sam%gteedsp@gatech.gatech.edu
uucp: ...!{decvax,hplabs,ihnp4,linus,rutgers,seismo}!gatech!gt-eedsp!sam
qintar@agora.UUCP (Jim Seymour) (02/18/88)
I almost hate to do this since I like Turbo C so much, but...

In version 1.0 I noticed what seemed to be a bug in the read() function.
The docs claim (and the standard dictates) that this function returns the
actual byte count read in from the file. I had a program written originally
for the Manx Aztec C compiler which read data from a file in 16-byte chunks
and checked the return code to verify how many bytes were read. Under 1.0
it seemed that this number had no bearing on the actual byte count. It
would read in all 16, but return some number close to, but not equal to,
16.

Now version 1.5 is out and I assumed the bug would be gone. However, the
same symptoms exist. Has anybody else encountered this, or am I doing
something horribly wrong?

-Jim Seymour		...tektronix!reed!percival!agora!qintar
=================================================================
Cipher Systems, Inc.		USMail: P.O. Box 329
1308 S.E. Division		        North Plains, OR 97133
Portland, OR 97202
dean@violet.berkeley.edu (Dean Pentcheff) (02/19/88)
In article <734@agora.UUCP> qintar@agora.UUCP (Jim Seymour) writes:
>In version 1.0 I noticed what seemed to be a bug in the read() function.
>The docs claim (and the standard dictates) that this function returns the
>actual byte count read in from the file...

Actually, the reference under read() claims that "... a positive integer
is returned indicating the number of bytes placed in the buffer; if the
file was opened in text mode, read does not count carriage returns or
Ctrl-Z characters in the number of bytes read."

A little experimentation clarifies this somewhat ambiguous statement.
Read() (in text mode) will read the requested number of bytes from the
file, but _then_ perform \r and ^Z elimination, fill your buffer with
whatever remains, and return the final number of characters that ended up
in your buffer (_after_ \r and ^Z removal).

A suggested fix is to use the Turbo _read() function, which operates
purely, without \r and ^Z manipulation. Alternately, you could open the
file using the O_BINARY flag and use the normal read().

One interesting "feature" connected with this is the ambiguous case of
reading a file (in text mode) which consists of only '\r' characters. The
result turns out to be rather ugly: if you read a buffer's worth of '\r's,
Turbo can't return 0 for the number of characters placed in the buffer,
since that is supposed to mean EOF. So read() returns 1, and (usually!)
leaves a '\r' as the first character in the buffer. On the last read
before EOF, though, it seems to leave garbage in that character position.
Gag, ack, blech.

-Dean
----------------- Dean Pentcheff (dean@violet.berkeley.edu) -----------------
"A university is a place where people pay high prices for goods which they
then proceed to leave on the counter when they go out of the store."
							Loren Eiseley
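The binary-mode workaround can be sketched as follows. The file name is
hypothetical, and the #ifndef fallback is only there so the sketch also
compiles where the DOS-specific O_BINARY flag doesn't exist:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>	/* <io.h> under Turbo C */

	#ifndef O_BINARY	/* DOS-specific flag; harmless no-op elsewhere */
	#define O_BINARY 0
	#endif

	/* With O_BINARY set, read() performs no \r or ^Z stripping, so
	 * its return value is the true number of bytes in the buffer. */
	int main(void)
	{
		char buf[16];
		int fd, n;

		fd = open("data.rec", O_RDONLY | O_BINARY);
		if (fd < 0) {
			perror("data.rec");
			return 1;
		}
		while ((n = read(fd, buf, sizeof buf)) > 0)
			printf("got %d bytes\n", n);
		close(fd);
		return 0;
	}
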
Devin_E_Ben-Hur@cup.portal.com (02/21/88)
> In version 1.0 I noticed what seemed to be a bug in the read() function.
> The docs claim (and the standard dictates) that this function returns the
> actual byte count read in from the file.
> Under 1.0 it seemed that this number had no bearing on the actual byte
> count.
> It would read in all 16, but return some number close to, but not equal
> to, 16.
> -Jim Seymour ...tektronix!reed!percival!agora!qintar

You probably have your file open in text mode, in which case returns and
ctrl-Z bytes are ignored in the byte count. Try opening your file in
binary mode.
zu@ethz.UUCP (Urs Zurbuchen) (02/23/88)
In article <734@agora.UUCP> qintar@agora.UUCP (Jim Seymour) writes:
>In version 1.0 I noticed what seemed to be a bug in the read() function.
>The docs claim (and the standard dictates) that this function returns the
>actual byte count read in from the file. I had a program written
>originally for the Manx Aztec C compiler which read data from a file in
>16 byte chunks and checked the return code to verify how many bytes were
>read.

I stumbled on that problem this weekend, too, when porting some Unix
software to the MS-DOS environment. The problem lies with brain-damaged
MS-DOS. All C compilers which handle text files as a collection of lines
ending in CR-LF will show that behaviour. It is, for example, true for
MS-C (4.0), too.

Internally, lines of a text file are ended by a LF only. At read time,
every CR-LF pair is translated into a single LF; when writing to a text
file, all LF's are expanded to CR-LF. The number returned by read() is the
actual number of bytes in the buffer it read into. Write might return a
similar value.

Now, you might wonder how the program knows which files to handle as text
files and which as binaries. The solution is simple: it doesn't. The
programmer has to know. You can add a 'b' to the file-open mode string in
fopen(), set the translation mode with setmode(), or include the constant
O_BINARY in an open() call. This prevents translation of CR-LF to LF and
vice versa. It also allows writing a ^Z, which doesn't work on text files.
By the way, all files are text files by default.

As a solution you could treat all your files as binary files. But be sure
then to replace eoln() with your own function, since it doesn't recognize
CR as end of line. Perhaps you would have to replace other functions as
well.

Hope this helps,
	...urs

UUCP: ...seismo!mcvax!cernvax!ethz!zu
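The translation behaviour described above can be demonstrated with the 'b'
flag in the stdio layer. This sketch (file name hypothetical) writes ten
bytes containing CR-LF pairs in binary mode and reads them back in binary
mode, so no translation happens and the reported count is the true byte
count; in text mode under MS-DOS the CRs would be stripped and the count
would be smaller:

	#include <stdio.h>

	int main(void)
	{
		const char data[] = "one\r\ntwo\r\n";	/* 10 bytes */
		char buf[32];
		size_t n;
		FILE *fp;

		fp = fopen("crlf.tmp", "wb");	/* 'b': no LF -> CR-LF expansion */
		fwrite(data, 1, sizeof data - 1, fp);
		fclose(fp);

		fp = fopen("crlf.tmp", "rb");	/* 'b': no CR-LF -> LF collapse */
		n = fread(buf, 1, sizeof buf, fp);
		fclose(fp);

		printf("read %u bytes\n", (unsigned)n);	/* prints: read 10 bytes */
		remove("crlf.tmp");
		return 0;
	}
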