[comp.lang.c] binary to ascii

siva@bally.Bally.COM (Siva Chelliah) (09/14/90)

I wrote a program to translate binary files to ASCII.  I tried it under
Microsoft C 5.1 and Turbo C 2.0, but it works with
some files and stops in the middle of others.  When I ran it against
itself after compiling with Microsoft C, it stopped when it tried to read a 1A (hex) byte.
Am I missing something?
Please help.
Now I know there is a program in UNIX to do this (uuencode).
If anybody wants the other program (ASCII to binary), let me know.  Of course,
we have to fix this one first.
NOTE : both programs work fine under UNIX.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>

int main (argc, argv)
 int argc;
 char *argv[];
{
 unsigned int i;
 int hl;
 char c;

 if (argc < 2 || (hl = open(argv[1], O_RDONLY)) == -1) {
     fprintf(stderr, "error : file not found\n");
     exit(1);
 }
 while (read(hl, &c, 1) == 1) {   /* one byte at a time */
    i=(int ) c;
    i=i &  0x00FF;   /* this is necessary because when you read, sign is
                        extended in c   */
    printf("%02X",i);             /* two hex digits per input byte */
  }
 close(hl);
 return 0;
}
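The stop at 1A is most likely a text-mode issue rather than a language one: the
Microsoft C and Turbo C run-time libraries open files in translated (text) mode
by default, and a text-mode read treats a Ctrl-Z byte (hex 1A) as end of file.
Here is a minimal sketch of the same hex dump that asks for binary mode
explicitly; O_BINARY comes from <fcntl.h> on those compilers and is guarded so
the sketch still builds on UNIX, where it is not defined:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>     /* on the DOS compilers, open/read/close are declared in <io.h> instead */

#ifndef O_BINARY
#define O_BINARY 0      /* UNIX makes no text/binary distinction */
#endif

int main(int argc, char *argv[])
{
    unsigned char c;    /* unsigned, so no masking is needed before printing */
    int hl;

    if (argc < 2 || (hl = open(argv[1], O_RDONLY | O_BINARY)) == -1) {
        fprintf(stderr, "error : cannot open input file\n");
        exit(1);
    }
    while (read(hl, (char *) &c, 1) == 1)
        printf("%02X", (unsigned int) c);   /* two hex digits per byte, no stop at Ctrl-Z */
    close(hl);
    return 0;
}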

cc100aa@prism.gatech.EDU (Ray Spalding) (09/15/90)

In article <574@demott.COM> kdq@demott.COM (Kevin D. Quitt) writes:
>In article <371@bally.Bally.COM> siva@bally.Bally.COM (Siva Chelliah) writes:
>>    i=(int ) c;
>>    i=i &  0x00FF;   /* this is necessary because when you read, sign is 
>>                        extended in c   */
>    Try "i = (unsigned int) c;" and you'll see it isn't necessary.  

This is incorrect (where c is a signed char).  When converting from a
signed integral type to a wider, unsigned one, sign extension IS
performed (in two's complement representations).  See K&R II section
A6.2, "Integral Conversions".
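
A small demonstration of that conversion, assuming a machine where plain char
is signed, 8 bits wide, and two's complement (the digits printed depend on the
width of int):

#include <stdio.h>

int main(void)
{
    char c = (char) 0xAB;              /* -85 where plain char is signed       */
    unsigned int u = (unsigned int) c; /* converted modulo UINT_MAX + 1        */

    printf("%X\n", u);                 /* FFFFFFAB with 32-bit ints, FFAB with
                                          16-bit ints: the sign bit propagated */
    printf("%X\n", u & 0xFF);          /* AB -- the masking in the original post */
    return 0;
}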
-- 
Ray Spalding, Technical Services, Office of Information Technology
Georgia Institute of Technology, Atlanta Georgia, 30332-0275
uucp:     ...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!cc100aa
Internet: cc100aa@prism.gatech.edu

pfalstad@phoenix.Princeton.EDU (Paul John Falstad) (09/15/90)

In article <13680@hydra.gatech.EDU> cc100aa@prism.gatech.EDU (Ray Spalding) writes:
>In article <574@demott.COM> kdq@demott.COM (Kevin D. Quitt) writes:
>>In article <371@bally.Bally.COM> siva@bally.Bally.COM (Siva Chelliah) writes:
>>>    i=(int ) c;
>>>    i=i &  0x00FF;   /* this is necessary because when you read, sign is 
>>>                        extended in c   */
>>    Try "i = (unsigned int) c;" and you'll see it isn't necessary.  
>This is incorrect (where c is a signed char).  When converting from a
>signed integral type to a wider, unsigned one, sign extension IS
>performed (in two's complement representations).  See K&R II section

True.  But the best way to avoid the sign extension is not a
bitwise AND.  Use two casts:

		i = (unsigned int) (unsigned char) c;
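
A short self-contained check of the two-cast idiom: the inner cast reduces the
value modulo UCHAR_MAX + 1, and the outer one is then a value-preserving
widening (the buffer contents are arbitrary example bytes):

#include <stdio.h>

int main(void)
{
    char buf[] = { 'A', (char) 0x8F, (char) 0x1A };
    int j;

    for (j = 0; j < (int) sizeof buf; j++)
        printf("%02X ", (unsigned int) (unsigned char) buf[j]);
    putchar('\n');        /* prints: 41 8F 1A */
    return 0;
}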

I, for one, loathe the concept of signed chars.  I've wasted countless
hours of programming time hunting for bugs that crept in because I forgot that
chars are signed by default.  I think chars (in fact, all integer types)
should be unsigned by default.  Comments?

Paul Falstad, pfalstad@phoenix.princeton.edu PLink:HYPNOS GEnie:P.FALSTAD
For viewers at home, the answer is coming up on your screen.  For those of
you who wish to play it the hard way, stand upside down with your head in a
bucket of piranha fish.

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (09/15/90)

In article <13680@hydra.gatech.EDU> cc100aa@prism.gatech.EDU (Ray Spalding) writes:
> In article <574@demott.COM> kdq@demott.COM (Kevin D. Quitt) writes:
> >In article <371@bally.Bally.COM> siva@bally.Bally.COM (Siva Chelliah) writes:
    [ i = ((int) c) & 0x00FF ]
> >    Try "i = (unsigned int) c;" and you'll see it isn't necessary.  
> This is incorrect (where c is a signed char).

We discussed this a few months ago (``How to convert a char into an int
from 0 to 255?''). The conclusion was that (int) (unsigned char) c takes
a character into an integer from 0 through UCHAR_MAX. Other series of
casts do not do the job; using & is overkill.
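
A quick check of that claim, assuming plain char is signed and 8 bits wide
(the value printed by the wrong version is implementation-defined, since it
converts an out-of-range unsigned value back to int):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    char c = (char) 0x8F;                  /* negative where plain char is signed */

    int good = (int) (unsigned char) c;    /* always lands in 0..UCHAR_MAX        */
    int bad  = (int) (unsigned int)  c;    /* widens first, so the sign has
                                              already been propagated             */

    printf("%d %d (UCHAR_MAX = %d)\n", good, bad, UCHAR_MAX);
    /* typically prints: 143 -113 (UCHAR_MAX = 255) */
    return 0;
}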

---Dan