sewilco@mecc.UUCP (Scot E. Wilcoxon) (11/03/85)
I think John's on the right track.  We don't need just to decide on a
character set representation; we also need to decide what needs to be
changed in UNIX.  So you'll see two related topics here: character sets
and UNIX commands.

Application programs that need special character sets will use whatever
set they need.  Hopefully, when such a program generates a text file for
manipulation with a UNIX utility, that utility will have been modified
to handle it.  Eventually there should be a standard character set, and
all text-handling utilities will support it.  Until then, application
programs may have to keep their text to themselves.

I think the best place to start an international UNIX is with the user
program that has the most interaction with the user: the shell.  Without
a multiple-language shell, a user can't use UNIX's capabilities in the
way the creators of UNIX intended.  (Maybe the traditional shell is not
the best choice, but that's a different discussion.)  Do any non-English
shells already exist?

In article <224@l5.uucp> gnu@l5.uucp (John Gilmore) writes:
>...
>Internally to an international program, characters would be 16 bits,
>but stdio routines (printw, fprintw, sscanw, etc) would encode to a
>bytestream on the way in and out. ("w" for "world" or "wide").
>
>(Hmm, the non-Unix-opsys people have been looking for a way to tell when
>we Unixoids are reading or writing a text file versus a binary file...now
>that we propose encoding our own text files, they will have the clue.)

(Qualifications?  I speak American, understand British, carry on a
stumbling conversation in Portuguese, and can get the gist of a Spanish
newspaper article.)
--
Scot E. Wilcoxon    Minn. Ed. Comp. Corp.    circadia!mecc!sewilco
45 03 N / 93 15 W   (612)481-3507   {ihnp4,mgnetp}!dicomed!mecc!sewilco
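[The quoted proposal names the routines but leaves the actual byte
encoding unspecified.  As a sketch only, here is one way a putcw/getcw
pair could encode 16-bit characters to a bytestream; the names and the
0xFF escape convention are invented for illustration and are not from
the article.  Note that plain ASCII files would remain valid under this
scheme, while wide text would carry escape bytes that mark it as text.]

```c
#include <stdio.h>

typedef unsigned short wchar16;      /* hypothetical 16-bit character */

/* Encode one 16-bit character onto a byte stream.
 * Assumed scheme (invented here, not specified in the article):
 *   0x0000..0x007F -> one byte, unchanged, so old ASCII files stay valid
 *   0x0080..0xFFFF -> 0xFF escape byte, then high byte, then low byte
 */
void putcw(wchar16 c, FILE *fp)
{
    if (c < 0x80) {
        putc(c, fp);
    } else {
        putc(0xFF, fp);              /* escape: a wide character follows */
        putc((c >> 8) & 0xFF, fp);
        putc(c & 0xFF, fp);
    }
}

/* Decode one character from the byte stream; returns it, or EOF. */
int getcw(FILE *fp)
{
    int b = getc(fp);
    if (b == EOF || b != 0xFF)
        return b;                    /* plain one-byte character */
    {
        int hi = getc(fp);
        int lo = getc(fp);
        if (hi == EOF || lo == EOF)
            return EOF;              /* truncated escape sequence */
        return (hi << 8) | lo;
    }
}
```

A round trip would write 'A' as the single byte 0x41 but a character
like 0x4E2D as the three bytes 0xFF 0x4E 0x2D, which is the "clue"
John jokes about: encoded text files become distinguishable from
arbitrary binary.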