[net.unix] C I/O Question

lar@inuxc.UUCP (L Reid) (07/23/86)

Is there a way to input a character from stdin without requiring
the user to terminate it with a <CR>?

Thanks in advance!
Laura Reid
317-845-6135

dlc@zog.cs.cmu.edu (Daryl Clevenger) (07/26/86)

Here we have a csh with command-line editing and tenex-style file name
completion.  For it to work, it must read and interpret each character
as it is typed.  By using ioctl() to get the terminal flags, one can set
CBREAK mode (or RAW mode, though I think there are some problems with
using raw).  You could either use this approach in your shell or just
set the bits in your desired application.  Curses also allows you to
change the input mode from COOKED to CBREAK or RAW.
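
For instance, something along these lines should work under the V7/4BSD
tty driver (an untested sketch; <sgtty.h> supplies the flags):

#include <stdio.h>
#include <sgtty.h>

int main(void)
{
	struct sgttyb sg;
	int c;

	ioctl(0, TIOCGETP, &sg);	/* fetch the current tty flags */
	sg.sg_flags |= CBREAK;		/* deliver each char as it is typed */
	ioctl(0, TIOCSETP, &sg);

	c = getchar();			/* returns without waiting for <CR> */
	printf("got '%c'\n", c);

	sg.sg_flags &= ~CBREAK;		/* put cooked mode back before exit */
	ioctl(0, TIOCSETP, &sg);
	return 0;
}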

jsdy@hadron.UUCP (Joseph S. D. Yao) (07/31/86)

In article <1222@inuxc.UUCP> lar@inuxc.UUCP (L Reid) writes:
>Is there a way to input a character from stdin without requiring
>the user to terminate it with a <CR>.

Several ways.  All involve doing an stty() or an ioctl()
call (depending on which version of Unix you have, or which
you feel more comfortable with -- starting from scratch, use
ioctl() if you have it).
The simplest and most general is to go into 'raw' mode.  If
you are certain you'll never use a pre-V7 machine, 'cbreak'
mode might be better, depending on your intentions.
If you have System V, you can also set a minimum number of
chars before input returns, and/or a timeout.
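
On System V that looks roughly like this (an untested sketch using
<termio.h>; the variable names are mine):

#include <stdio.h>
#include <termio.h>

int main(void)
{
	struct termio save, tio;
	char c;

	ioctl(0, TCGETA, &save);	/* remember the current settings */
	tio = save;
	tio.c_lflag &= ~(ICANON | ECHO);/* leave canonical (line) mode */
	tio.c_cc[VMIN] = 1;		/* return once 1 char is available */
	tio.c_cc[VTIME] = 0;		/* and don't run any timer */
	ioctl(0, TCSETA, &tio);

	read(0, &c, 1);			/* comes back without a <CR> */
	printf("got '%c'\n", c);

	ioctl(0, TCSETA, &save);	/* restore before exiting */
	return 0;
}
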
-- 

	Joe Yao		hadron!jsdy@seismo.{CSS.GOV,ARPA,UUCP}
			jsdy@hadron.COM (not yet domainised)

ken@argus.UUCP (Kenneth Ng) (08/04/86)

In article <503@hadron.UUCP>, jsdy@hadron.UUCP (Joseph S. D. Yao) writes:
> In article <1222@inuxc.UUCP> lar@inuxc.UUCP (L Reid) writes:
> >Is there a way to input a character from stdin without requiring
> >the user to terminate it with a <CR>.
> 
> Several ways.  All involve doing an stty() or an ioctl()
> call (depending on which version of Unix you have, or which
> you feel more comfortable with -- starting from scratch, use
> ioctl() if you have it).
> The simplest and most general is to go into 'raw' mode.  If
> you are certain you'll never use a pre-V7 machine, 'cbreak'
> mode might be better, depending on your intentions.
> If you have System V, you can also set a minimum number of
> chars before input returns, and/or a timeout.

I've had a problem using this on an AT&T 3B5 running System V R2.
It seems that if you set the min character and timeout values,
the timeout does NOT occur until the process receives at least
ONE character.  The only way I've been able to get this to work
is to save the present alarm value, set an alarm for the timeout
I wanted (10 seconds), and test to see if the alarm blasted the
read out. 
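
Roughly like this (a sketch of the workaround; timed_read() is just a
name I made up, and it relies on System V signal semantics, where the
alarm interrupts the read instead of restarting it):

#include <signal.h>

static void wakeup(int sig)
{
	/* nothing to do: delivery alone interrupts the pending read() */
}

int timed_read(int fd, char *buf, int len, unsigned secs)
{
	int n;
	unsigned saved;

	signal(SIGALRM, wakeup);
	saved = alarm(secs);	/* save the present alarm value */
	n = read(fd, buf, len);	/* -1 with errno == EINTR on timeout */
	alarm(saved);		/* restore the previous alarm */
	return n;
}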

-- 
Kenneth Ng:
Post office: NJIT - CCCC, Newark New Jersey  07102
uucp(for a while) ihnp4!allegra!bellcore!argus!ken
           !psuvax1!cmcl2!ciap!andromeda!argus!ken
     ***   WARNING:  NOT ken@bellcore.uucp ***
bitnet(preferred) ken@njitcccc.bitnet or ken@orion.bitnet

Spock: "Captain, you are an excellent Starship Captain, but as
a taxi driver, you leave much to be desired."

Kirk: "What do you mean, 'if both survive' ?"
T'Pow: "This combat is to the death"

guy@sun.UUCP (Guy Harris) (08/07/86)

(Followups redirected to net.unix only, as this no longer has anything to do
with C I/O.)

> I've had a problem using this on an AT&T 3B5 running System V R2.
> It seems that if you set the min character and timeout values,
> the timeout does NOT occur until the process receives at least
> ONE character.

That is exactly what's supposed to happen; there's no problem.  The
"c_cc[VTIME]" value was NOT originally intended as a read timeout.  It was
intended to work with the "c_cc[VMIN]" value in a fashion similar to the way
that some terminal drivers handle input silos on some terminal multiplexers.

The intent is that if data is coming in at a high rate, you don't want to
process each character as it comes in; you want to wait until a reasonable
number of characters have come in and process them as a group.  (In the case
of the terminal driver servicing interrupts, this means you get one "input
present" interrupt for the entire group, rather than one for each character;
in the case of a user program reading from a terminal, this means you do one
"read" system call for the entire group, rather than one for each
character.)

However, if data is not coming in at a high rate, you don't want to wait for
more than some maximum length of time before processing the input;
otherwise, the response of the program to the input will be bursty.  If the
data rate is highly variable, you want the system to handle both periods of
high and low data rates without having to explicitly switch modes or tweak
some parameter.

In the terminal driver, this is done by setting the "silo alarm" level to
the size of the group, which means that an "input present" interrupt will
occur when at least that many characters are available.  A timer will also
call the "input present interrupt" routine periodically.  That routine will
drain the silo.

This does mean that the "input present interrupt" routine may be called if
no input is present, since the timer goes off whether the silo is empty or
not.  One way of solving this is to adjust the silo alarm level in response
to the most recent estimate of the input data rate; in periods of low data
rate, the silo alarm level will be set to 1 and the timer can be disabled,
since the "input present" interrupt will occur as soon as a character
arrives.

Another way of solving this is to have the timer be part of the terminal
multiplexer, and have it go off only if the silo is not empty.

The equivalent of the silo alarm level is the "c_cc[VMIN]" value, and the
equivalent of the timer is the "c_cc[VTIME]" value.  The S3/S5 terminal
driver chooses the equivalent of the second solution to the problem of
spurious "input present" indications.  In the case of "read"s from the
terminal, it is necessary that some way of blocking until at least one
character is available be provided.  Most programs do not want to repeatedly
poll the terminal until input is available; they want to be able to do a
"read" and get at least one character from every read.

The System III driver did not support the use of the "c_cc[VTIME]" value as a
timeout.  The System V driver does; my suspicion is that somebody read the
documentation, thought it *did* act as a timeout, and filed a bug report
when it didn't.  Somebody then went off and "fixed" this "bug".  If you want
a real timeout, so that the read will complete if any data comes in *or* if
some amount of time has passed since the "read" was issued (rather than
since a byte of data came in), you have to set "c_cc[VMIN]" to zero; in
that case, the "c_cc[VTIME]" value acts as a read timeout.
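
For example (untested):

#include <termio.h>

void set_read_timeout(int fd, int tenths)
{
	struct termio t;

	ioctl(fd, TCGETA, &t);
	t.c_lflag &= ~ICANON;
	t.c_cc[VMIN] = 0;		/* don't insist on any input... */
	t.c_cc[VTIME] = tenths;		/* ...just wait this long, at most */
	ioctl(fd, TCSETA, &t);
}

A subsequent "read" returns as soon as any data arrives, or returns 0
once the timer expires with nothing read.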

This is explained in painful detail in the System V Interface Definition and
in Appendix C of the IEEE 1003.1 Trial-Use Standard; one hopes the
paragraphs devoted to explaining this migrate into the regular manual pages.
-- 
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com (or guy@sun.arpa)

dan@rose3.UUCP (Dan Messinger) (08/07/86)

In article <423@argus.UUCP> ken@argus.UUCP (Kenneth Ng) writes:
>> If you have System V, you can also set a minimum number of
>> chars before input returns, and/or a timeout.
>
>I've had a problem using this on an AT&T 3b5 running Syst 5R2.
>It seems that if you set the min character and timeout value,
>the timeout does NOT occur until the process receives at least
>ONE character.

If you want to check to see if there is any input pending WITHOUT
waiting at all, then you should set min to 0.  If you are setting
min to zero and the read still waits, then your system is busted.

Also, something not mentioned in all the manuals is that the timeout
should not be set to 1.  Use 0 for no timeout, or a value of 2
or greater.  Setting a timeout of 1 is similar to doing a sleep(1).
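
E.g. (a sketch, assuming the standard <termio.h> interface):

#include <termio.h>

void set_poll_mode(int fd)
{
	struct termio t;

	ioctl(fd, TCGETA, &t);
	t.c_lflag &= ~ICANON;
	t.c_cc[VMIN] = 0;	/* don't wait for any characters */
	t.c_cc[VTIME] = 0;	/* and don't run a timer either */
	ioctl(fd, TCSETA, &t);
}

After that, a "read" returns immediately; a return value of 0 means
no input was pending.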

Dan Messinger
ihnp4!rosevax!rose3!dan

bzs@bu-cs.BU.EDU (Barry Shein) (08/09/86)

Re: termio

From dan@rose3
>If you want to check to see if there is any input pending WITHOUT
>waiting at all, then you should set min to 0.  If you are setting
>min to zero and the read still waits, then your system is busted.
>
>Also, something not mentioned in all the manuals is that the timeout
>should not be set to 1.  Use 0 for no timeout, or a value of 2
>or greater.  Setting a timeout of 1 is similar to doing a sleep(1).

And setting it to 0xff means wait forever.  And remember to set it all
back carefully, as VTIME and VMIN are also your VEOF and VEOL chars, so
you can't use both at once; not that you would want to, but a microbyte
saved is a microbuck earned...  And all this is void in months with an
'r' in them, except when time(0) returns a prime or the characters in
the buffer sum to a perfect square...

Hey, it's usable, but I doubt more and more as time goes on that
this re-write was any improvement over the V7 stuff.  Who *did*
design this "improvement"?

	-Barry Shein, Boston University