rubin@mead.UUCP (Daniel Rubin) (08/24/90)
I have a couple of technical questions. How does a computer generate the
video representation of screen memory? How does a computer update the
memory that is used for the screen?

I was thinking and I got stuck when I thought about a 25 MHz (fast!)
processor. It seems to me that the most memory this processor can update
at 60 Hz, or 60 times a second (which, if I am not mistaken, is the speed
at which a television redraws its picture; monitors are even faster), is
only 0.417 megabytes???

Is this correct - the most graphical data that you can manipulate is less
than half a megabyte at the normal speeds of a typical television when you
are using a 25 MHz processor dedicated to graphics? Is there something
that I am missing - perhaps there is some sort of co-processor or
something. Anybody out there who knows something about microprocessors
etc., please help me out...

Thanks in advance - Dan Rubin
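[The back-of-envelope figure in the question can be checked directly. This is a sketch under the question's own idealized assumption - one byte written to screen memory per clock cycle, with no overhead for instruction fetch or anything else:]

```python
# Check of the figure in the question above.  Assumes the idealized
# case of one byte written per clock cycle (no instruction-fetch or
# other overhead) -- an upper bound, not a real machine's throughput.
clock_hz = 25_000_000    # 25 MHz processor
frame_rate = 60          # screen redraws per second

bytes_per_frame = clock_hz / frame_rate
megabytes_per_frame = bytes_per_frame / 1_000_000

print(f"{megabytes_per_frame:.3f} MB per frame")   # about 0.417 MB
```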
midkiff@portia.Stanford.EDU (Neil Midkiff) (08/24/90)
In article <1211@meaddata.mead.UUCP> mead!rubin@uccba.uc.edu writes:
>How does a computer update the memory that is used for the screen?
>
>I was thinking and I got stuck when I thought about a 25 MHz (fast!)
>processor. It seems to me that the most memory this processor can
>update at 60 Hz is only 0.417 megabytes???
>
>Is this correct - the most graphical data that you can manipulate is less
>than half a megabyte at the normal speeds of a typical television when you
>are using a 25 MHz processor dedicated to graphics? Is there something
>that I am missing - perhaps there is some sort of co-processor or
>something.

You're thinking along the right lines, but the "coprocessor" is really
just some dedicated logic on the video board. The memory associated with
the video display is usually arranged so that the computer's processor can
write into it (and maybe read out of it) whenever NEW data is put
on-screen, while, more or less independently, the video board continually
reads the data out 60 or more times a second, in a pattern timed to
synchronize with the monitor's sweeping electron beam. Since this pattern
is basically repetitive reading, you don't need a processor that can do
computations and stuff. There are complications I'm not mentioning (like
preventing flicker when both the processor and the video board want to
access the same memory location at once), but this is the general idea.

In most instances, very little new data is written to the screen during
each 1/60th-second update, so the main processor doesn't spend most of its
time writing to video memory. Of course, if you want to do "full
animation," with a completely new picture on each screen, then the
calculations you made do apply, and you find that you need a VERY fast
processor just to get the data written to the screen (let alone to
generate the pictures in the first place). This tells you, among other
things, why fully digital TV is not yet commercially viable.
End of basic lesson...let me know if you want more detailed information. -Neil
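[The arrangement Neil describes - the CPU touching only the pixels that change, while the video circuitry repeatedly reads the whole buffer in raster order - can be sketched in software. All names here are hypothetical; real scan-out happens in dedicated logic, not in code:]

```python
# Toy model of a shared frame buffer.  WIDTH/HEIGHT and the function
# names are illustrative; real video hardware does the scan-out in
# dedicated circuitry synchronized to the electron beam.
WIDTH, HEIGHT = 320, 200
framebuffer = [0] * (WIDTH * HEIGHT)    # shared pixel memory

def cpu_draw_pixel(x, y, value):
    """The main processor writes only the pixels that change."""
    framebuffer[y * WIDTH + x] = value

def scan_out():
    """The video board's side: read the whole buffer in raster order
    (left to right, top to bottom), 60+ times a second.  It only
    reads -- it never computes."""
    frame = []
    for y in range(HEIGHT):
        for x in range(WIDTH):
            frame.append(framebuffer[y * WIDTH + x])
    return frame

cpu_draw_pixel(10, 5, 255)     # CPU updates one location...
frame = scan_out()             # ...and the next sweep picks it up
print(frame[5 * WIDTH + 10])   # 255
```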
bayes@hpislx.HP.COM (Scott Bayes) (08/28/90)
> I have a couple of technical questions.
>
> How does a computer generate the video representation of screen memory?
>
> How does a computer update the memory that is used for the screen?
>
> I was thinking and I got stuck when I thought about a 25 MHz (fast!)
> processor. It seems to me that the most memory this processor can
> update at 60 Hz, or 60 times a second (which, if I am not mistaken, is
> the speed at which a television redraws its picture; monitors are even
> faster), is only 0.417 megabytes???
>
> Is this correct - the most graphical data that you can manipulate is
> less than half a megabyte at the normal speeds of a typical television
> when you are using a 25 MHz processor dedicated to graphics? Is there
> something that I am missing - perhaps there is some sort of co-processor
> or something. Anybody out there who knows something about
> microprocessors etc., please help me out...
>
> Thanks in advance - Dan Rubin

An excellent set of questions. I'll assume that you are concerned with the
process of actually getting the numeric pixel values to be visible on a
screen, and not with doing real-time animation at frame rate (for which
your estimate is probably reasonably valid).

The answers, for most of the microcomputers we deal with these days, are
that:

1) A dedicated piece of circuitry scans through a special piece of RAM
called the "frame buffer," turning numeric values into displayed pixels. A
digital-to-analog converter (D-to-A) converts the numbers into CRT driving
voltages, which control the intensity of the flying spot. All this happens
without the CPU's connivance, once things are set up properly and the
appropriate pixel data has been loaded into the frame buffer.

2) The computer has a separate "port" (access path) to the frame buffer
memory (which is often called "dual-ported" because of this), through
which it writes or reads pixel values almost simultaneously with the
dedicated display hardware's accesses.
3) The display hardware, being dedicated, can render pixels far faster
than the typical CPU could. An added bonus is that you don't waste
precious CPU cycles updating a CRT rendering.

Note that some older machines (e.g. the HP 9830) actually did have the CPU
continually refresh the display. You could either display or compute on
the '30, but not do both at once. Your estimate would probably be close to
the mark were that older type of architecture in effect today. Many
displays use architectures other than the "shared address space via
dual-ported RAM" we are used to seeing.

Scott Bayes
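[The D-to-A step in point 1 can be sketched as a simple mapping from pixel value to beam voltage. The 0.7 V full-scale figure is the common analog video white level - treat it as an illustrative assumption, not the spec of any particular board:]

```python
# Minimal sketch of the D-to-A step: an n-bit pixel value is scaled
# to a CRT driving voltage.  The 0.7 V full scale is an assumed,
# typical analog video level, not a fixed spec.
FULL_SCALE_VOLTS = 0.7

def dac(pixel_value, bits=8):
    """Map an n-bit pixel value to a beam-driving voltage."""
    return pixel_value / (2**bits - 1) * FULL_SCALE_VOLTS

print(dac(0))      # 0.0 -> beam off (black)
print(dac(255))    # 0.7 -> full intensity (white)
```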
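[A rough budget shows why the CPU-refresh architecture Scott mentions left no time to compute. The resolution and the one-byte-per-cycle cost below are illustrative assumptions, not HP 9830 specifics:]

```python
# Rough budget for a CPU that must refresh the display itself.
# 640x480 at one byte per pixel and one byte per cycle are assumed
# for illustration -- not the parameters of any particular machine.
clock_hz = 25_000_000
frame_rate = 60
width, height = 640, 480

refresh_bytes_per_sec = width * height * frame_rate
fraction_of_cpu = refresh_bytes_per_sec / clock_hz

print(f"{fraction_of_cpu:.0%} of CPU cycles spent just on refresh")
```

Even under these generous assumptions, refresh alone eats roughly three quarters of the machine - hence "display or compute, but not both."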