I can understand why CRTs need a refresh rate, and I think I understand why LCDs have one: everything before them was made for display on a CRT, right? I realize I could be talking out of my a$$ here, and this may not even be the right place for it, but it's something that's been on my mind for a while. Feel free to school me if you've got the knowledge to refute any of this.

Can't we get rid of the refresh-rate / scan-line concept and just have every pixel on a display change only when it's supposed to? If a pixel is red, it stays red until it's told to be blue, or off, or whatever. And instead of updating a line at a time, can't we update individual pixels anywhere on the screen at any time? I realize this wouldn't necessarily work with existing graphics technologies, but maybe something built from the ground up could eventually accomplish it.

We already compress video in a way that's analogous, I think: each frame is compared to the last and only the changes are recorded. So in this concept, the "refresh rate" would be determined solely by A) how fast the rendering hardware can deliver frames (which could perhaps carry only the information that differs from the previous frame), and B) how fast the display's pixels can switch.

I just don't see why, on a modern display, you would need to do it one scan line at a time. I assume this is entirely down to backward compatibility. Anyone care to add something? Thanks for reading.
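Just to make the frame-differencing analogy concrete, here's a rough sketch of what I mean. This is a toy in Python with made-up names (Frame, diff_frames, apply_updates), not how any real display or codec works: compare the new frame to the previous one, record only the pixels that changed, and poke just those pixels while everything else keeps holding its last value.

```python
# Toy sketch of the "only send the pixels that changed" idea.
# Frames are plain nested lists purely for illustration; real
# hardware would use something far more compact.

from typing import List, Tuple

Color = Tuple[int, int, int]          # (R, G, B)
Frame = List[List[Color]]             # frame[y][x] -> color
PixelUpdate = Tuple[int, int, Color]  # (x, y, new color)

def diff_frames(prev: Frame, new: Frame) -> List[PixelUpdate]:
    """Return only the pixels that differ between two frames,
    analogous to the inter-frame (delta) step in video compression."""
    updates = []
    for y, (prev_row, new_row) in enumerate(zip(prev, new)):
        for x, (old_px, new_px) in enumerate(zip(prev_row, new_row)):
            if old_px != new_px:
                updates.append((x, y, new_px))
    return updates

def apply_updates(display: Frame, updates: List[PixelUpdate]) -> None:
    """Poke only the changed pixels; every other pixel just keeps
    its last value. No scan lines, no full-frame refresh."""
    for x, y, color in updates:
        display[y][x] = color

if __name__ == "__main__":
    # Example: a 2x2 "display" where only the top-left pixel changes.
    red, blue, black = (255, 0, 0), (0, 0, 255), (0, 0, 0)
    previous = [[red, black], [black, black]]
    current  = [[blue, black], [black, black]]
    changes = diff_frames(previous, current)
    print(changes)            # [(0, 0, (0, 0, 255))] -- one changed pixel
    apply_updates(previous, changes)
```

In this picture, the only limits on how often anything updates would be how fast the renderer can produce those change lists and how fast the pixels themselves can switch.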