Basically you can't assume all TVs are consistently manufactured and calibrated, so the whole screen probably won't be visible.
Things which should be viewable, but where you can lose some edges? Those go in action safe.
And fullscan is the total area you need to worry about: most viewers can't see all of it, but some can.
This was not always followed, so sometimes they are visible.
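If you want to see roughly how those zones carve up a frame, here's a quick Python sketch. The 90%/80% figures are the classic SD rules of thumb, not anything from this thread, and the exact percentages varied by standard and era:

```python
# Rough sketch of the classic "safe area" math, assuming the old SD
# rules of thumb: action safe = center 90% of the raster,
# title safe = center 80%. Exact percentages varied over the years.

def safe_area(width, height, fraction):
    """Centered rectangle (x, y, w, h) covering `fraction` of the
    full raster in each dimension."""
    w = width * fraction
    h = height * fraction
    return ((width - w) / 2, (height - h) / 2, w, h)

full = (720, 480)                       # a typical SD raster
print("action safe:", safe_area(*full, 0.90))
print("title safe: ", safe_area(*full, 0.80))
```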
There's no smarts in there to say "ok hang on, I'll stop drawing pixels now while I reset the beam". NOPE, the signal has to go black for a while.
And all of these are parameters that can vary from TV to TV and as tubes (and other components) age and warm up.
So TV calibration was never exact.
Thus, overscan.
In the late CRT era we'd basically standardized on 4:3 rectangles for TVs/monitors, but there's no reason CRTs have to be that shape.
But imagine you've got a circular display: what does the overscan look like?
The tube doesn't "know" there's parts of the raster scan that it can't light up. It does anyway, you just can't see them.
What would that look like?
WELL, HOLD ON TO YOUR HATS
bunkerofdoom.com/crt10sp4/index…
CRTs light up because an electron beam excites a phosphor. It's not like an electron beam is a flashlight, making light anywhere it hits. No phosphor = no light.
So my guess is whatever process they used to coat the front of the tube (on the inside) with phosphor caused nearby areas to get coated too, and that wasn't considered a problem.
So you can position the beam anywhere on the screen, and you can adjust the intensity.
So the left-to-right oscillation is about 15 kilohertz.
It's switching a magnet on and off 15,000 times a second, and that causes vibration through magnetostriction: that's the high-pitched whine old TVs make.
So by the time you're 40, you probably can't hear 15 kHz anymore.
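If you're wondering where that ~15 kHz figure comes from, it falls straight out of the line counts and frame rates of the broadcast standards. Quick back-of-the-envelope with the standard NTSC/PAL numbers:

```python
# Horizontal scan rate = lines per frame * frames per second.

ntsc = 525 * (30000 / 1001)   # 525 lines at ~29.97 frames/sec
pal  = 625 * 25               # 625 lines at 25 frames/sec

print(f"NTSC horizontal rate: {ntsc:.0f} Hz")   # ~15734 Hz
print(f"PAL  horizontal rate: {pal:.0f} Hz")    # 15625 Hz
```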
So what if instead of having the display moving left-to-right, top-to-bottom, and just varying the intensity, you instead gave the computer full control over where it could move?
The only limiting factors are how fast the computer can tell it to move, how fast the magnetic fields can change, and the quality of the digital-to-analog converters in the system.
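Just to make "the computer has full control" concrete, here's a toy sketch of walking the beam along a line segment by feeding positions to a pair of X/Y DACs. The write_dac function and the 12-bit coordinate range are made up for illustration; real vector hardware differs:

```python
# Toy model of driving a vector display: step the beam from (x0, y0)
# to (x1, y1) by writing intermediate positions to X/Y DACs.
# write_dac() and the 0..4095 coordinate range are hypothetical.

def write_dac(x, y, intensity):
    # Stand-in for poking real DAC registers; here we just print.
    print(f"beam -> ({x:4d}, {y:4d}) intensity={intensity}")

def draw_segment(x0, y0, x1, y1, steps=16, intensity=255):
    """Move the beam in small increments so the phosphor traces a line."""
    for i in range(steps + 1):
        t = i / steps
        write_dac(round(x0 + (x1 - x0) * t),
                  round(y0 + (y1 - y0) * t),
                  intensity)
    write_dac(x1, y1, 0)   # beam off before jumping to the next segment

draw_segment(0, 0, 4095, 2048)   # one stroke across a 12-bit grid
```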
You can only draw so many points/lines before they start flickering, and it turns out this is because you're taking too long to redraw them.
There's two effects causing you to see a picture:
1. Phosphor persistence
2. Persistence of vision
Phosphor persistence is how long the phosphor keeps glowing after the beam moves on. You want it to be short so you don't get smearing, but not so short that you have to refresh faster than the signal can handle.
But they had a thing where they could force a clear by flashing it with a special color
This one's in your eyes & brain. If you've ever moved a sparkler around and seen a line instead of a bright dot, you know what this one is.
With a raster display, that refresh is going to happen at 30/60/70/whatever hertz whether you like it or not.
But vector displays work differently. Instead of the TV generating its own raster scan pattern and just doing it automatically forever, the pattern is under control of the computer.
It's an interesting trade-off and shows a completely different way to think about how you'd manage those systems.
If you take too long drawing, your framerate goes down, and it feels less smooth.
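Here's a back-of-the-envelope version of that trade-off: given a guess at how long one segment takes to trace, how many segments fit in a frame before the refresh rate drops into flicker territory. The 20 µs per segment and the 30 Hz flicker floor are illustrative numbers, not measurements of any real machine:

```python
# Rough vector-display budget: how many segments fit per refresh?

SEGMENT_TIME_US = 20     # guessed time to trace one segment
FLICKER_FLOOR_HZ = 30    # below this, flicker gets obvious (rough guess)

def max_segments(refresh_hz):
    frame_time_us = 1_000_000 / refresh_hz
    return int(frame_time_us // SEGMENT_TIME_US)

for hz in (60, 50, FLICKER_FLOOR_HZ):
    print(f"{hz:2d} Hz refresh -> about {max_segments(hz)} segments per frame")

# Draw more than the 30 Hz budget allows and the refresh rate slips
# under the flicker floor: the picture starts to shimmer.
```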
Phosphors are named after the element phosphorus (obviously).
But phosphorus is not a phosphor!
Phosphors are materials which exhibit phosphorescence or fluorescence, which are types of photoluminescence.
Basically they absorb photons and re-emit them.
With phosphorescence it's slower, and takes milliseconds.
This is why CRTs work: The electron beam hits the phosphors, they then emit visible photons and slowly fade back to black.
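If you want to play with that fade, a dead-simple model is an exponential decay. The 2 ms time constant here is invented just to show how little glow is left by the time a 60 Hz refresh comes back around; real phosphors range from microseconds to seconds:

```python
import math

# Crude exponential model of phosphor persistence (made-up numbers).

TIME_CONSTANT_S = 0.002   # how fast the glow fades (assumed)
REFRESH_S = 1 / 60        # time until the beam comes back at 60 Hz

def brightness(t, initial=1.0):
    """Fraction of the initial glow remaining after t seconds."""
    return initial * math.exp(-t / TIME_CONSTANT_S)

print(f"glow left at the next refresh: {brightness(REFRESH_S):.4%}")
# Longer persistence = more smearing; shorter = more flicker unless
# you refresh faster.
```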
Phosphorus instead has chemiluminescence: a chemical reaction that gives off photons (but without any photons needed to excite it).
The new twitter interface doesn't seem to let me untag people sometimes, or I would have.