[Japhy Riddle] was tired of creating pixel art. He went to subpixel art. The idea is that since each color pixel is composed of three subpixels, your display is actually three times as dense as you think it is. As long as you don’t care about the colors, of course.
Is it practical? No, although it is related to the Bayer filter algorithm and font antialiasing. You can also use subpixel manipulation to hide messages in plain sight.
[Japhy] shows how it all works using Photoshop, but you could do the same steps with anything that can do advanced image manipulation. Of course, you are assuming the subpixel mask is the same for any given device, but apparently they mostly are these days. You could modify the process to account for different masks.
Of course, since the subpixels are smaller, scaling has to change. In the end, you get a strange-looking image made up of tiny dots. Strange? Yes. Surreal? You bet. Useful? Well, tell us why you did it in the comments!
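If you’d rather script it than push pixels in Photoshop, the packing step amounts to slicing a grayscale image into horizontal triples and letting each sample drive one subpixel. Here’s a minimal sketch in Python using Pillow and NumPy, assuming an ordinary RGB-stripe panel (the file names are just placeholders):

```python
# Minimal subpixel-art packing sketch: every three horizontal grayscale
# samples become one RGB pixel, so each sample lights one subpixel on an
# RGB-stripe display. File names are placeholders.
from PIL import Image
import numpy as np

src = np.asarray(Image.open("art_gray.png").convert("L"))  # grayscale source
h, w = src.shape
w -= w % 3                           # width must be a multiple of 3

# Sample 0 of each triple drives red, sample 1 green, sample 2 blue.
packed = src[:, :w].reshape(h, w // 3, 3)

Image.fromarray(packed, mode="RGB").save("art_subpixel.png")
```

Viewed at 100% on a panel with that geometry, each gray sample lights exactly one subpixel; on anything else you just get colorful noise.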
Pixel art isn’t just for CRTs. However, subpixel art assumes that the pixels can be divided up, which is not always the case.
This would be nice for anyone looking at photographs on the typical computer monitor. What you generally get is about 100 DPI resolution, pixels per inch really, out of the normal consumer displays. All the higher resolution monitors do is increase in physical size, not actual resolving power. If you don’t want to pay thousands of dollars, the best you’ll get is around 150 pixels per inch, and that’s a rare deal that often comes with other drawbacks. Sub-pixel rendering offers you 300-450 PPI at least in one direction, which is much better. It’s approaching proper print resolutions.
So, if you could “cheat” more spatial resolution out of a monitor by rendering your pictures with sub-pixel accuracy, albeit at the expense of color accuracy, you could look at your photographs and see things like where your photo was out of focus. Normally it just won’t show up.
I realize this moment may not be the most convenient for discussing such issues, but I had to do it one way or another. There was a time they cared nothing for our eyesight. When their only experience of humanity was an electron coming at them at speed and ending his miserable existence smashing into a leaded glass screen. When I flushed my loo last night, I acted in the face of objections that it was a mere poop and of no practical use to anyone. I have learned to ignore such naysayers when quarrelling with them was out of the question.
I’m not so sure; a regular 4K 27″ is 163 PPI (granted, close to 150).
You can get 27″ 5K displays with 218 PPI (5120×2880 pixels) for $800, like the ProArt PA27JCV.
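For what it’s worth, those numbers fall straight out of the panel geometry; a quick illustrative check for 16:9 panels (Python, nothing display-specific assumed):

```python
# Back-of-the-envelope PPI check for the figures quoted above.
import math

def ppi(w_px, h_px, diag_in):
    """Pixels per inch of a panel from its resolution and diagonal size."""
    return math.hypot(w_px, h_px) / diag_in

print(round(ppi(3840, 2160, 27)))  # ~163 PPI for a 27-inch 4K panel
print(round(ppi(5120, 2880, 27)))  # ~218 PPI for a 27-inch 5K panel
```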
That “albeit at the expense of color accuracy” is doing a lot of heavy lifting.
Human eyes are abominable at seeing detail in blue. As a result, you’re definitely not getting 3x resolution horizontally even with maximum chroma error.
And the amount of blur needed to make the chroma error imperceptible is worse than just doing grayscale rendering in the first place.
The netpbm “pgm” and “ppm” formats can be easily coerced into displaying a grayscale image as RGB subpixels (change the width to 1/3 of the original and the format from P5 to P6). Results aren’t particularly great. https://imgur.com/a/auZRvQE
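If anyone wants to reproduce that without a hex editor, it really is just a header rewrite; a rough Python sketch, assuming a binary P5 file with no header comments and a width divisible by three (file names made up):

```python
# Reinterpret a binary PGM (P5) as a PPM (P6) a third as wide: the pixel
# bytes are untouched, so every three gray samples become one RGB pixel.
import re

with open("photo.pgm", "rb") as f:
    data = f.read()

# Parse "P5 <width> <height> <maxval>" followed by the raw samples.
# (Assumes no '#' comments in the header.)
m = re.match(rb"P5\s+(\d+)\s+(\d+)\s+(\d+)\s", data)
width, height, maxval = (int(x) for x in m.groups())
pixels = data[m.end():]

header = f"P6\n{width // 3} {height}\n{maxval}\n".encode()
with open("photo_subpixel.ppm", "wb") as f:
    f.write(header + pixels)
```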
I see nothing wrong with that image. Even the chroma error they mention is nowhere to be found.
If all you care for is pixel density, you could get a MacBook Pro with 250 ppi.
The problem is that it’s entirely dependent on having the same type of pixel geometry which varies based on what kind of display you have. https://en.wikipedia.org/wiki/Pixel_geometry
Stick to pixel art and leave the subpixel rendering to rendering libraries.
“You can also use subpixel manipulation to hide messages in plain sight.”
Read HaD…a lot!
A game called Drol in 1983 used a similar technique to generate colour images using a monochrome display mode. It only worked on NTSC CRTs, though.
This is actually used in the anti-aliasing of modern browsers. If you take a screenshot of text, black on white, on a high enough resolution screen, and zoom way in, you’ll see that the left side is often blue while the right is often red. It makes for sharper text, and you can’t see it outside of a zoomed-in screenshot.
Yes, I can see it, and please stop telling me that the chroma error is imperceptible. Either the font renderer blurred it so much that they should have used grayscale rendering in the first place, or there are nasty colored fringes on vertical details.
I must say that I agree 100% with this statement. Some people have different color perception than others, and it tends to bother me looking at sub-pixel anti-aliasing on a display that actually has good color reproduction. This and the gray text nonsense…text on a light background should be rgb(0,0,0). Combine that with the blurring resulting from the anti-aliasing and you have a recipe for eyestrain.
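For anyone curious what’s actually happening under the hood: the usual recipe is to rasterize the glyph at triple horizontal resolution, low-pass across neighboring subpixels to tame the color fringes (that’s the blur being argued about above), and then hand each consecutive R/G/B triple to one physical pixel. A hand-wavy sketch, assuming an RGB-stripe panel; the 1-2-3-2-1 filter is a common fringe-reduction choice, not any particular browser’s exact kernel:

```python
# ClearType-style subpixel AA sketch for black text on a white background.
import numpy as np

def subpixel_aa(coverage3x):
    """coverage3x: (h, 3*w) float array of glyph coverage in 0..1,
    rasterized at triple horizontal resolution."""
    h, w3 = coverage3x.shape
    # Low-pass across subpixels so sharp edges don't turn into pure color.
    taps = np.array([1, 2, 3, 2, 1], dtype=float) / 9.0
    padded = np.pad(coverage3x, ((0, 0), (2, 2)), mode="edge")
    filtered = sum(t * padded[:, i:i + w3] for i, t in enumerate(taps))
    # Each consecutive (R, G, B) triple drives one on-screen pixel.
    alpha = filtered.reshape(h, w3 // 3, 3)
    # Black-on-white: the more coverage, the darker that subpixel.
    return (255 * (1.0 - alpha)).astype(np.uint8)
```

The trade-off this thread is arguing about lives in that filter: widen it and you approach plain grayscale AA, narrow it and the fringes come back.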
Human eyes are very sensitive to green, so there is a lot of spatial resolution carried by the green subpixel. Even the red and blue combined do not convey as much detail, so at best you are only going to get <2x the resolution with sub-pixel rendering.
Back in the day, broadcast TV cameras used a clever method to improve horizontal resolution in the luminance signal: deliberately offset the registration of the green sensor from the red and blue by half a “pixel” (in reality the spatial offset was determined by the signal bandwidth of the sensor and processing electronics). This works because there is less chroma bandwidth (resolution) in the signal and the offset becomes invisible.
You might be able to get away with sensible sub-pixel rendering if the display is organized like a Bayer filter, and many AMOLED displays are indeed structured this way. The issue there is the green sub-pixels wind up being so much smaller than red and blue because we are so sensitive to green light. At some point it probably makes more sense to simply treat the R-G and B-G pairs as full pixels that are missing a color channel and use 4:2:2 YUV chroma sub-sampling to make up for it.
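If it helps make that concrete, here’s an illustrative 4:2:2 sketch: luma is kept at full resolution (and note how heavily green is weighted, which is the point made above), while each horizontal pair of pixels shares one chroma sample. BT.601 weights are used purely for illustration; nothing about any particular display is assumed:

```python
# Illustrative YUV 4:2:2 chroma subsampling using BT.601 weights.
import numpy as np

def subsample_422(rgb):
    """rgb: (h, w, 3) float array in 0..1, with w even."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b   # luma: mostly green
    cb = -0.169 * r - 0.331 * g + 0.500 * b   # blue-difference chroma
    cr =  0.500 * r - 0.419 * g - 0.081 * b   # red-difference chroma
    # Average chroma over horizontal pairs (the "2"s in 4:2:2), then
    # repeat so both pixels in a pair share the same chroma sample.
    cb = np.repeat((cb[:, ::2] + cb[:, 1::2]) / 2, 2, axis=1)
    cr = np.repeat((cr[:, ::2] + cr[:, 1::2]) / 2, 2, axis=1)
    return y, cb, cr
```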
Font rendering in Google Chrome (and Electron “apps”) is FUBAR. Firefox got it right, and it looks like the rest of the ClearType text in Windows.
I went down the font rendering rabbit hole….once.
It really is a mess. The typeface geeks (not an insult) understandably want the characters to render a specific way, with nuances that make the text distinctive and flow well with good kerning. Unfortunately computer displays have had woefully inadequate resolution to achieve this, so a hack was to use font hinting to — as one person put it — “beat the letters into submission and force them to align to a grid” (or something like that). This was done by hand with bitmapped fonts, which are fine for user interfaces, but don’t work well with WYSIWYG applications.
The next step was to include executable code in with the font file itself that would hopefully allow for dynamic sizing without having to generate bitmaps for every possible size it might be rendered at. Now that high-DPI displays are gradually becoming more common, it seems the use of strong hinting is discouraged. For those of us using mundane 96DPI computer monitors this is then a step backwards.
The sad truth is there is a LOT of software still in use that has been written with a 96DPI display concept hard-coded into the program, and indeed many graphical software APIs used the concept of real display pixels when defining the size of widgets in the UI. In my experience it is almost impossible to gracefully scale and resize the elements without breaking the layout.
The general trend of user software turning into “apps” (e.g. electron) abstracts a lot of these graphical rendering details away, but this also carries a huge performance penalty just to perform basic tasks – suddenly, to run a simple program (“Hello World!”), I have to load a full-fat Web browser (with full-fat Web browser security problems). In extreme cases the application window gets created, but has no name (or a generic name). Then after some time goes by, it finally assumes the name of the “program” that I wished to run. This suggests that there is a lot of churning that takes place before it even processes the first statement relevant to what it was intended for.
(rant mode off) :-D
Reminds me of Nintendo headgear
Everything old is new again. Many of us did the same sort of thing on the Apple II computers where the high-res graphics had 280 pixels across with quirky color, but if you displayed to a B&W monitor, you could get 560 pixels of horizontal resolution. Perfect for B&W applications like desktop publishing software (that looked like rubbish on a color display)