The word raster comes from the Latin word for rake and has its origins in Cathode Ray Tube (CRT) displays. A CRT works by shooting a narrow beam of electrons emitted from a cathode through an evacuated tube at a thin coating of phosphors; the phosphors then release the energy delivered by the electrons as visible light. Early CRTs, such as those in oscilloscopes and in vector video game hardware like that used in the 1979 Asteroids arcade game, made patterns by directing the electron beam to trace out each shape to be displayed. Televisions, and subsequently most computer monitors, instead raked the electron beam across the entire screen in a repeating pattern of parallel horizontal lines, varying the intensity of the beam to create images.

An example of how raking or rastering a line in rows across a display while varying its intensity (here shown as width) can create shapes.

As memory became cheaper and other forms of displays became popular, raster shifted meaning from the raking pattern of an electron beam in a CRT to any grid of pixels. There remains an implication that the raster is stored in memory in the same order as the raster scan: the top row first, left to right, then the next row also left to right, and so on to the bottom of the screen. I’ve heard that this order was chosen because it matches the reading order of the languages spoken by its inventors, though I haven’t been able to find firm backing for that supposition.
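
As a concrete sketch of that convention, here is a toy row-major raster in Python; the names width, set_pixel, and pixel_at are mine, not any particular API’s:

```python
# A raster stored in raster-scan order: the top row first, each row left to
# right, so the flat index of the pixel at column x of row y is y * width + x.
width, height = 4, 3
raster = [0] * (width * height)   # one grayscale value per pixel

def set_pixel(x, y, value):
    raster[y * width + x] = value

def pixel_at(x, y):
    return raster[y * width + x]

set_pixel(2, 1, 255)      # third pixel of the second row from the top
print(pixel_at(2, 1))     # 255
print(raster.index(255))  # 6, i.e. 1 * width + 2
```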

Given that raster generally means a grid of pixels, a rasterization of a scene assigns a single color to each pixel. Rasterizations are the principal output of canvas-style APIs, which themselves are the backing for almost all other graphics APIs.

Pixel is a shortened form of picture element. A pixel can be thought of in several ways, but the two most common are as a square (making the raster a tiling of adjacent pixels) or as a mathematical point (making the raster a void with points distributed evenly across it). These two are not equivalent, and there are pros and cons to each.

Treating pixels as mathematical points creates aliasing, where the shape of the grid interacts with the shapes in the scene to create patterns that distract the eye. The most common aliasing effect is stair-step edges, which make smooth shapes appear jagged. Much more significant, however, are scene objects that are narrower than a pixel: they can effectively hide between the points, vanishing entirely from the rasterization.

Two examples of point-like pixels causing aliasing. The outlines are the intended shapes. The circles show the pixel locations. The colored regions are the shapes the eye sees.
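
As a sketch of how an object can vanish, consider a made-up scene containing only a bar about a third of a pixel wide; the names here are illustrative, not from any API:

```python
def in_scene(x, y):
    """A scene containing only a thin vertical bar between x = 1.6 and x = 1.9
    (about a third of a pixel wide), on an empty background."""
    return 1.6 <= x <= 1.9

# Point-like pixels sample the scene at each pixel's center, (px + 0.5, py + 0.5).
raster = [[in_scene(px + 0.5, py + 0.5) for px in range(4)] for py in range(3)]
print(raster)  # every entry is False: the bar falls between the sample points
               # and vanishes from the rasterization entirely
```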

Treating pixels as square regions removes the worst kinds of aliasing: stair-stepped edges and thin scene objects instead look a bit blurred, but the blur is less than a pixel wide and generally does not distract the eye.

An example of an oblique one-pixel-wide rectangle rendered with square pixels. The darkness of each pixel matches how much of its area overlaps with the rectangle.
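
The overlap area is easy to compute exactly for an axis-aligned rectangle, which is enough to sketch the idea; with this toy coverage helper (my own, not a standard routine), the same thin bar that vanished under point sampling now merely dims:

```python
def coverage(px, py, x0, y0, x1, y1):
    """Fraction of the square pixel [px, px+1] x [py, py+1] covered by the
    axis-aligned rectangle [x0, x1] x [y0, y1]."""
    dx = max(0.0, min(px + 1, x1) - max(px, x0))
    dy = max(0.0, min(py + 1, y1) - max(py, y0))
    return dx * dy

# The same 0.3-pixel-wide bar as before, spanning rows 0 through 2:
for py in range(3):
    print([round(coverage(px, py, 1.6, 0.0, 1.9, 3.0), 2) for px in range(4)])
# Each row prints [0.0, 0.3, 0.0, 0.0]: the bar covers 30% of a pixel wherever
# it passes, so it blurs slightly instead of disappearing.
```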

While square-like pixels have less aliasing than point-like pixels, they still have some, particularly when used to display repeating patterns that are sized close to, but not the same as, the pixels themselves. Some examples of this can be seen using the following interactive 1D pattern resizer.

Adjust the slider to observe 1D aliasing caused by a striped ground truth with area-like (top) and point-like (bottom) pixels. Note that when the ground-truth bars are 2 or more pixels wide both patterns look fairly good. Between 2 and 1, the point-like pixels show thick and thin stripes, while the area-like pixels sometimes look right and other times have spurious gray bands. Below 1, both show a variety of false patterns not present in the original, with more extreme patterns for the point-like pixels.

Thin red lines show the boundaries between area-like pixels (top) and the sampling points of point-like pixels (bottom).
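
The same comparison can be sketched in a few lines of code. This is my own toy stand-in for the slider above, with bars 1.3 pixels wide and the area average approximated by dense sub-sampling:

```python
def stripes(x, period):
    """Ground truth: a 1D pattern of equal black (0.0) and white (1.0) bars,
    each bar period / 2 wide."""
    return 0.0 if (x / period) % 1.0 < 0.5 else 1.0

def point_pixels(period, count):
    # Point-like pixels: read the pattern at each pixel's center.
    return [stripes(i + 0.5, period) for i in range(count)]

def area_pixels(period, count, steps=1000):
    # Area-like pixels: average the pattern over each pixel's width
    # (approximated here by many evenly spaced sub-samples).
    return [sum(stripes(i + (k + 0.5) / steps, period) for k in range(steps)) / steps
            for i in range(count)]

period = 2.6   # bars 1.3 pixels wide
print(point_pixels(period, 12))                        # equal bars come out 1 or 2 pixels wide
print([round(v, 2) for v in area_pixels(period, 12)])  # light and dark bars with spurious grays
                                                       # where a bar straddles a pixel boundary
```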

In addition to still having some aliasing, square-pixel approaches add a problem that point-like pixels do not have: many scenes cannot be correctly rendered one piece at a time.

To see this problem, consider a scene containing a white background and two half-pixel-sized black rectangles, both within the same pixel. If those two black rectangles are side by side, together covering the full pixel, then the pixel should be black. If they are fully overlapping, both covering the same part of the pixel, then the pixel should be a gray half-way between black and white. If they are partly overlapping, the pixel should be a darker gray. But if we render them one at a time, the first will work fine: we’ll add half a pixel of black to a white pixel and get a 50/50 gray. The second rectangle then adds half a pixel of black to a gray pixel, giving a darker 25/75 gray. That could be the right result, but it likely isn’t, and the only way to know is to check not just the rasterization of the scene so far but the geometry of the objects that make up the scene.

A few ways a half-pixel-sized blue shape could overlap with a half-red half-white square pixel region, and the correct resulting color for each.
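
Here is the same failure in numbers, using a toy coverage-weighted blend rather than any particular API’s compositing rule:

```python
def draw_over(pixel, coverage, color=0.0):
    """Blend a shape of the given color into a pixel, weighting by the fraction
    of the pixel's area the shape covers."""
    return coverage * color + (1.0 - coverage) * pixel

pixel = 1.0                     # white background
pixel = draw_over(pixel, 0.5)   # first half-pixel black rectangle -> 0.5
pixel = draw_over(pixel, 0.5)   # second one -> 0.25, regardless of where it sits
print(pixel)

# The correct answer depends on geometry the stored pixel value no longer knows:
#   side by side, covering the whole pixel -> 0.0 (black)
#   exactly overlapping                    -> 0.5 (the 50/50 gray)
#   partly overlapping                     -> somewhere in between
```

The triangle seam described next is the same arithmetic: two half-coverings composite to three-quarters covered instead of fully covered.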

If we use area-like pixels to draw a quadrilateral by splitting it into two triangles sharing an edge and drawing each individually, pixels along the shared edge will be half-filled by the first triangle and then half-filled again by the second, resulting in only three-fourths-filled pixels along that edge. As a result, area-like pixels will show a seam between the two triangles.

Two touching black triangles rendered in anti-aliased mode. Note the pixel-width blurred edges that prevent stair-step aliasing and the visible boundary line even though there is no gap between the two triangles.

By contrast, point-like pixels don’t have this problem. Points don’t have dimensions, so nothing can cover half of a point. Point-like pixels do tend to have strong aliasing artefacts, but they also let us render a scene one object at a time. One-at-a-time rendering lets us use a very simple API, one simple enough to encode in hardware, and lets us process each object in the scene in parallel on a different processor, possibly even slicing the objects into smaller pieces for more parallelism, without any change in the resulting render. That simplicity and robustness have fueled the development of dedicated graphics hardware and made point-like pixels the dominant assumption in canvas APIs today.

But what about the aliasing? Canvas APIs generally offer various anti-aliased modes that implement (some approximation of) square-like pixels without changing the simple point-like API design. If these operate on whole pixels they don’t work well for 3D graphics, but they can also be designed to operate on sub-pixel samples. Multisampling is often used to render the entire scene with point-like samples at a higher resolution than the final display and then average groups of samples into the displayed pixels.
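
Here is a minimal sketch of that sample-and-average idea, assuming a toy scene of a single triangle tested with edge functions; everything in it (the triangle, the sample layout, the function names) is illustrative rather than how any particular GPU does it:

```python
def inside_triangle(x, y):
    """Scene: one triangle with vertices (0, 0), (4, 0), (0, 3), tested by
    checking the sign of each edge function."""
    def edge(ax, ay, bx, by):
        return (bx - ax) * (y - ay) - (by - ay) * (x - ax)
    return edge(0, 0, 4, 0) >= 0 and edge(4, 0, 0, 3) >= 0 and edge(0, 3, 0, 0) >= 0

def render(width, height, n):
    """Point-sample the scene at n x n evenly spaced positions inside each
    pixel, then average those samples into the pixel's displayed value."""
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            hits = sum(inside_triangle(px + (i + 0.5) / n, py + (j + 0.5) / n)
                       for j in range(n) for i in range(n))
            row.append(hits / (n * n))
        image.append(row)
    return image

# n = 1 is plain point-like rendering; larger n approaches area coverage.
for row in render(5, 4, 4):
    print(['%.2f' % v for v in row])
```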

Because APIs designed for point-like pixels (or point-like multisamples) can operate by creating a rasterization of each element of the scene independently, it is common to refer to all systems that implement this approach simply as rasterization and to use a more specific term (raytracing, subdivision, etc.) for every other method of filling a raster with a representation of a scene. In some situations, point-like pixel APIs are instead named after their most popular algorithms, such as scan conversion, Bresenham, or DDA.

There are several API and algorithm designs that handle square-like pixels correctly; raytracing is by far the most popular, albeit only for 3D graphics.