WebGL / Tutorials / Concepts

This is an introduction to some core concepts. If you are already familiar with things like shaders and the graphics pipeline, feel free to skip ahead to Tutorial #1: Triangle.

Basic WebGL Concepts

A WebGL program is set up and initiated from JavaScript code. Its input is a list of 3D coordinates called vertices. Vertices can be drawn as individual points, lines between pairs of vertices, or triangles between groups of three vertices. Multiple triangles are connected to produce 3D surfaces.
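As a loose sketch of the "groups of three" idea (not WebGL API code): in WebGL's basic triangle mode, each consecutive group of three vertices in the input list forms one triangle. The function name `groupIntoTriangles` is mine, for illustration only.

```javascript
// Group a flat list of vertices into triangles: each consecutive group of
// three vertices forms one triangle (this mirrors WebGL's gl.TRIANGLES mode).
function groupIntoTriangles(vertices) {
  const triangles = [];
  for (let i = 0; i + 2 < vertices.length; i += 3) {
    triangles.push([vertices[i], vertices[i + 1], vertices[i + 2]]);
  }
  return triangles;
}

// Six vertices make two triangles, e.g. the two halves of a rectangle.
const tris = groupIntoTriangles(["a", "b", "c", "d", "e", "f"]);
```

Other modes interpret the same list differently (for example, reusing vertices between adjacent triangles), but the grouping idea is the same.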

These input vertices are processed in a series of steps called a graphics pipeline that draws pixels on a 2D screen. WebGL has two places in the graphics pipeline where we can insert a custom algorithm called a shader. The following diagram shows the pipeline at a high level. The steps are described below.

JavaScript
↓input vertices (3D)
Vertex Shader
↓transformed vertices (3D)
Rasterizer
↓pixel coordinates (2D)
Fragment Shader
↓color of each pixel
Screen

First, the vertex shader runs. It takes the input vertices and transforms them. It can move, rotate, or deform objects (for example, to apply perspective effects). Its job is to make sure everything that should be visible on the screen is visible and oriented correctly. Only vertices with x, y, and z coordinates between -1 and 1 will be visible.
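As a preview of what a vertex shader looks like, here is a minimal GLSL ES 3 example that applies no transformation at all, passing each vertex through unchanged. In WebGL the shader source is held in a JavaScript string and compiled at runtime; the input name `position` is my own choice, not built into WebGL.

```javascript
// A minimal GLSL ES 3 vertex shader, held in a JavaScript string the way
// it would be before compilation. It runs once per input vertex.
const vertexShaderSource = `#version 300 es
in vec4 position;         // one input vertex per invocation

void main() {
  gl_Position = position; // output the vertex unchanged
}`;
```

A real vertex shader would typically multiply `position` by one or more matrices here to move, rotate, or project it.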

The transformed vertices are then sent to the rasterizer. The vertices are interpreted as triangles (or lines or points if configured to do so) and projected onto a 2D grid via a process called rasterization. It maps the 3D vertex data to pixel coordinates. This step of the pipeline is automatic and can’t be customized.
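To make the coordinate mapping concrete, here is an illustrative sketch in plain JavaScript of how an x, y position in the -1 to 1 range maps onto an 800×600 canvas. WebGL performs this mapping internally as part of rasterization (the viewport transform); the function name `ndcToPixel` is mine, not part of the WebGL API.

```javascript
// Map a vertex's x, y coordinates (each in the range [-1, 1], with y = +1
// at the top of the screen) to pixel coordinates on a canvas of the given
// size. Pixel rows grow downward, so the y axis is flipped.
function ndcToPixel(x, y, width, height) {
  return {
    px: ((x + 1) / 2) * width,
    py: (1 - (y + 1) / 2) * height,
  };
}

// The coordinate (0, 0) lands in the center of an 800×600 canvas.
const center = ndcToPixel(0, 0, 800, 600); // { px: 400, py: 300 }
```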

Next, the second shader runs. It's called a fragment shader. A fragment is effectively a pixel, so it's also called a pixel shader. It runs on every pixel of a visible triangle (or line or point) as determined during rasterization. Its job is to convert 2D pixel coordinates to colors in RGBA format. Your computer then draws these colors on the screen.
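For comparison, here is a minimal GLSL ES 3 fragment shader that ignores the pixel coordinate entirely and outputs the same color everywhere. The output name `outColor` is my own choice; the RGBA channels each range from 0.0 to 1.0.

```javascript
// A minimal GLSL ES 3 fragment shader in a JavaScript string. It runs once
// per pixel and gives every pixel the same opaque orange color.
const fragmentShaderSource = `#version 300 es
precision highp float;

out vec4 outColor;        // the RGBA color of this pixel

void main() {
  outColor = vec4(1.0, 0.5, 0.2, 1.0); // red, green, blue, alpha
}`;
```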

This is a very simplified explanation. I omitted the fact that the vertex shader can pass custom data to the fragment shader, but let's not get distracted by that yet.

Shading Language

WebGL 2 shaders are written in the OpenGL ES Shading Language version 3, or "GLSL ES 3" for short. It's an open standard for cross-platform 3D graphics managed by the non-profit Khronos Group. GL stands for "Graphics Library". ES stands for "Embedded Systems", such as mobile devices, but it's supported on desktops and laptops too for maximum compatibility.

Shading language programs run on the GPU. Compared to a CPU, a GPU has a smaller, more specialized set of operations available, like matrix multiplication, but it runs far more of those operations in parallel. This is well suited to graphics tasks like calculating millions of pixels' colors at 60 frames per second.

To run in parallel, shading language programs process each vertex or pixel coordinate in complete isolation. Vertex shader algorithms take a single vertex and output a single vertex. Fragment shader algorithms take a single pixel coordinate and output a single color. The algorithm doesn't know what any of the other input values are, and it can't remember its previous outputs. These fundamental constraints make shader algorithms significantly different from programs written in traditional languages.
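As a loose analogy in plain JavaScript (not actual shader code): a fragment shader behaves like a pure function applied independently to every pixel, much like a callback applied by `Array.prototype.map`. It sees one input at a time, keeps no state between calls, and so could be run on all pixels at once. The names here are mine, for illustration.

```javascript
// Analogy: a "fragment shader" as a pure function mapping one pixel
// coordinate to one RGBA color (channels 0..1), with no access to other
// pixels and no memory of previous calls.
function shadePixel({ x, y }) {
  // A simple gradient: red increases left to right across a 100-pixel row.
  return { r: x / 100, g: 0.5, b: 0.2, a: 1.0 };
}

const pixels = [{ x: 0, y: 0 }, { x: 50, y: 0 }, { x: 100, y: 0 }];

// Because each pixel is shaded in isolation, the GPU is free to process
// them all simultaneously rather than one after another.
const colors = pixels.map(shadePixel);
```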

In order to learn shading language, let’s use it! Start with Tutorial #1: Triangle.