Interactive intro to shaders

Original link: https://www.mayerowitz.io/blog/a-journey-into-shaders


This article is interactive: you can play with the code and sliders to interact with the shaders. Enjoy!

What if I told you that it can take just a few lines of code to create graphics as simple as gradients or as complex as rain effects? Welcome to the world of shaders!

I’ve been fascinated by shaders for a couple of years, but each time I attempted to dive into the subject, I felt like I was learning to read and write all over again — it was overwhelming. When I transitioned this website to Svelte, I saw an opportunity to replace a simple CSS animation on my homepage with a shader-based animation. The original CSS animation manipulated the border-radius property to produce a calm and minimalist animation, illustrated below.

You might wonder why I would bother re-doing something that already exists. Well, it’s because the simplicity of the task seemed like the perfect stepping stone—challenging, yet manageable. Plus, having recently defended my PhD, I finally had the time to delve into this passion project!

I hear about shaders all the time: when scrolling past generative artists on Twitter/X, when I want to change the look of Minecraft, or even when I want to train an AI (CUDA is basically an API for shaders). So now it’s time to demystify this damn thing and start writing one of my own! In this article, you’ll join me on my journey as we explore the world of fragment shaders, making it as approachable as possible for a beginner with a basic understanding of programming.

For anyone looking for an in-depth introduction to shaders, I highly recommend The Book of Shaders

Shaders: the good, the bad and the ugly

If you’re into video games, you’ve likely heard of shaders. They’re the magic behind enhancing lighting, conjuring up special effects, and even generating cartoonish looks (yes, that’s why there’s a ‘shade’ in ‘cel shading’). In a way, shaders are what make modern games look so good when compared to their ’90s counterparts. But what exactly is a shader?

Let’s start simple: A shader is a small program running on your GPU that takes, at the very least, pixel coordinates as input and spits out a color as output. The reason why they are so popular in video games and computer graphics is that they are extremely fast. Their secret sauce? Parallelization. These programs are designed to work on multiple pixels at the same time, making them ridiculously efficient.

A robot drawing through iterative splash of paint.

The CPU, smart but slow

Hundreds of pipes spitting paint in a fraction of a second to draw the Joconde

The GPU, dumb and fast

Side Note: Shaders come in different dialects. For this article, I’ll focus on the OpenGL Shading Language (GLSL), mainly because it’s browser-friendly!

This incredible power comes, however, at some costs: Shaders have to be compact and low-level. This means you can’t lean on high-level abstractions or import libraries to do the heavy lifting (* laugh in javascript *). Moreover, their parallel nature makes them memoryless and stateless. This translates to: “You can’t store or share data between pixels or shader executions.” These constraints make shaders a tough nut to crack, especially if you’ve been pampered by high-level languages (guilty as charged).
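To give a sense of just how compact these programs are, here is about the smallest fragment shader one can write, a sketch that ignores its input entirely and paints every pixel the same (arbitrary) color:

void main() {
  // Every pixel receives the same opaque magenta: RGBA channels, each between 0 and 1
  gl_FragColor = vec4(1.0, 0.0, 0.5, 1.0);
}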

Coordinates is All You Need

Shaders transform pixel coordinates into colors, encoded in RGBA—each channel ranging from 0 to 1. (It is also possible to manipulate vertex positions, but this topic is left as an exercise to the reader). Typically, coordinates are normalized between 0 and 1. In this coordinate space, (0, 0) is the lower left corner, and (1, 1) is the upper right. These coordinates are commonly referred to as st or uv by convention. Now, let’s imagine you want to write the simplest shader: a gradient where the red component increases from left to right and the green component ascends from bottom to top. That is, find the function f(x,y) in the following illustration:

[Interactive illustration: the unit UV square with corners (0,0), (1,0), (0,1), and (1,1); the color of the highlighted center pixel is f(0.50, 0.50).]

Sure, it might appear too basic, but think of it as a prime playground to get cozy with shader syntax. Go ahead, check out the implementation below and tinker with it — how about changing the gradient from black to blue?

Code
varying vec2 vUv;

void main() {
  // Normalized pixel coordinates (from 0 to 1)
  vec2 st = vUv;

  // redish in x, greenish in y
  // Try to modify the following line to have a blue gradient
  // from left to right.
  gl_FragColor = vec4(st.x, st.y, 0.0, 1.0); // RGBA
}
Hint: To get a blue gradient, replace the gl_FragColor line with gl_FragColor = vec4(0.0, 0.0, st.x, 1.0);

There are a few interesting things to note here about the syntax:

  • Inputs: We can declare inputs to the shader that can be varying or uniform. Varying variables are different for each pixel, while uniform variables are the same for all pixels. Here, we declare a varying variable vUv, a 2D vector representing the position of the pixel on a plane. It is declared as varying because its value is different for each pixel on the screen.
  • Coordinates Origin: Take note, the origin of UV space is at the lower-left corner. If you’re used to SVG or HTML canvas, this might feel like driving on the other side of the road.
  • Built-in types: Just like C, shaders demand type declaration. You’ll come across a range of types suited for vectors and matrices—think vec2, vec3, vec4, mat2, mat3, and the list goes on.
  • Swizzling: Accessing elements of a vector? Easy, just use the dot notation (vec2(1, 2).x gives you 1). Want to slice and dice your vector? Use the xy notation (vec4(1, 2, 3, 4).xy returns vec2(1, 2)). If you’re working with colors, feel free to use the myvector.rgba syntax — This is entirely up to you.
  • Output: There’s no return statement. The color for each pixel is determined by the value of gl_FragColor at the end of the main() function.

So even with our super simple example, you can already feel the power of shaders. Without them, an equivalent result would have required a loop over all the pixels of the canvas — 90000 in this case — just to create this gradient. But this is just the beginning; shaders can do so much more than that.

One Step() Beyond

Now, to reproduce my original animation, I need to draw shapes with salient edges. While this may seem trivial, it is not. Forget about a handy drawCircle() function. Instead, we turn to our ever-reliable friends: math and trigonometry.

To create something like a disk, consider each pixel’s distance to the disk’s center. This distance could be computed with the Pythagorean theorem; however, there is also a built-in function for that: distance(vec2 p1, vec2 p2). If you map this distance to the color of the pixel, you get a circular gradient.
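As a minimal sketch (reusing the vUv varying from the gradient example above, and an arbitrary center at (0.5, 0.5)), mapping that distance straight to the output color produces the circular gradient:

varying vec2 vUv;

void main() {
  // Distance from the current pixel to the center of the canvas
  float d = distance(vUv, vec2(0.5));

  // Use the distance as a grayscale value: dark at the center, brighter outwards
  gl_FragColor = vec4(vec3(d), 1.0);
}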

But wait, you may object, “a gradient is not a solid disk!” And you’d be right. The secret sauce for that is another built-in function: step(float threshold, float value). The step() function takes in the distance and sharply transitions it into either 0 or 1, depending on whether the distance crosses a certain threshold.

Noticed those jagged edges, also known as aliasing, around the disk when applying step()? That’s because the transition from 0 to 1 is a bit too abrupt. The solution is another built-in function called smoothstep(float t_start, float t_end, float x), which—as you might guess—smooths things out.
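To make the difference concrete, here is a small sketch contrasting the two (the 0.3 threshold and the 0.01-wide smoothing band are arbitrary values):

varying vec2 vUv;

void main() {
  float d = distance(vUv, vec2(0.5));

  // Hard cutoff: 0.0 where d < 0.3, 1.0 elsewhere. The abrupt jump is what causes aliasing.
  float hard = step(0.3, d);

  // Soft cutoff: the 0-to-1 transition is spread over [0.3, 0.31], hiding the jaggies.
  float soft = smoothstep(0.3, 0.31, d);

  // A black disk with a soft edge on a white background; swap `soft` for `hard` to compare.
  gl_FragColor = vec4(vec3(soft), 1.0);
}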

You may find it initially challenging, but this method of shaping with distance is your Swiss Army knife for crafting the mind-blowing shaders you often stumble upon online. So let’s dive a bit deeper into it!

Signed Distance Functions (SDF)

When you think of shapes, it’s natural to imagine them as a series of connected points. But here’s a twist: you can also represent shapes in terms of their distance to other points in space. This is where Signed Distance Functions (SDFs) come into play. Why “signed,” you ask? The distance is signed because it can be negative if the point is inside the shape.

To start off, let’s revisit the circle we created earlier and adapt it using SDFs. The key is to determine a function that calculates the distance from any given point in space to our circle. Starting simply, let’s find the distance to the origin. In the image below, it becomes evident that the distance d from the origin to the circle is essentially the distance from the origin to the center of the circle C minus the radius r.

[Illustration: a circle of radius r centered at C inside the unit UV square; the distance d from the origin to the circle is the distance from the origin to C minus the radius r.]

This observation translates beautifully into a function:

float circleSDF(vec2 p, float r) {
    return length(p) - r;
}

You can interpret this function in two ways. It either measures the distance from a point p to a circle centered at the origin, or the distance from the origin to the circle itself. It’s all a matter of perspective!

However, we’re rarely interested in just the distance to the origin. We want the distance to any point in the UV space. To achieve this, we merely translate the point p by the pixel’s position uv. The SDF function then returns negative distances for pixels inside the circle and positive distances for those outside. These two realms are separated by the circle, where the distance is exactly zero.

What about shading this SDF to make it visually compelling? Simple. Apply the 1. - step() function to the distance. The pixels with negative distances (inside the circle) take the value 1, and those outside take the value 0.
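Putting those two steps together (the center at (0.5, 0.5) and the radius of 0.25 are arbitrary choices):

varying vec2 vUv;

float circleSDF(vec2 p, float r) {
    return length(p) - r;
}

void main() {
  vec2 uv = vUv;

  // Signed distance from the current pixel to a circle of radius 0.25 centered at (0.5, 0.5)
  float d = circleSDF(vec2(0.5) - uv, 0.25);

  // 1.0 where the distance is negative (inside the circle), 0.0 outside
  gl_FragColor = vec4(vec3(1. - step(0., d)), 1.0);
}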

This article won’t delve into the other shapes you can define with SDFs—though I strongly recommend this comprehensive list by Inigo Quilez for those curious minds. Instead, we’ll focus on how to merge these individual shapes to craft our end-goal: a beautiful blob.

One and One Makes Another One

SDFs have some interesting properties; one of them is that it is especially easy to create new shapes with boolean operations. To get the union of two SDFs, you take the minimum of the two distances. For pixels that are in either of the two shapes (or in both), the min() will output a negative distance, and for pixels that are outside both shapes, the min() will output a positive distance.

We end up with a new SDF that is negative inside the union of the two shapes, and positive outside. In the example below, I start by showing the two SDFs, one in red and one in green. With the slider, you can see the result of the union of the two shapes using the min() function.

Code
varying vec2 vUv;
uniform float u_slider;

float circleSDF(vec2 p, float r)
{
  return length(p) - r;
}

void main() {
    vec2 uv = vUv;

    // The SDF for each disk
    float d1 = circleSDF(vec2(0.6) - uv, 0.2);
    float d2 = circleSDF(vec2(0.4) - uv, 0.2);

    // Output each disk to a different color channel
    vec3 color = vec3(0.0);
    color.r = 1. - smoothstep(0., 0.01, d1); // red
    color.g = 1. - smoothstep(0., 0.01, d2); // green

    // Union of disks
    // Merging is as simple as taking the min()
    float d = min(d1, d2);
    // Set `dc` to yellow if within the union of the two circles
    vec3 dc = (1. - smoothstep(0.,0.01, d)) * vec3(1.0, 1.0, 0.);

    // FINAL COLOR
    // Mix color and dc according to slider value
    // mix(x, y, a) = x * (1.0 - a) + y * a
    color = mix(color, dc, u_slider);

    gl_FragColor = vec4(color, 1.0);
}
Slide to apply min() of the two SDFs


Have you noticed that I used 1.-smoothstep()? This is because step() (and smoothstep()) outputs 1 when the distance is above the threshold (i.e., outside the disk). To get a positive value inside the shape, we need to invert the output.

Complex shapes — like a blob! — are thus the combination of many simple SDFs. Like Lego bricks, simple SDFs are building blocks that can be combined into any shape you want. That said, a blob is smooth and jelly-like, unlike the sharp angle at the junction of our two disks. Luckily, SDFs have one last magic property for us.

Smooth operator

To create an appealing effect, we would like the shapes to blend smoothly together like in a lava lamp. However, the min() function is not smooth: it has sharp discontinuities when it transitions between two distances. Instead, we would prefer a function that smoothly shifts from one distance to another. Luckily, this problem has already been solved and is unoriginally called smooth minimum. The function takes an additional argument to control the smoothing strength (often denoted k).

Code
varying vec2 vUv;
uniform float u_slider;

float circleSDF(vec2 p, float r)
{
  return length(p) - r;
}

// Polynomial smooth min
float smin(float a, float b, float k)
{
    float h = max( k-abs(a-b), 0.0 )/k;
    return min( a, b ) - h*h*k*(1.0/4.0);
}

void main() {
    vec2 uv = vUv;

    // The SDF for each disk
    float d1 = circleSDF(vec2(0.65) - uv, 0.2);
    float d2 = circleSDF(vec2(0.35) - uv, 0.2);

    // Union of disks
    float d = 1. - smoothstep(0., 0.01, smin(d1, d2, u_slider/3.+0.001));

    gl_FragColor = vec4(vec3(d), 1.0);
}
Slide to increase the smoothing factor


I Like to Move it

We can pass any arbitrary variable to our shader, much like the slider you’ve played with in this article. To get closer to our goal, we need to animate the circles. Doing so is as simple as feeding the shader a time uniform that can then be used to define the circles’ positions. Here I generate my time uniform u_time through JavaScript and then use it as an input in my shader to control my SDFs. The shader refreshes 60 times per second by default, each time with a new u_time value, creating a smooth animation. With a few extra balls and a bit of parameter tweaking, we end up with a cute blobby shape.

To make the blob oscillate, we can use periodic functions (e.g., sin and cos) to control each ball’s position.

A metaball is a combination of multiple SDFs. To clean up our code, we can use a loop to combine them, instead of manually updating the final distance variable like in our previous example. We first define the center of each ball and store them in an array, which the loop then iterates over to update the distance value. Pay attention to the loop near the end of the code below.

Code
uniform float u_time;
varying vec2 vUv;
uniform float u_slider;

// C-style macro to define constants
#define K 0.4

float circleSDF(vec2 uv, vec2 p, float r)
{
  return length(p-uv) - r;
}

float smin(float a, float b, float k)
{
  float h = max( k-abs(a-b), 0.0 )/k;
  return min( a, b ) - h*h*k*(1.0/4.0);
}

// Map a value from -1 to 1 to out_min to out_max
float trigmap(float x, float out_min, float out_max)
{
  return out_min + (x + 1.) * (out_max - out_min) / (2.);
}

void main() {
  vec2 uv = vUv;

  // Define the center of each metaball
  vec2 c1 = vec2(0.4,trigmap(cos(u_time), 0.3, 0.4));
  vec2 c2 = vec2(trigmap(sin(u_time), 0.4, 0.7), 0.5);
  vec2 c3 = vec2(0.5, trigmap(cos(u_time), 0.6, 0.7));
  vec2 c4 = vec2(trigmap(cos(u_time), 0.4, 0.63), 0.3);
  // Store the centers in an array
  vec2 centers[4] = vec2[4](c1,c2,c3,c4);

  // Initialize the distance and define the smoothing factor
  float d = 99.;

  // Iterate over the centers and compute the sdf
  for (int i = 0; i < 4; i++) {
    // The slider adjusts the size of the metaballs
    // (this particular radius mapping is illustrative; only the loop structure matters)
    d = smin(d, circleSDF(uv, centers[i], 0.05 + 0.15 * u_slider), K);
  }

  // Shade the blob: 1 inside, 0 outside, with a soft edge
  gl_FragColor = vec4(vec3(1. - smoothstep(0., 0.01, d)), 1.0);
}
Adjust the size of the metaballs

And voilà, our baby’s born. You should now be ready to write some shaders of your own. And if writing code is not your thing, you now have a better understanding of what’s going on under the hood of node-based editors like Blender’s shader nodes or Unity’s Shader Graph.

This sad monochrome blob is functional but boring. Let’s make it juicier!

The Final Touch

To truly appreciate the magic of shaders, there’s nothing like taking the wheel and manipulating the blob in real-time. This final section will guide you on how to introduce user interactivity into your shader. Essentially, you will learn how to let users control the position of a ball within the blob by using their mouse.

First things first: We’ll use the mouse coordinates as a uniform input into the shader. This will allow real-time interaction with our creation.

Once the mouse coordinates are received, adding them to the array of ball centers will allow the user to interactively control a ball. As you see, it only takes one line of code to create interactivity!

vec2 centers[5] = vec2[5](c1,c2,c3,c4,u_mouse);

Next, it’s just fun and iteration. To get to the final result, I extensively use the mix(colorA, colorB, percent) function. It’s equivalent to an if/else block when percent is a boolean. For example, to get red outside the metaball (where metaball == 0) and green within it, you can write:

vec3 color = mix(
        vec3(1., 0., 0.), // Red
        vec3(0., 1., 0.), // Green
        metaball);

Finally, we get this beauty:

Code
uniform float u_time;
uniform float u_slider;
uniform vec2 u_mouse;
varying vec2 vUv;

// C-style macro to define constants
#define K 0.4
#define REPEL 0.001
#define DISTLIM 0.1

float circleSDF(vec2 uv, vec2 p, float r)
{
  return length(p-uv) - r;
}

float smin(float a, float b, float k)
{
  float h = max( k-abs(a-b), 0.0 )/k;
  return min( a, b ) - h*h*k*(1.0/4.0);
}

// Map a value from -1 to 1 to out_min to out_max
float trigmap(float x, float out_min, float out_max)
{
  return out_min + (x + 1.) * (out_max - out_min) / (2.);
}

void main() {
  vec2 uv = vUv;

  // Handle Mouse
  vec2 m = u_mouse.xy; // normalize mouse coordinates
  m.y = 1.0 - m.y; // invert y axis to match the canvas
  m.x = (m.x);

  // Define the center of each metaball
  vec2 c1 = vec2(0.35,trigmap(cos(u_time), 0.3, 0.7));
  vec2 c2 = vec2(trigmap(cos(u_time), 0.3, 0.7), 0.7);
  vec2 c3 = vec2(0.7, trigmap(sin(u_time), 0.3, 0.7));
  vec2 c4 = vec2(trigmap(cos(u_time), 0.3, 0.7), 0.3);
  
  // Store the centers in an array
  vec2 centers[5] = vec2[5](c1,c2,c3,c4,m);

  // Color is function of the centroid
  vec2 ctroid = (c1 + c2 + c3 + c4) / 4.;
  ctroid *= vec2(1.3, 0.7);
  vec4 color = vec4(1.);

  // Initialize the distance and define the smoothing factor
  float d = 99.;

  // Iterate over the centers and compute the sdf
  for (int i = 0; i < 5; i++) {
    // Accumulate each ball's SDF with the smooth minimum
    // (the radius below is an illustrative value)
    d = smin(d, circleSDF(uv, centers[i], 0.1), K);
  }

  // Shade the blob and tint it with a color derived from the centroid
  // (this mix() is an illustrative coloring, not the exact palette of the original demo)
  float blob = 1. - smoothstep(0., 0.01, d);
  color = mix(vec4(1.0),                          // background
              vec4(ctroid.x, ctroid.y, 0.8, 1.0), // blob tint follows the centroid
              blob);

  gl_FragColor = color;
}

That concludes this introduction. I’m glad I’ve finally learned to write shaders! This article barely scratches the surface of the basics, but there’s no reason to be afraid anymore—neither for you nor for me. Stay tuned for future articles where we’ll explore how to elevate this blob into the third dimension. In the meantime, feel free to experiment; you can change the color scheme or tweak the positions of the balls. For updates, you can follow me on Twitter.
