Understanding Graphics, Shading and Rendering Concepts

Hope this is not too broad and confusing; I am trying to get the big picture:

Coming from design and creative coding, I recently started to learn OpenGL and GLSL shaders, and I am taking my first baby steps in writing simple (vertex, geometry, fragment) shaders. My main goal is a better understanding of how realtime graphics are created at the lower levels, so that I can use them more freely and directly.

While I understand the basic structure of OpenGL, the Vertex Shader -> Geometry Shader -> Fragment Shader pipeline, I have difficulties understanding the Fragment Shader / GLSL Part. I understand that it is the final stage that is responsible for rasterizing the final image to the screen. GLSL sandboxes like Shadertoy use only the fragment shader to do amazing stuff. While creating lines, circles, or rectangles on the HTML canvas, in Processing, or in openFrameworks feels straightforward, achieving the same in GLSL (fragment shader only) seems to be unintuitive (maybe not for developers and mathematicians) and different. I understand the basic concepts of distance fields and ray marching, but I wonder if they are a specialty of "Fragment shader only" art or standard practice, and how these methods are related to other graphics concepts, e.g. those of Processing or OF. How do they utilize OpenGL and GLSL to create 2D and 3D graphics? My OpenGL / Java / C++ skills are not good enough to get this from the source code.


On the other hand, I wonder how OpenGL / GLSL (Vulkan, OpenCL, Microsoft's HLSL and what not) are related to each other and to shaders, e.g. in Blender (Cycles, OSL or others). Are they similar or totally different? What other concepts exist? And finally, what is a good learning path into them, besides the fundamental math and physics, so as to use them as broadly as possible?

Many thanks!

level 1
11 points · 4 days ago · edited 3 days ago

I have difficulties understanding the Fragment Shader / GLSL Part. I understand that it is the final stage that is responsible for rasterizing the final image to the screen

Rasterization (i.e. transforming triangles into pixels) is a separate stage from fragment shading. What the fragment shader does is compute the output color of a single pixel.
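
For concreteness, here is a minimal GLSL fragment shader, as a sketch (the u_resolution uniform name is made up for illustration; Shadertoy's equivalent is iResolution). It runs once per covered pixel, and its only job is to decide that pixel's color:

    #version 330 core

    uniform vec2 u_resolution;  // viewport size in pixels (illustrative name)
    out vec4 fragColor;

    void main() {
        // gl_FragCoord is this fragment's window-space position,
        // already decided by the rasterizer before this shader runs.
        vec2 uv = gl_FragCoord.xy / u_resolution;  // normalize to [0, 1]
        fragColor = vec4(uv, 0.5, 1.0);            // color depends only on position
    }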

seems to be unintuitive

It is; the approach is completely different from what you would typically do to render something efficiently.

I wonder if they are a specialty of "Fragment shader only" art

Fragment-shader-only programs (à la Shadertoy) are mostly restricted to demos and samples. A tech-artist job does include a lot of fragment shader writing, but for a more standard pipeline (with geometry and stuff).

how these methods are related to other graphics concepts

While methods used in Shadertoy samples are also used for effects like POM (parallax occlusion mapping) or SSAO/SSR (screen-space ambient occlusion / reflections) in real-world 3D applications, the bulk of the work is typically done with triangles and more traditional methods.

On the other hand, I wonder how OpenGL / GLSL (Vulkan, OpenCL, Microsoft's HLSL and what not) are related to each other

  • GLSL is a shading language, just like HLSL. They do basically the same things and are very, very similar (i.e. if you learn one, you'll be able to use both).

  • Vulkan, OpenGL and DirectX are 3D APIs; they let you talk to the graphics driver and the GPU to make it render triangles and do other stuff. Vulkan and DirectX 12 are lower level than OpenGL and DirectX 11.

  • Shaders are programs that run on the GPU, that's it (a minimal vertex shader sketch follows this list). Of course graphics programming, be it with OpenGL, Vulkan or whatever, involves a fair bit of shader writing.

  • OpenCL is a compute API: it allows you to run code on whatever device supports it, including GPUs. It doesn't do graphics. (Think of it as a stripped-down graphics API without everything that isn't strictly necessary to tell the GPU to run shader X, Y number of times, on input Z.)

  • Shaders in Cycles (or whatever renderer) are similar to shaders in graphics APIs: they are small programs that run for each vertex/triangle/pixel. They do not necessarily run on the GPU, but they fill the same role as "traditional" shaders.
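
To make the "shaders are just programs that run on the GPU" point concrete, here is a minimal GLSL vertex shader, as a sketch (the attribute location and the u_mvp uniform name are illustrative assumptions). It runs once per vertex, and its one obligation is to output a clip-space position:

    #version 330 core

    layout(location = 0) in vec3 a_position;  // per-vertex input (location is illustrative)
    uniform mat4 u_mvp;                       // model-view-projection matrix (assumed name)

    void main() {
        // Runs once for every vertex the draw call submits:
        // transform the vertex from model space into clip space.
        gl_Position = u_mvp * vec4(a_position, 1.0);
    }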

What other concepts exist?

Too many of them to list, are you interested in anything specific?

And finally, what is a good learning path into them, besides the fundamental math and physics, so as to use them as broadly as possible?

For non-realtime CG (more math & physics)

For real time stuff (more code)

level 2
Original Poster · 1 point · 3 days ago

Many thanks, that clarified a lot!

Too many of them to list, are you interested in anything specific?

Nothing specific, I just want to get an overview of the most common concepts today. But maybe the above is already a good starting point for now. Fortunately we have the PBR book in our college library.

level 3
1 point · 3 days ago

Nothing specific, just want to get an overview of the most common concepts today

As far as the graphics pipeline of modern GPUs is concerned, if you understand vertex shaders, pixel shaders, the rasterizer and compute shaders, you are pretty much good.
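
(Since compute shaders haven't come up in code form yet, a minimal GLSL compute shader as a sketch; the buffer layout and binding are just assumptions. Note there are no vertices or pixels involved, only "run this code N times":)

    #version 430 core

    layout(local_size_x = 64) in;  // 64 invocations per work group

    // A shader storage buffer we can read and write freely (binding is illustrative).
    layout(std430, binding = 0) buffer Data {
        float values[];
    };

    void main() {
        // Each invocation processes one element, in parallel with all the others.
        uint i = gl_GlobalInvocationID.x;
        values[i] = values[i] * 2.0;
    }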

Now, there exist a ton of rendering techniques; if you are interested in these and how they use shaders (or any other pipeline stage) I can write something, but you'll have to ask about something specific (there is too much to talk about otherwise).

level 1
2 points · 4 days ago

I think you are confusing a lot of things.

A shader is simply a program running on the GPU.

Images rendered by OpenGL are indeed controlled using vertex shaders, geometry shaders and fragment shaders, in this order, but this is not set in stone and, more importantly, this is not all there is to it. Fragment shaders aren't the final stage of the rendering process (fixed-function per-fragment operations such as depth testing and blending come after).

And to answer your question regarding how all shader programs are linked to each other: the only thing they have in common is that they run on the GPU. A shader written for DirectCompute could very well be doing a general-purpose computation, whereas a Blender Cycles shader will be computing how surfaces react to light, and a GLSL geometry shader might add vertices to a model (a sketch of that last case follows below).
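
A minimal GLSL geometry shader, just as a sketch: it receives a whole triangle and is free to emit more, fewer, or different vertices than it received; here it simply passes the triangle through:

    #version 330 core

    layout(triangles) in;                          // consume one triangle at a time
    layout(triangle_strip, max_vertices = 3) out;  // raise max_vertices to add geometry

    void main() {
        // Pass-through: re-emit the input triangle unchanged. Calling
        // EmitVertex() extra times is how a geometry shader adds vertices.
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }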

level 1
2 points · 4 days ago · edited 4 days ago

There are two things you need to learn:

a) General light and rendering theory and the math behind it, which is independent of any specific language, framework or application and can be applied in any graphics programming context. I recommend writing a simple raytracer to practice these concepts. Ray Tracing in One Weekend is a good guide to that: http://in1weekend.blogspot.com/2016/01/ray-tracing-in-one-weekend.html. There is a lot of good material on YouTube that goes into detail on how light simulation is implemented. Here is a good one on physically based rendering: https://www.youtube.com/watch?v=j-A0mwsJRmk
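
To give a taste of the math such a raytracer starts with, here is a ray-sphere intersection test, the first building block in that book (written in GLSL here to keep one language across the thread; the book itself uses C++), just a sketch:

    // Does the ray origin + t * dir hit a sphere of radius r centered at c?
    // Returns the nearest positive hit distance t, or -1.0 on a miss.
    // Assumes dir is normalized.
    float hitSphere(vec3 origin, vec3 dir, vec3 c, float r) {
        vec3 oc = origin - c;
        float b = dot(oc, dir);
        float h = b * b - (dot(oc, oc) - r * r);  // quarter discriminant
        if (h < 0.0) return -1.0;                 // no real roots: ray misses
        float t = -b - sqrt(h);                   // nearer of the two roots
        return (t > 0.0) ? t : -1.0;
    }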


(Also forgot the excellent PBRT book, which is mentioned in the other comments! https://www.pbrt.org/)


and


b) How the GPU pipeline works, the role shaders play in realtime rendering, and a shader language like GLSL, so you can apply the above in a realtime context. Once again, there is a lot of material online on that, and many books. I used this book as a textbook before and it's decent (though not great): https://www.amazon.com/Foundations-Computer-Graphics-MIT-Press/dp/0262017350/ . Also, the latest edition of "Real-Time Rendering" is always a good resource.



level 2
Original Poster · 1 point · 3 days ago

Many thanks for the great links. Let's see if I can build a ray tracer in one weekend ... :-)

level 1

"Shader" is actually a pretty bad name for what it means in opengl, d3d or vulkan today. More correct would be "programmable pipeline stage". Shading, i.e. computing the color of surfaces based on interaction with modeled light was just the first use case of these. In other rendering techniques like path tracing (cycles etc) shader usually often refers to that original meaning as a light interaction model.

Regarding the other, broader question: when you only use the fragment shader, it is implied that your entire context is a single pixel/fragment. So you don't ask "how do I turn this circle into pixels", but "does this pixel belong to this circle". In terms of geometry, this often means that you formulate shapes using implicit equations. For example, for a pixel p you can test whether it lies on a circle with center c and radius r by checking whether ||p - c|| = r (in practice, within a small tolerance) and then color the pixel based on the circle color. Just a general hint at where this might lead you.
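
In GLSL, that hint might look like the following sketch (u_resolution and the circle parameters are made up for illustration):

    #version 330 core

    uniform vec2 u_resolution;  // viewport size in pixels (illustrative name)
    out vec4 fragColor;

    void main() {
        vec2 p = gl_FragCoord.xy / u_resolution;  // this pixel, mapped to [0, 1]
        vec2 c = vec2(0.5);                       // circle center
        float r = 0.25;                           // circle radius

        // Implicit test: distance of this pixel from the curve ||p - c|| = r.
        float d = abs(length(p - c) - r);

        // "Does this pixel belong to the circle?" -- within a small tolerance.
        fragColor = (d < 0.005) ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
    }

Much of what you see on Shadertoy is a variation of this: evaluate an implicit function of the pixel's position and map the result to a color.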

level 2
Original Poster · 1 point · 3 days ago

So you don't ask "how do I turn this circle into pixels", but "does this pixel belong to this circle".

Yes, that makes it more intuitive, many thanks!

level 1
1 point · 3 days ago

I understand the basic concepts of distance fields and ray marching, but I wonder if they are a specialty of "Fragment shader only" art ...

Yes, most of the stuff you see on Shadertoy is special-effects trickery for the sake of art. Most of these tricks are not used outside of the demoscene.

There are exceptions of course, and sometimes these tricks do get adopted by game engines and other realtime graphics apps.

It's not that the tricks aren't cool, but plugging a distance-field rendering system into a practical game engine isn't very viable. It's hard to imagine how it would interact with physics and gameplay code, for example.
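
For reference, the core of such a distance-field renderer (sphere tracing, i.e. raymarching) is quite small; a sketch in GLSL, with a single hard-coded sphere standing in for a real scene:

    // Signed distance from point p to the scene (here: a unit sphere at the origin).
    float sceneSDF(vec3 p) {
        return length(p) - 1.0;
    }

    // March a ray from origin ro along direction rd through the distance field.
    // Returns the distance travelled to a hit, or -1.0 on a miss.
    float raymarch(vec3 ro, vec3 rd) {
        float t = 0.0;
        for (int i = 0; i < 128; ++i) {
            float d = sceneSDF(ro + t * rd);  // nothing is closer than d,
            if (d < 0.001) return t;          // so stepping by d is always safe
            t += d;
            if (t > 100.0) break;             // marched past the whole scene
        }
        return -1.0;
    }

The catch is exactly the point above: this loop lives entirely in the shader, so the rest of the engine (physics, gameplay) has no triangles to work with.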
