Week 10 Entry - Texturing and Transparency

Texturing, Filters, Transparency, Blending, Testing, and More

Texturing is the process of representing a surface's color at every location using an image, function, or other data source. Texture coordinates, written as (u, v) or (s, t) floating-point pairs, are interpolated across the surface and are defined in a normalized [0, 1] range on both axes. A pixel may still map to a texel beyond that normalized range, which is handled by one of the WebGL-defined wrapping states. The texel-to-pixel mapping is rarely 1:1 either, so another piece of state controls how colors are filtered in each case: magnification (one texel covers many pixels) and minification (many texels fall under one pixel). Interpolation ... interpolation everywhere ... Trilinear filtering is a combination of bilinear filtering and mipmapping: it bilinearly interpolates four texels in the mipmap level just above the current texel-to-pixel ratio and four in the level just below, then interpolates between those two results to approximate a 1:1 texel-to-pixel size.
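
As a rough sketch of how that state gets set through the API (assuming `gl` is an existing WebGLRenderingContext and `image` is an already-loaded Image element; both names are my placeholders, not the assignment's):

    // Sketch: configuring wrapping and filtering state for a 2D texture.
    const texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

    // Wrapping state: what happens when (s, t) falls outside [0, 1].
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

    // Filtering state: bilinear for magnification; trilinear for
    // minification (LINEAR_MIPMAP_LINEAR = bilinear within each of the
    // two nearest mipmap levels, then a blend between those results).
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);

    // Mipmaps are required by the minification filter above. Note that
    // WebGL 1 only allows REPEAT and generateMipmap on power-of-two images.
    gl.generateMipmap(gl.TEXTURE_2D);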


Frame buffering. At this point in the pipeline, we have computed the per-fragment colors and moved on to the tests, blending, and passing of final values to the frame buffer. "Frame buffer" is sometimes used synonymously with "color buffer," but the frame buffer can contain the color buffer, depth buffer, stencil buffer, and more. Double buffering is part of the solution for preventing flicker when objects move between frames: calculations actively populate the back buffer, and once the application requests a swap, the front buffer takes on the back buffer's data and is displayed on the screen. Information written to the back buffer is not guaranteed to make it to the front buffer and eventually be displayed, however.

Since we are deciding which fragments will be sent to the frame buffer, we can perform a number of tests that help avoid visual fidelity issues around effects such as transparency; these include the alpha test, scissor test, and stencil test. Transparency bends the rules a little: the painter's algorithm is used to render objects with alpha < 1.0 from back to front. We do this because we are interested in the information beyond the closest object when it is not opaque, and we want the objects behind the semi-transparent ones to be blended into the calculation of the final fragment color.
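
A minimal sketch of that two-pass ordering, assuming hypothetical `opaqueObjects` and `transparentObjects` lists, a `drawObject` helper, and a precomputed per-object view-space depth (none of these names come from the actual assignment code):

    // Sketch: opaque pass first, then semi-transparent objects blended
    // back to front (painter's algorithm for the alpha < 1.0 objects).
    gl.enable(gl.DEPTH_TEST);
    gl.disable(gl.BLEND);
    for (const obj of opaqueObjects) {
      drawObject(obj);
    }

    // Farthest first, so everything behind a semi-transparent surface is
    // already in the color buffer when its fragments are blended in.
    transparentObjects.sort((a, b) => b.viewDepth - a.viewDepth);

    gl.enable(gl.BLEND);
    gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
    gl.depthMask(false); // keep testing depth, but stop writing it, so
                         // transparent fragments don't occlude each other
    for (const obj of transparentObjects) {
      drawObject(obj);
    }
    gl.depthMask(true);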

Documentation, documentation, documentation...


Before I dive into my appreciation for the WebGL documentation, I should mention the great help that the "WebGL Tutorial 03 - Textured Cube" YouTube video from Indigo Code provided in stepping through each piece required to get the textures properly into the assignment. There were differences between our implementations, such as Indigo Code keeping everything in a single file rather than separating code segments across several, and using an HTML element to load in their texture image. Even with those differences, I would have been hard-pressed to figure out the progression of the implementation from the guidelines and the WebGL Programming Guide alone. (If I wanted to get to know the process behind any part of the WebGL API, I would absolutely still return to the Programming Guide, but it was not as helpful as Indigo Code's very straightforward walkthrough.)

*Note to self: this documentation page alone would have been more than enough to get going: Using textures in WebGL - Web APIs | MDN (mozilla.org).
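
The loading pattern that MDN page walks through boils down to roughly the following (a sketch of their approach rather than the exact assignment code; the one-pixel blue placeholder trick is theirs):

    // Sketch: asynchronous texture loading in the browser. Upload a 1x1
    // placeholder texel immediately, then swap in the real image on load.
    function loadTexture(gl, url) {
      const texture = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, texture);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA,
                    gl.UNSIGNED_BYTE, new Uint8Array([0, 0, 255, 255]));

      const image = new Image();
      image.onload = () => {
        gl.bindTexture(gl.TEXTURE_2D, texture);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA,
                      gl.UNSIGNED_BYTE, image);
        gl.generateMipmap(gl.TEXTURE_2D); // power-of-two images only in WebGL 1
      };
      image.src = url;
      return texture;
    }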

This entire assignment was a dance between knowing which functions of the API to call at each point in the implementation and working out how the WebGL documentation outlines the parameters those functions take. The first WebGL assignments gave me the impression of throwing a ton of boilerplate code at the problem while only changing a constant or reference every so often to alter the result. Having spent more time following the code through, I have a newfound appreciation for the API: its functions are named relatively clearly, its constants have obvious meanings for changing functionality through their values, and its documentation is a source I can easily fall back on when I need to without wasting much time.

The code segments that were finished for us in all of the WebGL assignments thus far still lead me to believe there is a healthy amount of information I do not fully understand, but most of that revolves around idioms unique to JavaScript, which may not be the worst dilemma to have right now. The time I've spent reading the API documentation in conjunction with implementing what I had just been researching has helped my understanding of the implementation side of the rasterization pipeline.

The more I hear about WebGL in comparison to other graphics APIs, the more it sounds like WebGL does a lot of favors for the programmer in the background. While this is nice now, I sense the impending pressure of middle steps I will eventually need to understand when working with APIs like OpenGL.

Not sure if beginnings have ever been so humble.

Transparency, but with standard rendering order, with the depth test throwing away valuable information.

Alpha test + transparent object painter's algorithm.

Dynamically mapping texel coordinates of the objects.

It just doesn't stop.

This would have been better on the triangle assignment...but alas...
