
Notes on Advanced Graphics Lectures

Calum J. Eadie edited this page Oct 27, 2012 · 16 revisions

Contents

  • Lecture 5 - OpenGL, Scene Graphs and Data Structures
  • Lecture 6 - Shaders

OpenGL

  • platform independent
    • hardware and OS independent
    • implementations are platform specific and rely on native libraries
  • vendor neutral

State based rendering

  1. Set up the state
  2. Pass in data
  3. Data is modified by the existing state

vs. OOP, where data carries its own state
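The state-machine pattern can be sketched in plain Java (a toy, not real OpenGL): the same vertex stream is grouped into different primitives depending on the mode set beforehand.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of state-based rendering: vertices are grouped into
// primitives according to whichever mode was set before they were passed in.
public class StateMachineSketch {
    enum Mode { TRIANGLES, QUADS }

    static List<List<Integer>> group(Mode mode, List<Integer> vertices) {
        int size = (mode == Mode.TRIANGLES) ? 3 : 4; // the state decides grouping
        List<List<Integer>> primitives = new ArrayList<>();
        for (int i = 0; i + size <= vertices.size(); i += size) {
            primitives.add(vertices.subList(i, i + size));
        }
        return primitives;
    }

    public static void main(String[] args) {
        List<Integer> verts = List.of(0, 1, 2, 3, 4, 5, 6, 7);
        // Same data, different state -> different primitives.
        System.out.println(group(Mode.QUADS, verts).size());     // 2 quads
        System.out.println(group(Mode.TRIANGLES, verts).size()); // 2 triangles, last 2 verts dropped
    }
}
```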

  • accelerates common 3d graphics operations
    • clipping
    • hidden surface removal (Z-buffering)
    • texturing, alpha blending ( alpha - transparency )
    • NURBS and advanced primitives

JOGL

Java binding for OpenGL

JOGL - Hello Square

import java.awt.Frame;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import javax.media.opengl.GLCanvas;

public class HelloSquare {

  public static void main(String[] args) {
    new Thread() {
      public void run() {
        Frame frame = new Frame("Hello Square"); // Create a window to display in
        GLCanvas canvas = new GLCanvas(); // Create a canvas to render onto

        // Setup GL canvas
        frame.add(canvas); // Bind canvas to the window created
        canvas.addGLEventListener(new RendererOne()); // Add implementation of GLEventListener to render a square.
        
        // Setup AWT frame
        frame.setSize(400, 400);
        frame.addWindowListener(new WindowAdapter() {
          public void windowClosing(WindowEvent e) {
            System.exit(0);
          }
        });
        frame.setVisible(true);

        // Render loop
        while(true) {
          canvas.display(); // Continually update display.
        }
      }
    }.start();
  }
}
   
import javax.media.opengl.GL;

public class RendererOne extends SimpleRendererBase {

  // Render a square.
  public void render(GL gl) {
    gl.glBegin(GL.GL_QUADS); // Set GL mode so vertices are understood as belonging to a quadrilateral
      gl.glVertex3f(-1, -1,  0); // Send vertices which specify a square
      gl.glVertex3f( 1, -1,  0);
      gl.glVertex3f( 1,  1,  0);
      gl.glVertex3f(-1,  1,  0);
    gl.glEnd(); // (√) Reset GL mode. what to? - glEnd closes the glBegin/glEnd pair; no primitive mode is current until the next glBegin
  }
}

import javax.media.opengl.GL;
import javax.media.opengl.GLAutoDrawable;
import javax.media.opengl.GLEventListener;
import javax.media.opengl.glu.GLU;

public abstract class SimpleRendererBase implements GLEventListener {

  public void init(GLAutoDrawable glDrawable) {
    final GL gl = glDrawable.getGL();

    gl.glClearColor(0.2f, 0.4f, 0.6f, 0.0f); // (√) only sets the clear colour state; the clear itself is performed by glClear in display()
  }
  
  /*
   * Performs the specific rendering tasks.
  */
  public abstract void render(GL gl);

  public void display(GLAutoDrawable glDrawable) {
    final GL gl = glDrawable.getGL();

    gl.glClear(GL.GL_COLOR_BUFFER_BIT);
    gl.glLoadIdentity(); // (√) replaces the current modelview matrix with the 4x4 identity, discarding previous transforms
    gl.glTranslatef(0, 0, -5); // (√) moves the scene away from the camera so it lies inside the viewing frustum (near plane at 1)
    
    render(gl);
  }

  public void displayChanged(GLAutoDrawable glDrawable, boolean modeChanged, boolean deviceChanged) {
  }

  // (√) reshape is part of the GLEventListener contract - called when the drawable is resized
  public void reshape(GLAutoDrawable gLDrawable, int x, int y, int width, int height) {
    final GL gl = gLDrawable.getGL();
    final float h = (float)width / (float)(height <= 1 ? 1 : height);
    
    gl.glMatrixMode(GL.GL_PROJECTION);
    gl.glLoadIdentity();
    (new GLU()).gluPerspective(50.0f, h, 1.0, 1000.0);
    gl.glMatrixMode(GL.GL_MODELVIEW);
  }
}

Behind the scenes

  1. CPU passes streams of vertices and of data to GPU
  2. GPU processes vertices according to the state that has been set eg. state = every 4 vertices is one quadrilateral polygon
  3. GPU takes streams of vertices, colors, textures coordinates and other data
  4. Constructs polygons and other primitives
  5. Draws the primitives to the screen pixel-by-pixel

Rendering Pipeline

Local space -> World space -> Viewing space -> 3D screen space -> 2D display space

Local space

Vertices and coordinates of a surface specified relative to local basis and origin

Representation and processes

  • Local space
    • Object definition
  • World space
    • Scene composition
    • Viewing frame defn
    • Lighting defn
  • Viewing space
    • Backface culling
    • Viewing frustum culling
    • HUD defn
  • 3D screen space
    • Hidden-surface removal
    • Scan conversion
    • Shading
  • Display space
    • Image
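As one example from the list above, backface culling in viewing space reduces to a dot-product test. A toy Java sketch, assuming the camera looks down -z; not the fixed-function implementation:

```java
public class BackfaceCull {
    // In viewing space the camera looks down -z, so a face is back-facing
    // when its normal points away from the viewer, i.e. when the dot product
    // of the normal with the viewing direction is non-negative.
    static boolean isBackFacing(float nx, float ny, float nz) {
        float[] view = {0, 0, -1};                          // viewing direction
        float dot = nx * view[0] + ny * view[1] + nz * view[2];
        return dot >= 0;                                    // facing away -> cull
    }

    public static void main(String[] args) {
        System.out.println(isBackFacing(0, 0, 1));  // normal toward camera -> false
        System.out.println(isBackFacing(0, 0, -1)); // normal away from camera -> true
    }
}
```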

Matrix Stacks

OpenGL uses matrix stacks to store stacks of matrices. The topmost matrix is usually the product of all the matrices below.

This allows you to build a local frame of reference - a local space - and apply transforms within that space.

OpenGL has three matrix stacks:

  • Modelview - positioning things relative to each other
  • Projection - camera transforms
  • Texture - texture-mapping transformations

glMatrixMode() - sets the state for all following matrix ops

glTranslate(), glRotate() - modify the current topmost matrix on the current stack

To make local changes with limited effect, push a new copy of your current matrix onto the top of the stack ( glPushMatrix() ), modify it freely, and then pop it off the stack ( glPopMatrix() ).
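The push/modify/pop pattern can be sketched in plain Java, with a 2D translation standing in for a full 4x4 matrix:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Toy matrix stack: a 2D translation stands in for a full 4x4 matrix.
// push copies the current top, so local changes can be undone with pop,
// mirroring glPushMatrix / glTranslatef / glPopMatrix.
public class MatrixStackSketch {
    private final Deque<double[]> stack = new ArrayDeque<>();

    public MatrixStackSketch() { stack.push(new double[]{0, 0}); } // identity

    void push() { stack.push(stack.peek().clone()); }
    void pop()  { stack.pop(); }
    void translate(double dx, double dy) {    // modifies the topmost matrix only
        double[] top = stack.peek();
        top[0] += dx; top[1] += dy;
    }
    double[] top() { return stack.peek(); }

    public static void main(String[] args) {
        MatrixStackSketch s = new MatrixStackSketch();
        s.translate(1, 0);       // global positioning
        s.push();                // enter a local frame
        s.translate(0, 5);       // local change
        System.out.println(Arrays.toString(s.top())); // [1.0, 5.0]
        s.pop();                 // local change discarded
        System.out.println(Arrays.toString(s.top())); // [1.0, 0.0]
    }
}
```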

Rendering simple primitives

State machine applies state to each vertex in sequence.

Need to tell GL what kind of primitive to render:

  • glBegin(GL_LINES)
  • glBegin(GL_LINE_STRIP)
  • glBegin(GL_TRIANGLES)
  • glBegin(GL_QUADS)
  • ...

Efficient rendering of primitives

Camera control

Scene graphs

Tree of scene elements where a child's transform is relative to its parent

Final transform of the child is the ordered product of all of its ancestors in the tree
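A minimal Java sketch of this, with translations standing in for the ordered matrix product (the node names are illustrative):

```java
// Toy scene graph: each node holds a transform relative to its parent; the
// world position of a node is the accumulation (in general, the ordered
// matrix product) of all the transforms from the root down to it.
public class SceneGraphSketch {
    double dx, dy;                 // translation relative to parent
    SceneGraphSketch parent;

    SceneGraphSketch(SceneGraphSketch parent, double dx, double dy) {
        this.parent = parent; this.dx = dx; this.dy = dy;
    }

    double[] worldPosition() {
        double x = 0, y = 0;
        for (SceneGraphSketch n = this; n != null; n = n.parent) {
            x += n.dx; y += n.dy;  // accumulate ancestor transforms
        }
        return new double[]{x, y};
    }

    public static void main(String[] args) {
        SceneGraphSketch root = new SceneGraphSketch(null, 10, 0);
        SceneGraphSketch arm  = new SceneGraphSketch(root, 0, 5);
        SceneGraphSketch hand = new SceneGraphSketch(arm, 2, 0);
        // Moving root would move arm and hand with it.
        System.out.println(java.util.Arrays.toString(hand.worldPosition())); // [12.0, 5.0]
    }
}
```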

OpenGL-ES

Very small memory footprint

Very low power consumption

OpenGL-ES 2.0+ emphasises shaders over software running on the phone processor.

Shaders move processing from the device's CPU to the GPU.

Where do shaders fit?

Local space -> World space -> Viewing space -> 3D screen space -> 2D screen space

3D screen space -> Process vertices -> Clipping, projection, backface culling -> Process pixels -> 2D screen space

Processing vertices:

  • computing diffuse shading / vertex
  • color / vertex
  • transforming vertex position
  • transforming texture co-ordinate

Processing pixels:

  • interpolating texture coordinates across polygon
  • interpolating normal for specular lighting
  • texture normal-mapping

Per vertex and per pixel behaviour can be overridden using shaders.

What's being acted on?

Shaders act on each vertex, and on each fragment ( a pixel or partial pixel ) interpolated between vertices.

Position, colour, depth and other values are interpolated across polygon and passed to each pixel fragment.
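The interpolation can be sketched with barycentric weights, a simplification of what the rasteriser does (illustrative Java, not GPU code):

```java
public class FragmentInterpolation {
    // Interpolate a per-vertex attribute at barycentric weights (w0, w1, w2),
    // where w0 + w1 + w2 = 1 -- the same scheme the rasteriser applies to
    // colour, depth and texture coordinates across a triangle.
    static double interpolate(double a0, double a1, double a2,
                              double w0, double w1, double w2) {
        return a0 * w0 + a1 * w1 + a2 * w2;
    }

    public static void main(String[] args) {
        // Depth values at the three vertices, sampled at an interior point.
        System.out.println(interpolate(3.0, 6.0, 9.0, 0.5, 0.25, 0.25)); // 5.25
    }
}
```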

What can shaders override?

Per vertex:

  • Vertex transformation
  • Normal transformation and normalization
  • Texture coordinate generation
  • Texture coordinate transformation
  • Lighting
  • Color material application

Per fragment (pixel):

  • Operations on interpolated values
  • Texture access
  • Texture application
  • Fog
  • Color summation
  • Optionally:
    • Pixel zoom
    • Scale and bias
    • Color table lookup
    • Convolution

Parallelisation

A really important aspect is that shaders are executed on the GPU, so many shader invocations run in parallel on its multiple processing units.

Vertex processor

Inputs:

  • Color
  • Normal
  • Position
  • Texture coord
  • Texture data
  • Modelview matrix
  • Material
  • Lighting
  • Custom variables

Outputs:

  • Color
  • Position
  • Custom variables

Fragment processor

Inputs:

  • Color
  • Texture coord
  • Fragment coord
  • Front facing
  • Texture data
  • Modelview matrix
  • Material
  • Lighting
  • Custom variables

Outputs:

  • Color
  • Depth

Communication

Types of shader parameter:

  • Uniform parameters. Constant throughout execution. Eg. surface colour
  • Attribute parameters. Set per vertex. Eg. local tangent
  • Varying parameters. Passed from vertex processor to fragment processor, interpolated across the polygon. Eg. transformed normal
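As a sketch in GLSL source, with illustrative names, the three parameter classes look like this:

```glsl
// Vertex shader ( illustrative declarations )
uniform vec3 SurfaceColour;     // uniform: set once, constant for the whole draw
attribute vec3 LocalTangent;    // attribute: supplied per vertex
varying vec3 TransformedNormal; // varying: written per vertex, interpolated
                                // across the polygon, read per fragment

void main() {
    TransformedNormal = gl_NormalMatrix * gl_Normal;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```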

Running shaders

Shaders replace all of the fixed functionality.

Need to replace:

  • transform into viewing coordinates
  • light each vertex
  • apply current interpolated color to fragments

Ambient light example

// Vertex Shader
void main() {
    // gl_ModelViewProjectionMatrix and gl_Vertex are standard inputs
    // gl_Position is a standard output
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment Shader
void main() {
    // gl_FragColor is a standard output.
    gl_FragColor = vec4(0.2, 0.6, 0.8, 1);
}

Diffuse lighting

Varying parameters are used to pass info from vertex shader to fragment shader.

// Vertex shader

varying vec3 Norm;
varying vec3 ToLight;

void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; // ModelViewProjectionMatrix - vertex, local -> perspective coordinates (for display)
    // Norm and ToLight are varying parameter. Automatically linearly interpolated
    // between vertices across every polygon.
    Norm = gl_NormalMatrix * gl_Normal; // NormalMatrix - normal, local -> eye coordinates
    ToLight = vec3(gl_LightSource[0].position - (gl_ModelViewMatrix*gl_Vertex)); // ModelViewMatrix - point, local -> eye coordinates
}

// Fragment Shader

varying vec3 Norm;
varying vec3 ToLight;

void main() {
    const vec3 DiffuseColor = vec3(0.2,0.6,0.8);
    // Exact diffuse illumination calculated from local normal. Phong shading
    // ( normally for specular highlights ) applied to diffuse lighting.
    float diff = clamp(dot(normalize(Norm),normalize(ToLight)),0.0,1.0);
    
    gl_FragColor = vec4(DiffuseColor*diff,1.0);
}

OpenGL Transformations

[http://www.songho.ca/opengl/gl_transform.html]

vertex data ->

object coordinates -> Model -> world coordinates -> View -> eye coordinates

eye coordinates -> Projection Matrix -> clip coordinates

clip coordinates -> Divide by w -> normalised device coords -> Viewport transform -> window coordinates
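The last two steps can be sketched numerically in Java (illustrative numbers, not a full projection matrix):

```java
public class ViewportSketch {
    // Clip coordinates -> NDC (divide by w) -> window coordinates, for a
    // width x height viewport anchored at the origin.
    static double[] clipToWindow(double x, double y, double z, double w,
                                 int width, int height) {
        double ndcX = x / w, ndcY = y / w;       // normalised device coords, in [-1, 1]
        double winX = (ndcX + 1) * 0.5 * width;  // viewport transform
        double winY = (ndcY + 1) * 0.5 * height;
        return new double[]{winX, winY};
    }

    public static void main(String[] args) {
        // A clip-space point with w = 2 in a 400x400 window.
        System.out.println(java.util.Arrays.toString(
            clipToWindow(1.0, -1.0, 0.5, 2.0, 400, 400))); // [300.0, 100.0]
    }
}
```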

GLSL Design Goals

To work well with OpenGL. To fit into design model of setting state then rendering data in context of the state.

Be hardware independent.

Inherent parallelization.

GLSL Implementation

Based on ANSI C, with some features borrowed from C++.

Basic types: int,float,bool

Vectors and matrices: vec2, mat2, vec3, mat3, vec4, mat4

Texture samplers for sampling multidimensional textures: sampler1D, sampler2D

New instances are built with constructors, as in C++.

Functions can be declared before being defined, and operators can be overloaded.

No pointers, strings, chars, unions, enums, bytes, shorts, longs or unsigned types. No switch statements.

No implicit casting, ie. float f = 1 is illegal - an int can't be implicitly cast to float.

Explicit type casting is done through constructors.
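A short GLSL sketch of constructors and explicit casts (the variable names are illustrative):

```glsl
void main() {
    int i = 1;
    // float f = i;        // illegal - no implicit int -> float cast
    float f = float(i);    // explicit cast via a constructor

    vec3 colour = vec3(0.2, 0.6, 0.8); // vector constructor
    vec4 withAlpha = vec4(colour, f);  // constructors can compose vectors

    gl_FragColor = withAlpha;
}
```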

Function parameters are labelled in, out or inout.

Functions are called by value-return: values are copied into and out of parameters at the start and end of calls.

(√) why? - for parallel execution

in - the argument's value is copied into the parameter when the function is called; changes to the parameter do not affect the caller and are not copied back out

out - the parameter is not initialised; on return its value is copied into the variable supplied by the caller

inout - both: initialised by the caller and copied back out to the caller on return
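A hypothetical GLSL helper (not from the lectures) showing all three qualifiers:

```glsl
// Illustrative function: computes a simple diffuse intensity and darkens a
// colour by it.
void shade(in vec3 normal,      // copied in; local changes are not copied back
           out float diffuse,   // arrives uninitialised; copied out on return
           inout vec3 colour) { // copied in and copied back out
    diffuse = max(dot(normalize(normal), vec3(0.0, 0.0, 1.0)), 0.0);
    colour = colour * diffuse;
}
```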

GLSL API

  1. Create shaders objects - glCreateShader
  2. Load source code, as text, into shader - glShaderSource
  3. Compile shader - glCompileShader
  4. Create program object - glCreateProgram
  5. Bind shaders to program - glAttachShader
  6. Link program - glLinkProgram
  7. Register program - glUseProgram

Particle systems

Can represent a particle system with a position and velocity for every particle.

Particles can be stored in texture memory, which shaders can write to.

Geometry shaders

Run after vertex shaders. Can generate new primitives ( vertices, vertex strips and grouped vertices ).

Introduced in DirectX 10 and OpenGL 3.2.
