Introduction

Qt’s support for OpenGL has now been extended to provide access to the OpenGL Core profile. When using the Core profile, all access to the legacy fixed-functionality pipeline is removed. This means that to get anything drawn on screen we have to make use of GLSL shaders and vertex arrays or buffer objects.

The OpenGL Core profile is available when using Qt 4.8.0 or newer and OpenGL 3.1 or newer. Since Qt 4.8.0 has not yet been released you will need to get a development version of Qt. The easiest way to do this is to get it from the gitorious repository.

A complete copy of the source code for this tutorial can be obtained by doing:

  svn co https://svn.theharmers.co.uk/svn/codes/public/opengl/trunk opengl
  cd opengl/07-core-profile

Why would we want to use the OpenGL Core profile? Well, for a start, OpenGL 3.0 deprecated much of the old fixed-functionality pipeline entry points. Yes, at present these are still available when using the Compatibility profile in order to keep old applications working. However, many of these deprecated functions encourage poor or outdated practices. For example, it is much more efficient to use vertex arrays, or better still vertex buffer objects, to send geometry to the OpenGL pipeline than the old glVertex family of functions. The same is true for all other per-vertex attributes too (e.g. normals, texture coordinates, colours etc.).

Using the Core profile also means that the OpenGL driver has to track far fewer states per context. Instead, the developer is responsible for configuring which state their shaders care about, and this is all passed in by means of a much simpler and more consistent set of functions.

The Khronos Group, which oversees the OpenGL specification, recommends using the Core profile in new OpenGL applications.

Some OpenGL drivers (e.g. NVIDIA) may incur a small performance penalty when using the Core profile, as internally this enables checks to see whether a feature should be allowed or not. So to get the very best performance, one approach is to develop your app using only the Core profile, but then for release to build and test it using the Compatibility profile. This way you can be sure that you are using only non-deprecated features whilst still getting the very best performance.

This is known to work under Linux, but Windows and Mac OS X have some issues inside Qt when creating Core profile contexts.

Specifying the OpenGL Format

The first stage in being able to use the OpenGL Core profile is to prepare a QGLFormat object that describes the OpenGL context we would like to use. The following simple main function does just that:

  #include <QApplication>
  #include <QGLFormat>

  #include "glwidget.h"

  int main( int argc, char* argv[] )
  {
      QApplication a( argc, argv );

      // Specify an OpenGL 3.3 format using the Core profile.
      // That is, no old-school fixed pipeline functionality
      QGLFormat glFormat;
      glFormat.setVersion( 3, 3 );
      glFormat.setProfile( QGLFormat::CoreProfile ); // Requires >=Qt-4.8.0
      glFormat.setSampleBuffers( true );

      // Create a GLWidget requesting our format
      GLWidget w( glFormat );
      w.show();

      return a.exec();
  }

We first create a QApplication as usual. We then create a QGLFormat object and set it to OpenGL version 3.3 (the newest that my card and driver combination supports). We then request the Core profile and, for nicer looking results, we also ask to enable multi-sampling. We then pass the glFormat object through to the constructor of our custom subclass of QGLWidget, GLWidget (yes, imaginative I know). Finally we show the widget and enter the event loop.

The GLWidget Class Declaration

Here is the declaration of the simple class we will use to demonstrate usage of the OpenGL Core profile:

  #ifndef GLWIDGET_H
  #define GLWIDGET_H

  #include <QGLWidget>

  #include <QGLBuffer>
  #include <QGLShaderProgram>

  class GLWidget : public QGLWidget
  {
      Q_OBJECT
  public:
      GLWidget( const QGLFormat& format, QWidget* parent = 0 );

  protected:
      virtual void initializeGL();
      virtual void resizeGL( int w, int h );
      virtual void paintGL();

      virtual void keyPressEvent( QKeyEvent* e );

  private:
      bool prepareShaderProgram( const QString& vertexShaderPath,
                                 const QString& fragmentShaderPath );

      QGLShaderProgram m_shader;
      QGLBuffer m_vertexBuffer;
  };

  #endif // GLWIDGET_H

We inherit a class from QGLWidget as normal. Note that the constructor accepts a constant reference to a QGLFormat. We override the initializeGL(), resizeGL(), and paintGL() functions to provide our custom functionality. For convenience we also override the keyPressEvent() function so that the Escape key quits the application.

The prepareShaderProgram() function is a simple wrapper that takes care of loading the vertex and fragment shader source, compiling the shaders, and linking them into a functional shader program. The shader program is stored in the m_shader member. The m_vertexBuffer member, as its name suggests, encapsulates an OpenGL vertex buffer that holds the vertex data for our geometry.

The GLWidget Class Implementation

Initialisation

The constructor is very simple:

  GLWidget::GLWidget( const QGLFormat& format, QWidget* parent )
      : QGLWidget( format, parent ),
        m_vertexBuffer( QGLBuffer::VertexBuffer )
  {
  }

We pass the requested QGLFormat object through to the QGLWidget constructor along with the usual pointer to the parent. QGLWidget tries its best to supply a QGLContext that matches our requested format. If it is unable to get an exact match it tries to create a close approximation. You can explicitly check the created OpenGL context properties by way of the QGLWidget::format() function.

We also initialise the QGLBuffer object by telling it that we wish to use it to store vertex data.

Following construction, Qt calls the initializeGL() function to allow us to do any OpenGL initialisation. Note that this function is only called once, so only do setup here that should persist for as long as the widget does.

  void GLWidget::initializeGL()
  {
      QGLFormat glFormat = QGLWidget::format();
      if ( !glFormat.sampleBuffers() )
          qWarning() << "Could not enable sample buffers";

      // Set the clear color to black
      glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );

      // Prepare a complete shader program...
      if ( !prepareShaderProgram( ":/simple.vert", ":/simple.frag" ) )
          return;

      // We need us some vertex data. Start simple with a triangle ;-)
      float points[] = { -0.5f, -0.5f, 0.0f, 1.0f,
                          0.5f, -0.5f, 0.0f, 1.0f,
                          0.0f,  0.5f, 0.0f, 1.0f };
      m_vertexBuffer.create();
      m_vertexBuffer.setUsagePattern( QGLBuffer::StaticDraw );
      if ( !m_vertexBuffer.bind() )
      {
          qWarning() << "Could not bind vertex buffer to the context";
          return;
      }
      m_vertexBuffer.allocate( points, 3 * 4 * sizeof( float ) );

      // Bind the shader program so that we can associate variables from
      // our application to the shaders
      if ( !m_shader.bind() )
      {
          qWarning() << "Could not bind shader program to context";
          return;
      }

      // Enable the "vertex" attribute to bind it to our currently bound
      // vertex buffer.
      m_shader.setAttributeBuffer( "vertex", GL_FLOAT, 0, 4 );
      m_shader.enableAttributeArray( "vertex" );
  }

We begin by checking that we do in fact have an OpenGL context with multi-sampling enabled. We then set the clear colour to black.

Next we call the prepareShaderProgram() passing in the paths of the vertex and fragment shader sources. In this case the sources are in the project’s resource file. We will look at this function in more detail shortly.

The next step is to define a simple array of floats representing the vertices of our geometry – a single triangle. We then ask the QGLBuffer object to actually create the underlying OpenGL buffer object and we tell it the intended usage pattern for this buffer. In this case we will never be changing the vertices, so QGLBuffer::StaticDraw is a sensible choice. The next step is to bind the buffer to the OpenGL context and insert the actual vertex data into it. The above is likely to result in the vertex data being uploaded to the dedicated graphics memory of your GPU. I say likely, as the final choice of where to locate the data is left to the OpenGL driver.

Now that the OpenGL driver knows about our vertex buffer we can associate it with a variable in the shader program. As we will see shortly, the vertex shader contains an input variable called “vertex”. The final part of the initializeGL() function tells the OpenGL context and driver that we wish to use the currently bound vertex buffer as the “vertex” variable in the shader, what type the buffer contains (GL_FLOAT), and how many components make up each vertex (4 in this case).

On some setups (Windows 7, NVIDIA 9800 GT using drivers 285.62), it is required to bind a Vertex Array Object (VAO) before setting up the attributes. This behaviour is part of the OpenGL 3.3 Core profile. This is done by calling:

      GLuint vao;
      glGenVertexArrays( 1, &vao );
      glBindVertexArray( vao );

Unfortunately, these functions are not loaded by Qt, so they must be accessed using a GL loader library or by resolving them manually, e.g. with wglGetProcAddress on Windows.
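One way to avoid an external loader is Qt's own QGLContext::getProcAddress(). The following sketch illustrates the idea; note that the helper name createVertexArrayObject() and the hand-written function-pointer typedefs are my own, and it assumes the context is current (e.g. called from initializeGL()):

```cpp
#ifndef APIENTRY
#define APIENTRY // only needed for the stdcall calling convention on 32-bit Windows
#endif

// Hand-written function-pointer types for the two entry points we need;
// a loader library would normally provide these for you.
typedef void (APIENTRY *PFN_GLGENVERTEXARRAYS)( GLsizei, GLuint* );
typedef void (APIENTRY *PFN_GLBINDVERTEXARRAY)( GLuint );

void GLWidget::createVertexArrayObject() // hypothetical helper
{
    PFN_GLGENVERTEXARRAYS genVertexArrays =
        (PFN_GLGENVERTEXARRAYS) context()->getProcAddress( "glGenVertexArrays" );
    PFN_GLBINDVERTEXARRAY bindVertexArray =
        (PFN_GLBINDVERTEXARRAY) context()->getProcAddress( "glBindVertexArray" );

    if ( !genVertexArrays || !bindVertexArray )
    {
        qWarning() << "Could not resolve the VAO entry points";
        return;
    }

    // Create a VAO and bind it so that subsequent attribute setup is legal
    GLuint vao;
    genVertexArrays( 1, &vao );
    bindVertexArray( vao );
}
```

This keeps the example free of third-party dependencies, at the cost of writing the typedefs out by hand.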

Here is the prepareShaderProgram() function mentioned above:

  bool GLWidget::prepareShaderProgram( const QString& vertexShaderPath,
                                       const QString& fragmentShaderPath )
  {
      // First we load and compile the vertex shader...
      bool result = m_shader.addShaderFromSourceFile( QGLShader::Vertex, vertexShaderPath );
      if ( !result )
          qWarning() << m_shader.log();

      // ...now the fragment shader...
      result = m_shader.addShaderFromSourceFile( QGLShader::Fragment, fragmentShaderPath );
      if ( !result )
          qWarning() << m_shader.log();

      // ...and finally we link them to resolve any references.
      result = m_shader.link();
      if ( !result )
          qWarning() << "Could not link shader program:" << m_shader.log();

      return result;
  }

All this function does is load and compile the source code for the vertex and fragment shaders respectively and then link them together into a complete shader program, handily encapsulated in a QGLShaderProgram. Individual shaders are analogous to compilation units in C/C++ and must go through a final linking stage to make a functional binary, or in this case a shader program. The linking stage, amongst other things, ensures that the variables used to interface between the vertex and fragment shaders match up.

The final part of the OpenGL initialisation is the implementation of the resizeGL() function:

  void GLWidget::resizeGL( int w, int h )
  {
      // Set the viewport to window dimensions
      glViewport( 0, 0, w, qMax( h, 1 ) );
  }

Qt calls this function for us in response to a resize event. In our case all we need to do is adjust the OpenGL viewport transformation, which maps normalised device coordinates to window coordinates after the vertex processing stages, so that our viewport fills the available space.

Note that in this simple example we are not using any custom perspective or orthographic projection transformations. Instead we are relying on the default OpenGL behaviour that maps the rectangle (-1, -1) – (1, 1) to the viewport.

Drawing

Now we are ready to draw something! Note that following the above initializeGL() function we still have our shader program and vertex buffer bound to the OpenGL context, so there is no need to rebind them each time we draw. This results in a trivial paintGL() function:

  void GLWidget::paintGL()
  {
      // Clear the buffer with the current clearing color
      glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

      // Draw stuff
      glDrawArrays( GL_TRIANGLES, 0, 3 );
  }

All we do is clear the colour and depth buffers and tell OpenGL to draw our currently bound vertex buffer using the currently bound shader program. What could be easier? ;-)

Miscellaneous

The only remaining part of the C++ implementation is the keyPressEvent() handler.

  void GLWidget::keyPressEvent( QKeyEvent* e )
  {
      switch ( e->key() )
      {
          case Qt::Key_Escape:
              QCoreApplication::instance()->quit();
              break;

          default:
              QGLWidget::keyPressEvent( e );
      }
  }

This just makes the application quit when the escape key is pressed.

The Shaders

The Vertex Shader

The vertex shader is simplicity itself:

  #version 330

  in vec4 vertex;

  void main( void )
  {
      gl_Position = vertex;
  }

It begins with a pre-processor directive telling the GLSL compiler that it requires GLSL version 330, which corresponds to OpenGL 3.3. We then declare an input variable of type vec4 (a 4D vector, as you might expect). This is the variable to which we linked the vertex buffer object at the end of the initializeGL() function.

The shader entry point is the main() function. This is called once per vertex. So in this simple example it will get called 3 times per redraw, once for each vertex of our triangle. All it does is assign the vertex coordinates to the built-in (implicitly declared) variable gl_Position.

Following execution of the vertex shader, the OpenGL driver performs some fixed functionality such as primitive assembly and rasterisation. The output of the rasterisation stage is a stream of “fragments”. A fragment is a data structure corresponding to a pixel plus some additional data. These fragments are then operated on by the fragment shader.

The Fragment Shader

The fragment shader is also simple:

  #version 330

  layout(location = 0, index = 0) out vec4 fragColor;

  void main( void )
  {
      fragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
  }

Once again we have the pre-processor directive as in the vertex shader.

Next we declare the output variable, fragColor, that will hold the colour for this fragment and that will be passed on to the last parts of the OpenGL pipeline (depth testing, scissor testing, blending etc.). The layout() qualifier just tells OpenGL which output buffer and blending index the output of this fragment shader maps to.

The entry-point is once again the main() function which simply forces each fragment to an opaque red colour.

Building and Running

The example application can be built by doing the usual:

  qmake && make

or by using Qt-Creator. Upon running the application you should see the following on screen:

Core Profile Example [gallery.theharmers.co.uk]
