FAQ: wxWidgets and OpenGL


Post by Manolo » Fri Feb 08, 2019 2:09 am

People often get confused by OpenGL. They think it's like any other system library. And worse, they think the IDE they use provides OpenGL like any other common set of functions.
I understand that without previous knowledge such misunderstandings are natural. Here are some tips.

Q. What are "display attributes"?
A. The configuration of the display (the window): RGBA colors, number of bits of the depth buffer, samples for multi-sampling, etc.
Some old settings -like palette colors, color buffer size, auxiliary buffers, etc.- still exist, but only for old contexts (see below).
In MS Windows they are called "pixelformat"; in Unix X11, "config".

Q. What is a so-called "context"?
A. It defines the relation between the graphics card driver and the thread where gl-commands are called.
Think of it as a big "struct" in C/C++ parlance. A lot of "states" are stored there: coloring, depth, blending on/off, shader info, query parameters, etc.
Setting a context is a must. No context, no OGL.

Q. Are there types of contexts?
A. Yes. From the OGL beginning (1992) up to "modern" OGL (2009) there was just one; no types. But since 2009 there are the so-called "Profiles" and "attributes". A "Core Profile" means that most of the old (fixed-pipeline) commands are removed, and that some things are now required: VAOs, VBOs, shaders. A "Compatibility Profile" means you can still use the old style.
And you can add a "Debug" setting, or "Robust" setting, and a few more.

Q. About setting a context as "current". And multi-threading.
A. Although a graphics card internally splits the job across hundreds (or more) of cores, OGL was not designed with multi-threading in mind. So only one context-with-a-thread binding can be current (read: active) at a time. The gl-commands your app calls in a thread will be executed by the context currently bound to that thread. Calling gl-commands from different threads at once for the same context is an error.
Even if you use a single context for several windows, before any gl-command set that context as current for the window you want to render to.

An X11 issue is that the context cannot be set as current for a window that is not "realized". X11 is asynchronous, which means that if you try to create a context and set it current in the wxGLCanvas ctor, you may get an error. Instead, delay it a bit; perhaps until the first size-event, or the first paint-event.

Q. About "Swap Buffers"
A. Normally the GPU renders into a buffer that is not shown on screen, which avoids a lot of flickering. When the job is complete, you want to see the final picture in the window, i.e. in a buffer that is shown. Swapping the "front" and "back" buffers achieves this.
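
As a sketch of how set-as-current and buffer swapping show up in practice, here is the usual shape of a wxGLCanvas paint handler. MyCanvas and m_context are this example's own names (not wx API); SetCurrent() and SwapBuffers() are the wxGLCanvas methods mentioned above.

```cpp
// Typical wxGLCanvas paint handler (a sketch; MyCanvas/m_context are
// this example's own names).
#include <wx/wx.h>
#include <wx/glcanvas.h>

void MyCanvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    wxPaintDC dc(this);        // wx requires a DC even for GL windows
    SetCurrent(*m_context);    // bind the context to this window and thread

    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ...the rest of the gl-commands go here...

    SwapBuffers();             // show the finished back buffer in the window
}
```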

Display attributes, context creation and set-as-current, and buffer swapping are not OpenGL API calls. They are Operating System calls. And here is where wxWidgets does its job: it creates a common API, hiding the implementation differences.
Beyond this, wx provides almost nothing (some checks for display, context, and "extensions"). The rest is up to you.

Q. GLFW, GLEW, GLU, GLUT, SDL, GLM, and more and more libraries I found on the Internet...
A. GLFW and SDL are libraries like wxWidgets: much less featured, but specialized in OpenGL. I don't think mixing them with wx is a good idea, especially for window and context handling; but YMMV.
GLUT (and its successor FreeGLUT) is like GLFW, but only for OGL before 3.
GLU is for old, fixed-pipeline OGL. It provides some utilities for matrices, some complex primitives (cylinder, sphere, etc.), NURBS and more. You cannot use it with a Core Profile.
GLM is a math lib, very useful.
GLEW deserves its own Q&A.

Q. About GLEW, the "extensions wrangler". OGL function pointers.
A. At the OGL beginning and a little later (OGL 1.1) all the gl-commands were directly callable; the OS provided them. But then the vendors began to add features to their GPUs. These are the so-called "extensions".
The extensions "live" in the GPU driver. For example, in MS Windows each vendor ships its own driver DLL, which the system's opengl32.dll forwards to.
In order to use them in your app you must retrieve each function pointer from the driver. But, normally, before asking for the pointer you check whether the extension is supported. A supported extension #defines something like GL_EXT_texture3D, and the driver also provides a way of checking for a name inside the whole supported-extensions string list.
Many extensions (EXT and ARB for any vendor, or vendor-specific ones like AMD, NV, etc.) have been incorporated into the official OGL API.

The thing is that any function beyond OGL 1.1 must be retrieved from the driver (*1). There are a few hundred official gl-commands, and many extensions. GLEW retrieves them all (you may see some delay during this all-in-a-row task).
For some OS versions, let's say before 2010, the pointers were bound to a context: several contexts needed different function pointers even for the same gl-command. Nowadays this is not true any more.
What still holds is that you need to set a context as current before retrieving the function pointers.
The Khronos Group defines the official OpenGL standards. It also provides additional headers with #defines and typedefs for all parameters, constants, types, and functions beyond OGL 1.1. See the OpenGL Registry in the Documentation section at opengl.org.

You can avoid GLEW (or GLAD, or any similar library) by coding the retrieval on your own. This is what wx/samples/opengl/pyramid does, only because GLEW is a huge library and normally just a small subset is needed.
(*1) The needed code is platform-dependent. Apple does not require this retrieval; its OS provides the gl-commands directly, but it isn't harmful to still retrieve them.
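
On MS Windows the retrieval uses wglGetProcAddress; a sketch of what GLEW/GLAD automate, for one function (the typedef name follows the convention of the official glext.h; a context must be current in this thread first):

```cpp
// MS Windows only. Sketch of manual function-pointer retrieval.
#include <windows.h>
#include <GL/gl.h>

typedef void (APIENTRY *PFNGLGENBUFFERSPROC)(GLsizei n, GLuint* buffers);
static PFNGLGENBUFFERSPROC myGlGenBuffers = nullptr;

bool LoadGenBuffers()
{
    // Only valid while a context is current in the calling thread.
    myGlGenBuffers = reinterpret_cast<PFNGLGENBUFFERSPROC>(
        wglGetProcAddress("glGenBuffers"));
    return myGlGenBuffers != nullptr;  // nullptr: not provided by this driver
}
```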

Q. How do I set up an OpenGL environment with wxWidgets?
A. The OS's gl.h header is required. It's already included by wx's glcanvas.h, so just include this wx header. For OGL > 1.1, glext.h (official) or a subset of it (like in the wx pyramid sample) is also required.
The wx gl lib (e.g. in MSW the file xxxx_gl.[a][lib][dll]) must be linked, unless you use the monolithic build of wx, which incorporates everything.
The system gl lib must also be linked. In Linux it's "GL". In MSWindows it is opengl32 (which serves both the 32- and 64-bit versions). For Apple, add the -framework OpenGL flag to the compiler.
Don't forget to tell your compiler and linker the search paths for these headers and libs.
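
For example, on Linux a build line might look like this (a sketch; it assumes wx-config is in your PATH, and library names vary by platform and wx build):

```shell
# wx-config emits the right -I/-L/-l flags for the wx libs (including the
# gl one); -lGL links the system OpenGL library.
g++ main.cpp $(wx-config --cxxflags --libs gl,core,base) -lGL -o myapp
```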

Q. Can I use OpenGL without wxGLCanvas?
A. Yes. Just use a wxWindow and retrieve its handle. But then you need to implement everything wxGLCanvas does on your own. Good luck.

Q. Which OpenGL version is available on my machine?
A. There's no direct query. First you must create a context. If that succeeds, you can use glGetString(GL_VERSION) or other calls. See https://www.khronos.org/opengl/wiki/Ope ... ion_number
You can test whether the context creation succeeded with wxGLContext::IsOK(). If not, try a lower version.
Apple stopped providing OGL beyond 4.1; they prefer their own "Metal" API.
But the point here is: what are you going to do if a specific version is not available? Will your app stop? Will you have different code for different OGL versions?
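
One common strategy is to try a modern context first and fall back to a legacy one. A sketch using the wx >= 3.1 attributes API (CreateBestContext is this example's own name; wxGLContextAttrs, CoreProfile, OGLVersion, EndList and IsOK are the wx names):

```cpp
#include <wx/glcanvas.h>

// Try a 3.3 core context; fall back to a default (legacy) context.
wxGLContext* CreateBestContext(wxGLCanvas* canvas)
{
    wxGLContextAttrs attrs;
    attrs.PlatformDefaults().CoreProfile().OGLVersion(3, 3).EndList();

    wxGLContext* ctx = new wxGLContext(canvas, nullptr, &attrs);
    if ( !ctx->IsOK() )
    {
        delete ctx;
        ctx = new wxGLContext(canvas);  // default attributes, legacy context
    }
    return ctx;
}
```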

Q. Can you explain how OpenGL flows?
A. Only roughly; the topic is too large.

An object (well, for now let's consider only its surface) can be decomposed into triangles. A cube has 6 faces, 2 triangles per face, 12 triangles total. A tetrahedron has 4 triangles.
A curved surface (cylinder, sphere, ...) cannot be exactly decomposed. But using many small triangles the surface is approximated very well. A line or a point doesn't need to be decomposed. Points, segments and triangles are what OGL uses. They are called "primitives".

Let's say you have a triangle. You know its local coordinates, meaning that they are values in a local XYZ axis system. We want to draw the 3D triangle in a 2D window.
The triangle is positioned in the world, where the rest of the triangles also live, forming a "model". This is achieved by rotations, translations, scalings, and other "transformations".
These transformations can be defined by several 4x4 matrices (with a 3x3 matrix, the translation cannot be represented). And those matrices can be combined into a single matrix by matrix multiplication (which is not commutative; order matters!).
You (the "eye" or the "camera") are positioned somewhere and head in some direction. So the coordinates of the triangle in the view-system, as seen from the camera, are different from those in the world-system. Another transformation, a matter of some rotations and a translation; a 4x4 matrix again.
The view-system is projected into a 2D space (well, at first it's also a 3D space). This projection can be orthogonal or perspective. Each one has its own 4x4 matrix.
You can combine the local, model, view, and projection transformations into a single 4x4 matrix. When you multiply the final matrix MVP with the xyz1 coordinates of a point in local coordinates, you get XYZW coordinates. After dividing by that W you get "normalized device coordinates" (NDC), which all lie in a cube of size 2x2x2. This will be scaled to fit into the window size.
NDC data are stored by the GPU in two buffers: one with the XY data; the Z is stored in the "depth buffer". The granularity of this depth buffer is one of the display attributes mentioned before; a typical value is 24 bits. When a "fragment" (see below) is calculated, its Z value is compared with the one currently at the same NDC XY position, and the color is replaced depending on some parameters ("nearer to camera is preferred", "replace always", "blend with current", etc.).

So, when you pass the 3 vertices of the triangle and the MVP matrix (or its M, V, P matrices), the GPU can calculate the final NDC coordinates of the 3 vertices. What about the edges and the inner part of the triangle? The GPU calculates the proper "fragments" (think of them as pixels) and interpolates each XYZ coordinate for each fragment from the 3 NDC coordinates previously calculated.

Lighting is a matter of changing the color of a fragment depending on some parameters, like light direction, distance and color. Lighting is what makes your picture look more "real". Sometimes it's needed just to distinguish two faces of the same color.

So, we have "vertex-processing" and "fragment-processing". We can also think about "primitive-processing".
Old OGL has a fixed pipeline; those "processings" were fixed. Modern OpenGL works with "shaders". These are programs that live in the GPU and are compiled and executed by the GPU (not the CPU). Their language (GLSL) is very similar to C, but it isn't C; there is no pointers business. Thus, you can program the pipeline. You MUST write the shaders; there's no other way.

A vertex shader ("VS") is fed with vertex "attributes" (coords, colors, normals, etc. for each vertex) stored in a buffer. OGL > 2 uses VBOs, which are generic buffers. The good thing about VBOs is that they are stored in GPU RAM; there is no need to transfer the data from RAM to GPU RAM again and again for each frame. You can also update just part of a VBO.
The relation between a VS and the VBO (how to read the VBO) is stored in a part of the OGL context state called a VAO. You can have several VBOs and VAOs.
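
A sketch of the usual one-attribute VAO/VBO setup for a single triangle (OGL >= 3.0; assumes a current context and already-retrieved function pointers, e.g. via GLEW):

```cpp
GLuint vao = 0, vbo = 0;
const float verts[] = { -0.5f,-0.5f,0.0f,  0.5f,-0.5f,0.0f,  0.0f,0.5f,0.0f };

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);                 // the VAO records the setup below

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);     // upload the vertices into GPU RAM
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

// How attribute 0 of the VS reads the VBO: 3 floats per vertex, tightly packed
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
```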

For the whole set of triangles of an object instance you use the same MVP matrix. Another object (even a repetition of the same object, but with different transformations) may require a different MVP matrix. You see, this matrix is used for every set of primitives of one or several objects, for every glDrawXXX command (XXX means there are several different commands). Instead of storing the matrix in a VBO (it's not a per-vertex attribute), you may pass it directly to the shader by using the glUniformXXX commands.
OK, we have VBOs, VAOs, and shaders. Different objects may use a different VAO & VBO with the same "program". A program is made of a VS and a fragment shader (FS); but there are also other shaders (geometry, tessellation, compute).

Q. When do I do rendering?
A. Whenever you want. The GPU will not show anything in the window until you call SwapBuffers (unless you set some rare buffer target).
Of course, if you change the camera or the model then you need to update and render again. Changes may be due to a user action, a timer, or a refresh required by another window [un]covering your canvas.

Q. How to render text?
A. By using a special buffer type called a "texture", which can be "sampled" by texture coordinates instead of by typical index access, like you would do with a std::array. The texture can then be mapped onto a triangle. This is very useful for painting images; and a text can be rendered as an image. The pyramid sample shows this technique.

Q. Can I mix OGL and wx controls?
A. Normally, no. The OS and the GPU will fight for the window.
But you can render to a not-shown buffer (GL_BACK, or better an FBO) without SwapBuffers, then read the picture with glReadPixels, and set it as the background of the window.

Q. Some good OpenGL tutorials or books?
A. These are some of my favourites:
++ Learning Modern 3D Graphics Programming https://paroj.github.io/gltut/
++ https://learnopengl.com/
++ http://www.opengl-tutorial.org/
++ http://openglbook.com/
++ OpenGL Programming Guide


Re: FAQ: wxWidgets and OpenGL

Post by jeffdc1979 » Thu Mar 14, 2019 7:20 pm

This was very helpful to read. I am still slightly confused about how to access some core OpenGL extensions when setting up the wxGLCanvas and wxGLContext.

I am trying to have my wxApp disable vsync using WGL_EXT_swap_control and wglSwapIntervalEXT(0), and I can't seem to figure out how to achieve this. Could you provide information on how to do this while still keeping most of my wxWidgets functionality intact?
