Jason - asked 1 year ago
C++ Question

Purpose of binding points in OpenGL?

I don't understand what the purpose is of binding points (such as GL_ARRAY_BUFFER) in OpenGL. To my understanding, glGenBuffers creates a sort of pointer to a vertex buffer object located somewhere within GPU memory.


glGenBuffers(1, &bufferID)

means I now have a handle, bufferID, to one vertex buffer object on the graphics card. Now I know the next step would be to bind bufferID to a binding point

glBindBuffer(GL_ARRAY_BUFFER, bufferID)

so that I can use that binding point to send data down using the glBufferData function like so:

glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW)

But why couldn't I just use the bufferID to specify where I want to send the data instead? Something like:

glBufferData(bufferID, sizeof(data), data, GL_STATIC_DRAW)

Then when calling a draw function, I would also just pass in the ID of whichever VBO I want the draw function to draw. Something like:

glDrawArrays(bufferID, GL_TRIANGLES, 0, 3)

Why do we need the extra step of indirection with glBindBuffer?

Answer

OpenGL uses object binding points for two things: to designate an object to be used as part of a rendering process, and to be able to modify the object.

Why it uses them for the former is simple: OpenGL requires a lot of objects to be able to render.

Consider your overly simplistic example:

glDrawArrays(bufferID, GL_TRIANGLES, 0, 3)

That API doesn't let me have separate vertex attributes come from separate buffers. Sure, you might then propose glDrawArrays(GLint count, GLuint *object_array, ...). But how do you connect a particular buffer object to a particular vertex attribute? Or how do you have 2 attributes come from buffer 0 and a third attribute from buffer 1? Those are things I can do right now with the current API. But your proposed one can't handle it.
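For illustration, here is how the current (pre-DSA) API handles exactly that case. This is a sketch: posBuf and colorBuf are assumed buffer names, and attribute locations 0 through 2 are assumed. The key point is that each glVertexAttribPointer call captures whichever buffer happens to be bound to GL_ARRAY_BUFFER at that moment:

```cpp
// Two attributes sourced from one interleaved buffer, a third from another.
glBindBuffer(GL_ARRAY_BUFFER, posBuf);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float),
                      (void*)0);                            // positions
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float),
                      (void*)(3 * sizeof(float)));          // normals
glBindBuffer(GL_ARRAY_BUFFER, colorBuf);
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, 0, (void*)0); // colors

glDrawArrays(GL_TRIANGLES, 0, 3);  // note: no buffer ID in the draw call at all
```

The association between attribute and buffer lives in the vertex array state, not in the draw call, which is why the draw call stays the same no matter how many buffers feed it.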

And even that is putting aside the many other objects you need to render: program/pipeline objects, texture objects, UBOs, SSBOs, transform feedback objects, query objects, etc. Having all of the needed objects specified in a single command would be fundamentally unworkable (and that leaves aside the performance costs).

And every time the API would need to add a new kind of object, you would have to add new variations of the glDraw* functions. And right now, there are over a dozen such functions. Your way would have given us hundreds.

So instead, OpenGL defines ways for you to say "the next time I render, use this object in this way for that process." That's what binding an object for use means.

But why couldn't I just use the bufferID to specifiy where I want to send the data instead?

This is about binding an object for the purpose of modifying the object, not saying that it will be used. That is... a different matter.

The obvious answer is, "You can't do it because the OpenGL API (until 4.5) doesn't have a function to let you do it." But I rather suspect the question is really why OpenGL doesn't have such APIs (until 4.5, where glNamedBufferStorage and such exist).
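To make the contrast concrete, here is a sketch of the two styles side by side (data is an assumed array; a valid 4.5 context is assumed):

```cpp
// GL 4.5 / ARB_direct_state_access: modify a buffer by name, no binding needed.
GLuint buf;
glCreateBuffers(1, &buf);                         // DSA creation: object exists immediately
glNamedBufferStorage(buf, sizeof(data), data, 0); // immutable storage, specified by name

// Pre-4.5 equivalent: bind to a target, then modify "whatever is bound".
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);
```

In the DSA version, GL_ARRAY_BUFFER never enters into it; the target only matters when the buffer is later used.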

Indeed, the fact that 4.5 does have such functions proves that there is no technical reason for pre-4.5 OpenGL's bind-object-to-modify API. It really was a "decision" that came about by the evolution of the OpenGL API from 1.0, thanks to following the path of least resistance. Repeatedly.

Indeed, just about every bad decision that OpenGL has made can be traced back to taking the path of least resistance in the API. But I digress.

In OpenGL 1.0, there was only one kind of object: display list objects. That means that even textures were not stored in objects. So every time you switched textures, you had to re-specify the entire texture with glTexImage*D. That means re-uploading it. Now, you could (and people did) wrap each texture's creation in a display list, which allowed you to switch textures by executing that display list. And hopefully the driver would realize you were doing that and instead allocate video memory and so forth appropriately.
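The display-list trick looked something like this (a sketch; width, height, and pixels are assumed, and the driver-side caching is exactly the part you had to hope for):

```cpp
// GL 1.0-era workaround: wrap the texture's full specification in a
// display list, then "switch textures" by replaying the list.
GLuint texList = glGenLists(1);
glNewList(texList, GL_COMPILE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);  // full re-specification
glEndList();

// Later, instead of re-uploading by hand each time:
glCallList(texList);
```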

So when 1.1 came around, the OpenGL ARB realized how mind-bendingly silly that was. So they created texture objects, which encapsulate both the memory storage of a texture and the various state within. When you wanted to use the texture, you bound it. But there was a snag. Namely, how to change it.

See, 1.0 had a bunch of already existing functions like glTexImage*D, glTexParameter and the like. These modify the state of the texture. Now, the ARB could have added new functions that do the same thing but take texture objects as parameters.

But that would mean dividing all OpenGL users into 2 camps: those who used texture objects and those who did not. It meant that, if you wanted to use texture objects, you had to rewrite all of your existing code that modified textures. If you had some function that made a bunch of glTexParameter calls on the current texture, you would have to change that function to call the new texture object function. But you would also have to change the function of yours that calls it so that it would take, as a parameter, the texture object that it operates on.

And if that function didn't belong to you (because it was part of a library you were using), then you couldn't even do that.

So the ARB decided to keep those old functions around and simply have them behave differently based on whether a texture was bound to the context or not. If one was bound, then glTexParameter/etc would modify the bound texture, rather than the context's normal texture.
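In code, the 1.1 compromise looks like this (tex is an assumed texture object name):

```cpp
// The same 1.0-era entry point now acts on whatever is bound.
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// With tex bound, this modifies tex; in 1.0, the identical call would
// have modified the context's single built-in texture.
```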

This one decision established the general paradigm shared by almost all OpenGL objects.

ARB_vertex_buffer_object used this paradigm for the same reason. Notice how the various gl*Pointer functions (glVertexAttribPointer and the like) work in relation to buffers. You have to bind a buffer to GL_ARRAY_BUFFER, then call one of those functions to set up an attribute array. When a buffer is bound to that slot, the function will pick that up and treat the pointer as an offset into the buffer that was bound at the time the *Pointer function was called.
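A sketch of that dual behavior (buf is an assumed buffer name, clientArray an assumed client-side array):

```cpp
// With a buffer bound, the "pointer" argument is reinterpreted as a byte
// offset into the buffer bound to GL_ARRAY_BUFFER at call time.
glBindBuffer(GL_ARRAY_BUFFER, buf);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);     // offset 0 into buf

// With no buffer bound, the very same call takes a real client pointer.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glVertexPointer(3, GL_FLOAT, 0, clientArray);
```

Same entry point, same signature, two entirely different meanings depending on the binding state.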

Why? For the same reason: ease of compatibility (or to promote laziness, depending on how you want to see it). ATI_vertex_array_object had to create new analogs to the gl*Pointer functions. Whereas ARB_vertex_buffer_object just piggybacked off of the existing entrypoints.

Users didn't have to change from using glVertexPointer to glVertexBufferOffset or some other function. All they had to do was bind a buffer before calling a function that set up vertex information (and of course change the pointers to byte offsets).

It also meant that they didn't have to add a bunch of glDrawElementsWithBuffer-type functions for rendering with indices that come from buffer objects.

So this wasn't a bad idea in the short term. But as with most short-term decision making, it starts being less reasonable with time.

Of course, if you have access to GL 4.5/ARB_direct_state_access, you can do things the way they ought to have been done originally.
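For completeness, here is a DSA-style vertex setup sketch (vbo is an assumed buffer name; a 4.5 context is assumed). Note that bind-to-modify disappears, but bind-to-use does not:

```cpp
// GL 4.5 DSA: configure a VAO entirely by name, without ever binding it.
GLuint vao;
glCreateVertexArrays(1, &vao);
glVertexArrayVertexBuffer(vao, /*bindingindex=*/0, vbo,
                          /*offset=*/0, /*stride=*/3 * sizeof(float));
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 0, 0);

// Binding is still required to USE the object in rendering:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
```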
