In GLM the '*' operator is overloaded for matrix multiplication, so matrices can be combined like this:
glm::mat4 MVP = Projection * View * Model;
// then upload MVP to the shader's 'MVP' uniform
Ideally, you would pass the matrices separately if your shader needs them individually for other purposes. If all the shader needs is MVP, it is best to multiply them once on the CPU and pass the result as a single matrix.
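To make the trade-off concrete, here is a sketch of the two vertex-shader variants (the uniform names `Projection`, `View`, `Model`, and `MVP` are assumptions matching the snippet above, not fixed by GLM):

```glsl
// Variant A: matrices passed separately.
// The shader combines them, so the two mat4 multiplies
// run once per vertex.
uniform mat4 Projection;
uniform mat4 View;
uniform mat4 Model;
in vec3 position;
void main() {
    gl_Position = Projection * View * Model * vec4(position, 1.0);
}

// Variant B: a single premultiplied MVP.
// The CPU did the combining once per draw call;
// the shader only does one matrix-vector multiply.
uniform mat4 MVP;
in vec3 position;
void main() {
    gl_Position = MVP * vec4(position, 1.0);
}
```

Variant A is only worth it when the shader also needs `Model` or `View` on their own, e.g. for lighting in world space.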
Which approach is best depends largely on how many vertices you render, and on whether your workload is many draw calls with few vertices each or few draw calls with many vertices each.
Multiplying on the CPU is generally better, since it happens once per draw call instead of once per vertex. If the values need to vary between shaders, however, the multiplication has to happen in the shader.
I may have been too quick to answer; this is very similar to the question Should I calculate matrices on the GPU or on the CPU?