I'm fairly new to OpenGL ES on iOS and have a pet project which is best described as essentially trying to create a more primitive version of the wonderful Stellarium on iOS.
I'm at a point where I've managed to load approx. 9,000 vertices (each representing a star's current position in the night sky), and by passing the correct model rotation matrices to the vertex shader, they move in real time according to the time of day and the location of the user. In other words, the math is not the problem, nor is the basic setup of an EAGLView, the various buffers, shader compilation, etc., where I basically followed this tutorial by Ray Wenderlich.
The vertex positions are calculated such that all of them are on the surface of an invisible sphere with arbitrary radius around position (0,0,0), simulating the night sky.
Eventually, I'd like to be able to implement basic drag / pinch gestures to move the viewpoint of the observer (located at position 0,0,0) and "zoom" closer to the sphere to see more detail, such as star names.
The parts I struggle with are the following:
1) Projection matrix: any expert advice on which projection to choose for the scene described above? I'm struggling to understand the view frustum, but think this is the way to go. How would I implement "zooming" closer?
2) Shape of each star: Currently, this is a single vertex which yields a single dot on the screen. What's the best way to apply a star texture?
3) There are eventually going to be some vertices in my array which should remain motionless, for example a celestial grid showing azimuth and altitude, i.e. they should not be rotated like the stars. How can I apply a rotation matrix to some vertices, but not others? Can this be achieved by splitting the vertices into separate draw calls?
Projection matrices are frequently calculated with a helper function; on desktop GL that would be gluPerspective, and on iOS the GLKit equivalent is GLKMatrix4MakePerspective (OpenGL ES has no GLU). If you'd like to implement zooming, you can adjust the FOV: a smaller value zooms in, a larger value zooms out. Zooming is also frequently implemented by modifying the view matrix (again, generally calculated with a helper function); which method, or combination of the two, you go with is up to you.
To apply a texture, GLSL has a special fragment shader input variable called gl_PointCoord, which can be used to easily map a texture across a GL_POINTS primitive (see the OpenGL ES 2.0 spec, section 3.3 'Points'). On desktop GL you could alternatively write a geometry shader that expands each point into a textured quad and use a 'standard' fragment shader, but geometry shaders are not available in OpenGL ES 2.0, so on iOS that route would mean building the quads yourself on the CPU, which is likely much more work.
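A minimal point-sprite shader pair might look like the following; the attribute/uniform names (a_Position, u_ModelViewProjection, u_StarTexture) are placeholders, and the fixed point size is just an example:

```glsl
// --- Vertex shader ---
// gl_PointSize sets the sprite's side length in pixels; it must be
// written here, or ES 2.0 rasterizes the point at an undefined size.
attribute vec4 a_Position;
uniform mat4 u_ModelViewProjection;
void main() {
    gl_Position = u_ModelViewProjection * a_Position;
    gl_PointSize = 16.0;
}

// --- Fragment shader ---
// gl_PointCoord runs from (0,0) to (1,1) across the point's area,
// so it can be used directly as a texture coordinate.
precision mediump float;
uniform sampler2D u_StarTexture;
void main() {
    gl_FragColor = texture2D(u_StarTexture, gl_PointCoord);
}
```

Draw with glDrawArrays(GL_POINTS, 0, starCount) and enable blending if your star texture has soft edges.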
It's likely easier to split geometry that will be processed with different uniform data (e.g. model/view/projection matrices) into separate draw calls. This is in general how most game engines render objects; only when the expected number of draw calls grows large do they resort to some sort of batched drawing. With only two sets, that should not be a concern.
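As a sketch (not runnable on its own, and all handle names such as starVBO, gridVBO, uMVPLoc, aPosLoc are hypothetical), the render loop for your case could look like two back-to-back draw calls that differ only in the uniform matrix:

```c
/* Stars: rotated each frame by the time/location-dependent matrix. */
glUniformMatrix4fv(uMVPLoc, 1, GL_FALSE, projectionTimesStarRotation);
glBindBuffer(GL_ARRAY_BUFFER, starVBO);
glVertexAttribPointer(aPosLoc, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_POINTS, 0, starCount);

/* Grid: fixed in the observer's frame, so no star rotation is applied. */
glUniformMatrix4fv(uMVPLoc, 1, GL_FALSE, projectionOnly);
glBindBuffer(GL_ARRAY_BUFFER, gridVBO);
glVertexAttribPointer(aPosLoc, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_LINES, 0, gridVertexCount);
```

Both calls can share the same shader program; only the uniform upload between them changes, which is cheap.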