Xys Xys - 3 months ago
Objective-C Question

iOS - Video decoding with OpenGL ES 2.0

I've been trying to decode a .h264 video and render its frames with OpenGL ES 2.0, but the frames I display are black. No error is reported, and if I export the frames from the CMSampleBufferRef to the camera roll, they look fine.

So maybe the problem is on the OpenGL side? But displaying still images (rather than video frames) works fine, so I don't know where to look.

Here is the code that initializes the video decoder:

- (void)initVideo {
    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, &_glTextureHook);

    NSURL *url = [NSURL URLWithString:[self.videoMedia getFilePath]];
    self.videoAsset = [[AVURLAsset alloc] initWithURL:url options:nil];

    dispatch_semaphore_t sema = dispatch_semaphore_create(0);

    [self.videoAsset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
        self.videoTrack = [self.videoAsset tracksWithMediaType:AVMediaTypeVideo][0];

        NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
        NSNumber *value = @(kCVPixelFormatType_32BGRA);
        NSDictionary *settings = @{key : value};

        self.outputVideoTrackOuput = [[AVAssetReaderTrackOutput alloc]
                                      initWithTrack:self.videoTrack outputSettings:settings];

        self.assetReader = [[AVAssetReader alloc] initWithAsset:self.videoAsset error:nil];
        [self.assetReader addOutput:self.outputVideoTrackOuput];
        [self.assetReader startReading];

        dispatch_semaphore_signal(sema);
    }];

    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
}
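
Not related to the black frames, but worth noting: the code above passes `error:nil` and ignores the `BOOL` returned by `startReading`, so any reader failure goes unnoticed. A hedged sketch of the same setup with the failures surfaced (property names as in the question's code):

```objc
// Sketch: same AVAssetReader setup, but reporting failures instead of swallowing them.
NSError *error = nil;
self.assetReader = [[AVAssetReader alloc] initWithAsset:self.videoAsset error:&error];
if (!self.assetReader) {
    NSLog(@"Failed to create AVAssetReader: %@", error);
    return;
}
[self.assetReader addOutput:self.outputVideoTrackOuput];
if (![self.assetReader startReading]) {
    // On failure the reader exposes the underlying error.
    NSLog(@"startReading failed: %@", self.assetReader.error);
}
```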


And the code that retrieves an OpenGL texture for each frame of the video:

- (void)play {
    if (self.assetReader.status == AVAssetReaderStatusReading) {
        CMSampleBufferRef sampleBuffer = [self.outputVideoTrackOuput copyNextSampleBuffer];
        if (sampleBuffer) {
            CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

            if (pixelBuffer) {
                CVPixelBufferLockBaseAddress(pixelBuffer, 0);

                glBindTexture(GL_TEXTURE_2D, _glTextureHook);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1920, 1080, 0, GL_BGRA_EXT,
                             GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));
                CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            }

            CFRelease(sampleBuffer);
        }
    }
}
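
Two things to watch in the upload above: the dimensions are hard-coded, and Core Video pixel buffers can pad each row, so CVPixelBufferGetBytesPerRow may exceed width * 4 (OpenGL ES 2.0 has no GL_UNPACK_ROW_LENGTH to compensate). A hedged variant that queries the buffer and checks for GL errors:

```objc
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

// Query the actual dimensions instead of hard-coding 1920x1080.
size_t height      = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

// ES 2.0 cannot skip row padding, so treat the padded row as extra width
// (crop it out via texture coordinates if bytesPerRow / 4 != width).
GLsizei texWidth = (GLsizei)(bytesPerRow / 4);

glBindTexture(GL_TEXTURE_2D, _glTextureHook);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, (GLsizei)height, 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));

GLenum err = glGetError();
if (err != GL_NO_ERROR) {
    NSLog(@"glTexImage2D failed: 0x%x", err);
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
```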


The vertex and fragment shaders are the same for images and for video (and they work with images). The only difference I see is in glTexImage2D, where both the internal format and the format are GL_RGBA for images.

I made sure that the _glTextureHook of the video decoder is correctly passed to the shader manager, that the context is current on the thread, etc.

This is the code of the fragment shader (it's very basic, as is the vertex shader):

precision lowp float;

uniform sampler2D Texture;

varying vec4 DestinationColor;
varying vec2 TexCoordOut;

void main() {
    gl_FragColor = DestinationColor * texture2D(Texture, TexCoordOut);
}

Answer

I had simply forgotten to set these texture parameters after creating the texture:

glBindTexture(GL_TEXTURE_2D, _glTextureHook);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
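
For context on why this fixes the black frames: in OpenGL ES 2.0 the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so a texture without a full mipmap chain is "incomplete" and samples as black. Setting GL_LINEAR (or GL_NEAREST) removes the mipmap requirement, and GL_CLAMP_TO_EDGE is additionally required for non-power-of-two textures such as 1920x1080. A sketch of the full texture creation with the fix folded into initVideo:

```objc
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &_glTextureHook);
glBindTexture(GL_TEXTURE_2D, _glTextureHook);

// The default MIN filter expects mipmaps; without them the texture is
// incomplete and every texture2D() lookup returns black.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// NPOT textures in ES 2.0 must use CLAMP_TO_EDGE wrapping.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
```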