
Memory warning while trying to merge multiple video files in Swift

I'm trying to merge 2 videos together using Swift. However, I am getting a memory warning and sometimes a crash when I try to run this code.

My hunch is that I exit the dispatch_group early for some reason and finish the writer.

However, I have also noticed that sometimes I don't get that far.

I have also noticed that my samples.count is massive at times, which seems strange as the videos are no more than 30 seconds long each.

I'm stuck on where to even start solving this issue, to be honest. Any pointers appreciated.

dispatch_group_enter(self.videoProcessingGroup)
asset.requestContentEditingInputWithOptions(options, completionHandler: {(contentEditingInput: PHContentEditingInput?, info: [NSObject : AnyObject]) -> Void in

    let avAsset = contentEditingInput?.audiovisualAsset

    let reader = try! AVAssetReader.init(asset: avAsset!)
    let videoTrack = avAsset?.tracksWithMediaType(AVMediaTypeVideo).first

    let readerOutputSettings: [String:Int] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]

    let readerOutput = AVAssetReaderTrackOutput(track: videoTrack!, outputSettings: readerOutputSettings)
    reader.addOutput(readerOutput)
    reader.startReading()

    //Create the samples
    var samples:[CMSampleBuffer] = []

    var sample: CMSampleBufferRef?

    sample = readerOutput.copyNextSampleBuffer()

    while (sample != nil)
    {
        autoreleasepool {
            samples.append(sample!)
            sample = readerOutput.copyNextSampleBuffer()
        }
    }

    for i in 0...samples.count - 1 {
        // Get the presentation time for the frame

        var append_ok:Bool = false

        autoreleasepool {
            if let pixelBufferPool = adaptor.pixelBufferPool {
                let pixelBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
                let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(
                    kCFAllocatorDefault,
                    pixelBufferPool,
                    pixelBufferPointer
                )

                let frameTime = CMTimeMake(Int64(frameCount), 30)

                if var buffer = pixelBufferPointer.memory where status == 0 {
                    buffer = CMSampleBufferGetImageBuffer(samples[i])!
                    append_ok = adaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime)
                    pixelBufferPointer.destroy()
                } else {
                    NSLog("Error: Failed to allocate pixel buffer from pool")
                }

                pixelBufferPointer.dealloc(1)
                dispatch_group_leave(self.videoProcessingGroup)
            }
        }
    }
})




//Finish the session:
dispatch_group_notify(videoProcessingGroup, dispatch_get_main_queue(), {
    videoWriterInput.markAsFinished()
    videoWriter.finishWritingWithCompletionHandler({
        print("Write Ended")

        // Return writer
        print("Created asset writer for \(size.width)x\(size.height) video")
    })
})

Answer

In general, you can't fit all the frames of a video asset into memory on an iOS device, or even on a desktop machine:

var samples:[CMSampleBuffer] = []

Not even if the videos are only 30 seconds long. For example, at 30 frames per second, a 30-second 720p video decoded as BGRA would require 30 * 30 * 1280 * 720 * 4 bytes ≈ 3.3GB of memory. Each frame is about 3.5MB! It's even worse with 1080p or a higher frame rate.

You need to progressively merge your files frame by frame, keeping as few frames as possible in memory at any given time.
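If you did want to stay at the sample-buffer level, the streaming pattern would look something like the sketch below. This is a minimal, hedged illustration only: it assumes `reader`, `readerOutput`, `videoWriter`, `videoWriterInput` and `self.videoProcessingGroup` are already configured as in your code, and it pulls one sample at a time instead of collecting them all into an array.

// Minimal sketch of a streaming copy (Swift 2 syntax, matching your snippet).
// Only one sample buffer is held in memory at any given time.
let mergeQueue = dispatch_queue_create("video.merge.queue", DISPATCH_QUEUE_SERIAL)

videoWriterInput.requestMediaDataWhenReadyOnQueue(mergeQueue) {
    while videoWriterInput.readyForMoreMediaData {
        if let sample = readerOutput.copyNextSampleBuffer() {
            // Append the frame straight to the writer; no `samples` array.
            videoWriterInput.appendSampleBuffer(sample)
        } else {
            // Reader exhausted: finish the input and leave the group exactly once.
            videoWriterInput.markAsFinished()
            dispatch_group_leave(self.videoProcessingGroup)
            break
        }
    }
}

Note this also avoids the problem in your current code where dispatch_group_leave is called once per frame inside the loop rather than once per asset.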

However, for an operation as simple as a merge, you shouldn't need to deal with the frames yourself. You could create an AVMutableComposition, append the individual AVAssets, and then export the merged file using an AVAssetExportSession.
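A rough sketch of that approach (Swift 2 syntax, to match your code) is below. `firstAsset`, `secondAsset` and `outputURL` are placeholders for your own assets and destination file, and error handling is kept to a bare minimum:

// Build a composition by appending each asset's video track end to end.
let composition = AVMutableComposition()
let compositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo,
    preferredTrackID: kCMPersistentTrackID_Invalid)

var cursor = kCMTimeZero
do {
    for asset in [firstAsset, secondAsset] {
        if let videoTrack = asset.tracksWithMediaType(AVMediaTypeVideo).first {
            // Insert this asset's entire video track at the current cursor time.
            try compositionTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, asset.duration),
                ofTrack: videoTrack,
                atTime: cursor)
            cursor = CMTimeAdd(cursor, asset.duration)
        }
    }
} catch {
    print("Failed to build composition: \(error)")
}

// Export the merged composition; no frames are ever decoded into your own memory.
if let exporter = AVAssetExportSession(asset: composition,
    presetName: AVAssetExportPresetHighestQuality) {
    exporter.outputURL = outputURL
    exporter.outputFileType = AVFileTypeQuickTimeMovie
    exporter.exportAsynchronouslyWithCompletionHandler {
        print("Export finished with status: \(exporter.status.rawValue)")
    }
}

The export session does the decoding, re-encoding and writing for you on its own queue, so the memory footprint stays small regardless of how long the source videos are.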