I know how to use AVAssetReader and AVAssetWriter, and have successfully used them to grab a video track from one movie and transcode it into another. However, I'd like to do this with audio as well. Do I have to create an AVAssetExportSession after I'm done with the initial transcode, or is there some way to switch between tracks while in the midst of a writing session? I'd hate to have to deal with the overhead of an AVAssetExportSession.
I ask because the pull-style method - while ([assetWriterInput isReadyForMoreMediaData]) {...} - assumes one track only. How could it be used for more than one track, i.e. both an audio and a video track?
AVAssetWriter will automatically interleave requests on its associated AVAssetWriterInputs in order to integrate different tracks into the output file. Just add an AVAssetWriterInput for each of the tracks that you have, and then call requestMediaDataWhenReadyOnQueue:usingBlock: on each of your AVAssetWriterInputs.
Here's a method I have that calls requestMediaDataWhenReadyOnQueue:usingBlock:. I call this method from a loop over the number of output/input pairs I have. (A separate method is good both for code readability and also because, unlike a loop, each call sets up a separate stack frame for the block.)

You only need one dispatch_queue_t and can reuse it for all of the tracks. Note that you definitely should not call dispatch_async from your block, because requestMediaDataWhenReadyOnQueue:usingBlock: expects the block to, well, block until it has filled in as much data as the AVAssetWriterInput will take. You don't want to return before then.
- (void)requestMediaDataForTrack:(int)i {
    AVAssetReaderOutput *output = [[_reader outputs] objectAtIndex:i];
    AVAssetWriterInput *input = [[_writer inputs] objectAtIndex:i];

    // No [self retain] is needed inside the block: the copied block already
    // retains self, and an unbalanced retain on every invocation would leak.
    [input requestMediaDataWhenReadyOnQueue:_processingQueue usingBlock:^{
        while ([input isReadyForMoreMediaData]) {
            CMSampleBufferRef sampleBuffer;
            if ([_reader status] == AVAssetReaderStatusReading &&
                (sampleBuffer = [output copyNextSampleBuffer])) {
                BOOL result = [input appendSampleBuffer:sampleBuffer];
                CFRelease(sampleBuffer);
                if (!result) {
                    [_reader cancelReading];
                    break;
                }
            } else {
                [input markAsFinished];
                switch ([_reader status]) {
                    case AVAssetReaderStatusReading:
                        // the reader has more for other tracks, even if this one is done
                        break;
                    case AVAssetReaderStatusCompleted:
                        // your method for when the conversion is done
                        // should call finishWriting on the writer
                        [self readingCompleted];
                        break;
                    case AVAssetReaderStatusCancelled:
                        [_writer cancelWriting];
                        [_delegate converterDidCancel:self];
                        break;
                    case AVAssetReaderStatusFailed:
                        [_writer cancelWriting];
                        break;
                    default:
                        break;
                }
                break;
            }
        }
    }];
}
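For context, here is a minimal sketch (not from the original answer) of the setup this method assumes: _reader, _writer, and _processingQueue are the ivars used above, the outputs and inputs are paired by index, and the nil output settings simply pass compressed samples through.

// Hypothetical setup: one AVAssetReaderOutput/AVAssetWriterInput pair per
// track, then one requestMediaDataForTrack: call per pair.
- (void)startConversionWithAsset:(AVAsset *)asset
{
    // One serial queue shared by all tracks, as noted above
    _processingQueue = dispatch_queue_create("com.example.converter", NULL);

    for (AVAssetTrack *track in [asset tracks]) {
        // nil settings on both sides: hand stored samples straight through
        AVAssetReaderTrackOutput *output =
            [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                       outputSettings:nil];
        [_reader addOutput:output];

        AVAssetWriterInput *input =
            [AVAssetWriterInput assetWriterInputWithMediaType:[track mediaType]
                                               outputSettings:nil];
        [_writer addInput:input];
    }

    [_reader startReading];
    [_writer startWriting];
    [_writer startSessionAtSourceTime:kCMTimeZero];

    // The loop over output/input pairs mentioned above
    for (NSUInteger i = 0; i < [[_reader outputs] count]; i++)
        [self requestMediaDataForTrack:(int)i];
}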
Have you tried using two AVAssetWriterInputs and pushing the samples through a worker queue? Here is a rough sketch.
processing_queue = dispatch_queue_create("com.mydomain.gcdqueue.mediaprocessor", NULL);

[videoAVAssetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
    dispatch_async(processing_queue, ^{ /* process video */ });
}];

[audioAVAssetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
    dispatch_async(processing_queue, ^{ /* process audio */ });
}];
You can use dispatch groups!
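The core pattern is short. In this sketch, assetWriter, serialQueue, and the points where each track's processing starts are placeholders:

dispatch_group_t group = dispatch_group_create();

// Enter the group once per track before starting it; each track's
// completion handler calls dispatch_group_leave(group).
dispatch_group_enter(group);  // audio
dispatch_group_enter(group);  // video
// ... start pumping audio and video samples here ...

// Runs once after both tracks have signalled completion.
dispatch_group_notify(group, serialQueue, ^{
    [assetWriter finishWriting];
});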
Check out the AVReaderWriter example for Mac OS X...
I am quoting directly from the sample RWDocument.m:
- (BOOL)startReadingAndWritingReturningError:(NSError **)outError
{
    BOOL success = YES;
    NSError *localError = nil;

    // Instruct the asset reader and asset writer to get ready to do work
    success = [assetReader startReading];
    if (!success)
        localError = [assetReader error];
    if (success)
    {
        success = [assetWriter startWriting];
        if (!success)
            localError = [assetWriter error];
    }

    if (success)
    {
        dispatch_group_t dispatchGroup = dispatch_group_create();

        // Start a sample-writing session
        [assetWriter startSessionAtSourceTime:[self timeRange].start];

        // Start reading and writing samples
        if (audioSampleBufferChannel)
        {
            // Only set audio delegate for audio-only assets, else let the video channel drive progress
            id <RWSampleBufferChannelDelegate> delegate = nil;
            if (!videoSampleBufferChannel)
                delegate = self;

            dispatch_group_enter(dispatchGroup);
            [audioSampleBufferChannel startWithDelegate:delegate completionHandler:^{
                dispatch_group_leave(dispatchGroup);
            }];
        }
        if (videoSampleBufferChannel)
        {
            dispatch_group_enter(dispatchGroup);
            [videoSampleBufferChannel startWithDelegate:self completionHandler:^{
                dispatch_group_leave(dispatchGroup);
            }];
        }

        // Set up a callback for when the sample writing is finished
        dispatch_group_notify(dispatchGroup, serializationQueue, ^{
            BOOL finalSuccess = YES;
            NSError *finalError = nil;

            if (cancelled)
            {
                [assetReader cancelReading];
                [assetWriter cancelWriting];
            }
            else
            {
                if ([assetReader status] == AVAssetReaderStatusFailed)
                {
                    finalSuccess = NO;
                    finalError = [assetReader error];
                }

                if (finalSuccess)
                {
                    finalSuccess = [assetWriter finishWriting];
                    if (!finalSuccess)
                        finalError = [assetWriter error];
                }
            }

            [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
        });

        dispatch_release(dispatchGroup);
    }

    if (outError)
        *outError = localError;

    return success;
}