App Description: Speedometer. It has a needle dial, with the animated needle overlaid on the video. I render the needle animation onto the video via post-processing: I use AVAssetExportSession and construct an AVComposition containing my animated layers along with the video and audio tracks from the original video. This works fine; the video shows and the needle animates.
Currently, to replay the animation during post-processing, I save every change in speed along with the time elapsed since "recording" of the video began. During post-processing I then fire timers based on the saved time/speed data to animate the needle to the next speed.
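For reference, the bookkeeping during recording looks roughly like this (a sketch only; it assumes the speed comes from Core Location, and the property names are illustrative):

// Called by CLLocationManager while the video is being recorded.
// recordingStartDate is set when video capture begins.
- (void)locationManager:(CLLocationManager *)manager
    didUpdateToLocation:(CLLocation *)newLocation
           fromLocation:(CLLocation *)oldLocation
{
    NSTimeInterval offset = [[NSDate date] timeIntervalSinceDate:self.recordingStartDate];
    NSDictionary *sample = [NSDictionary dictionaryWithObjectsAndKeys:
                            [NSNumber numberWithDouble:offset], @"time",
                            [NSNumber numberWithDouble:newLocation.speed], @"speed", nil];
    [self.speedSamples addObject:sample]; // speedSamples is an NSMutableArray
}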
Problem: The resulting video and animation are not completely accurate, and there is often a mismatch between the speed displayed when the video was taken and when it is played back after compositing (usually the needle is ahead of the video), because the compositing/compression during export is not necessarily real-time.
Question: Is there a way I can embed speed information into the recording video stream and then get access to it when it is exported so that the video and speedometer are temporally matched up?
Would be nice to get a callback at specific times during export that contains my speed data.
As always...thanks!
Instead of using timers to animate your needle, create a keyframe animation based on the speed data you recorded.
Timers and CA don't generally mix well, at least not in the way I infer from your description.
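A rough sketch of that idea, assuming speedSamples holds the recorded time/speed pairs, videoDuration is the length of the recording, and needleAngleForSpeed: converts a speed into a needle angle (all of those names are hypothetical):

CAKeyframeAnimation *needleAnimation = [CAKeyframeAnimation animationWithKeyPath:@"transform.rotation.z"];
NSMutableArray *values = [NSMutableArray array];
NSMutableArray *keyTimes = [NSMutableArray array];
for (NSDictionary *sample in speedSamples) {
    CGFloat angle = [self needleAngleForSpeed:[[sample objectForKey:@"speed"] doubleValue]];
    [values addObject:[NSNumber numberWithFloat:angle]];
    // key times are normalized to the 0..1 range across the full video duration
    [keyTimes addObject:[NSNumber numberWithDouble:
        [[sample objectForKey:@"time"] doubleValue] / videoDuration]];
}
needleAnimation.values = values;
needleAnimation.keyTimes = keyTimes;
needleAnimation.duration = videoDuration;
needleAnimation.beginTime = AVCoreAnimationBeginTimeAtZero; // 0 would be treated as "now" during export
needleAnimation.removedOnCompletion = NO;
needleAnimation.fillMode = kCAFillModeForwards;
[needleLayer addAnimation:needleAnimation forKey:@"needle"];

Because the key times are expressed as fractions of the video duration, the needle position is tied to the media timeline rather than to wall-clock timers.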
If you need to embed the metadata while the app is running on the iPhone, I don't know how to do it. If you can do the embedding beforehand, use HTTP Live Streaming and the HTTP Live Streaming Tools.
The metadata is generated into a file by id3taggenerator and embedded into the video using mediafilesegmenter. Example:
id3taggenerator -o camera1.id3 -text "Dolly camera"
id3taggenerator -o camera2.id3 -text "Tracking camera"
There are several kinds of metadata you can embed, including binary objects. Refer to the man page for details. Now we need to reference the generated file from a "meta macro-file". This is a plain text file with the following format:
60 id3 camera1.id3
120 id3 camera2.id3
The first number is the number of seconds elapsed since the beginning of the video at which you want to insert the notification. I don't remember the exact mediafilesegmenter command, sorry; at minimum you have to pass the macro file, the index, and the video file.
The resulting video contains metadata that is posted by MPMoviePlayerController as notifications. See this page for details: http://jmacmullin.wordpress.com/2010/11/03/adding-meta-data-to-video-in-ios/
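On the playback side, the gist of the approach in that post is to listen for the timed metadata notification (a sketch; moviePlayer is a placeholder for your MPMoviePlayerController instance):

[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(metadataUpdated:)
                                             name:MPMoviePlayerTimedMetadataUpdatedNotification
                                           object:moviePlayer];

- (void)metadataUpdated:(NSNotification *)notification
{
    for (MPTimedMetadata *metadata in moviePlayer.timedMetadata) {
        NSLog(@"timed metadata: %@ = %@", metadata.key, metadata.value);
    }
}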
You should use CAAnimations and the beginTime property to set up your animations ahead of time, then use AVVideoComposition + AVVideoCompositionCoreAnimationTool to add them to the video when exporting. Note its documentation states:
Any animations will be interpreted on the video's timeline, not real-time...
So your animations will line up exactly where you specify in the resulting movie.
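Wiring that up looks roughly like this (a sketch; asset, composition, needleLayer and videoSize are placeholders for your own objects):

CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
videoLayer.frame = parentLayer.frame;
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:needleLayer]; // your animated layer, with beginTime already set

AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
// ...frameDuration, renderSize and instructions set up for your composition...
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool
    videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                            inLayer:parentLayer];

AVAssetExportSession *exportSession = [[AVAssetExportSession alloc]
    initWithAsset:composition presetName:AVAssetExportPresetHighestQuality];
exportSession.videoComposition = videoComposition;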
It's been a while since this question was asked, but after looking everywhere I have managed to come up with something similar, by sampling data in real time during recording (at 1/30 sec. with a timer, for a video recorded at 30 fps) and storing it in an array. Then in post-processing, I create multiple CALayers in a loop, one for each data element in the array, and draw the visualisation of that data on each layer.
Each layer has a CAAnimation that fades in its opacity at the correct point on the media timeline via the beginTime attribute, which is simply 1/30 sec. multiplied by the array index. This is so short a time that the layer appears immediately over the preceding layer. If the layer background is opaque, it will obscure the needle rendered in the previous layer, and so appears to animate the needle in pretty good sync with the original video capture. You may have to tweak the timing a little, but I am no more than one frame out.
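The capture side is not shown in the code below, but the sampling amounts to roughly this (the property and method names are illustrative):

// fire 30 times per second while recording, matching the 30 fps video
self.sampleTimer = [NSTimer scheduledTimerWithTimeInterval:1.0/30.0
                                                    target:self
                                                  selector:@selector(sampleSpeed:)
                                                  userInfo:nil
                                                   repeats:YES];

- (void)sampleSpeed:(NSTimer *)timer
{
    // currentSpeed is whatever your speed source reports at this instant
    [self.dataArray addObject:[NSNumber numberWithDouble:self.currentSpeed]];
}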
/******** this has not been compiled but you should get the idea ************/
// Before starting the AVAssetExportSession session and after the AVMutableComposition routine
CALayer* speedoBackground = [[CALayer alloc] init]; // background layer for needle layers
[speedoBackground setFrame:CGRectMake(x,y,width,height)]; // size and location
[speedoBackground setBackgroundColor:[[UIColor grayColor] CGColor]];
[speedoBackground setOpacity:0.5]; // partially see-through on video
// loop through the data
for (int index = 0; index < [dataArray count]; index++) {
CALayer* speedoNeedle = [[CALayer alloc] init]; // layer for needle drawing
[speedoNeedle setFrame:CGRectMake(x,y,width,height)]; // size and location
[speedoNeedle setBackgroundColor:[[UIColor redColor] CGColor]];
[speedoNeedle setOpacity:1.0]; // probably not needed
// your needle drawing routine for each data point ... e.g.
[self drawNeedleOnLayer:speedoNeedle angle:[self calculateNeedleAngle:[dataArray objectAtIndex:index]]];
CABasicAnimation *needleAnimation = [CABasicAnimation animationWithKeyPath:@"opacity"];
needleAnimation.fromValue = [NSNumber numberWithFloat:(float)0.0];
needleAnimation.toValue = [NSNumber numberWithFloat:(float)1.0]; // fade in
needleAnimation.additive = NO;
needleAnimation.removedOnCompletion = NO; // it obscures previous layers
needleAnimation.beginTime = (index == 0) ? AVCoreAnimationBeginTimeAtZero : index * animationDuration; // a beginTime of exactly 0 means "now" during export, so use AVCoreAnimationBeginTimeAtZero for the first layer
needleAnimation.duration = animationDuration - .03; // it will not animate at this speed but the layer will appear immediately over the previous layer at the correct media time
needleAnimation.fillMode = kCAFillModeBoth;
[speedoNeedle addAnimation:needleAnimation forKey:nil];
[speedoBackground addSublayer:speedoNeedle];
}
[parentLayer addSublayer:speedoBackground];
.
.
.
// when the AVAssetExportSession has finished, make sure you clear all the layers
parentLayer.sublayers = nil;
It is processor and memory intensive, so it's not great for long videos or complex drawing. I am sure there are more elegant methods but this works and I hope this helps.
There is a session from this year's WWDC that might provide a different approach to what you're doing. You can see the videos here: http://developer.apple.com/videos/wwdc/2011/ . Look for one called "Working with Media in AVFoundation". The interesting bits are around minute 26 or so. I'm not completely sure I understand the problem, but when I read it, that session occurred to me.
Best regards.