The AVFoundation framework provides a feature-rich set of classes to facilitate the editing of audiovisual assets. At the heart of AVFoundation's editing API are compositions. A composition is simply a collection of tracks from one or more different media assets. The AVMutableComposition class provides an interface for inserting and removing tracks, as well as managing their temporal ordering. Figure 3-1 shows how a new composition is pieced together from a combination of existing assets to form a new asset. If all you want to do is merge multiple assets together sequentially into a single file, that is as much detail as you need. If you want to perform any custom audio or video processing on the tracks in your composition, you need to incorporate an audio mix or a video composition, respectively.
Using the AVMutableAudioMix class, you can perform custom audio processing on the audio tracks in your composition, as shown in Figure 3-2. Currently, you can specify a maximum volume or set a volume ramp for an audio track.
You can use the AVMutableVideoComposition class to work directly with the video tracks in your composition for the purposes of editing, as shown in Figure 3-3. With a single video composition, you can specify the desired render size and scale, as well as the frame duration, for the output video. Through a video composition's instructions (represented by the AVMutableVideoCompositionInstruction class), you can modify the background color of your video and apply layer instructions. These layer instructions (represented by the AVMutableVideoCompositionLayerInstruction class) can be used to apply transforms, transform ramps, opacity, and opacity ramps to the video tracks within your composition. The video composition class also gives you the ability to introduce effects from the Core Animation framework into your video using the animationTool property.
To combine your composition with an audio mix and a video composition, you use an AVAssetExportSession object, as shown in Figure 3-4. You initialize the export session with your composition and then simply assign your audio mix and video composition to the audioMix and videoComposition properties, respectively.
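A minimal sketch of that wiring follows. It assumes that mutableComposition, mutableAudioMix, and mutableVideoComposition have already been built as described in the sections below; the output URL is a placeholder you must supply.

```objc
// Sketch: combining a composition with an audio mix and a video composition in one export.
// mutableComposition, mutableAudioMix, and mutableVideoComposition are assumed to have
// been created as shown later in this chapter.
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
exportSession.audioMix = mutableAudioMix;
exportSession.videoComposition = mutableVideoComposition;
exportSession.outputURL = <#A file URL for the exported movie#>;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    // Inspect exportSession.status here to detect success or failure.
}];
```

A complete, end-to-end version of this export step appears in the final section of this chapter.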
Creating a Composition
To create your own composition, you use the AVMutableComposition class. To add media data to your composition, you must add one or more composition tracks, represented by the AVMutableCompositionTrack class. The simplest case is creating a mutable composition with one video track and one audio track:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
// Create the video composition track.
AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
// Create the audio composition track.
AVMutableCompositionTrack *mutableCompositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
Options for Initializing a Composition Track
When adding new tracks to a composition, you must provide both a media type and a track ID. Although audio and video are the most commonly used media types, you can specify other media types as well, such as AVMediaTypeSubtitle or AVMediaTypeText.
Every track associated with some audiovisual data has a unique identifier referred to as a track ID. If you specify kCMPersistentTrackID_Invalid as the preferred track ID, a unique identifier is automatically generated for you and associated with the track.
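As a brief illustration, after adding a track with kCMPersistentTrackID_Invalid you can read the generated identifier back from the track's trackID property (mutableComposition is assumed from the example above):

```objc
// Sketch: the generated identifier is available once the track has been added.
AVMutableCompositionTrack *track = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
CMPersistentTrackID generatedID = track.trackID; // unique within this composition
```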
Adding Audiovisual Data to a Composition
Once you have a composition with one or more tracks, you can begin adding your media data to the appropriate tracks. To add media data to a composition track, you need access to the AVAsset object where the media data is located. You can use the mutable composition track interface to place multiple tracks with the same underlying media type together on the same track. The following example illustrates how to add two different video asset tracks in sequence to the same composition track:
// You can retrieve AVAssets from a number of places, like the camera roll for example.
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAsset *anotherVideoAsset = <#another AVAsset with at least one video track#>;
// Get the first video track from each asset.
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *anotherVideoAssetTrack = [[anotherVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Add them both to the composition.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAssetTrack.timeRange.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, anotherVideoAssetTrack.timeRange.duration) ofTrack:anotherVideoAssetTrack atTime:videoAssetTrack.timeRange.duration error:nil];
Retrieving Compatible Composition Tracks
Where possible, you should have only one composition track for each media type. This unification of compatible asset tracks leads to a minimal amount of resource usage. When presenting media data serially, you should place any media data of the same type on the same composition track. You can query a mutable composition to find out whether there are any composition tracks compatible with your desired asset track:
AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
if (compatibleCompositionTrack) {
// Implementation continues.
}
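Building on the query above, a common pattern — sketched here, with assetTrack standing in for the AVAssetTrack you want to insert — is to fall back to creating a new composition track when no compatible one exists:

```objc
AVAssetTrack *assetTrack = <#the AVAssetTrack you want to insert#>;
AVMutableCompositionTrack *targetTrack = [mutableComposition mutableTrackCompatibleWithTrack:assetTrack];
if (!targetTrack) {
    // No compatible track exists yet; create one of the same media type.
    targetTrack = [mutableComposition addMutableTrackWithMediaType:assetTrack.mediaType preferredTrackID:kCMPersistentTrackID_Invalid];
}
// Append the asset track's media data to the chosen composition track.
[targetTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetTrack.timeRange.duration) ofTrack:assetTrack atTime:kCMTimeZero error:nil];
```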
Note: Placing multiple video segments on the same composition track can potentially lead to dropped frames at the transitions between segments, especially on embedded devices. Choosing the number of composition tracks for your video segments depends entirely on the design of your app and its intended platform.
Generating a Volume Ramp
A single AVMutableAudioMix object can perform custom audio processing on all the audio tracks in your composition individually. You create an audio mix using the audioMix class method, and you use instances of the AVMutableAudioMixInputParameters class to associate the audio mix with specific tracks within your composition. An audio mix can be used to vary the volume of an audio track. The following example shows how to set a volume ramp on a specific audio track to slowly fade the audio out over the duration of the composition:
AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
// Create the audio mix input parameters object.
AVMutableAudioMixInputParameters *mixParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
// Set the volume ramp to slowly fade the audio out over the duration of the composition.
[mixParameters setVolumeRampFromStartVolume:1.f toEndVolume:0.f timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
// Attach the input parameters to the audio mix.
mutableAudioMix.inputParameters = @[mixParameters];
Performing Custom Video Processing
As with an audio mix, you only need one AVMutableVideoComposition object to perform all of your custom video processing on your composition's video tracks. Using a video composition, you can directly set the appropriate render size, scale, and frame rate for your composition's video tracks. For a detailed example of setting appropriate values for these properties, see Setting the Render Size and Frame Duration.
Changing the Composition's Background Color
All video compositions must also have an array of AVVideoCompositionInstruction objects containing at least one video composition instruction. You use the AVMutableVideoCompositionInstruction class to create your own video composition instructions. Using video composition instructions, you can modify the composition's background color, specify whether postprocessing is needed, or apply layer instructions.
The following example illustrates how to create a video composition instruction that changes the background color to red for the entire composition.
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];
Applying Opacity Ramps
Video composition instructions can also be used to apply video composition layer instructions. An AVMutableVideoCompositionLayerInstruction object can apply transforms, transform ramps, opacity, and opacity ramps to a certain video track within a composition. The order of the layer instructions in a video composition instruction's layerInstructions array determines how video frames from source tracks should be layered and composed for the duration of that composition instruction. The following code fragment shows how to set an opacity ramp to slowly fade out the first video in a composition before transitioning to the second video:
AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
AVAssetTrack *secondVideoAssetTrack = <#AVAssetTrack representing the second video segment played in the composition#>;
// Create the first video composition instruction.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// Create the layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Create the opacity ramp to fade out the first video track over its entire duration.
[firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f toEndOpacity:0.f timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
// Create the second video composition instruction so that the second video track isn't transparent.
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to start at the end of the first video track and span the duration of the second.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create the second layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Attach the first layer instruction to the first video composition instruction.
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
// Attach the second layer instruction to the second video composition instruction.
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
// Attach both of the video composition instructions to the video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
Incorporating Core Animation Effects
A video composition can add the power of Core Animation to your composition through the animationTool property. Through this animation tool, you can accomplish tasks such as watermarking video and adding titles or animating overlays. Core Animation can be used in two different ways with video compositions: you can add a Core Animation layer as its own individual composition track, or you can render Core Animation effects (using a Core Animation layer) directly into the video frames in your composition. The following code displays the latter option by adding a watermark to the video:
CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
videoLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width/2, mutableVideoComposition.renderSize.height/4);
[parentLayer addSublayer:watermarkLayer];
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
Putting It All Together: Combining Multiple Assets and Saving the Result to the Camera Roll
This brief code example illustrates how you can combine two video asset tracks and an audio asset track to create a single video file. It shows how to:
Create an AVMutableComposition object and add multiple AVMutableCompositionTrack objects
Add time ranges of AVAssetTrack objects to compatible composition tracks
Check the preferredTransform property of a video asset track to determine the video's orientation
Use AVMutableVideoCompositionLayerInstruction objects to apply transforms to the video tracks within a composition
Set appropriate values for the renderSize and frameDuration properties of a video composition
Use a composition in conjunction with a video composition when exporting to a video file
Note: To focus on the most relevant code, this example omits several aspects of a complete app, such as memory management and error handling. To use AVFoundation, you are expected to have enough experience with Cocoa to infer the missing pieces.
Creating the Composition
To stitch together tracks from separate assets, you use an AVMutableComposition object. Create the composition and add one audio and one video track.
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
Adding the Assets
Add the two video asset tracks and the audio asset track to the composition.
AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration) ofTrack:firstVideoAssetTrack atTime:kCMTimeZero error:nil];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration) ofTrack:secondVideoAssetTrack atTime:firstVideoAssetTrack.timeRange.duration error:nil];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration)) ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:kCMTimeZero error:nil];
Note: This assumes that you have two assets that contain at least one video track each and a third asset that contains at least one audio track. The videos can be retrieved from the camera roll, and the audio track can be retrieved from the music library or from the videos themselves.
Checking the Video Orientations
Once you have added your video and audio tracks to the composition, you need to ensure that the orientations of both video tracks are correct. By default, all video tracks are assumed to be in landscape mode. If your video track was taken in portrait mode, the video will not be oriented properly when it is exported. Likewise, if you try to combine a video shot in portrait mode with a video shot in landscape mode, the export session will fail to complete.
BOOL isFirstVideoAssetPortrait = NO;
CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
// Check the first video track's preferred transform to determine if it was recorded in portrait mode.
if (firstTransform.a == 0 && firstTransform.d == 0 && (firstTransform.b == 1.0 || firstTransform.b == -1.0) && (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
    isFirstVideoAssetPortrait = YES;
}
BOOL isSecondVideoAssetPortrait = NO;
CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
// Check the second video track's preferred transform to determine if it was recorded in portrait mode.
if (secondTransform.a == 0 && secondTransform.d == 0 && (secondTransform.b == 1.0 || secondTransform.b == -1.0) && (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
    isSecondVideoAssetPortrait = YES;
}
// Fail if one video was shot in portrait mode and the other in landscape mode.
if ((isFirstVideoAssetPortrait && !isSecondVideoAssetPortrait) || (!isFirstVideoAssetPortrait && isSecondVideoAssetPortrait)) {
    UIAlertView *incompatibleVideoOrientationAlert = [[UIAlertView alloc] initWithTitle:@"Error!" message:@"Cannot combine a video shot in portrait mode with a video shot in landscape mode." delegate:self cancelButtonTitle:@"Dismiss" otherButtonTitles:nil];
    [incompatibleVideoOrientationAlert show];
    return;
}
Applying the Video Composition Layer Instructions
Once you know the video segments have compatible orientations, you can apply the necessary layer instructions to each one and add these layer instructions to the video composition.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the first instruction to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the second instruction to start at the end of the first video track and span the duration of the second.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the first layer instruction to the preferred transform of the first video track.
[firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the second layer instruction to the preferred transform of the second video track.
[secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
All AVAssetTrack objects have a preferredTransform property that contains the orientation information for that asset track. This transform is applied whenever the asset track is displayed onscreen. In the previous code, the layer instruction's transform is set to the asset track's transform so that the video in the new composition displays properly once you adjust its render size.
Setting the Render Size and Frame Duration
To complete the video orientation fix, you must adjust the renderSize property accordingly. You should also pick a suitable value for the frameDuration property, such as 1/30th of a second (or 30 frames per second). By default, the renderScale property is set to 1.0, which is appropriate for this composition.
CGSize naturalSizeFirst, naturalSizeSecond;
// If the first video asset was shot in portrait mode, then so was the second one if we made it here.
if (isFirstVideoAssetPortrait) {
    // Invert the width and height for the video tracks to ensure that they display properly.
    naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
    naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
}
else {
    // If the videos weren't shot in portrait mode, we can just use their natural sizes.
    naturalSizeFirst = firstVideoAssetTrack.naturalSize;
    naturalSizeSecond = secondVideoAssetTrack.naturalSize;
}
float renderWidth, renderHeight;
// Set the renderWidth and renderHeight to the max of the two videos' widths and heights.
if (naturalSizeFirst.width > naturalSizeSecond.width) {
    renderWidth = naturalSizeFirst.width;
}
else {
    renderWidth = naturalSizeSecond.width;
}
if (naturalSizeFirst.height > naturalSizeSecond.height) {
    renderHeight = naturalSizeFirst.height;
}
else {
    renderHeight = naturalSizeSecond.height;
}
mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
// Set the frame duration to an appropriate value (i.e., 30 frames per second for video).
mutableVideoComposition.frameDuration = CMTimeMake(1, 30);
Exporting the Composition and Saving It to the Camera Roll
The final step in this process involves exporting the entire composition into a single video file and saving that video to the camera roll. You use an AVAssetExportSession object to create the new video file, and you pass it your desired URL for the output file. You can then use the ALAssetsLibrary class to save the resulting video file to the camera roll.
// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
    kDateFormatter = [[NSDateFormatter alloc] init];
    kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
    kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = mutableVideoComposition;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        if (exporter.status == AVAssetExportSessionStatusCompleted) {
            ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
            if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
                [assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
            }
        }
    });
}];



