Can someone point me to documentation that would help me get the correct SPS and PPS values for the iPhone?
4 Answers
The question is a bit unclear...

The Picture Parameter Set is described in section 7.3.2.2 of the latest version of the ITU-T H.264 recommendation, and the Sequence Parameter Set in section 7.3.2.1.
As I'm sure you know, on iOS you can only save H.264-encoded video into a file (.mp4, .mov); there is no way to access the encoded video frames from code. So if you want to create an mp4 file with encoded video, you need to use AVAssetWriter. Apple has good sample code showing how to do this.
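The AVAssetWriter setup the answer refers to can be sketched roughly as follows (a minimal sketch only; the output path, dimensions, and real-time flag are placeholder assumptions, and the capture-callback plumbing is elided):

```swift
import AVFoundation

// Minimal AVAssetWriter configuration for H.264 output (placeholder values).
let url = URL(fileURLWithPath: NSTemporaryDirectory() + "out.mp4")
let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 192,
    AVVideoHeightKey: 144,
    AVVideoCompressionPropertiesKey: [
        AVVideoProfileLevelKey: AVVideoProfileLevelH264Baseline30
    ]
])
input.expectsMediaDataInRealTime = true
writer.add(input)
// Then: writer.startWriting(), append sample buffers from the capture
// callback, and finishWriting { } to produce the .mp4 whose SPS/PPS
// you can inspect.
```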
I don't know of anywhere that publishes the different SPS/PPS values, because they vary with your compression settings, image size, and whether you encode the video in portrait or landscape. You can use the sample code above (RosyWriter) to generate a few small .mp4 files with your encoding presets, and then look up the SPS/PPS manually with a hex editor. Note that the SPS/PPS will be toward the end of the file, after your H.264 stream, as part of the larger mp4 metadata structure. You can find more information about that structure online.
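The hex-editor lookup can also be automated: in an .mp4/.mov the parameter sets live in the 'avcC' box, whose layout is defined by ISO/IEC 14496-15. Below is a minimal sketch that pattern-scans the raw file bytes for the 'avcC' fourcc and reads the length-prefixed SPS/PPS entries; a robust tool should walk the mp4 box tree instead, since the four-byte pattern could in principle occur by chance inside media data:

```swift
import Foundation

// Scan raw mp4 bytes for the 'avcC' fourcc and decode the parameter sets.
// avcC layout (ISO/IEC 14496-15): version, profile, compat, level,
// lengthSizeMinusOne, numSPS (low 5 bits), then length-prefixed SPS
// entries, then numPPS and length-prefixed PPS entries.
func extractParameterSets(from data: Data) -> (sps: [Data], pps: [Data])? {
    let fourcc: [UInt8] = Array("avcC".utf8)
    let bytes = [UInt8](data)
    guard let start = bytes.indices.dropLast(3).first(where: { i in
        Array(bytes[i..<i+4]) == fourcc
    }) else { return nil }
    var p = start + 4 + 5              // skip fourcc + 5 fixed header bytes
    var sps: [Data] = [], pps: [Data] = []
    let spsCount = Int(bytes[p] & 0x1F); p += 1
    for _ in 0..<spsCount {
        let len = Int(bytes[p]) << 8 | Int(bytes[p + 1]); p += 2
        sps.append(Data(bytes[p..<p+len])); p += len
    }
    let ppsCount = Int(bytes[p]); p += 1
    for _ in 0..<ppsCount {
        let len = Int(bytes[p]) << 8 | Int(bytes[p + 1]); p += 2
        pps.append(Data(bytes[p..<p+len])); p += len
    }
    return (sps, pps)
}
```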
Here are some SPS/PPS values I found useful in my own project. Some of them may work for you; if not, you can always generate an mp4 with your H.264 encoding preset and find the SPS/PPS you need. My video was encoded with AVVideoProfileLevelH264Baseline30, and here are the SPS/PPS for the different video sizes I needed:
SPS:
// For AVCaptureSessionPresetLow (144x192), AVCaptureVideoOrientationLandscape on iPhone 4S, iPhone 5
char iphone_low_sps[] = {0x67, 0x4D, 0x00, 0x0C, 0xAB, 0x41, 0x82, 0x74, 0xD4, 0x04, 0x04, 0x18, 0x08};
// For AVCaptureSessionPresetLow (144x192), AVCaptureVideoOrientationPortrait on all iPads
char ipad_low_sps[] = {0x67, 0x4D, 0x00, 0x0C, 0xAB, 0x41, 0x23, 0x34, 0xD4, 0x04, 0x04, 0x18, 0x08};
// For AVCaptureSessionPresetLow (144x192), AVCaptureVideoOrientationPortrait on iPhone 4G
char iphone4g_sps[] = {0x67, 0x42, 0x00, 0x1E, 0x8D, 0x68, 0x24, 0x66, 0x9A, 0x83, 0x00, 0x83, 0x01};
// For AVCaptureSessionPreset352x288, AVCaptureVideoOrientationLandscape
char iphone_352_sps[] = {0x67, 0x42, 0x00, 0x1E, 0xAB, 0x40, 0xB0, 0x4B, 0x4D, 0x40, 0x40, 0x41, 0x80, 0x80};
// For AVCaptureSessionPreset352x288, AVCaptureVideoOrientationPortrait
char ipad_352_sps[] = {0x67, 0x42, 0x00, 0x1E, 0xAB, 0x40, 0xB0, 0x4B, 0x4D, 0x40, 0x40, 0x41, 0x80, 0x80};
PPS:
char pps[] = {0x28, 0xCE, 0x3C, 0x80};
char iphone4g_pps[] = {0x68, 0xCE, 0x09, 0xC8};
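A quick way to sanity-check tables like the one above is to decode the leading bytes of each parameter set by hand: byte 0 is the NAL header (type 7 = SPS, type 8 = PPS), byte 1 of an SPS is profile_idc (66 = Baseline, 77 = Main, 100 = High), and byte 3 is level_idc (level x 10, so 30 means Level 3.0). This is only a small sketch, not a full Exp-Golomb SPS parser; the image dimensions live deeper in the bitstream:

```swift
import Foundation

// Decode the fixed-position leading bytes of an H.264 parameter set.
// Assumes `nal` is non-empty (the raw NAL unit, no start code).
func describeNAL(_ nal: [UInt8]) -> String {
    let type = nal[0] & 0x1F          // low 5 bits of the NAL header
    guard type == 7, nal.count >= 4 else { return "NAL type \(type)" }
    return "SPS: profile_idc \(nal[1]), level_idc \(nal[3])"
}
```

For example, the iphone4g_sps above starts 0x67 0x42 0x00 0x1E, which decodes to NAL type 7 (SPS), profile_idc 66 (Baseline), level_idc 30 (Level 3.0), consistent with the AVVideoProfileLevelH264Baseline30 preset.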
You can encode a single frame to a file and then extract the sps and pps from that file. I have a sample showing how to do this at http://www.gdcl.co.uk/2013/02/20/iOS-Video-Encoding.html
Use the VideoToolbox API. Reference: https://developer.apple.com/videos/play/wwdc2014/513/
Search for `print("SPS is` and `print("PPS is` in the code below.
//
// LiveStreamSession.swift
// LiveStreamKit
//
// Created by Ya Wang on 6/10/21.
//
import Foundation
import AVFoundation
import VideoToolbox
public class LiveStreamSession {
    let compressionSession: VTCompressionSession
    var index = -1
    var lastInputPTS = CMTime.zero

    public init?(width: Int32, height: Int32) {
        var compressionSessionOrNil: VTCompressionSession? = nil
        let status = VTCompressionSessionCreate(allocator: kCFAllocatorDefault,
                                                width: width,
                                                height: height,
                                                codecType: kCMVideoCodecType_H264,
                                                encoderSpecification: nil, // let Video Toolbox choose an encoder
                                                imageBufferAttributes: nil,
                                                compressedDataAllocator: kCFAllocatorDefault,
                                                outputCallback: nil,
                                                refcon: nil,
                                                compressionSessionOut: &compressionSessionOrNil)
        guard status == noErr,
              let compressionSession = compressionSessionOrNil else {
            return nil
        }
        VTSessionSetProperty(compressionSession, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
        VTCompressionSessionPrepareToEncodeFrames(compressionSession)
        self.compressionSession = compressionSession
    }

    public func pushVideoBuffer(buffer: CMSampleBuffer) {
        // image buffer
        guard let imageBuffer = CMSampleBufferGetImageBuffer(buffer) else {
            assertionFailure()
            return
        }

        // pts
        let pts = CMSampleBufferGetPresentationTimeStamp(buffer)
        guard CMTIME_IS_VALID(pts) else {
            assertionFailure()
            return
        }

        // duration: fall back to the gap since the previous frame
        var duration = CMSampleBufferGetDuration(buffer)
        if CMTIME_IS_INVALID(duration) && CMTIME_IS_VALID(self.lastInputPTS) {
            duration = CMTimeSubtract(pts, self.lastInputPTS)
        }

        index += 1
        self.lastInputPTS = pts
        print("[\(Date())]: pushVideoBuffer \(index)")
        let currentIndex = index
        VTCompressionSessionEncodeFrame(compressionSession, imageBuffer: imageBuffer, presentationTimeStamp: pts, duration: duration, frameProperties: nil, infoFlagsOut: nil) { [weak self] status, encodeInfoFlags, sampleBuffer in
            print("[\(Date())]: compressed \(currentIndex)")
            if let sampleBuffer = sampleBuffer {
                self?.didEncodeFrameBuffer(buffer: sampleBuffer, id: currentIndex)
            }
        }
    }

    deinit {
        VTCompressionSessionInvalidate(compressionSession)
    }

    private func didEncodeFrameBuffer(buffer: CMSampleBuffer, id: Int) {
        guard let attachments = CMSampleBufferGetSampleAttachmentsArray(buffer, createIfNecessary: true) else {
            return
        }
        let dic = Unmanaged<CFDictionary>.fromOpaque(CFArrayGetValueAtIndex(attachments, 0)).takeUnretainedValue()
        // A frame is a keyframe when kCMSampleAttachmentKey_NotSync is absent.
        // Use passUnretained: passRetained would leak the key on every frame.
        let keyframe = !CFDictionaryContainsKey(dic, Unmanaged.passUnretained(kCMSampleAttachmentKey_NotSync).toOpaque())
        // print("[\(Date())]: didEncodeFrameBuffer \(id) is I frame: \(keyframe)")
        if keyframe,
           let formatDescription = CMSampleBufferGetFormatDescription(buffer) {
            // https://www.slideshare.net/instinctools_EE_Labs/videostream-compression-in-ios
            var number = 0
            CMVideoFormatDescriptionGetH264ParameterSetAtIndex(formatDescription, parameterSetIndex: 0, parameterSetPointerOut: nil, parameterSetSizeOut: nil, parameterSetCountOut: &number, nalUnitHeaderLengthOut: nil)
            // SPS and PPS and so on...
            let parameterSets = NSMutableData()
            for index in 0..<number {
                var parameterSetPointer: UnsafePointer<UInt8>?
                var parameterSetLength = 0
                CMVideoFormatDescriptionGetH264ParameterSetAtIndex(formatDescription, parameterSetIndex: index, parameterSetPointerOut: &parameterSetPointer, parameterSetSizeOut: &parameterSetLength, parameterSetCountOut: nil, nalUnitHeaderLengthOut: nil)
                // parameterSets.append(startCode, length: startCodeLength)
                if let parameterSetPointer = parameterSetPointer {
                    parameterSets.append(parameterSetPointer, length: parameterSetLength)
                    if index == 0 {
                        print("SPS is \(Data(bytes: parameterSetPointer, count: parameterSetLength) as NSData) with length \(parameterSetLength)")
                    } else if index == 1 {
                        print("PPS is \(Data(bytes: parameterSetPointer, count: parameterSetLength) as NSData) with length \(parameterSetLength)")
                    }
                }
            }
            print("[\(Date())]: parameterSets \(parameterSets.length)")
        }
    }
}
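If the goal is to send the encoder output over a raw H.264 stream (Annex B), each parameter set has to be prefixed with a start code before the frame NAL units, which is what the commented-out `parameterSets.append(startCode, ...)` line hints at. A small sketch of that framing step (the input array stands in for the SPS/PPS bytes collected in `didEncodeFrameBuffer` above):

```swift
import Foundation

// Concatenate parameter sets with 4-byte Annex B start codes.
func annexB(_ parameterSets: [Data]) -> Data {
    let startCode = Data([0x00, 0x00, 0x00, 0x01])
    var out = Data()
    for set in parameterSets {
        out.append(startCode)
        out.append(set)
    }
    return out
}
```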