HaishinKit for iOS, macOS, tvOS, visionOS, and Android.
- A camera and microphone streaming library via RTMP and SRT for iOS, macOS, tvOS, and visionOS.
- README.md contains unreleased content. You can test it against the master branch.
- API documentation
- If you need help with making LiveStreaming requests using HaishinKit, use the Q&A category in GitHub Discussions.
- If you'd like to discuss a feature request, use the Idea category in GitHub Discussions.
- If you've encountered a bug 🐛 with HaishinKit, open a GitHub Issue using the Bug report template.
  - Trace-level logs are very useful. Please set `LBLogger.with(HaishinKitIdentifier).level = .trace`.
  - If you don't use the issue template, I will close your issue immediately without comment.
- If you'd like to contribute, submit a pull request using the pull request template.
- If you'd prefer email-based communication instead of GitHub:
  - The consulting fee is $50 per incident. I can respond within a few days.
- Discord chatroom.
- If you speak Japanese, please communicate in Japanese!
Project name | Notes | License |
---|---|---|
HaishinKit for Android | Camera and microphone streaming library via RTMP for Android. | BSD 3-Clause "New" or "Revised" License |
HaishinKit for Flutter | Camera and microphone streaming library via RTMP for Flutter. | BSD 3-Clause "New" or "Revised" License |
RTMP
- Authentication
- Publish and Recording
- Playback (Beta)
- Adaptive bitrate streaming
- Action Message Format
  - AMF0
  - AMF3
- SharedObject
- RTMPS
  - Native (RTMP over SSL/TLS)
  - Tunneled (RTMPT over SSL/TLS) (Technical Preview)
- RTMPT (Technical Preview)
- ReplayKit Live as a Broadcast Upload Extension
- Enhanced RTMP

SRT
- Publish and Recording (H264/HEVC/AAC)
- Playback (Beta)
- Modes
  - Caller
  - Listener
  - Rendezvous
The offscreen rendering feature lets you overlay any text or bitmap on the video while broadcasting or viewing. This enables applications such as watermarks and time displays.
Example
```swift
// Enable offscreen rendering and start the screen object tree.
stream.videoMixerSettings.mode = .offscreen
stream.screen.startRunning()

// textScreenObject is a TextScreenObject held elsewhere (e.g. a view controller property).
textScreenObject.horizontalAlignment = .right
textScreenObject.verticalAlignment = .bottom
textScreenObject.layoutMargin = .init(top: 0, left: 0, bottom: 16, right: 16)

stream.screen.backgroundColor = UIColor.black.cgColor

// A picture-in-picture style sub-view of video track 1 with a monochrome effect.
let videoScreenObject = VideoTrackScreenObject()
videoScreenObject.cornerRadius = 32.0
videoScreenObject.track = 1
videoScreenObject.horizontalAlignment = .right
videoScreenObject.layoutMargin = .init(top: 16, left: 0, bottom: 0, right: 16)
videoScreenObject.size = .init(width: 160 * 2, height: 90 * 2)
_ = videoScreenObject.registerVideoEffect(MonochromeEffect())

// A bitmap overlay loaded from the app bundle.
let imageScreenObject = ImageScreenObject()
let imageURL = URL(fileURLWithPath: Bundle.main.path(forResource: "game_jikkyou", ofType: "png") ?? "")
if let provider = CGDataProvider(url: imageURL as CFURL) {
    imageScreenObject.verticalAlignment = .bottom
    imageScreenObject.layoutMargin = .init(top: 0, left: 0, bottom: 16, right: 0)
    imageScreenObject.cgImage = CGImage(
        pngDataProviderSource: provider,
        decode: nil,
        shouldInterpolate: false,
        intent: .defaultIntent
    )
} else {
    logger.info("no image")
}

// A video asset read from the bundle and rendered as another overlay.
let assetScreenObject = AssetScreenObject()
assetScreenObject.size = .init(width: 180, height: 180)
assetScreenObject.layoutMargin = .init(top: 16, left: 16, bottom: 0, right: 0)
try? assetScreenObject.startReading(AVAsset(url: URL(fileURLWithPath: Bundle.main.path(forResource: "SampleVideo_360x240_5mb", ofType: "mp4") ?? "")))

// Add the screen objects to the screen object tree.
try? stream.screen.addChild(assetScreenObject)
try? stream.screen.addChild(videoScreenObject)
try? stream.screen.addChild(imageScreenObject)
try? stream.screen.addChild(textScreenObject)
stream.screen.delegate = self
```
Feature | PiPHKView | MTHKView |
---|---|---|
Engine | AVSampleBufferDisplayLayer | Metal |
Publish | ✔ | ✔ |
Playback | ✔ | ✔ |
Visual effects | ✔ | ✔ |
Multi-camera | ✔ | ✔ |
Picture in Picture | ✔ | |
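Both views attach to a stream the same way. A minimal sketch of choosing PiPHKView, assuming it exposes the same `attachStream(_:)` interface as MTHKView used in the examples below (`stream` is an RTMPStream or SRTStream created as shown there):

```swift
// A minimal sketch: PiPHKView renders via AVSampleBufferDisplayLayer,
// which is what enables Picture in Picture support.
let pipView = PiPHKView(frame: view.bounds)
pipView.videoGravity = .resizeAspectFill
pipView.attachStream(stream) // assumed to mirror MTHKView's attachStream(_:)
view.addSubview(pipView)
```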
- tvOS 17.0 support for AVCaptureSession (see the availability sketch after this list).
- Supports Multitasking Camera Access.
- Supports the "Allow app extension API only" option.
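Because AVCaptureSession-based capture only arrived on tvOS with 17.0, a hedged sketch of guarding the attach call behind an availability check (the attachCamera call mirrors the usage examples below):

```swift
#if os(tvOS)
if #available(tvOS 17.0, *) {
    // Camera capture via AVCaptureSession is only available on tvOS 17.0+.
    stream.attachCamera(AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back), track: 0) { _, error in
        if let error {
            logger.warn(error)
        }
    }
}
#endif
```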
Examples in this project are available for iOS (UIKit), iOS (SwiftUI), macOS, and tvOS. The macOS example requires an Apple Silicon Mac.
- Camera and microphone publish.
- Playback
```sh
git clone https://github.com/shogo4405/HaishinKit.swift.git
cd HaishinKit.swift
carthage bootstrap --platform iOS,macOS,tvOS --use-xcframeworks
open HaishinKit.xcodeproj
```
Version | Xcode | Swift |
---|---|---|
1.9.0+ | 15.4+ | 5.10+ |
1.8.0+ | 15.3+ | 5.9+ |
1.7.0+ | 15.0+ | 5.9+ |
1.6.0+ | 15.0+ | 5.8+ |
- | iOS | tvOS | macOS | visionOS | watchOS |
---|---|---|---|---|---|
HaishinKit | 13.0+ | 13.0+ | 10.15+ | 1.0+ | - |
SRTHaishinKit | 13.0+ | 13.0+ | 13.0+ | 1.0+ | - |
Please include the following keys in your Info.plist (a runtime permission-request sketch follows these lists).
iOS 10.0+
- NSMicrophoneUsageDescription
- NSCameraUsageDescription
macOS 10.14+
- NSMicrophoneUsageDescription
- NSCameraUsageDescription
tvOS 17.0+
- NSMicrophoneUsageDescription
- NSCameraUsageDescription
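These keys only declare usage; at runtime the system still prompts the user for access. A minimal sketch using the standard AVFoundation APIs:

```swift
import AVFoundation

// Request camera and microphone access before attaching capture devices.
AVCaptureDevice.requestAccess(for: .video) { granted in
    print("camera access granted:", granted)
}
AVCaptureDevice.requestAccess(for: .audio) { granted in
    print("microphone access granted:", granted)
}
```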
HaishinKit has a multi-module configuration. If you want to use the SRT protocol, use SRTHaishinKit. SRTHaishinKit is only available via SPM.
- | HaishinKit | SRTHaishinKit |
---|---|---|
SPM | https://github.com/shogo4405/HaishinKit.swift | https://github.com/shogo4405/HaishinKit.swift |
CocoaPods | source 'https://github.com/CocoaPods/Specs.git'<br>use_frameworks!<br>def import_pods<br>pod 'HaishinKit', '~> 1.8.2'<br>end<br>target 'Your Target' do<br>platform :ios, '13.0'<br>import_pods<br>end | Not supported. |
Carthage | github "shogo4405/HaishinKit.swift" ~> 1.8.2 | Not supported. |
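Since SRTHaishinKit is SPM-only, here is a hedged Package.swift sketch; the target name is hypothetical, and the product names HaishinKit and SRTHaishinKit plus the pinned version are assumptions to verify against the package manifest:

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp", // hypothetical app target
    dependencies: [
        .package(url: "https://github.com/shogo4405/HaishinKit.swift.git", from: "1.8.2")
    ],
    targets: [
        .target(name: "MyApp", dependencies: [
            .product(name: "HaishinKit", package: "HaishinKit.swift"),
            .product(name: "SRTHaishinKit", package: "HaishinKit.swift") // only needed for SRT
        ])
    ]
)
```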
Please make sure you have set up and activated your AVAudioSession on iOS.
```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
} catch {
    print(error)
}
```
```swift
let connection = RTMPConnection()
let stream = RTMPStream(connection: connection)

// Attach the microphone and the back camera to track 0.
stream.attachAudio(AVCaptureDevice.default(for: .audio)) { _, error in
    if let error {
        logger.warn(error)
    }
}
stream.attachCamera(AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back), track: 0) { _, error in
    if let error {
        logger.warn(error)
    }
}

let hkView = MTHKView(frame: view.bounds)
hkView.videoGravity = AVLayerVideoGravity.resizeAspectFill
hkView.attachStream(stream)
// add ViewController#view
view.addSubview(hkView)

connection.connect("rtmp:///appName/instanceName")
stream.publish("streamName")
```
```swift
let connection = RTMPConnection()
let stream = RTMPStream(connection: connection)

let hkView = MTHKView(frame: view.bounds)
hkView.videoGravity = AVLayerVideoGravity.resizeAspectFill
hkView.attachStream(stream)
// add ViewController#view
view.addSubview(hkView)

connection.connect("rtmp:///appName/instanceName")
stream.play("streamName")
```
```swift
let connection = RTMPConnection()
connection.connect("rtmp://username:password@localhost/appName/instanceName")
```
```swift
let connection = SRTConnection()
let stream = SRTStream(connection: connection)

stream.attachAudio(AVCaptureDevice.default(for: .audio)) { _, error in
    if let error {
        logger.warn(error)
    }
}
stream.attachCamera(AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back), track: 0) { _, error in
    if let error {
        logger.warn(error)
    }
}

let hkView = MTHKView(frame: view.bounds)
hkView.videoGravity = AVLayerVideoGravity.resizeAspectFill
hkView.attachStream(stream)
// add ViewController#view
view.addSubview(hkView)

connection.connect("srt://host:port?option=foo")
stream.publish()
```
```swift
let connection = SRTConnection()
let stream = SRTStream(connection: connection)

let hkView = MTHKView(frame: view.bounds)
hkView.videoGravity = AVLayerVideoGravity.resizeAspectFill
hkView.attachStream(stream)
// add ViewController#view
view.addSubview(hkView)

connection.connect("srt://host:port?option=foo")
stream.play()
```
```swift
stream.frameRate = 30
stream.sessionPreset = AVCaptureSession.Preset.medium

// beginConfiguration() and commitConfiguration() are called internally,
// so do not call them yourself inside this closure.
stream.configuration { session in
    session.automaticallyConfiguresApplicationAudioSession = true
}
```
Specifies the audio capture settings.
```swift
let audioDevice = AVCaptureDevice.default(for: .audio)
stream.attachAudio(audioDevice, track: 0) { audioUnit, error in
    // Adjust audioUnit capture settings here if needed.
}
```
If you want to mix multiple audio tracks, please enable the feature flag:

```swift
stream.isMultiTrackAudioMixingEnabled = true
```
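Once the flag is enabled, additional sources can be attached to other track numbers and mixed. A minimal sketch, reusing the default audio device as a stand-in for a second input (any second device would do):

```swift
// Attach a second audio source to track 1; the device choice here is illustrative.
stream.attachAudio(AVCaptureDevice.default(for: .audio), track: 1) { _, error in
    if let error {
        logger.warn(error)
    }
}
```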
When you specify a sample rate, resampling is performed. In the multi-channel case, downmixing can also be applied.
```swift
// Setting a value to 0 keeps the value of the track specified in mainTrack.
stream.audioMixerSettings = IOAudioMixerSettings(
    sampleRate: 44100,
    channels: 0
)

stream.audioMixerSettings.isMuted = false
stream.audioMixerSettings.mainTrack = 0
stream.audioMixerSettings.tracks = [
    0: .init(
        isMuted: false,
        downmix: true,
        channelMap: nil
    )
]
```
```swift
/// Specifies the bitRate of audio output.
stream.audioSettings.bitRate = 64 * 1000
/// Specifies whether to downmix the channels. Currently supports input sources with 4, 5, 6, and 8 channels.
stream.audioSettings.downmix = true
/// Specifies the map of output to input channels.
stream.audioSettings.channelMap = nil
```
Specifies the video capture settings.
```swift
let front = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)
stream.attachCamera(front, track: 0) { videoUnit, error in
    videoUnit?.isVideoMirrored = true
    videoUnit?.preferredVideoStabilizationMode = .standard
    videoUnit?.colorFormat = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
}
```
```swift
/// Specifies the image rendering mode: .passthrough or .offscreen.
stream.videoMixerSettings.mode = .passthrough
/// Specifies whether the video signal is muted (frozen).
stream.videoMixerSettings.isMuted = false
/// Specifies the main track number.
stream.videoMixerSettings.mainTrack = 0
```
```swift
stream.videoSettings = .init(
    videoSize: .init(width: 854, height: 480),
    profileLevel: kVTProfileLevel_H264_Baseline_3_1 as String,
    bitRate: 640 * 1000,
    maxKeyFrameIntervalDuration: 2,
    scalingMode: .trim,
    bitRateMode: .average,
    allowFrameReordering: nil,
    isHardwareEncoderEnabled: true
)
```
Internally, audio data with more than 3 channels is now handled. If you encounter audio issues with IOStreamRecorder, it is recommended to cap it at a maximum of 2 channels when saving locally.

```swift
// Cap the mixer output at 2 channels for local recording.
let channels = min(stream.audioInputFormats[0].channels ?? 1, 2)
stream.audioMixerSettings = .init(sampleRate: 0, channels: channels)
```
```swift
// Specifies the recording settings. 0 means the same as the input.
let recorder = IOStreamRecorder()
stream.addObserver(recorder)

recorder.settings = [
    AVMediaType.audio: [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 0,
        AVNumberOfChannelsKey: 0,
        // AVEncoderBitRateKey: 128000,
    ],
    AVMediaType.video: [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoHeightKey: 0,
        AVVideoWidthKey: 0,
        /*
        AVVideoCompressionPropertiesKey: [
            AVVideoMaxKeyFrameIntervalDurationKey: 2,
            AVVideoProfileLevelKey: AVVideoProfileLevelH264Baseline30,
            AVVideoAverageBitRateKey: 512000
        ]
        */
    ]
]

recorder.startRunning()
// recorder.stopRunning()
```
BSD-3-Clause