By following the advice of others in this thread, including @nschoe (the OP) and @Benjamin Trent, I finally got this working with Janus and GStreamer (1.9). I figured I'd include my code to make life easier for the next person who comes along, since so much trial and error was involved:
First, build/install GStreamer and all the plugins it needs (for my setup, I had to make sure two plugin directories were on the GST_PLUGIN_SYSTEM_PATH environment variable). Then initialize GStreamer in your Janus plugin's initialization (the init() callback):
gst_init(NULL, NULL);
For each WebRTC session, you will need to keep some GStreamer handles around, so add the following to your Janus plugin session struct:
GstElement *pipeline, *appsrc, *multifilesink;
When a Janus plugin session is created (the create_session() callback), set up the GStreamer pipeline for that session (in my case I needed to reduce the frame rate, hence the videorate/capsrate elements; you may not need them):
GstElement *conv, *vp8depay, *vp8dec, *videorate, *capsrate, *pngenc;
session->pipeline = gst_pipeline_new("pipeline");
session->appsrc = gst_element_factory_make("appsrc", "source");
vp8depay = gst_element_factory_make("rtpvp8depay", NULL);
vp8dec = gst_element_factory_make("vp8dec", NULL);
videorate = gst_element_factory_make("videorate", NULL);
capsrate = gst_element_factory_make("capsfilter", NULL);
conv = gst_element_factory_make("videoconvert", "conv");
pngenc = gst_element_factory_make("pngenc", NULL);
session->multifilesink = gst_element_factory_make("multifilesink", NULL);
GstCaps* capsRate = gst_caps_new_simple("video/x-raw", "framerate", GST_TYPE_FRACTION, 15, 1, NULL);
g_object_set(capsrate, "caps", capsRate, NULL);
gst_caps_unref(capsRate);
GstCaps* caps = gst_caps_new_simple ("application/x-rtp",
"media", G_TYPE_STRING, "video",
"encoding-name", G_TYPE_STRING, "VP8-DRAFT-IETF-01",
"payload", G_TYPE_INT, 96,
"clock-rate", G_TYPE_INT, 90000,
NULL);
g_object_set(G_OBJECT (session->appsrc), "caps", caps, NULL);
gst_caps_unref(caps);
gst_bin_add_many(GST_BIN(session->pipeline), session->appsrc, vp8depay, vp8dec, conv, videorate, capsrate, pngenc, session->multifilesink, NULL);
gst_element_link_many(session->appsrc, vp8depay, vp8dec, conv, videorate, capsrate, pngenc, session->multifilesink, NULL);
// Setup appsrc
g_object_set(G_OBJECT (session->appsrc), "stream-type", 0, NULL);
g_object_set(G_OBJECT (session->appsrc), "format", GST_FORMAT_TIME, NULL);
g_object_set(G_OBJECT (session->appsrc), "is-live", TRUE, NULL);
g_object_set(G_OBJECT (session->appsrc), "do-timestamp", TRUE, NULL);
g_object_set(session->multifilesink, "location", "/blah/some/dir/output-%d.png", NULL);
gst_element_set_state(session->pipeline, GST_STATE_PLAYING);
When an incoming RTP packet has been demultiplexed by Janus and is ready to read (the incoming_rtp() callback), feed it into the GStreamer pipeline:
if(video && session->video_active) {
// Send to GStreamer
    guchar *temp = (guchar *)g_malloc(len); /* g_malloc so it pairs with the g_free destroy notify below */
    memcpy(temp, buf, len);
GstBuffer* buffer = gst_buffer_new_wrapped_full(0, temp, len, 0, len, temp, g_free);
gst_app_src_push_buffer(GST_APP_SRC(session->appsrc), buffer);
}
Finally, when the Janus plugin session ends (the destroy_session() callback), be sure to free the GStreamer resources:
if(session->pipeline) {
gst_element_set_state(session->pipeline, GST_STATE_NULL);
gst_object_unref(session->pipeline);
session->pipeline = NULL;
}