
I'm having some trouble rescaling GStreamer's video output to the dimensions of the window the video is displayed in (while preserving the video's aspect ratio). The problem is that I first need to preroll the video to be able to determine its dimensions by retrieving the negotiated caps, and then calculate the dimensions at which it should be displayed to fit the window. Once I have prerolled the video and obtained the size caps, I can no longer change the video's dimensions: setting new caps still results in the video being output at its original size. What do I have to do to solve this?

Just for completeness: in the current implementation I cannot render to an OpenGL texture, which would easily solve this problem, since you could simply render the output to a texture and scale it to fit the screen. I have to draw the output on a pygame surface, which therefore needs to have the correct dimensions. pygame does offer functionality for scaling its surfaces, but I think an implementation like that (which is what I have now) is slower than retrieving frames at the correct size directly from GStreamer (am I right about that?)

This is the code with which I load and display the video (I have omitted the main loop stuff):

def calcScaledRes(self, screen_res, image_res):
    """Calculate image size so it fits the screen
    Args
        screen_res (tuple)   -  Display window size/Resolution
        image_res (tuple)    -  Image width and height

    Returns
        tuple - width and height of image scaled to window/screen
    """
    rs = screen_res[0]/float(screen_res[1])
    ri = image_res[0]/float(image_res[1])

    if rs > ri:
        return (int(image_res[0] * screen_res[1]/image_res[1]), screen_res[1])
    else:
        return (screen_res[0], int(image_res[1]*screen_res[0]/image_res[0]))

def load(self, vfile):
    """
    Loads a videofile and makes it ready for playback

    Arguments:
    vfile -- the uri to the file to be played
    """
    # Info required for color space conversion (YUV->RGB)
    # masks are necessary for correct display on unix systems
    _VIDEO_CAPS = ','.join([
        'video/x-raw-rgb',
        'red_mask=(int)0xff0000',
        'green_mask=(int)0x00ff00',
        'blue_mask=(int)0x0000ff'
    ])

    self.caps = gst.Caps(_VIDEO_CAPS)

    # Create videoplayer and load URI
    self.player = gst.element_factory_make("playbin2", "player")        
    self.player.set_property("uri", vfile)

    # Enable deinterlacing of video if necessary
    self.player.props.flags |= (1 << 9)     

    # Reroute frame output to Python
    self._videosink = gst.element_factory_make('appsink', 'videosink')      
    self._videosink.set_property('caps', self.caps)
    self._videosink.set_property('async', True)
    self._videosink.set_property('drop', True)
    self._videosink.set_property('emit-signals', True)
    self._videosink.connect('new-buffer', self.__handle_videoframe)     
    self.player.set_property('video-sink', self._videosink)

    # Preroll movie to get dimension data
    self.player.set_state(gst.STATE_PAUSED)

    # If movie is loaded correctly, info about the clip should be available
    if self.player.get_state(gst.CLOCK_TIME_NONE)[0] == gst.STATE_CHANGE_SUCCESS:
        pads = self._videosink.pads()
        for pad in pads:
            caps = pad.get_negotiated_caps()[0]
            self.vidsize = caps['width'], caps['height']
    else:
        raise exceptions.runtime_error("Failed to retrieve video size")

    # Calculate size of the video when fit to the screen
    self.scaledVideoSize = self.calcScaledRes((self.screen_width, self.screen_height), self.vidsize)
    # Calculate the top-left corner of the video (to later center it on screen)
    self.vidPos = ((self.screen_width - self.scaledVideoSize[0]) / 2, (self.screen_height - self.scaledVideoSize[1]) / 2)

    # Add width and height info to video caps and reload caps
    _VIDEO_CAPS += ", width={0}, height={1}".format(self.scaledVideoSize[0], self.scaledVideoSize[1])
    self.caps = gst.Caps(_VIDEO_CAPS)
    self._videosink.set_property('caps', self.caps)  #??? not working, video still displayed in original size

def __handle_videoframe(self, appsink):
    """
    Callback method for handling a video frame

    Arguments:
    appsink -- the sink to which gst supplies the frame (not used)
    """     
    buffer = self._videosink.emit('pull-buffer')        

    img = pygame.image.frombuffer(buffer.data, self.vidsize, "RGB")

    # Upscale image to a new surface if presented fullscreen
    # Create the surface if it doesn't exist yet and keep rendering to this surface
    # for future frames (should be faster)

    if not hasattr(self,"destSurf"):                
        self.destSurf = pygame.transform.scale(img, self.destsize)
    else:
        pygame.transform.scale(img, self.destsize, self.destSurf)
    self.screen.blit(self.destSurf, self.vidPos)

    # Swap the buffers
    pygame.display.flip()

    # Increase frame counter
    self.frameNo += 1

1 Answer


I'm pretty sure that your issue was (as it is a very long time since you asked this question) that you never hooked up the bus to watch for the messages that were emitted.
The code for this is usually something like this:

    def some_function(self):
        # ... code defining Gplayer (the pipeline) goes here ...
        Gplayer.set_property('flags', self.GST_VIDEO | self.GST_AUDIO | self.GST_TEXT | self.GST_SOFT_VOLUME | self.GST_DEINTERLACE)
        # ... more pipeline setup ...

        # Finally, create the bus to listen for messages
        bus = Gplayer.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()
        bus.connect('message', self.OnBusMessage)
        bus.connect('sync-message::element', self.OnSyncMessage)
    # Listen for gstreamer bus messages
    def OnBusMessage(self, bus, message):
        t = message.type
        if t == Gst.MessageType.ERROR:
            pass
        elif t == Gst.MessageType.EOS:
            print("End of Audio")
        return True

    def OnSyncMessage(self, bus, msg):
        if msg.get_structure() is None:
            return True
        message_name = msg.get_structure().get_name()
        if message_name == 'prepare-window-handle':
            imagesink = msg.src
            imagesink.set_property('force-aspect-ratio', True)
            imagesink.set_window_handle(self.panel1.GetHandle())

The key bit for your issue is setting up a callback for the sync message and, in that callback, setting the force-aspect-ratio property to True.
This property ensures that the video always fits the window it is being displayed in.
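
If you create the video sink yourself instead of letting playbin pick one, the same property can be set up front, without waiting for the sync message. A minimal sketch in the same GI-style API as the snippet above; the choice of xvimagesink and the surrounding setup are my assumptions, not part of either post:

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)

    # Illustrative names: a playbin pipeline with an explicitly chosen window sink
    player = Gst.ElementFactory.make('playbin', 'player')
    videosink = Gst.ElementFactory.make('xvimagesink', 'videosink')

    # Letterbox/pillarbox instead of stretching when the window shape differs
    videosink.set_property('force-aspect-ratio', True)

    player.set_property('video-sink', videosink)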

Note that self.panel1.GetHandle() returns the native handle of the panel in which the video is being displayed.
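
For the appsink-based setup in the original question there is no window sink to set force-aspect-ratio on. An alternative, offered only as a rough sketch in the gst-python 0.10 API the question uses (the bin construction and the helper name are my own, not code from either post), is to let GStreamer scale the frames before they reach the appsink: wrap videoscale, ffmpegcolorspace and a capsfilter together with the appsink in a bin, and hand that bin to playbin2 as its video-sink. The target size would still be computed beforehand, for example with the question's calcScaledRes:

    import gst

    def build_scaled_appsink(width, height):
        """Bin of videoscale ! ffmpegcolorspace ! capsfilter ! appsink, so the
        buffers pulled from the appsink are already scaled to width x height."""
        sink_bin = gst.Bin('scaled-videosink')

        scaler = gst.element_factory_make('videoscale', 'scaler')
        colorspace = gst.element_factory_make('ffmpegcolorspace', 'colorspace')
        capsfilter = gst.element_factory_make('capsfilter', 'filter')
        caps_str = ('video/x-raw-rgb, width={0}, height={1}, '
                    'red_mask=(int)0xff0000, green_mask=(int)0x00ff00, '
                    'blue_mask=(int)0x0000ff').format(width, height)
        capsfilter.set_property('caps', gst.Caps(caps_str))

        appsink = gst.element_factory_make('appsink', 'videosink')
        appsink.set_property('emit-signals', True)
        appsink.set_property('drop', True)

        sink_bin.add(scaler, colorspace, capsfilter, appsink)
        gst.element_link_many(scaler, colorspace, capsfilter, appsink)

        # Expose the scaler's sink pad so playbin2 can link the whole bin as a sink
        sink_bin.add_pad(gst.GhostPad('sink', scaler.get_pad('sink')))
        return sink_bin, appsink

    # Usage: compute the target size from the window first, then
    # sink_bin, videosink = build_scaled_appsink(*scaled_size)
    # player.set_property('video-sink', sink_bin)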

I appreciate that you will have moved on but hopefully this will help someone else trawling through the archives.

answered 2015-08-25T14:45:21.610