GNOME Bugzilla – Bug 788563
rtph264depay frequently introduces frame drops.
Last modified: 2018-02-01 15:51:57 UTC
Displaying a stream from an RTP H.264 source, GStreamer frequently shows video artifacts as if it drops frames. The same issue is not noticeable with other (libav-based) players.

gst-launch periodically throws the error:

WARNING: from element /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: Could not decode stream.
Additional debug info:
gstrtpbasedepayload.c(466): gst_rtp_base_depayload_handle_buffer (): /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0:
Received invalid RTP payload, dropping

Tested with the following pipelines, with identical results:

gst-launch-1.0 udpsrc port=15005 caps=application/x-rtp ! rtph264depay ! queue ! avdec_h264 ! videoconvert ! queue ! autovideosink sync=false

gst-launch-1.0 udpsrc port=15005 caps=application/x-rtp ! rtph264depay ! queue ! avdec_h264 ! queue ! videoconvert ! autovideosink sync=false

gst-launch-1.0 udpsrc port=15005 caps=application/x-rtp ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false

PCAP file of the video stream triggering the issue:
https://1drv.ms/u/s!Ao_vQjquc4qCm3wXa26WbyXMJrOl
What is the bitrate of that stream? There are known problems with udpsrc not being able to keep up; you can enlarge the kernel receive buffer by setting udpsrc's buffer-size property.
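As a rough sizing sketch (the numbers below are illustrative assumptions, not measurements from this report): the socket buffer has to absorb the largest burst the reader can fall behind by, so a starting point for buffer-size is roughly bitrate × stall window.

```python
def suggested_udp_buffer_size(bitrate_bps: int, burst_ms: int) -> int:
    """Bytes needed to absorb burst_ms of traffic at bitrate_bps.

    Simple arithmetic sketch: convert bits/s to bytes/s, then scale
    by the stall window in milliseconds.
    """
    return (bitrate_bps // 8) * burst_ms // 1000

# Example: a 20 Mbit/s stream and a 200 ms scheduling hiccup
print(suggested_udp_buffer_size(20_000_000, 200))  # 500000 bytes
```

The result would then be passed to udpsrc, e.g. `udpsrc port=15005 buffer-size=500000 ...` (buffer-size is in bytes).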
This error happens because a packet being processed is not a valid RTP packet, which is indeed the case here: there seem to be some rogue packets in the capture. I don't think that should be a problem, though. The RTP sequence numbers of the valid packets look continuous at first glance, and I can't reproduce any artefacts. Should I be seeing decoding artefacts with this pipeline?

gst-launch-1.0 filesrc location=~/samples/misc/788563-rtph264depay-invalid-payload-decode-artifacts.pcap ! pcapparse caps=application/x-rtp,media=video,encoding-name=H264,clock-rate=90000 ! rtph264depay ! avdec_h264 ! xvimagesink
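To double-check the "sequence numbers look continuous" observation, here is a minimal sketch that scans a list of RTP sequence numbers for gaps. It assumes the numbers were already extracted from the pcap with a tool such as tshark, and that packets arrive in order without duplicates; RTP sequence numbers are 16-bit and wrap at 65536.

```python
def missing_rtp_seqs(seqs):
    """Return the sequence numbers missing between consecutive packets.

    Handles the 16-bit wraparound (65535 -> 0). Assumes in-order
    delivery with no duplicates.
    """
    gaps = []
    for prev, cur in zip(seqs, seqs[1:]):
        expected = (prev + 1) & 0xFFFF
        while expected != cur:
            gaps.append(expected)
            expected = (expected + 1) & 0xFFFF
    return gaps

print(missing_rtp_seqs([65534, 65535, 0, 1]))  # [] - continuous across wrap
print(missing_rtp_seqs([10, 11, 14]))          # [12, 13] - two packets lost
```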
I have tried to reproduce the issue with the same original setup and the latest GStreamer, and I have been unable to do so. I guess whatever issue there was has since been fixed.
Let's hope so! If you run into it again, please re-open or file a new bug, thanks!