Bug 752331 - playback loses frames regularly and increasingly on run-times of more than 2 days
Status: RESOLVED NOTGNOME
Product: GStreamer
Classification: Platform
Component: gstreamer (core)
Version: 1.4.5
Hardware: Other
OS: Windows
Priority: Normal
Severity: major
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2015-07-13 14:30 UTC by Tiago Rezende
Modified: 2015-07-13 21:37 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Tiago Rezende 2015-07-13 14:30:06 UTC
When a video is played for more than two days using playbin and appsink, with regular seeks to defined points in the stream, the decoder starts to discard frames without delivering them to the appsink.

Degradation of timing starts at the end of the second day and increases inverse-exponentially, independent of the video or codec chosen. I suspect an integer conversion with precision loss, probably from guint64 to double/float and back, somewhere in the timing logic.
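
To illustrate the kind of error suspected above: this is not GStreamer code, just a standalone sketch of what a guint64 -> float -> guint64 round trip does to a nanosecond timestamp two days into playback (a round trip through double is still exact at this scale):

   #include <glib.h>

   int main(void) {
      /* roughly two days of playback, in nanoseconds (guint64, like GstClockTime) */
      guint64 position = G_GUINT64_CONSTANT(172800000000000);   /* 2 * 24 * 3600 * 1e9 */

      guint64 via_double = (guint64)(double)position;   /* exact: well below 2^53 */
      guint64 via_float  = (guint64)(float)position;    /* rounded onto a ~16.8 ms grid */

      g_print("original:   %" G_GUINT64_FORMAT "\n", position);
      g_print("via double: %" G_GUINT64_FORMAT "\n", via_double);
      g_print("via float:  %" G_GUINT64_FORMAT "\n", via_float);
      return 0;
   }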
Comment 1 Nicolas Dufresne (ndufresne) 2015-07-13 15:15:03 UTC
Even if it sounds unspecific, can you provide steps to reproduce? Is this only happening on Windows? Which sink elements are in use?
Comment 2 Tiago Rezende 2015-07-13 18:38:05 UTC
(In reply to Nicolas Dufresne (stormer) from comment #1)
> Even if it sounds unspecific, can you provide steps to reproduce? Is this
> only happening on Windows? Which sink elements are in use?

As of now, the instances where I've noticed this problem are on Windows, but I've set up an OS X rig to test for the same problem. The application I'm developing sets up a 'playbin', an 'appsink' for loading video frames into an OpenGL 3.2 texture, and a 'gconfaudiosink' for sound. Both video and audio start to degrade at the same time, but gst_element_query_position() keeps reporting a monotonically increasing position.

Steps to reproduce:

1) Setup:
   gst_init(NULL,NULL);
   pipeline = gst_element_factory_make("playbin","player");
   videosink = gst_element_factory_make("appsink","videosink");
   audiosink = gst_element_factory_make("gconfaudiosink","audiosink");
   gst_base_sink_set_sync(GST_BASE_SINK(videosink), TRUE);
   gst_app_sink_set_drop(GST_APP_SINK(videosink), FALSE);
   g_object_set(pipeline, "video-sink", videosink, NULL);
   g_object_set(pipeline, "audio-sink", audiosink, NULL);
   g_object_set(pipeline, "uri", filename, NULL);
   g_object_set(videosink, "emit-signals", FALSE, NULL);
   // controller_obj is an object maintaining opengl state and buffers, callbacks receive it to relay messages
   GstAppSinkCallbacks gstCallbacks = { 0 }; // zero-fill so unused/reserved members stay NULL
   gstCallbacks.eos = &on_eos_from_source;
   gstCallbacks.new_preroll = &on_new_preroll_from_source;
   gstCallbacks.new_sample = &on_new_sample_from_source;
   gst_app_sink_set_callbacks(GST_APP_SINK(videosink), &gstCallbacks, controller_obj, NULL);

   ret = gst_element_set_state(pipeline, GST_STATE_PLAYING);
   if(ret == GST_STATE_CHANGE_FAILURE) {
       return ERROR;
   } else {
       return OK;
   }

2) message pump

   // busy loop
   bus = gst_element_get_bus(pipeline);
   while((msg = gst_bus_pop(bus)) != NULL) {
      on_message(bus, msg, controller_obj);
      gst_message_unref(msg);
   }
   gst_object_unref(bus);
   controller_render(controller_obj);


   void on_message(GstBus* bus, GstMessage* msg, Controller* controller_obj) {
      switch(GST_MESSAGE_TYPE(msg)) {
         ...
         // simplest way I managed to reproduce the degradation was to simply put a seek to the beginning of the stream when receiving an EOS
         case GST_MESSAGE_EOS:
            gst_element_seek(pipeline, 1.0, GST_FORMAT_TIME, (GstSeekFlags)(GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_SKIP), GST_SEEK_TYPE_SET, 2000000, GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);
            break;
         ...
      }
   }

   GstFlowReturn on_new_sample_from_source(GstAppSink *appsink, void* data) {
      Controller *controller = (Controller*)data;
      GstSample *sample = gst_app_sink_pull_sample(appsink);
      if(sample == NULL) return GST_FLOW_ERROR; // pull_sample returns NULL on flush/EOS
      if(controller_enqueue_sample(controller, sample)) return GST_FLOW_OK;
      gst_sample_unref(sample); // don't leak the sample if the queue refuses it
      return GST_FLOW_ERROR;
   }

   void controller_render(Controller* controller) {
      if(!controller_sample_queue_empty(controller)) {
         GstSample *sample = controller_pop_queue(controller);
         GstBuffer *buffer = gst_sample_get_buffer(sample);
         GstMapInfo info;
         gst_buffer_map(buffer, &info, GST_MAP_READ);
         // update opengl texture
         ...
         gst_buffer_unmap(buffer, &info);
         gst_sample_unref(sample); // the buffer is owned by the sample; unreffing it separately would over-release it
      }
      controller_draw_movie(controller);
      // current_position keeps increasing
      gint64 current_position = 0;
      gst_element_query_position(controller->pipeline, GST_FORMAT_TIME, &current_position);
      controller_draw_time(controller, current_position);
   }
Comment 3 Nicolas Dufresne (ndufresne) 2015-07-13 19:30:41 UTC
Did you forget to set the "max-lateness" property of appsink? Without that, you are telling appsink to never drop, which will cause drift if your GL renderer jitters.
Comment 4 Tiago Rezende 2015-07-13 19:37:16 UTC
(In reply to Nicolas Dufresne (stormer) from comment #3)
> Did you forget to set the "max-lateness" property of appsink? Without that,
> you are telling appsink to never drop, which will cause drift if your GL
> renderer jitters.

Oh, sorry, I forgot to mention that the sample queuing function actually checks whether the queue is longer than 3 samples and drops the oldest ones in that case, which in my opinion makes the max-lateness option unnecessary. Does it influence the running timer/frame counter inside GStreamer? I couldn't detect any jitter in playback before the pipeline started missing frames.
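
For clarity, a minimal sketch of the drop-oldest queue described here; only controller_enqueue_sample() appears in the code above, so the Controller fields (a GQueue plus a GMutex) and the exact locking are my assumptions:

   #define MAX_PENDING_SAMPLES 3

   // Keep at most 3 pending samples; when the renderer falls behind, drop the oldest.
   gboolean controller_enqueue_sample(Controller *controller, GstSample *sample) {
      if(sample == NULL) return FALSE;
      g_mutex_lock(&controller->queue_lock);
      while(g_queue_get_length(controller->sample_queue) >= MAX_PENDING_SAMPLES) {
         GstSample *oldest = g_queue_pop_head(controller->sample_queue);
         gst_sample_unref(oldest); // discarded without ever reaching the GL texture
      }
      g_queue_push_tail(controller->sample_queue, sample);
      g_mutex_unlock(&controller->queue_lock);
      return TRUE;
   }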
Comment 5 Nicolas Dufresne (ndufresne) 2015-07-13 19:41:41 UTC
(In reply to Tiago Rezende from comment #4)
> Oh, sorry, I forgot to mention that the sample queuing function actually
> checks whether the queue is longer than 3 samples and drops the oldest ones
> in that case, which in my opinion makes the max-lateness option unnecessary.
> Does it influence the running timer/frame counter inside GStreamer? I
> couldn't detect any jitter in playback before the pipeline started missing frames.

I don't think that fixes anything. If you want proper synchronization you need to rely on your sink element to do it: set "sync" to 1 and "max-lateness" != -1.
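
For reference, applied to the setup code in comment 2 this suggestion amounts to something like the following (both are standard GstBaseSink properties inherited by appsink; the 20 ms bound is only an example value, not a recommendation from this thread):

   // sync=TRUE makes appsink honour buffer timestamps; a max-lateness other than -1
   // lets the base sink drop buffers that arrive too late instead of piling them up.
   g_object_set(videosink,
                "sync", TRUE,
                "max-lateness", (gint64)(20 * GST_MSECOND),
                NULL);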
Comment 6 Tiago Rezende 2015-07-13 19:49:57 UTC
(In reply to Nicolas Dufresne (stormer) from comment #5)
> I don't think that fixes anything. If you want proper synchronization you
> need to rely on your sink element to do it: set "sync" to 1 and
> "max-lateness" != -1.

For some time I was relying on buffer PTS to do that, but when playback started skipping after the same two days I switched to just enqueuing the samples and rendering them in the order they arrive, to see if the problem would go away. Then I discovered that after those two/three days increasingly more samples are not even sent to the appsink anymore, and in a regular pattern.
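
To make "relying on buffer PTS" concrete, a minimal sketch of what that could look like in controller_render() from comment 2; controller_peek_queue() is an assumed helper mirroring controller_pop_queue(), since the real code is not shown in this report:

   // Hold the queued sample until the pipeline has reached its presentation timestamp.
   gint64 position = 0;
   if(gst_element_query_position(controller->pipeline, GST_FORMAT_TIME, &position) &&
      !controller_sample_queue_empty(controller)) {
      GstSample *sample = controller_peek_queue(controller); // assumed helper
      GstBuffer *buffer = gst_sample_get_buffer(sample);
      if(GST_BUFFER_PTS_IS_VALID(buffer) && (gint64)GST_BUFFER_PTS(buffer) <= position) {
         sample = controller_pop_queue(controller); // now due: upload to the GL texture and draw
         // ... map the buffer and update the texture as in controller_render() ...
         gst_sample_unref(sample);
      }
   }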
Comment 7 Nicolas Dufresne (ndufresne) 2015-07-13 20:19:21 UTC
I think it's correct to say this is a bug in your code.
Comment 8 Tiago Rezende 2015-07-13 21:14:08 UTC
Sorry, I fail to follow this reasoning. I'll prepare a better test suite to demonstrate my point, but should I still present it in this thread, or is it better to file another bug report?

In any case, my problem seems to be that after a few days the appsink stops receiving some samples in its callback, and as time passes more samples go missing. To the best of my knowledge about GStreamer, the generation of decoded samples and their delivery to the appsink are not scheduled by my code but by GStreamer itself.
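
One way a reduced test case could show this, sketched below: count the samples delivered to the appsink callback and log the count together with the last PTS, so a falling delivery rate over the days becomes visible. The sample_count and last_pts counters are assumed Controller fields; everything else is the API already used above:

   GstFlowReturn on_new_sample_from_source(GstAppSink *appsink, void* data) {
      Controller *controller = (Controller*)data;
      GstSample *sample = gst_app_sink_pull_sample(appsink);
      if(sample == NULL) return GST_FLOW_ERROR;

      // instrumentation: how many samples actually reached the appsink, and up to which PTS
      controller->sample_count++;
      controller->last_pts = GST_BUFFER_PTS(gst_sample_get_buffer(sample));
      if(controller->sample_count % 1000 == 0)
         g_print("samples delivered: %" G_GUINT64_FORMAT ", last PTS: %" GST_TIME_FORMAT "\n",
                 controller->sample_count, GST_TIME_ARGS(controller->last_pts));

      if(controller_enqueue_sample(controller, sample)) return GST_FLOW_OK;
      gst_sample_unref(sample);
      return GST_FLOW_ERROR;
   }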
Comment 9 Nicolas Dufresne (ndufresne) 2015-07-13 21:37:59 UTC
Maybe try the mailing list first.