Bug 738302 - videorate: Should increase minimum buffer in allocation query
Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gst-plugins-base
Version: git master
OS: Other Linux
Priority: Normal  Severity: normal
Target Milestone: 1.5.1
Assigned To: Nicolas Dufresne (ndufresne)
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
 
 
Reported: 2014-10-10 14:40 UTC by Arjen Veenhuizen
Modified: 2015-01-18 20:53 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
[PATCH] videorate: Implement allocation query (4.01 KB, patch)
2015-01-18 20:00 UTC, Nicolas Dufresne (ndufresne)
committed Details | Review

Description Arjen Veenhuizen 2014-10-10 14:40:05 UTC
Tuning x264 for zerolatency stalls the pipeline

Working pipeline: gst-launch-1.0 -v videotestsrc pattern=snow !  videorate   ! "video/x-raw,framerate=25/1" ! x264enc  ! fakesink

STDOUT: http://pastebin.com/raw.php?i=N8fHHXMi

Pipeline stalls if we tune x264enc for zerolatency:
gst-launch-1.0 -v videotestsrc pattern=snow !  videorate   ! "video/x-raw,framerate=25/1" ! x264enc tune=zerolatency ! fakesink

STDOUT: http://pastebin.com/raw.php?i=sTXpmJyf

Expected behaviour: low latency x264 stream
Observed behaviour: stalled pipeline

Both logs are generated with GST_DEBUG=x264*:6
Comment 1 Nicolas Dufresne (ndufresne) 2014-10-10 14:51:06 UTC
Just an observation: with is-live=1 it's fine, and without videorate it's fine too. I've reproduced this in git master. Is that what your report is about, or do you see this in 1.4 too? The decoder started negotiating upstream buffer pools in git master, though there might still be some elements that don't cooperate well in that negotiation. I'd look around that (I will look next week myself if it isn't solved by then).
Comment 2 Arjen Veenhuizen 2014-10-10 14:58:36 UTC
Currently I do not have 1.4 installed so I cannot check. Sorry about that.
My original pipeline (involving an appsrc instead of videotestsrc) still gives me the same problem when using is-live=1.
Comment 3 Thijs Vermeir 2014-10-11 09:44:47 UTC
FYI, I can also reproduce the bug with git master. With 1.4.3 the pipeline plays fine.
Comment 4 Nicolas Dufresne (ndufresne) 2014-10-11 14:56:29 UTC
Thanks. Ok, here's the situation: in zerolatency, x264_encoder_maximum_delayed_frames() returns 0, hence we set the pool size to 1. Upstream does not add more to it, so we only have 1 buffer in the pool, and max is set to 1 too, so the pool will not grow.

That looks ridiculously small, but considering there is no other thread, by the time push() returns that single buffer should have been released back to the pool. Otherwise some element is lying about its latency (or forgot that latency has an impact on allocation).

It would seem that videorate is introducing this one-buffer latency. This might be needed for its observation; I don't know much about this element (yet).

On the other hand, accepting such a low buffer pool size might represent an overhead in the presence of queues rather than improving performance. That seems like a different subject though.

So, as this is master, I'd propose a correct solution: read the videorate code and confirm whether it indeed has one buffer of latency; if so, implement both the latency query (if the upstream rate is known) and an increase of the required pool size. Note that videorate seems like a good candidate for handling dynamic input rates and gently increasing the pipeline latency.
Comment 5 Olivier Crête 2014-10-11 20:14:35 UTC
Videorate doesn't add one buffer of latency, but it keeps a one-buffer observation, so it should indeed implement the allocation query to add one buffer.
Comment 6 Nicolas Dufresne (ndufresne) 2014-10-12 17:43:47 UTC
Ah, then it's a rare case where latency and buffers aren't the same ;-P If it keeps a one-buffer observation, it must reserve that buffer. So this is a videorate bug. We can still consider that fewer than 2 buffers in a pool is too low; I would not mind.
Comment 7 Nicolas Dufresne (ndufresne) 2014-10-12 17:44:53 UTC
Note though that videorate "hides" a certain latency if whatever comes before it is slower than the expected FPS.
Comment 8 Arjen Veenhuizen 2014-10-27 12:28:46 UTC
I can confirm that one frame is processed by the videorate element, after which the pipeline locks up. Any further update on this issue?
Comment 9 Nicolas Dufresne (ndufresne) 2014-10-27 13:42:04 UTC
Moving this bug to base, as the faulty element here is videorate. To account for the fact that videorate holds on to 1 buffer (but adds no extra latency, I have checked), this element should intercept the allocation query and increase the minimum required number of buffers so the pipeline cannot stall.
Comment 10 Arjen Veenhuizen 2014-11-24 12:45:08 UTC
Any news on this bug? It is still there in the current HEAD (checked it today).
Comment 11 Nicolas Dufresne (ndufresne) 2014-11-24 16:32:05 UTC
Not yet.
Comment 12 Nicolas Dufresne (ndufresne) 2015-01-18 15:32:02 UTC
Actually, I would rather make two changes here. x264enc does not have to set a maximum in this case, so:

gst_query_add_allocation_pool (query, NULL, info->size, num_buffers,
      num_buffers);

could be:

gst_query_add_allocation_pool (query, NULL, info->size, num_buffers, 0);

And there would be no fault, I think. Obviously videorate still needs an update, since it would fail with v4l2 in the presence of, say, two videorate elements ;-P
Comment 13 Nicolas Dufresne (ndufresne) 2015-01-18 16:05:20 UTC
commit 3c04db4a307048db70ee1d08c1d62e26ad9569d8
Author: Nicolas Dufresne <nicolas.dufresne@collabora.co.uk>
Date:   Sun Jan 18 11:02:00 2015 -0500

    videorate: Implement allocation query
    
    VideoRate keeps 1 buffer in order to duplicate based on the closest
    buffer relative to the targeted time. This extra buffer needs to be
    requested, otherwise the pipeline may stall when a fixed-size buffer
    pool is used.
    
    https://bugzilla.gnome.org/show_bug.cgi?id=738302
Comment 14 Sebastian Dröge (slomo) 2015-01-18 18:20:08 UTC
This change is not completely correct. You set the maximum number of buffers to MAX(min, max) but that's not correct IMHO. If the minimum becomes bigger than the maximum you will have to fail the allocation query instead.
Comment 15 Sebastian Dröge (slomo) 2015-01-18 18:22:33 UTC
Also you should IMHO hook into decide_allocation or propose_allocation, or at least let the base class' query function handle the allocation query *first*. You currently call it after you adjusted the values, but that means that the query does not contain the values from downstream yet.
Comment 16 Nicolas Dufresne (ndufresne) 2015-01-18 19:31:29 UTC
Sorry, I completely misread the base class, and obviously we need to query downstream first.

commit 2e264103e1acb1f78edd4b1e14ede23d4e5c49a8
Author: Nicolas Dufresne <nicolas.dufresne@collabora.co.uk>
Date:   Sun Jan 18 14:17:07 2015 -0500

    Revert "videorate: Implement allocation query"
    
    This reverts commit 3c04db4a307048db70ee1d08c1d62e26ad9569d8


(In reply to comment #15)
> Also you should IMHO hook into decide_allocation or propose_allocation, or at
> least let the base class' query function handle the allocation query *first*.
> You currently call it after you adjusted the values, but that means that the
> query does not contain the values from downstream yet.

It's a bit weird, but we only need to hook propose_allocation() here.

(In reply to comment #14)
> This change is not completely correct. You set the maximum number of buffers to
> MAX(min, max) but that's not correct IMHO. If the minimum becomes bigger than
> the maximum you will have to fail the allocation query instead.

I was using a "better to try than be sorry" approach, but indeed it will fail late. The problem with failing the allocation query there is that the information is then lost. It would also fail in cases that should have worked: upstream elements can create their own pool (e.g. GstVideoPool). I think what needs to be done is to drop the pools that don't fit; at the end, recheck whether there is at least one pool left, and if not, set one with pool = NULL. I'll propose a patch and attach it for review.

Note that, strictly speaking, this bug is also solved by that patch: x264enc was proposing no pool, with min = max = latency.

commit acc9529a36ab7eef38539cce9ab7d11658df1fd4
Author: Nicolas Dufresne <nicolas.dufresne@collabora.co.uk>
Date:   Sun Jan 18 11:07:43 2015 -0500

    x264enc: Don't set an allocation maximum
    
    There is no reason x264enc should enforce a maximum allocation size.
    The maximum is normally set by a buffer pool which cannot grow, but we
    don't offer a buffer pool. This would lead to a stall when used with
    elements that don't implement the allocation query.
    
    Related to: https://bugzilla.gnome.org/show_bug.cgi?id=738302
Comment 17 Nicolas Dufresne (ndufresne) 2015-01-18 20:00:30 UTC
Created attachment 294811 [details] [review]
[PATCH] videorate: Implement allocation query


The videorate element keeps 1 buffer internally. This buffer needs
to be requested during the allocation query, otherwise the pipeline
may stall.

https://bugzilla.gnome.org/show_bug.cgi?id=738302
---
 gst/videorate/Makefile.am    |  3 ++-
 gst/videorate/gstvideorate.c | 58 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 60 insertions(+), 1 deletion(-)
Comment 18 Nicolas Dufresne (ndufresne) 2015-01-18 20:12:23 UTC
Review of attachment 294811 [details] [review]:

I think I'm correctly using the base class now. This time, instead of bumping max, I drop that allocation. I save the maximum minimum seen. There is a risk of over-allocating with that, but I don't think we can do anything about it. Basically, the min contains the provider's minimum plus the minimums of the other elements in between. The provider's minimum might no longer be relevant if its pool isn't being used.

At least, this would be the case if one were to use v4l2sink or v4l2transform (encoder in the future) with (output-)io-mode=mmap/dmabuf. As the element would be copying into its own pool, this extra allocation would be needed. At the same time it would be relevant in USERPTR or DMABUF-import io-mode. I think it would be fixed in the v4l2 propose_allocation by lowering the absolute maximum and adding the remainder at gst_buffer_pool_set_config().
Comment 19 Nicolas Dufresne (ndufresne) 2015-01-18 20:52:55 UTC
Comment on attachment 294811 [details] [review]
[PATCH] videorate: Implement allocation query

commit e60158c98f75c3c8db615e2762c6946710389b05
Author: Nicolas Dufresne <nicolas.dufresne@collabora.co.uk>
Date:   Sun Jan 18 14:58:36 2015 -0500

    videorate: Implement allocation query
    
    The videorate element keeps 1 buffer internally. This buffer needs
    to be requested during the allocation query, otherwise the pipeline
    may stall.
    
    https://bugzilla.gnome.org/show_bug.cgi?id=738302