Bug 724481 - bufferpool: Doesn't reset buffers when releasing them to the pool
Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gstreamer (core)
Version: git master
OS: Other Linux
Priority: Normal
Severity: normal
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
 
 
Reported: 2014-02-16 13:39 UTC by Sebastian Dröge (slomo)
Modified: 2018-07-03 18:16 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Sebastian Dröge (slomo) 2014-02-16 13:39:09 UTC
gst-launch-1.0 videotestsrc num-buffers=2 ! y4menc ! fakesink


This gives lots of errors about zero-sized buffers. Happens because videotestsrc uses its own video buffer pool, y4menc steals the memory of these buffers and unrefs them. Then GstBufferPool just releases them back to the queue, and next time a buffer is acquired we get a buffer without any memory in videotestsrc.

Not sure how to fix that properly. First of all, how do we detect that a buffer is invalid now? It might have different memory than before, or none at all, or the allocation metas could've changed, or the buffer is still the same but the memory is *also* shared with another buffer (and thus not writable), or ...

Then, what to do with such buffers? Just unref and forget them instead of putting back to the queue? That's going to cause problems as the memory could still be in another buffer and maybe we're not able to allocate a new buffer until that memory is released again. Or the buffer pool really needs to get the memory back for proper tracking of memories (meaning we actually have a memory pool that claims to be a buffer pool...).
Comment 1 Nicolas Dufresne (ndufresne) 2014-02-17 14:25:44 UTC
First solution that comes to my mind would be to have caching in the allocator, and buffer pools could re-assemble buffers on the fly using that. It's a big change in the allocator, since the buffer pool would want to block on the allocation. That needs to be done carefully to avoid introducing new behaviour in existing allocator calls, and a mechanism like the flushing found in the buffer pool would also be needed.

Silly, partial, good idea?
Comment 2 Sebastian Dröge (slomo) 2014-02-23 09:53:59 UTC
I don't understand your idea :) How would this be used to solve this problem here?
Comment 3 Nicolas Dufresne (ndufresne) 2014-02-23 21:24:32 UTC
It's not super solidly formed in my mind yet, but the general idea is that when a buffer comes back into the pool, the memories are detached (but kept in a cache) and the GstBuffer + metas are simply discarded. When a buffer is acquired, a clean GstBuffer and reconstructed metas are created and the cached memory is added. In this scenario, the memory being available is now bound to when the memory is released, rather than when the buffer comes back into the pool.

Now, where the cache is implemented, I have no idea; it might be in the allocator like what Olivier wanted, or it might be a separate object (e.g. we could have a GstV4l2Queue, which fits V4L2 better and is used by the pool to reconstruct the buffers from the cache). I'm open to suggestions, but the key would be to track the memory refcount for memory reuse, and to not trust the GstBuffers that come back into the pool.
Comment 4 Wim Taymans 2014-02-25 14:36:03 UTC
IMO, each pool implementation should

1) reset the buffer to a good state (flags, timestamps, offsets, metadata) in the
  reset_buffer method
2) check if all buffer properties are acceptable (memory, memory size, metadata) in release_buffer

1) can always happen (unfortunately the API does not allow us to remove the buffer there).
If 2) determines that the buffer is not acceptable, it should be discarded and not be released into the pool.

Usually, memory is also pooled (like v4l2 or xvimagesink) in the allocator, so it will be released independently of the buffer later.

If a new buffer is needed and an old one is available in the pool, it can be used directly, else the alloc_buffer vmethod makes a new one.

The idea is that the normal case is that a buffer+meta+memory is made in the pool once and can be reused directly so that you don't have the overhead of making a buffer + memory + meta. If that turns out to be impossible, you need to fall back to creating a buffer every time as outlined above. In any case, you want to optimize for reuse.
Comment 5 Wim Taymans 2014-02-25 15:09:03 UTC
Some more ideas:

- the LOCKED flag can be placed on Metadata to make sure it's never removed.
- we could add a GST_BUFFER_FLAG_PRISTINE_MEMORY that gets cleared whenever memory is added or removed from the buffer, it would be easier to discard modified buffers.
- We still need to check that the memory sizes are what we expect them to be and if they are not, that they are restored to what we expect (if possible).
Comment 6 Nicolas Dufresne (ndufresne) 2014-02-25 17:52:56 UTC
The only gap I see is about memory, the buffer might come back to the pool before the memory is released. In this case the buffer is not invalid, it's just not yet ready to be reused.
Comment 7 Olivier Crête 2014-02-26 08:10:24 UTC
(In reply to comment #6)
> The only gap I see is about memory, the buffer might come back to the pool
> before the memory is released. In this case the buffer is not invalid, it's
> just not yet ready to be reused.

My solution here is to have reset_buffer() calls gst_buffer_remove_all_memory() and re-assemble the buffer & memory(s) in acquire_buffer().
Comment 8 Wim Taymans 2014-02-26 09:29:08 UTC
(In reply to comment #7)
> My solution here is to have reset_buffer() calls gst_buffer_remove_all_memory()
> and re-assemble the buffer & memory(s) in acquire_buffer().

It sounds like a good fallback option. You really don't want to do that all the time because then the advantage of the bufferpool is no more.

(In reply to comment #6)
> The only gap I see is about memory, the buffer might come back to the pool
> before the memory is released. In this case the buffer is not invalid, it's
> just not yet ready to be reused.

Do you mean that someone is still holding a ref to the memory? If this ref was non-exclusive, this would mean that the memory can change between mappings (and that would be desired then). If it was locked for exclusive use, the memory would not be writable, and you can check that before releasing the buffer (memory) into the pool.

I have another experiment where the reset_buffer would check if the buffer is still good:
 
 - reset timestamps + metadata
 - has memory of the right size and type and is writable

If it returns TRUE, the buffer would be put back into the pool for reuse.
If it returns FALSE, it would free the buffer and alloc_buffer would be called when more buffers are needed.
Comment 9 Olivier Crête 2014-02-26 09:39:08 UTC
If the memory is not writable, you don't have to free the buffer, just remove the memory from it; that saves having to de-allocate and re-allocate a GstBuffer.
Comment 10 Wim Taymans 2014-02-26 10:01:32 UTC
(In reply to comment #9)
> If the memory is not writable, you don't have to free the buffer, just have to
> remove the memory from it, that saves having to de-allocate and re-allocate a
> GstBuffer.

But then you have a cost each time you acquire a buffer because you need to check if there is memory on it and then make it if there isn't. If you optimize for the common case, you would never place an 'incomplete' buffer in the bufferpool queue.
Comment 11 Olivier Crête 2014-02-26 20:31:43 UTC
But you have to check anyway (on release or acquire), and completely re-creating a buffer is much more expensive than just checking whether the number of memories is 0 or not. I think the case where the GstBuffer comes back before the GstMemory is more common than we expect, especially for cases like encoders where the next step is either an RTP payloader or a muxer; then it would be every single buffer.
Comment 12 Wim Taymans 2014-02-27 14:47:49 UTC
commit f22d8f08e0a52c6f23520d67fba6b7e1aae61e6e
Author: Wim Taymans <wtaymans@redhat.com>
Date:   Thu Feb 27 15:14:59 2014 +0100

    bufferpool: Use TAG_MEMORY to check memory before releasing
    
    Tag allocated buffers with TAG_MEMORY. When they are released later,
    only add them back to the pool if the tag is still there and the memory
    has not been changed, otherwise throw the buffer away.
    Add unit test to check various scenarios.
    
    Fixes https://bugzilla.gnome.org/show_bug.cgi?id=724481

commit 5bca002f4bcb37e3c5453c77a750bb741e3cfea1
Author: Wim Taymans <wtaymans@redhat.com>
Date:   Thu Feb 27 14:35:09 2014 +0100

    buffer: add a new flag to track memory changes
    
    Add a flag to check if the memory changed in a buffer.
Comment 13 Wim Taymans 2014-02-27 14:49:24 UTC
This doesn't fix everything but at least does some sanity checking before releasing the buffer into the pool.
Comment 14 Nicolas Dufresne (ndufresne) 2015-11-13 22:46:11 UTC
Shall we close this one? At least until we have issues again?
Comment 15 Nicolas Dufresne (ndufresne) 2018-07-03 18:16:40 UTC
I guess the answer is yes.