Bug 793413 - msdk: manage mfxFrameSurfaces separately with another surface pool.
Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gst-plugins-bad
Version: git master
Hardware/OS: Other Linux
Importance: Normal normal
Target Milestone: 1.13.91
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks: 789886
 
 
Reported: 2018-02-13 07:51 UTC by Hyunjun Ko
Modified: 2018-03-08 19:41 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
msdk: manage MSDK surfaces separately (11.50 KB, patch)
2018-02-20 10:04 UTC, Hyunjun Ko
none Details | Review
msdk: dec: remove code to manage buffers with locked surface (4.87 KB, patch)
2018-02-20 10:04 UTC, Hyunjun Ko
none Details | Review
RFC: implement acquire_buffer without managing surfaces with list. (1.53 KB, patch)
2018-03-05 06:51 UTC, Hyunjun Ko
none Details | Review
msdk: manage MSDK surfaces separately (16.66 KB, patch)
2018-03-07 10:39 UTC, Hyunjun Ko
committed Details | Review
msdk: dec: remove code to manage buffers with locked surface (4.87 KB, patch)
2018-03-07 10:39 UTC, Hyunjun Ko
committed Details | Review

Description Hyunjun Ko 2018-02-13 07:51:02 UTC
Currently a gst buffer gets one mfxFrameSurface when it is allocated.
This assumes that the gst buffer and the mfxFrameSurface have the same lifetime.

But that is not true.
Sometimes, even after downstream is finished with the gst buffer of a frame, the mfxFrameSurface coupled with that buffer is still locked, which means it is still being used.

So I would suggest this:
Every time a gst buffer is acquired from the pool (not allocated), it requests an unlocked surface from the surface pool, and when the gst buffer is released it puts the surface back, so the surface pool can manage surfaces independently of the gst buffers.
In this way, I guess, the user (decoder or encoder) doesn't need to manage gst buffers that hold locked surfaces.
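
To make the acquire/release pairing concrete, here is a minimal sketch of the idea; the surface-pool type and the msdk_surface_pool_*() helpers are hypothetical illustrations, not an existing API:

#include <gst/gst.h>

typedef struct _MsdkSurfacePool MsdkSurfacePool;    /* hypothetical type */
typedef struct _mfxFrameSurface1 mfxFrameSurface1;  /* treated as opaque here */

/* hypothetical helpers assumed to exist on the surface pool */
mfxFrameSurface1 *msdk_surface_pool_get_unlocked (MsdkSurfacePool * spool);
void msdk_surface_pool_release (MsdkSurfacePool * spool,
    mfxFrameSurface1 * surface);

static GQuark
surface_quark (void)
{
  return g_quark_from_static_string ("msdk-surface-sketch");
}

/* on acquire: couple a currently unlocked surface with the gst buffer */
static void
sketch_on_acquire (MsdkSurfacePool * spool, GstBuffer * buf)
{
  mfxFrameSurface1 *surface = msdk_surface_pool_get_unlocked (spool);

  gst_mini_object_set_qdata (GST_MINI_OBJECT_CAST (buf), surface_quark (),
      surface, NULL);
}

/* on release: give the surface back so the surface pool can track its state */
static void
sketch_on_release (MsdkSurfacePool * spool, GstBuffer * buf)
{
  mfxFrameSurface1 *surface =
      gst_mini_object_steal_qdata (GST_MINI_OBJECT_CAST (buf),
      surface_quark ());

  msdk_surface_pool_release (spool, surface);
}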
Comment 1 Hyunjun Ko 2018-02-20 10:04:13 UTC
Created attachment 368614 [details] [review]
msdk: manage MSDK surfaces separately

Currently a gst buffer gets one mfxFrameSurface when it is allocated, and that surface can't be changed.
This assumes that the gst buffer and the mfxFrameSurface have the same lifetime.
But that is not true. Sometimes, even after downstream is finished with the gst buffer of a frame,
the mfxFrameSurface coupled with that buffer is still locked, which means it is still being used in the driver.

So this patch does the following:
Every time a gst buffer is acquired from the pool, it checks whether the surface coupled with the buffer is unlocked.
If not, it replaces the surface with a new unlocked one.
In this way, the user (decoder or encoder) doesn't need to manage gst buffers that hold locked surfaces.

To do that, this patch includes the following:

1. GstMsdkContext
- Manages the available and used MSDK surfaces, respectively.
- Provides an API to get an available MSDK surface.

2. GstMsdkVideoMemory
- Gets an available surface when it is allocated.
- Provides an API to replace the surface with a new unlocked one.

3. GstMsdkBufferPool
- In acquire_buffer, it checks whether the buffer's surface is unlocked when using video memory.
- If not, it replaces the surface with a new unlocked one.

This also fixes bug #793525.
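
As a rough sketch of the acquire-side check described above; gst_msdk_get_surface_from_buffer() and gst_msdk_video_memory_replace_surface() are the helper names used later in this thread, but their exact signatures here are guesses, and parent_class is assumed to come from the usual G_DEFINE_TYPE boilerplate:

/* Sketch only: helper signatures and the surrounding pool class are assumed. */
static GstFlowReturn
sketch_acquire_buffer (GstBufferPool * pool, GstBuffer ** out_buf,
    GstBufferPoolAcquireParams * params)
{
  GstBuffer *buf = NULL;
  mfxFrameSurface1 *surface;
  GstFlowReturn ret;

  /* chain up once to get a buffer from the base pool */
  ret = GST_BUFFER_POOL_CLASS (parent_class)->acquire_buffer (pool, &buf,
      params);
  if (ret != GST_FLOW_OK)
    return ret;

  surface = gst_msdk_get_surface_from_buffer (buf);

  /* the driver still holds this surface: swap in an unlocked one instead of
   * handing a locked surface to the decoder/encoder */
  if (surface->Data.Locked > 0)
    gst_msdk_video_memory_replace_surface (gst_buffer_peek_memory (buf, 0));

  *out_buf = buf;
  return GST_FLOW_OK;
}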
Comment 2 Hyunjun Ko 2018-02-20 10:04:38 UTC
Created attachment 368615 [details] [review]
msdk: dec: remove code to manage buffers with locked surface
Comment 3 sreerenj 2018-02-24 01:27:24 UTC
Review of attachment 368614 [details] [review]:

Overall looks good to me. One question:
Why can't we just release the buffer back to the pool (within the custom acquire_buffer method) if it is locked by msdk? Is there any particular reason for keeping the available & used surfaces in lists?

::: sys/msdk/gstmsdkcontext.c
@@ +466,3 @@
+          g_list_remove (msdk_resp->surfaces_avail, surface);
+      msdk_resp->surfaces_used =
+mfxFrameSurface1 *

It is recommended to use g_list_prepend() instead of g_list_append(), since prepending does not have to walk the whole list.

::: sys/msdk/gstmsdkvideomemory.c
@@ +60,3 @@
   return surface;
 }
 

Add some comments here, please.
Comment 4 Víctor Manuel Jáquez Leal 2018-02-26 20:26:51 UTC
This decoupling of surfaces and gst buffers worries me, since it adds complexity.

I wonder the same as Sree: would it be possible for gst_msdk_buffer_pool_acquire_buffer() to look something like this?

do {
  ret =
      GST_BUFFER_POOL_CLASS (parent_class)->acquire_buffer (pool, &buf, params);

  if (ret != GST_FLOW_OK || !priv->use_video_memory) {
    if (buf)
      *out_buffer_ptr = buf;
    return ret;
  }

  surface = gst_msdk_get_surface_from_buffer (buf);

  if (surface->Data.Locked == 0) {
    *out_buffer_ptr = buf;
    return ret;
  } else {
    gst_buffer_unref (buf);
  }
} while (surface->Data.Locked > 0);

I mean, keep the 1:1 relation of buffer and surface, just adding an additional check for MSDK locking.
Comment 5 Hyunjun Ko 2018-03-01 08:03:37 UTC
(In reply to sreerenj from comment #3)
> Review of attachment 368614 [details] [review] [review]:
> 
> Overall looks good to me. One question,
> Why can't we just release the buffer back to pool (within the custom
> acquire_buffer method)if it is locked by msdk? Is there any particular
> reason for keeping available & used surfaces in lists?
> 

Think about this:
- 5 allocated buffers are in the pool, and 4 are in use now.
-> We get the one remaining gst buffer, but its surface is locked, so we give it back to the pool.
-> We retry to get a new buffer.
-> The pool hands us the same buffer, since it is the only free allocated buffer. We give it back again -> infinite loop.

To handle this, we could hold on to the buffer instead of calling gst_buffer_unref before the next acquisition, but that would be very similar to the current implementation in the decoder.

This case happens frequently with H.264 decoding.

Another reason is in my reply to Victor's comment.
Comment 6 Hyunjun Ko 2018-03-01 08:05:58 UTC
(In reply to Víctor Manuel Jáquez Leal from comment #4)
> This decoupling of surfaces and gst-buffers, worries me, since it adds
> complexity.
> 
> I wonder the same as Sree, is it possible, in
> gst_msdk_buffer_pool_acquire_buffer(), look like:
> 
> do {
>   ret =
>       GST_BUFFER_POOL_CLASS (parent_class)->acquire_buffer (pool, &buf,
> params);
> 
>   if (ret != GST_FLOW_OK || !priv->use_video_memory) {
>     if (buf)
>       *out_buffer_ptr = buf;
>     return ret;
>   }
> 
>   surface = gst_msdk_get_surface_from_buffer (buf);
> 
>   if (surface->Data.Locked == 0) {
>     *out_buffer_ptr = buf;
>     return ret;
>   } else {
>     gst_buffer_unref (buf);
>   }
> } while (surface->Data.Locked > 0)
> 
> I mean, keep the 1:1 relation of buffer and surface, just adding an
> additional check for MSDK locking.

Multiple calls of the vmethod in one gst_buffer_pool_acquire_buffer?
I don't believe that's good, or that the design of GstBufferPool intends this kind of usage.

For example, in the customized method we would have to increase/decrease the outstanding counter when we unref and acquire, which gst_buffer_pool_acquire_buffer already does. But there's no way for us to do that, since the counter is private.

And the pool reference on the buffer should also be handled in the customized method, which gst_buffer_pool_acquire_buffer also takes care of.
Comment 7 sreerenj 2018-03-01 20:52:16 UTC
(In reply to Hyunjun Ko from comment #5)
> (In reply to sreerenj from comment #3)
> > Review of attachment 368614 [details] [review] [review] [review]:
> > 
> > Overall looks good to me. One question,
> > Why can't we just release the buffer back to pool (within the custom
> > acquire_buffer method)if it is locked by msdk? Is there any particular
> > reason for keeping available & used surfaces in lists?
> > 
> 
> Think about this.
> - 5 buffers allocated are in the pool, 4 are used now.
> -> got 1 gst buffer, but its surface is locked, take it back to the pool.
> -> we retry to get new buffer.
> -> pool gives the same buffer since there is one allocated buffer. Take it
> back too -> Infinite loop. 
> 
> To handle this, we could keep the buffer instead of gst_buffer_unref before
> acquisition, but it would be very similar to the current implementation in
> decoder.
> 
> This case frequently happens when h264 decoding.
> 
> Another reson is in the reply of Victor's comment.


Assume it is a decoder.
There are many possible cases:

a) Downstream is using the 4 buffers (it would be strange for downstream to keep four buffers without releasing them, but it is possible, e.g. with a queue before each post-processing element).

b) The decoder is using those 4 buffers (then the "Locked" flag should be 1 as well, because the surfaces are also being used by MediaSDK).


Currently, gst_msdk_video_memory_replace_surface() will try to replace the surface associated with the buffer if acquire_buffer() returns a buffer whose surface has the locked flag set.
How can we guarantee that surface-X (the new unlocked surface that is supposed to replace the associated surface in the acquired buffer) is not required by downstream anymore? MSDK might have decreased the lock count of surface-X, but how can we ensure that the decoder has already pushed surface-X downstream? Is it possible for surface-X to have been locked for referencing, to have its lock count decreased now, but not yet to have been pushed downstream?
Comment 8 Hyunjun Ko 2018-03-02 05:46:05 UTC
(In reply to sreerenj from comment #7)
> 
> Assume it is a decoder.
> There are many possible cases:
> 
> a)the downstream using 4 buffers, (it would be too strange if downstream
> keep four buffers without releasing, but possible, maybe using queue before
> each postprocessing element)
> 
> b) Decoder is using those 4 buffers (then the "Locked" flag here should be 1
> too because the surfaces are being used by MediaSDK too)
> 
> 
> Currently, gst_msdk_video_memory_replace_surface() will try to replace a
> surface associated with the buffer if the acquire_buffer() returns a buffer
> which has locked flag set. 
> How can we guarantee that the surface-X (the new unlocked surface which
> supposed to replace the associated surface in acquired_buffer) is not
> required by dowstream anymore? Here the Msdk might have decreased the Lock
> count of Surface-X, but how can we ensure that the decoder already pushed
> this surface-X 
> downstream? Is it possible to have this surface-X been in a locked state for
> referencing and now decreased the lock count, but not yet pushed downstream?

Ah... that's a good point, IIUC.

To handle that, there should be 3 lists (avail, used, locked?):

- The used list: surfaces associated with an acquired gst buffer.
- The locked list: when releasing a buffer, if its surface is still locked, we put the surface here.
- The available list: surfaces that have never been used, plus surfaces from the locked list that have become unlocked.

When we need to replace a surface, we can then search only the locked list, not the used list.

What do you think?
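
For illustration, a sketch of how those three lists and the release-side decision could look; the struct and function names are only illustrative:

#include <glib.h>
#include <mfxvideo.h>   /* mfxFrameSurface1 */

typedef struct
{
  GMutex mutex;
  GList *surfaces_avail;   /* free and not referenced anywhere */
  GList *surfaces_used;    /* coupled with an acquired gst buffer */
  GList *surfaces_locked;  /* gst buffer released, driver still holds a lock */
} SketchSurfaceLists;

/* called when a gst buffer is released back to the pool */
static void
sketch_release_surface (SketchSurfaceLists * l, mfxFrameSurface1 * surface)
{
  g_mutex_lock (&l->mutex);

  l->surfaces_used = g_list_remove (l->surfaces_used, surface);

  if (surface->Data.Locked > 0)
    l->surfaces_locked = g_list_prepend (l->surfaces_locked, surface);
  else
    l->surfaces_avail = g_list_prepend (l->surfaces_avail, surface);

  g_mutex_unlock (&l->mutex);
}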
Comment 9 sreerenj 2018-03-02 22:40:23 UTC
(In reply to Hyunjun Ko from comment #8)
> (In reply to sreerenj from comment #7)
> > 
> > Assume it is a decoder.
> > There are many possible cases:
> > 
> > a)the downstream using 4 buffers, (it would be too strange if downstream
> > keep four buffers without releasing, but possible, maybe using queue before
> > each postprocessing element)
> > 
> > b) Decoder is using those 4 buffers (then the "Locked" flag here should be 1
> > too because the surfaces are being used by MediaSDK too)
> > 
> > 
> > Currently, gst_msdk_video_memory_replace_surface() will try to replace a
> > surface associated with the buffer if the acquire_buffer() returns a buffer
> > which has locked flag set. 
> > How can we guarantee that the surface-X (the new unlocked surface which
> > supposed to replace the associated surface in acquired_buffer) is not
> > required by dowstream anymore? Here the Msdk might have decreased the Lock
> > count of Surface-X, but how can we ensure that the decoder already pushed
> > this surface-X 
> > downstream? Is it possible to have this surface-X been in a locked state for
> > referencing and now decreased the lock count, but not yet pushed downstream?
> 
> Ah... that's a good point, IIUC.
> 
> To handle that, there should be 3 lists (avail, used, locked?) and then
> 
> - The used list : surfaces associated with the gst buffer acquired.
> - The locked list : when releasing the buffer, if surface is still locked,
> we put it to this list.
> - The available list: the surfaces get unlocked in the locked list should
> move to this list or surfaces that are never used.
> 
> we can search only the locked list without the used list when we need to
> replace.
> 
> What do you think?


I'm still wondering what the issue is with the approach we mentioned in comment 3 and comment 4.

Now, going back to the example use case you explained:
If 4 buffers are in use, none of them has been consumed by downstream yet, so you are not supposed to replace the internal surface anyway.
If you get a buffer by calling acquire_buffer(), it means no GStreamer element in the pipeline is using it; the only way for that buffer to still be "in use" is msdk keeping a reference.
Comment 10 Hyunjun Ko 2018-03-05 02:27:06 UTC
(In reply to sreerenj from comment #9)
> (In reply to Hyunjun Ko from comment #8)
> > (In reply to sreerenj from comment #7)
> > > 
> > > Assume it is a decoder.
> > > There are many possible cases:
> > > 
> > > a)the downstream using 4 buffers, (it would be too strange if downstream
> > > keep four buffers without releasing, but possible, maybe using queue before
> > > each postprocessing element)
> > > 
> > > b) Decoder is using those 4 buffers (then the "Locked" flag here should be 1
> > > too because the surfaces are being used by MediaSDK too)
> > > 
> > > 
> > > Currently, gst_msdk_video_memory_replace_surface() will try to replace a
> > > surface associated with the buffer if the acquire_buffer() returns a buffer
> > > which has locked flag set. 
> > > How can we guarantee that the surface-X (the new unlocked surface which
> > > supposed to replace the associated surface in acquired_buffer) is not
> > > required by dowstream anymore? Here the Msdk might have decreased the Lock
> > > count of Surface-X, but how can we ensure that the decoder already pushed
> > > this surface-X 
> > > downstream? Is it possible to have this surface-X been in a locked state for
> > > referencing and now decreased the lock count, but not yet pushed downstream?
> > 
> > Ah... that's a good point, IIUC.
> > 
> > To handle that, there should be 3 lists (avail, used, locked?) and then
> > 
> > - The used list : surfaces associated with the gst buffer acquired.
> > - The locked list : when releasing the buffer, if surface is still locked,
> > we put it to this list.
> > - The available list: the surfaces get unlocked in the locked list should
> > move to this list or surfaces that are never used.
> > 
> > we can search only the locked list without the used list when we need to
> > replace.
> > 
> > What do you think?
> 
> 
> I'm still wondering what is the issue with the approach we mentioned in
> comment-3 and comment-4.
> 

As you know, GstBufferPool doesn't know whether a buffer is still in use by the driver.
If there are 5 allocated buffers, 4 in use and 1 released (but still being used by the driver), the pool will always hand that released buffer back to the user when acquire_buffer is called.

If we implement what you're suggesting in acquire_buffer, it causes an infinite loop.

And also see comment 6.
Comment 11 sreerenj 2018-03-05 05:20:50 UTC
(In reply to Hyunjun Ko from comment #10)
> (In reply to sreerenj from comment #9)
> > (In reply to Hyunjun Ko from comment #8)
> > > (In reply to sreerenj from comment #7)
> > > > 
> > > > Assume it is a decoder.
> > > > There are many possible cases:
> > > > 
> > > > a)the downstream using 4 buffers, (it would be too strange if downstream
> > > > keep four buffers without releasing, but possible, maybe using queue before
> > > > each postprocessing element)
> > > > 
> > > > b) Decoder is using those 4 buffers (then the "Locked" flag here should be 1
> > > > too because the surfaces are being used by MediaSDK too)
> > > > 
> > > > 
> > > > Currently, gst_msdk_video_memory_replace_surface() will try to replace a
> > > > surface associated with the buffer if the acquire_buffer() returns a buffer
> > > > which has locked flag set. 
> > > > How can we guarantee that the surface-X (the new unlocked surface which
> > > > supposed to replace the associated surface in acquired_buffer) is not
> > > > required by dowstream anymore? Here the Msdk might have decreased the Lock
> > > > count of Surface-X, but how can we ensure that the decoder already pushed
> > > > this surface-X 
> > > > downstream? Is it possible to have this surface-X been in a locked state for
> > > > referencing and now decreased the lock count, but not yet pushed downstream?
> > > 
> > > Ah... that's a good point, IIUC.
> > > 
> > > To handle that, there should be 3 lists (avail, used, locked?) and then
> > > 
> > > - The used list : surfaces associated with the gst buffer acquired.
> > > - The locked list : when releasing the buffer, if surface is still locked,
> > > we put it to this list.
> > > - The available list: the surfaces get unlocked in the locked list should
> > > move to this list or surfaces that are never used.
> > > 
> > > we can search only the locked list without the used list when we need to
> > > replace.
> > > 
> > > What do you think?
> > 
> > 
> > I'm still wondering what is the issue with the approach we mentioned in
> > comment-3 and comment-4.
> > 
> 
> Gstbufferpool doesn't know if a buffer is in use in the driver as you know.
> If there are 5 allocated buffers, 4 in use, 1 in release (but still being
> used in the driver)  the pool always gives the buffer to the user when
> calling acquire_buffer.
> 

"4 in use": means you are not supposed to replace the surface in any of those 4 buffers. Those are already acquired by either decoder or downstream. So our only choice is to check whether the last one available buffer.
In this case, the infinite loop can be avoided by trying only a specific number of times (eg: try till the count reach the size of gstbufferpool).

The point is that "release_buffer" will always release buffer back to pool irrespective of whether MSDK keeping the lock for referencing. 

Am I missing something?


> If we implement something you're thinking in acquire_buffer, it causes
> infinite loop.
> 
> And also see the comment 6.
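
For completeness, a sketch of the bounded-retry variant of the loop from comment 4; the retry limit is illustrative, parent_class and the msdk helper are assumed as in the earlier sketch, and comment 6 explains why re-entering the parent vmethod is still questionable:

/* Sketch only: bounded retry instead of an unbounded loop. */
static GstFlowReturn
sketch_acquire_with_retry (GstBufferPool * pool, GstBuffer ** out_buf,
    GstBufferPoolAcquireParams * params, guint max_buffers)
{
  GstFlowReturn ret;
  guint i;

  for (i = 0; i < max_buffers; i++) {
    GstBuffer *buf = NULL;
    mfxFrameSurface1 *surface;

    ret = GST_BUFFER_POOL_CLASS (parent_class)->acquire_buffer (pool, &buf,
        params);
    if (ret != GST_FLOW_OK)
      return ret;

    surface = gst_msdk_get_surface_from_buffer (buf);
    if (surface->Data.Locked == 0) {
      *out_buf = buf;           /* usable buffer found */
      return GST_FLOW_OK;
    }

    gst_buffer_unref (buf);     /* still held by the driver, give it back */
  }

  return GST_FLOW_ERROR;        /* every candidate was still locked */
}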
Comment 12 Hyunjun Ko 2018-03-05 05:49:18 UTC
(In reply to sreerenj from comment #11)
> (In reply to Hyunjun Ko from comment #10)
> > Gstbufferpool doesn't know if a buffer is in use in the driver as you know.
> > If there are 5 allocated buffers, 4 in use, 1 in release (but still being
> > used in the driver)  the pool always gives the buffer to the user when
> > calling acquire_buffer.
> > 
> 
> "4 in use": means you are not supposed to replace the surface in any of
> those 4 buffers. Those are already acquired by either decoder or downstream.
> So our only choice is to check whether the last one available buffer.

Sure

> In this case, the infinite loop can be avoided by trying only a specific
> number of times (eg: try till the count reach the size of gstbufferpool).

That way, we would have to implement something like allocating a new buffer in the pool once we reach the count, which GstBufferPool already does. I don't think that's good, for a reason similar to comment 6.

> 
> The point is that "release_buffer" will always release buffer back to pool
> irrespective of whether MSDK keeping the lock for referencing. 

That's why I suggested 3 lists instead of 2 lists.

> 
> Am I missing something?

I'm confused about what you mean.

I thought you wanted to simplify the code with something like this:
in acquire_buffer, just get a buffer and then, if it's locked, unref it and get a new buffer again, without managing anything in lists.

I totally agree with that, if it's possible without duplicated (or ugly) code.
If you want to see a patch implementing your idea, I can do that.
Do you want one?
Comment 13 Hyunjun Ko 2018-03-05 06:51:38 UTC
Created attachment 369316 [details] [review]
RFC: implement acquire_buffer without managing surfaces with list.
Comment 14 Hyunjun Ko 2018-03-05 06:59:21 UTC
(In reply to Hyunjun Ko from comment #13)
> Created attachment 369316 [details] [review] [review]
> RFC: implement acquire_buffer without managing surfaces with list.

@Sree, if this patch is not what you had in mind, then I totally misunderstood you. Please let me know if so.
Comment 15 sreerenj 2018-03-05 18:54:18 UTC
(In reply to Hyunjun Ko from comment #14)
> (In reply to Hyunjun Ko from comment #13)
> > Created attachment 369316 [details] [review] [review] [review]
> > RFC: implement acquire_buffer without managing surfaces with list.
> 
> @Sree, If this patch is not like what you're thinking, it means I totally
> misunderstood what you mean. Please let me know if so.

Why do you need to do this: "We should increase outstanding, but impossible here since it's private"?

Acquire a buffer and release it back if it is still locked. gst_buffer_unref() will call release_buffer(), and that should take care of bufferpool internals like decrementing "outstanding".
Comment 16 Nicolas Dufresne (ndufresne) 2018-03-05 19:28:52 UTC
Review of attachment 369316 [details] [review]:

::: sys/msdk/gstmsdkbufferpool.c
@@ +258,3 @@
   if (surface->Data.Locked > 0) {
+    /* We should increase outstanding, but impossible here since it's private.
+     * g_atomic_int_inc (&pool->priv->outstanding);

Either you never give any buffer back to GstBufferPool, or you give them all back; just don't mix the two. For the first case, have a look at PipeWirePool. The V4L2 pool way is just a bad idea.
Comment 17 sreerenj 2018-03-05 20:20:49 UTC
(In reply to Nicolas Dufresne (stormer) from comment #16)
> Review of attachment 369316 [details] [review] [review]:
> 
> ::: sys/msdk/gstmsdkbufferpool.c
> @@ +258,3 @@
>    if (surface->Data.Locked > 0) {
> +    /* We should increase outstanding, but impossible here since it's
> private.
> +     * g_atomic_int_inc (&pool->priv->outstanding);
> 
> Whether you don't ever give back any buffer to GstBufferPool, or give them
> all. Just don't do a mix. For the first case, have a look at PipeWirePool.
> The V4L2 pool way is a just a bad idea.

Thanks for chiming in. What do you think about https://bugzilla.gnome.org/show_bug.cgi?id=793413#c4 ?
(With some changes to avoid infinite looping)
Comment 18 Nicolas Dufresne (ndufresne) 2018-03-06 00:02:22 UTC
I have the same worry that calling the vmethod multiple times will unbalance the internal counter. I'd implement an msdk-specific queue (mutex protected) and just never chain up. Like Wim did for the pipewire elements.
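
A minimal sketch of that "own queue, never chain up" shape, purely for illustration (this is not the actual pipewire pool code; the queue, mutex and cond would be initialized in the pool's instance init):

/* Sketch only: the pool serves buffers from its own mutex-protected queue
 * and never chains up to the parent acquire_buffer/release_buffer. */
typedef struct
{
  GstBufferPool parent;
  GMutex lock;
  GCond cond;
  GQueue available;             /* GstBuffer * that are free and unlocked */
} SketchMsdkPool;

static GstFlowReturn
sketch_pool_acquire_buffer (GstBufferPool * pool, GstBuffer ** buffer,
    GstBufferPoolAcquireParams * params)
{
  SketchMsdkPool *self = (SketchMsdkPool *) pool;

  g_mutex_lock (&self->lock);
  while (g_queue_is_empty (&self->available))
    g_cond_wait (&self->cond, &self->lock);
  *buffer = g_queue_pop_head (&self->available);
  g_mutex_unlock (&self->lock);

  return GST_FLOW_OK;
}

static void
sketch_pool_release_buffer (GstBufferPool * pool, GstBuffer * buffer)
{
  SketchMsdkPool *self = (SketchMsdkPool *) pool;

  g_mutex_lock (&self->lock);
  g_queue_push_tail (&self->available, buffer);
  g_cond_signal (&self->cond);
  g_mutex_unlock (&self->lock);
}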
Comment 19 Hyunjun Ko 2018-03-06 05:44:06 UTC
(In reply to Nicolas Dufresne (stormer) from comment #18)
> I have the same worry that calling multiple time the vmethod will unbalance
> the internal counter. I'd implement mask specific queue (mutex protected),
> and just never chain. Like Win did for pipewire elements.

Thanks for the advice, Nicolas.
I looked into it, and that design would work well here too if we could know when an msdk surface gets unlocked by the driver. But there's no way to know that for now.
Comment 20 Hyunjun Ko 2018-03-06 07:21:42 UTC
@Sree, I'm getting tired of arguing on this thread because of our very different timezones.

I have 2 concerns about your approach:
1. Multiple calls of the parent method in a single function.
2. Many more iterations to get a buffer during acquisition.

But if you still want to remove the list handling, I'll do that.
Comment 21 sreerenj 2018-03-06 08:22:05 UTC
(In reply to Hyunjun Ko from comment #20)
> @Sree, I'm getting tired of arguing on this thread because of very different
> timezone.
> 
> I have 2 concerns about your approach.
> 1. multiple calls of parent method in a single function.
> 2. Much more iteration to get a buffer during aquisition.
> 
> But if you still want to remove list things, I'd do that.

I'm not trying to push my own idea; I just want to make sure we understand all the possible options :)
Comment 22 Hyunjun Ko 2018-03-07 10:39:25 UTC
Created attachment 369401 [details] [review]
msdk: manage MSDK surfaces separately

Currently a gst buffer gets one mfxFrameSurface when it is allocated, and that surface can't be changed.
This assumes that the gst buffer and the mfxFrameSurface have the same lifetime.
But that is not true. Sometimes, even after downstream is finished with the gst buffer of a frame,
the mfxFrameSurface coupled with that buffer is still locked, which means it is still being used in the driver.

So this patch does the following:
Every time a gst buffer is acquired from the pool, it checks whether the surface coupled with the buffer is unlocked.
If not, it replaces the surface with a new unlocked one.
In this way, the user (decoder or encoder) doesn't need to manage gst buffers that hold locked surfaces.

To do that, this patch includes the following:

1. GstMsdkContext
- Manages the available, used and locked MSDK surfaces in separate lists:
  1\ surfaces_avail : surfaces which are free and not used anywhere
  2\ surfaces_used : surfaces coupled with a gst buffer and currently in use
  3\ surfaces_locked : surfaces still locked even after the gst buffer is released

- Provides an API to get an available MSDK surface.
- Provides an API to release an MSDK surface.

2. GstMsdkVideoMemory
- Gets an available surface when it is allocated.
- Provides an API to get a new available, unlocked surface to replace the current one.
- Provides an API to release the surface held by the msdk video memory.

3. GstMsdkBufferPool
- In acquire_buffer, every time a gst buffer is acquired, it gets a new available surface from the list.
- In release_buffer, it checks whether the buffer's surface is unlocked or not:
  - If unlocked, the surface is put on the available list.
  - If still locked, the surface is put on the locked list.

This also fixes bug #793525.
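
A sketch of the locked-to-available sweep implied by the lists above, reusing the illustrative SketchSurfaceLists struct from the sketch under comment 8 (illustrative only, not the committed code):

/* Sketch only: before handing out an available surface, move any surface
 * whose driver lock has been dropped from the locked list to the available
 * list. */
static void
sketch_refresh_locked_surfaces (SketchSurfaceLists * l)
{
  GList *walk, *next;

  g_mutex_lock (&l->mutex);

  for (walk = l->surfaces_locked; walk != NULL; walk = next) {
    mfxFrameSurface1 *surface = walk->data;

    next = walk->next;
    if (surface->Data.Locked == 0) {
      l->surfaces_locked = g_list_delete_link (l->surfaces_locked, walk);
      l->surfaces_avail = g_list_prepend (l->surfaces_avail, surface);
    }
  }

  g_mutex_unlock (&l->mutex);
}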
Comment 23 Hyunjun Ko 2018-03-07 10:39:54 UTC
Created attachment 369402 [details] [review]
msdk: dec: remove code to manage buffers with locked surface
Comment 24 sreerenj 2018-03-08 01:21:54 UTC
I still have some concerns, but we can address those later, since this patch set fixes a lot of issues.
Comment 25 sreerenj 2018-03-08 19:40:27 UTC
Review of attachment 369401 [details] [review]:

Pushed as commit b08b8ddae3d636b2a37b1206c6354e94d8c43295
Comment 26 sreerenj 2018-03-08 19:40:55 UTC
Review of attachment 369402 [details] [review]:

Pushed as commit 37ef61586adaa295902515d69fca0f30b0e7ecaa