Bug 722556 - pad: Cleanup hooks on dispose
Status: RESOLVED INCOMPLETE
Product: GStreamer
Classification: Platform
Component: gstreamer (core)
Version: git master
OS: Other Linux
Importance: Normal normal
Target Milestone: NONE
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2014-01-19 18:07 UTC by Andrey Utkin
Modified: 2015-05-17 17:35 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
patch (1.30 KB, patch)
2014-01-19 18:07 UTC, Andrey Utkin
reviewed

Description Andrey Utkin 2014-01-19 18:07:25 UTC
Created attachment 266670
patch

This properly releases installed pad probe callbacks that were not removed by the time the pad was disposed.

If a test case is required, I will provide it.
Comment 1 Sebastian Dröge (slomo) 2014-01-19 20:17:18 UTC
A test is not required, but a unit test would be nice :)

Otherwise I will write one tomorrow and push it then.
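
For reference, a minimal sketch of what such a unit test might look like (hypothetical, not the test that was eventually pushed; it assumes the proposed cleanup makes dispose run each remaining probe's destroy notify):

/* Hypothetical test sketch: checks that a probe's GDestroyNotify runs
 * when a pad is disposed while the probe is still installed. */
#include <gst/gst.h>

static gboolean notify_called = FALSE;

static GstPadProbeReturn
probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  /* Never reached in this test; no data flows through the pad. */
  return GST_PAD_PROBE_OK;
}

static void
probe_destroy_notify (gpointer user_data)
{
  notify_called = TRUE;
}

int
main (int argc, char **argv)
{
  GstPad *pad;

  gst_init (&argc, &argv);

  pad = gst_pad_new ("src", GST_PAD_SRC);
  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
      probe_cb, NULL, probe_destroy_notify);

  /* Dispose the pad without removing the probe first. With the
   * proposed cleanup, the destroy notify should still fire. */
  gst_object_unref (pad);
  g_assert (notify_called);

  return 0;
}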
Comment 2 Sebastian Dröge (slomo) 2014-01-20 14:50:39 UTC
What exactly is the problem here though? No memory should be leaked without your patch either, and the pad should be unblocked already when it is disposed.
Comment 3 Andrey Utkin 2014-01-20 14:56:30 UTC
(In reply to comment #2)
> What exactly is the problem here though? No memory should be leaked without
> your patch either, and the pad should be unblocked already when it is disposed.

I haven't checked much with exact test cases (BTW, if you have already made some, please share), but I think it fixes the situation when a pad is about to be disposed and hooks are still there (yes, that's an application-level error). No?
Comment 4 Sebastian Dröge (slomo) 2014-01-20 14:58:52 UTC
Yes, that's a big application-level error if there are still pad blocks while you dispose the pad. Everything should just crash then, because the blocked threads will use a pad that no longer exists after they were unblocked :)

Can you describe the actual situation a bit better?
Comment 5 Andrey Utkin 2014-01-20 15:10:33 UTC
(In reply to comment #4)
> Yes, that's a big application-level error if there are still pad blocks
> while you dispose the pad. Everything should just crash then, because the
> blocked threads will use a pad that no longer exists after they were
> unblocked :)

Isn't the matter that we have hooks still present, not threads actually being stuck?

> Can you describe the actual situation a bit better?

Assume a pad callback procedure that wants to execute once and uninstall itself. In my case its function is to release the requested source pad (the pad on which the actual probe is set up). But it looks like it can happen that the task thread that should push through that pad is paused or stopped for some reason, and later the whole pipeline is shut down, so our callback hasn't executed at all. This seems to result in a leak of memory related to that requested pad.

I cannot be sure that my description is fully correct; of course, it's better to check with a test case than to take my speculation on trust.
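
For illustration, a rough sketch of the pattern described above, with hypothetical names (a one-shot idle probe on a tee request pad that unlinks and releases the pad, then uninstalls itself):

/* Illustrative sketch (hypothetical identifiers): a one-shot probe
 * that releases a tee request pad and then uninstalls itself. If the
 * streaming thread never becomes idle before shutdown, this callback
 * never runs and the probe stays installed on the pad. */
static GstPadProbeReturn
release_branch_cb (GstPad *teepad, GstPadProbeInfo *info, gpointer user_data)
{
  GstElement *tee = GST_ELEMENT (user_data);
  GstPad *sinkpad = gst_pad_get_peer (teepad);

  if (sinkpad != NULL) {
    gst_pad_unlink (teepad, sinkpad);
    gst_object_unref (sinkpad);
  }
  gst_element_release_request_pad (tee, teepad);

  /* Remove this probe; it was meant to run exactly once. */
  return GST_PAD_PROBE_REMOVE;
}

/* Installation, somewhere in the application:
 * gst_pad_add_probe (teepad, GST_PAD_PROBE_TYPE_IDLE,
 *     release_branch_cb, tee, NULL); */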
Comment 6 Sebastian Dröge (slomo) 2014-01-20 15:16:47 UTC
(In reply to comment #5)
> (In reply to comment #4)
> > Yes, that's a big application-level error if there are still pad blocks
> > while you dispose the pad. Everything should just crash then, because the
> > blocked threads will use a pad that no longer exists after they were
> > unblocked :)
> 
> Isn't the matter that we have hooks still present, not threads actually
> being stuck?

The hooks are all freed with the clear() call below the line you added. cleanup_hook() additionally only signals any blocked threads that they can continue now.
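
As a standalone GLib illustration of that point (GstPad stores its probes in a GHookList): g_hook_list_clear() finalizes every hook in the list, and the default finalize function invokes each hook's destroy notify with its data.

/* Standalone GLib sketch: g_hook_list_clear() finalizes every hook,
 * and the default finalize function invokes each hook's destroy
 * notify with its data. */
#include <glib.h>

static void
hook_destroyed (gpointer data)
{
  g_print ("destroy notify called for %s\n", (const char *) data);
}

int
main (void)
{
  GHookList hooks;
  GHook *hook;

  g_hook_list_init (&hooks, sizeof (GHook));

  hook = g_hook_alloc (&hooks);
  hook->data = (gpointer) "probe #1";
  hook->destroy = hook_destroyed;
  g_hook_append (&hooks, hook);

  /* Frees all hooks, invoking hook_destroyed for each one. */
  g_hook_list_clear (&hooks);

  return 0;
}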

> > Can you describe the actual situation a bit better?
> 
> Assume a pad callback procedure that wants to execute once and uninstall
> itself. In my case its function is to release the requested source pad (the
> pad on which the actual probe is set up). But it looks like it can happen
> that the task thread that should push through that pad is paused or stopped
> for some reason, and later the whole pipeline is shut down, so our callback
> hasn't executed at all. This seems to result in a leak of memory related to
> that requested pad.
> 
> I cannot be sure that my description is fully correct; of course, it's
> better to check with a test case than to take my speculation on trust.

Maybe a test case would be best then... there should not be any memory leaks here. The only problem I see that your patch could "solve" is a thread being blocked in a pad block while the pad is freed. And then it should crash anyway, because the thread will use this pad after the block is released (and something should hold a reference on that pad anyway).
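
In other words, the application-side fix is to tear down probes before the pad can go away; a hedged sketch of that pattern (identifiers hypothetical):

/* Hypothetical teardown sketch: remove an installed blocking probe
 * before the pad can be disposed, so no thread is left waiting on a
 * pad that is about to go away. */
static void
teardown_branch (GstPad *teepad, gulong probe_id)
{
  /* Removing the probe unblocks any thread waiting in it and runs
   * the probe's GDestroyNotify, if one was given. */
  gst_pad_remove_probe (teepad, probe_id);

  /* Only now is it safe for the last reference to go away. */
  gst_object_unref (teepad);
}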