GNOME Bugzilla – Bug 316248
linking two parallel chains hangs
Last modified: 2010-09-17 14:00:14 UTC
A pipeline like this hangs:

  src -> tee +-> vol1 -+-> adder -> sink
             `-> vol2 -´

See attached example.
Created attachment 52194 [details] selfcontained example
I'd like to get this test case into core, but we need to get aggregator ported so that we have a "muxer" to use for testing.
Ping? Any updates on this one? Is this really supposed to work without queues? The linking still deadlocks (after updating the example with s/sinesrc/audiotestsrc/):
+ Trace 67570
Thread 1 (Thread -1213012288 (LWP 15898))
Yep, it's supposed to work eventually, once we support GST_EVENT_BUFFERSIZE. It would go like this: tee sends a buffersize event on its source pads, requesting a (sync) buffersize of 1 on one pad and a (sync) size of 0 on the other. Adder then uses this to return from its chain function after queuing 1 buffer, so that tee can push the other buffer. More complicated cases can be constructed by requesting async scheduling (using a thread boundary etc.).
*** Bug 353725 has been marked as a duplicate of this bug. ***
*** Bug 413080 has been marked as a duplicate of this bug. ***
Is there any news on this? Any idea when it might be fixed (days, weeks, months)?
More like months, unless somebody wants to start working on it now and provide a patch.
I thought that queues should help, so I've tried this.

This of course works:

gst-launch audiotestsrc wave=2 freq=200 ! tee name=t ! volume ! adder name=a ! alsasink

This hangs (this is what I reported initially):

gst-launch audiotestsrc wave=2 freq=200 ! tee name=t ! volume ! adder name=a ! alsasink t. ! volume ! a.

This still hangs:

gst-launch audiotestsrc wave=2 freq=200 ! tee name=t ! queue ! volume ! adder name=a ! alsasink t. ! queue ! volume ! a.

It already hangs during linking (that's why the queues don't help):

0:00:01.968371000 16711 0x804f070 DEBUG default gstutils.c:2475:gst_pad_proxy_getcaps: proxying getcaps for t:src1
0:00:01.968434000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:2080:gst_pad_peer_get_caps:<t:src0> get peer caps
0:00:01.968496000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:2052:gst_pad_get_caps:<queue0:sink> get pad caps
0:00:01.968551000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:1958:gst_pad_get_caps_unlocked:<queue0:sink> get pad caps
0:00:01.968606000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:1962:gst_pad_get_caps_unlocked:<queue0:sink> dispatching to pad getcaps function
0:00:01.968663000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:2080:gst_pad_peer_get_caps:<queue0:src> get peer caps
0:00:01.968719000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:2052:gst_pad_get_caps:<volume0:sink> get pad caps
0:00:01.968773000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:1958:gst_pad_get_caps_unlocked:<volume0:sink> get pad caps
0:00:01.968828000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:1962:gst_pad_get_caps_unlocked:<volume0:sink> dispatching to pad getcaps function
0:00:01.968886000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:2080:gst_pad_peer_get_caps:<volume0:src> get peer caps
0:00:01.968942000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:2052:gst_pad_get_caps:<a:sink1> get pad caps
0:00:01.968996000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:1958:gst_pad_get_caps_unlocked:<a:sink1> get pad caps
0:00:01.969051000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:1962:gst_pad_get_caps_unlocked:<a:sink1> dispatching to pad getcaps function
0:00:01.969104000 16711 0x804f070 DEBUG default gstutils.c:2475:gst_pad_proxy_getcaps: proxying getcaps for a:sink1
0:00:01.969164000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:2080:gst_pad_peer_get_caps:<a:sink0> get peer caps
0:00:01.969220000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:2052:gst_pad_get_caps:<volume1:src> get pad caps
0:00:01.969275000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:1958:gst_pad_get_caps_unlocked:<volume1:src> get pad caps
0:00:01.969330000 16711 0x804f070 DEBUG GST_CAPS gstpad.c:1962:gst_pad_get_caps_unlocked:<volume1:src> dispatching to pad getcaps function
I've investigated this further. The root problem is that both tee and adder use gst_pad_proxy_getcaps(). Regarding gst_pad_proxy_getcaps(), I wonder if it should use gst_element_iterate_[src|sink]_pads() depending on the pad direction rather than gst_element_iterate_pads(). I've tried that, but it doesn't solve the problem.

After adding more logging to intersect_caps_func() I noticed that it hangs when calling gst_pad_peer_get_caps(). If I look at the getcaps() functions of the elements in question (basetransform and queue), those again call gst_pad_peer_get_caps(), and so it seems that the gst_pad_peer_get_caps() calls coming from tee and adder hit each other.

When I apply this change in queue and basetransform:

+ if (GST_OBJECT_TRYLOCK (otherpad)) {
+   GST_OBJECT_UNLOCK (otherpad);
    result = gst_pad_peer_get_caps (otherpad);
+ } else {
+   result = NULL;
+   GST_WARNING ("lock problem for link %s:%s and %s:%s",
+       GST_DEBUG_PAD_NAME (pad), GST_DEBUG_PAD_NAME (otherpad));
+ }

it does not lock up anymore, but it loops forever. What totally puzzles me is that I never see the GST_WARNING() though.

Another attempt:

- result = gst_pad_peer_get_caps (otherpad);
+ {
+   GstPad *peerpad = GST_PAD_PEER (otherpad);
+   if (peerpad)
+     result = gst_pad_get_caps (peerpad);
+   else
+     result = NULL;
+ }

This is basically a non-blocking version of gst_pad_peer_get_caps(). As expected it does not block, but neither does it terminate; it loops forever too.

Conclusion for tonight: we seem to need a token to detect the cycle, somehow finish the negotiation for a subgraph, then try the next branch and backtrack if it fails.
The following patch at least fixes the case where queues are added after the tee. The other case could probably be fixed with the BUFFERSIZE event, or by making collectpads just queue the one buffer instead of blocking on it.

* gst/adder/gstadder.c: (gst_adder_sink_getcaps),
  (gst_adder_request_new_pad):
  Make getcaps more robust by not using the proxycaps function. This
  makes sure that we don't end up recursively calling getcaps
  upstream. See #316248.
I can still see it locking up: http://rafb.net/p/3tsAX315.html
+ Trace 201090
Ignore that last comment (even though it's big). This is a different issue and I opened Bug #540645 for it.
Isn't this fixed now that proxy_getcaps does not reverse direction anymore?
Well, this hangs at "Pipeline is PREROLLING ...":

gst-launch audiotestsrc ! tee name=t ! volume ! adder name=m ! pulsesink t. ! volume ! m.

and this works:

gst-launch audiotestsrc ! tee name=t ! volume ! adder name=m ! pulsesink t. ! queue ! volume ! m.

Also interesting is this one, which goes to PLAYING but does not play anything:

gst-launch audiotestsrc is-live=true ! tee name=t ! volume ! adder name=m ! pulsesink t. ! volume ! m.
AFAIK adder's collectpads blocks tee's chain function when it receives the first buffer, waiting for all pads to have buffers. But tee's chain function runs in the one streaming thread that is now blocked in collectpads, so no more buffers are ever pushed to the other chain.
Reopening as the question in comment #14 has been answered in comment #15.
Stefan, can you try again with latest core ?
Stefan, Ping.
We have not made any changes that could have fixed this; comment #15 still applies. If nobody disagrees I'll close it as WONTFIX, though, as one can 'solve' it by using queues.