Bug 742141 - pulsesink: writable size will grow beyond the total buffer size if no data is fed to pulse
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gst-plugins-good
Version: 1.4.1
OS/Platform: Other Linux
Priority: Normal
Severity: normal
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
 
 
Reported: 2014-12-31 07:22 UTC by kevin
Modified: 2018-11-03 14:56 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description kevin 2014-12-31 07:22:50 UTC
Is it reasonable that the writable size below grows beyond the total buffer size when no data is fed to pulse?

        pbuf->m_writable = pa_stream_writable_size (pbuf->stream);

This breaks our use case; alsasink has no such issue. As I understand it, the total ring buffer size should be the upper bound on the writable size, but PulseAudio keeps increasing the writable size without limit while the stream is uncorked.

Our use case sends EOS to the audio pipeline during rewind playback. Pulse stays uncorked because it provides the pipeline clock, so the writable size keeps growing. When we switch back to normal playback, the writable size is huge and pulse immediately consumes a large amount of audio data. Pausing the pipeline then fails: audio buffers reach the audio sink only once per second (each audio sample is one second long), and the audio sink never blocks in the ring-buffer commit function.
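To illustrate the reported behaviour, here is a small Python simulation (purely illustrative, not libpulse or GStreamer code; all names and sizes are hypothetical) contrasting an alsasink-style fixed ring buffer, whose free space is capped at segsize * segtotal, with a pulsesink-style writable size that keeps growing while the stream is uncorked and nothing is written:

```python
# Hypothetical sizes; a real ring buffer holds segsize * segtotal bytes.
SEGSIZE = 4096
SEGTOTAL = 4
RING_CAPACITY = SEGSIZE * SEGTOTAL  # 16384 bytes

def alsa_writable(written, consumed):
    """Fixed ring buffer: free space can never exceed its capacity."""
    return RING_CAPACITY - (written - consumed)

def pulse_writable(requested, written):
    """pulsesink-style accounting: the server keeps requesting data while
    uncorked, so the reported writable size grows without bound if
    nothing is written."""
    return requested - written

# The stream is uncorked but no data is fed; the server requests one
# segment per tick.
requested = 0
for _ in range(10):
    requested += SEGSIZE

print(pulse_writable(requested, 0))  # 40960, far beyond RING_CAPACITY
print(alsa_writable(0, 0))           # 16384, clamped at capacity
```

In the bug, this unbounded growth means that once data arrives again, pulse immediately swallows far more than segsize * segtotal bytes.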
Comment 1 Arun Raghavan 2015-01-05 05:22:46 UTC
Yes, the writable size will keep increasing based on the amount of data needed by the client.

I didn't understand your use case and the problem you're facing. Could you clarify?
Comment 2 kevin 2015-01-05 05:57:46 UTC
In short: the writable size grows to a very large value after some time if no data is fed to pulse. Pulse then consumes audio data as soon as it is sent to pulsesink, so the audio pipeline is left with no data at all.

Our use case is an AVI clip with very large audio PCM samples (each sample holds one second of PCM data). If the pulse client has buffered all the audio data, changing pulsesink's state from PLAYING to PAUSED fails: the demuxer sends no audio data because it is blocked by the video pipeline, which is itself a consequence of the pulse client having buffered too much audio data.

Can the behaviour of the pulse client or pulsesink be changed? Or do we need to feed data continuously while in the PLAYING state?
Comment 3 kevin 2015-01-05 07:56:18 UTC
Is that clear? Do you need any more information?
Comment 4 kevin 2015-01-09 03:46:17 UTC
Can anyone give me advice?
Comment 5 Arun Raghavan 2015-01-09 10:40:38 UTC
(In reply to comment #2)
> In short: the writable size grows to a very large value after some time if
> no data is fed to pulse. Pulse then consumes audio data as soon as it is
> sent to pulsesink, so the audio pipeline is left with no data at all.

Yes, libpulse will continue to ask for more data (until at some point a h/w underrun occurs). However, if you write out the data when it becomes available, this should not be a problem.

> Our use case is an AVI clip with very large audio PCM samples (each sample
> holds one second of PCM data). If the pulse client has buffered all the
> audio data, changing pulsesink's state from PLAYING to PAUSED fails: the
> demuxer sends no audio data because it is blocked by the video pipeline,
> which is itself a consequence of the pulse client having buffered too much
> audio data.
> 
> Can the behaviour of the pulse client or pulsesink be changed? Or do we
> need to feed data continuously while in the PLAYING state?

I'm really sorry but I'm having trouble understanding the problem you are seeing. The audio and video outputs of the demuxer should ideally have a queue after them so that each of those gets its own thread. In that case, one side having large buffers and the other having small buffers should not matter.
Comment 6 kevin 2015-01-09 14:11:06 UTC
Our use case is quite complex, but the root cause is that the writable size grows very large if no audio data is fed to pulse. Should I fix the writable-size growth, or feed audio data continuously to avoid it?

One side effect of the writable-size growth is that buffering becomes very large for some demuxers, such as tsdemux. A TS stream must be read sequentially; it cannot be read track by track. If pulse buffers too much audio data, the video pipeline buffers even more, which consumes a large amount of memory.
Comment 7 Arun Raghavan 2015-01-09 14:29:13 UTC
I'm afraid I can't really say which solution you should try since I still don't understand where you are seeing a problem.

The buffering problem you mention should not really be a cause for concern since during playback, buffers will be consumed as they are presented, and the audio/video buffer levels need not be correlated if the two streams are decoupled using a queue like I mentioned in my previous comment.
Comment 8 kevin 2015-01-09 15:56:34 UTC
Our demuxer blocks on the video pad push when the video pipeline is full of data, so no audio data is pushed until another video buffer has been pushed.
If pulse buffers too much audio data, the video pipeline fills up, and audio data is pushed only once a video buffer has been pushed.
The audio sink needs to preroll when changing state from PLAYING to PAUSED. Preroll can finish either by receiving a buffer or by being blocked in the ring-buffer commit function. But here preroll cannot finish: no audio buffer reaches the audio sink because the video pipeline is full and video is PAUSED, and the sink is not blocked in the commit function because all the audio data has already been written to pulse.

Can we control the pulse buffer size so that it does not consume a large amount of audio data, but only up to the total buffer size (segsize * segtotal)?
Comment 9 kevin 2015-01-11 05:03:16 UTC
The audio and video pipelines cannot be fully decoupled. The demuxer blocks when the video pipeline is full of buffers, so the audio pipeline gets no buffers once pulse has buffered all the audio data. Pulse buffers all the audio data because the writable size is very large.
Normal playback is fine in this case, but we cannot change to the PAUSED state, for the following reason:
the audio sink needs to preroll when changing state from PLAYING to PAUSED. Preroll can finish either by receiving a buffer or by being blocked in the ring-buffer commit function. But preroll cannot finish: no audio buffer reaches the audio sink because the video pipeline is full and video is PAUSED, and the sink is not blocked in the commit function because all the audio data has already been written to pulse.

Is my understanding right? Do you need more information?
Comment 10 Stefan Sauer (gstreamer, gtkdoc dev) 2015-01-11 12:54:54 UTC
(In reply to comment #9)
> The audio and video pipelines cannot be fully decoupled. The demuxer blocks
> when the video pipeline is full of buffers, so the audio pipeline gets no
> buffers once pulse has buffered all the audio data. Pulse buffers all the
> audio data because the writable size is very large.

That's why one should place a multiqueue after the demuxer. decodebin/playbin already do that. The queues provide a thread boundary, that is, each stream will run in its own thread.
Comment 11 kevin 2015-01-11 15:47:13 UTC
(In reply to comment #10)
> (In reply to comment #9)
> > The audio and video pipelines cannot be fully decoupled. The demuxer
> > blocks when the video pipeline is full of buffers, so the audio pipeline
> > gets no buffers once pulse has buffered all the audio data. Pulse buffers
> > all the audio data because the writable size is very large.
> 
> That's why one should place a multiqueue after the demuxer. decodebin/playbin
> already do that. The queues provide a thread boundary, that is, each stream
> will run in its own thread.

I know there is a multiqueue after the demuxer. But the queue fills up (10 MB) because the demuxer keeps pushing buffers into it. Then the video pipeline (from the demuxer's video pad to the video sink) fills up with video buffers, and the demuxer blocks on the video pad push.
The demuxer has only one thread, so it does not push an audio buffer until the video pipeline has room for one more video buffer.
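The single-threaded demuxer behaviour described above can be sketched with a small Python simulation (purely illustrative; the names, queue sizes, and buffer counts are hypothetical): one thread pushes interleaved video and audio buffers, the audio queue drains instantly (as pulse does when its writable size is huge), and the bounded video queue eventually blocks the thread, starving the audio branch:

```python
import queue
import threading

video_q = queue.Queue(maxsize=2)  # bounded, like a full video branch
audio_q = queue.Queue()           # unbounded: "pulse" accepts everything

def demuxer():
    # One thread pushes interleaved buffers, video first, like the demuxer.
    try:
        for i in range(10):
            video_q.put(f"video-{i}", timeout=0.2)  # blocks when full
            audio_q.put(f"audio-{i}")
    except queue.Full:
        # The real demuxer would block here forever until the video branch
        # makes room; we time out just to end the simulation.
        pass

t = threading.Thread(target=demuxer)
t.start()
t.join()

# Nothing consumed video, so the demuxer stalled after two iterations:
# only two audio buffers ever reached the audio branch.
delivered_audio = []
while not audio_q.empty():
    delivered_audio.append(audio_q.get())

print(delivered_audio)  # ['audio-0', 'audio-1']
```

This is why the multiqueue alone does not help here: the thread boundary exists downstream, but the single upstream demuxer thread still serializes the two branches.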
Comment 12 Arun Raghavan 2015-01-12 08:58:41 UTC
So your problem is that when going from PAUSED -> PLAYING (not the other way, as you mentioned), you're filling up the video buffer with 10 MB of compressed video and there is no audio available in that duration, so prerolling never completes?

The appropriate solution would then be to configure the queue to hold as much data as needed for any reasonable media to work (you can probably still craft pathologically bad media that won't play).

Another way to deal with this is to use pull mode and read audio/video packets non-serially so both parts of the pipeline can operate in parallel. This may or may not be viable for you -- it requires the existence of an index in your avi file (or that you create one on the fly), and the source has to support random access.
Comment 13 kevin 2015-01-12 10:11:39 UTC
Thanks.

I will try to feed the audio sink data at all times while PLAYING to fix my issue.
Comment 14 kevin 2015-01-15 02:34:23 UTC
It seems we cannot guarantee that audio data is fed at all times while PLAYING.

streamsynchronizer lags one second before sending the GAP event when one track reaches EOS, so pulse receives no audio data during that second and the writable size grows.

It is worth mentioning that the writable size grows very large and is not reset when the pipeline is seeked. We tried resetting it in the pulse client on seek, and that works, but we are not sure whether this is a pulse client bug.
Comment 15 GStreamer system administrator 2018-11-03 14:56:45 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/149.