GNOME Bugzilla – Bug 719615
oggdemux: slow seeking on some ogg files
Last modified: 2014-01-14 14:42:58 UTC
This is a regression compared to gst-0.10.
Totem can play Ogg (Theora/Vorbis) files without problems, but for some files seeking within the track can take a very long time.
Here is one sample file:
If someone is interested, perf top shows that we spend all our time in:
Not sure if it's a regression in libogg or GStreamer, but I'm guessing we call _get_next_page() a massive number of times while doing a binary search. I have very little knowledge of Ogg myself, but maybe this will save some time for someone who does.
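To make the suspicion concrete, here is a hypothetical sketch (not the actual oggdemux code; all names and constants are illustrative) of a bisection seek over fixed-size "pages". Each probe lands at an arbitrary byte offset, and the demuxer must scan forward to the next page boundary, verifying a checksum for each candidate page it inspects. A well-behaved search needs only on the order of log2(file_size / page_size) probes; the numbers later in this bug suggest oggdemux was doing vastly more checksum work than that.

```c
#define PAGE_SIZE 65307L  /* Ogg's maximum page size, ~64 KB */

/* Scan forward from 'offset' to the next page boundary; in real code
 * each candidate page costs (at least) one CRC verification. */
static long next_page(long offset, long *checksum_calls)
{
    (*checksum_calls)++;
    return ((offset + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE;
}

/* Bisect byte offsets until the window shrinks to one page; returns
 * how many checksum verifications the search needed. */
static long bisect_probes(long file_size, long target)
{
    long lo = 0, hi = file_size, calls = 0;
    while (hi - lo > PAGE_SIZE) {
        long mid = lo + (hi - lo) / 2;
        long page = next_page(mid, &calls);
        if (page < target)
            lo = page;
        else
            hi = mid;
    }
    return calls;
}
```

For a 1.3 GB file this needs fewer than 20 probes, so hundreds of thousands of checksum calls would point at the scanning around each probe, not the bisection itself.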
The main loop of ogg_page_checksum_set() was last modified in 2001. I don't think this is a libogg regression.
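For reference, a bitwise sketch of the kind of CRC loop that dominates here: per RFC 3533, Ogg pages use a 32-bit CRC with generator polynomial 0x04c11db7, initial value 0, and no final XOR (libogg itself uses a faster table-driven variant; this function name is illustrative). Eight shift/XOR steps per byte over every scanned page is why a tight calling loop shows up so prominently in perf top.

```c
#include <stddef.h>
#include <stdint.h>

static uint32_t ogg_page_crc(const unsigned char *data, size_t len)
{
    uint32_t crc = 0;                      /* initial value 0 */
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint32_t)data[i] << 24;    /* feed in the next byte */
        for (int b = 0; b < 8; b++)        /* one step per bit */
            crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04c11db7u
                                      : (crc << 1);
    }
    return crc;                            /* no final XOR */
}
```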
(In reply to comment #2)
> The main loop of ogg_page_checksum_set() was last modified in 2001. I don't
> think this is a libogg regression.
Then we know it has always been slow. If someone has time, an analysis of how many times we call into this in 0.10 vs 1.0 would be handy.
If I'm correct, this file uses Ogg Skeleton 4. Maybe that is the issue?
Hmm, interestingly, this format should contain an index to accelerate seeking. That's likely not supported, though we probably also need to look at the performance of non-indexed seeking, as it's likely a regression.
On 03.12.2013 23:49, Monty Montgomery wrote:
> Any time code is stuck in a loop calling ogg_sync_pageseek() or
> ogg_sync_pageout(), you'll see most of the time sunk into checksum
> calculation. It's the only thing in libogg that _can_ take much time.
> This is not likely a libogg bug, but a bug in the code calling libogg
> in a tight loop.
> Looking at the file listed in the bug, there's a page discontinuity at
> the very beginning; after the header packets/pages, there are several
> missing pages before data starts normally. Whatever generated or
> trimmed this file did a ragged job at the beginning. This isn't an
> error, strictly speaking, but it's not completely correct either.
> First thing I'd check is whether or not this has tripped up the
> seeking code in gstreamer... Or actually, _any_ kind of analysis of
> what gstreamer is actually doing.
> A profiler is not a debugger! It's just waving a dead chicken at this
> problem, then trying to read the tea leaves. As of yet we have no
> concrete evidence of what is happening aside from gstreamer
> calculating a ton of Ogg checksums.
Reverting this patch will increase seek speed.
Author: Sjoerd Simons <email@example.com>
Date: Fri Jul 26 15:29:05 2013 +0200
oggdemux: Prevent seeks when _SCHEDULING_FLAG_SEQUENTIAL is set
Don't go into pull mode when the upstream scheduling flags indicate
seeks should be avoided by setting GST_SCHEDULING_FLAG_SEQUENTIAL.
Oleksij, from which kind of source are you playing this file? Via HTTP? As a local file the SEQUENTIAL flag should not be set, otherwise that would be a bug already.
From a local file. See the file from the first comment.
I do not see the SEQUENTIAL flag set when trying with this file (well, the first 35 MB of it) and playbin from file://
Same for another full ogv file (in case it depended on EOS flag, etc).
Are you using some special pipeline?
I use Totem.
I forgot to say: it gives only a small performance gain (hopefully not an illusion). It won't solve this bug.
Created attachment 266169 [details] [review]
adaptive chunk size
The issue seems due to the use of too small a chunk size when dealing with large pages (the file in the report uses ~64 KB pages, which is about the maximum Ogg can have).
Can you try the patch attached, and see if that fixes your issues ?
There's probably something else to find and fix, since this chunk size has been there forever, so if there was an actual regression, it likely is still there. But I'm curious to see how much of an effect this patch has. Testing here, I get about 40% as many calls to the checksum as before for this test file.
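A minimal sketch of the adaptive-chunk-size idea (names and constants are illustrative, not the attached patch's): track the largest page seen so far and grow the read chunk toward it, so that ~64 KB pages are not parsed through many small reads, each of which redoes sync and checksum work.

```c
#define CHUNK_MIN 8500L    /* assumed small historic default */
#define CHUNK_MAX 65536L   /* just above Ogg's maximum page size */

/* Called whenever a complete page has been parsed; returns the
 * (possibly grown) chunk size to use for the next read. */
static long adapt_chunk_size(long current, long page_size)
{
    if (page_size > current)
        current = page_size < CHUNK_MAX ? page_size : CHUNK_MAX;
    return current;
}
```

The cap matters: growing without bound would trade many small reads for overly large ones, while clamping just above the maximum page size keeps one read per page in the worst case.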
Hmm... the first jump takes 15-20 seconds. Subsequent jumps take much less, about 5 seconds. It seems to depend on the hard drive and the amount of RAM: after most of the file is cached, the speed increases.
This is close to a dd test:
dd if=video.ogv of=/dev/null
2590293+1 records in
2590293+1 records out
1326230110 bytes (1.3 GB) copied, 15.6466 s, 84.8 MB/s
This means totem/gstreamer reads the complete file to make a jump.
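A quick sanity check on those dd numbers (helper name is illustrative): 1326230110 bytes in 15.6466 s works out to about 84.8 MB/s, and a full sequential read at that rate takes roughly the 15-20 s observed for the first jump, which supports the claim that the whole file is being read for one seek.

```c
/* dd-style throughput: bytes over wall-clock seconds, in MB (10^6
 * bytes, as dd reports) per second. */
static double rate_mb_per_s(double bytes, double seconds)
{
    return bytes / seconds / 1e6;
}
```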
Besides, I have only one file which works without problems. That one has a different structure: it looks like this file except for the skeleton.
Comment on attachment 266169 [details] [review]
adaptive chunk size
Looks good in general, but the comment above suggests that the chunk size is only adjusted during seeking? We should probably do that during normal playback too, whenever we finish a page.
The chunk size is updated as we see pages, not only when seeking.
And... 15 seconds?! Wow. This is far worse than what I was thinking!
I guess I'll download the full 1.4 GB of your file and test :D
On a test (start playing, seek 2 seconds in):
0.10: 208 calls to pageseek
1.0: 264k calls
1.0 with adaptive chunk size: 92k calls
(yes, no k on the first line).
So there is definitely something *very* screwy here, which I'll hunt.
Created attachment 266248 [details] [review]
fix broken seeking in file with skeleton
Calls to pageseek down to 160 with this patch (better than 0.10 due to the adaptive chunk size patch :P)
See also the commit message for discussion about discontinuous streams. The current code was including them, I excluded them but I think it'd be good to include them with a threshold to give up. Comments welcome.
Review of attachment 266248 [details] [review]:
Looks good to me
Both pushed, to master and 1.2.
Great, it works. Thank you!