Bug 747414 - basesrc: Enable pushing frames from OS thread
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gstreamer (core)
Version: git master
Hardware: Other
OS: All
Importance: Normal enhancement
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2015-04-06 15:15 UTC by Ilya Konstantinov
Modified: 2018-11-03 12:26 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Ilya Konstantinov 2015-04-06 15:15:20 UTC
The current GstBaseSrc in push mode calls into element code whenever it expects the element to yield data. It would be desirable to have a mode of operation where GstBaseSrc is called into when data becomes available (i.e. an inversion of control relative to the current mode).

There are OS frameworks that deliver data, as it becomes available, through a callback invoked on an OS thread. It would therefore be desirable to push data from this OS thread directly, rather than having to hand it back (e.g. via a GAsyncQueue) to the GstTask thread.

The rationale -- "solving" this with a GAsyncQueue has two problems:
 - with a small queue, latency and performance suffer due to blocking on a kernel lock
 - with a big queue, latency can grow too large
Both problems can cause frame drops -- the first at the source (due to buffer overrun), the latter at the sink (due to too-late frames).

Both problems can be avoided by simply not doing it in the first place.

Also, each element that's currently in this situation ends up rolling its own solution (a GAsyncQueue, etc.), complicating the code.

ELEMENTS THAT COULD BENEFIT

Basically, a lot of Apple elements:

* osxaudiosrc
* avfvideosrc
* vtenc_h264
* vtdec

ANSWERS TO POSSIBLE CONCERNS

Such OS frameworks typically implement their own buffering, and their callbacks needn't be treated as strictly as, say, interrupt handlers in the kernel. In other words, one shouldn't rush to copy the data out of the OS buffer and finish the operation on another thread. In fact, zero-copy is usually possible.

Elements don't expect each other to run on the same thread (e.g. due to GstQueue elements), so switching threads mid-pipeline is not expected to upset anything.

Since some code expects a GstTask to be associated with the source, the original GstTask can still be used. The original task's open and close (or lock/unlock?) hooks would ensure that the OS framework callback is enabled/disabled.
Comment 1 Sebastian Dröge (slomo) 2015-04-07 00:41:40 UTC
I have some somewhat related changes to this that I need to clean up and put up for discussion.

So in your case, which thread would be responsible for all the other things that the basesrc task does? Sending events, negotiation, etc. The thread you want to use for pushing can't really be used for that, as we don't have control over when it runs... it only ever runs when it decides to give us some data.


(In reply to Ilya Konstantinov from comment #0)
> The rationale -- "solving" this with a GAsyncQueue has two problems:
>  - with a small queue, latency, perf will suffer due to blocking on kernel
> lock

How is that different from blocking the thread directly that gave us data? I.e. what you propose
Comment 2 Ilya Konstantinov 2015-04-07 10:46:12 UTC
(In reply to Sebastian Dröge (slomo) from comment #1)
> So in your case, which thread would be responsible for all the other things
> that the basesrc task does?

I'm not fully aware of all other GstBaseSrc responsibilities.

Judging from existing implementations, which allow blocking in 'create' until either:
 a) data arrives, or
 b) 'unlock' is called,
I'd say there's not much going on in GstBaseSrc that isn't a reaction to incoming data.

If there is something (e.g. an independent timer), there's still the good old GstTask. If what I say is true -- that GstBaseSrc loses its "independence" until 'unlock' -- then the GstTask could, in fact, be restarted by 'unlock'.

> (In reply to Ilya Konstantinov from comment #0)
> > The rationale -- "solving" this with a GAsyncQueue has two problems:
> >  - with a small queue, latency, perf will suffer due to blocking on kernel
> > lock
> 
> How is that different from blocking the thread directly that gave us data?
> I.e. what you propose

It's no different, but it's in addition. The OS framework is obviously implemented using similar primitives, e.g. it will submit N memory blocks to the kernel for reading and wait for completion. (There can be other methods, but all of them involve potentially yielding to the scheduler.)

In typical Linux elements (e.g. v4l), the GStreamer component plays the role of the OS framework -- beneath it there is only the kernel, handling ISRs, marking I/O requests as completed and scheduling a thread. In a typical OS X element, the OS framework thread is already paying the cost of being at the scheduler's mercy. There's no reason to pay that cost twice. (For the record, I think the cost is in scheduling, not in context switching per se.)

As a silly thought exercise, ask yourself whether you'd fling the buffer across 10 more threads for no reason whatsoever, and how that would affect performance and latency.

At the risk of stating the obvious, this applies to RTC, where late buffers have no value. For pipelines where late buffers can still be of value, one can easily add a GstQueue.
Comment 3 GStreamer system administrator 2018-11-03 12:26:32 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/103.