Bug 766898 - player: Add a message queue API in addition to the signals
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gst-plugins-bad
Version: git master
OS: Other Linux
Importance: Normal enhancement
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2016-05-26 06:03 UTC by Sebastian Dröge (slomo)
Modified: 2018-11-03 13:51 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Sebastian Dröge (slomo) 2016-05-26 06:03:13 UTC
Similar to GstBus. Message queues are easier to handle in many situations, and especially don't require GObject knowledge.
Comment 2 Sebastian Dröge (slomo) 2018-05-13 11:03:11 UTC
Thanks, I've done an initial review of what you did so far. Definitely goes in the direction I was thinking of :)
Comment 3 Stephan Hesse 2018-05-14 21:46:22 UTC
Hey Sebastian, we will continue work in this direction then :) 

Thanks a lot for your review & comments, much appreciated. I can easily address all of them and can follow your arguments clearly. 

About the log with the obviously leaking string: that was deliberate, more of a first-pass way to debug the initial bit of work here. Thanks for the advice so far!

This week and next week we will be able to continue this effort! Keeping you posted here.

Regards,
Comment 4 Stephan Hesse 2018-05-14 22:08:19 UTC
> +create_message_data_from_state (GstPlayer *self, GstPlayerMessage message_type) {
> I think I would prefer separate functions for each type that just takes as 
> parameter the specific fields, instead of polluting the instance struct with them


This is something we can discuss if you like. I am not too attached to it, and we can of course go with what you propose. I just want to explain the reasons for my choice here :)

The approach I chose ensures that anything posted as a message is part of the values within the player struct itself. Rather than seeing that as pollution, I see keeping the actual player state clearly defined and centralized as a conceptual decision that makes sense if applied consistently. The messages should really reflect the "state of playback" (position, duration, video width/height and maybe more) at the moment they are created. In my opinion this state should also be readable from the player API at any moment, in the same way, if applications decide that is how they want to use it. Generally I think it is better to have explicit state (keeping track of the current position and duration values in our own members). The playback state changes (position and duration are also state), which we reflect here with the message bus, then stay transparent, rather than reflecting a state that is nested and implicit and not accessible from the API other than through the messages.

Furthermore, it centralizes all of the message creation in one place and makes it easy to post a message from anywhere. And finally, caching position/duration or the current video dimensions will allow reading those with less overhead if the application decides to read them via getters.

On the other hand you may of course argue that the messages collected from the bus will deliver an equivalent representation anyway :) As I said, not a big deal in the end, but a choice still.
Comment 5 Sebastian Dröge (slomo) 2018-05-15 06:34:39 UTC
My main problem with caching some of these values is that they represent transient state. It's always changing; for example, the position messages could be disabled completely (there is the interval configuration) and the application might want to poll the position whenever it wants instead.
Comment 6 Stephan Hesse 2018-05-16 20:22:15 UTC
Understood, makes sense.

So then the idea would be: 

- position, duration and other values of arbitrary/transient character can be queried from the API at any moment

- messages get created with data that is queried at creation time from within the pipeline, using the exact same mechanism as the corresponding API methods (to read position/duration etc.)

I will add some custom functions to create those position/duration messages with the values obtained from the pipeline instead of caching them (as for error and warning).

We may still be able to use the current _from_state method for messages that actually can get created from the state solely.

And our next step would now be (re)moving the signal stuff from this file and building a proxy object for the signal-based API, as you had suggested. 

We probably also need to think about how to deal with an application that does not consume from the bus: how to avoid filling memory with more and more data structures that are never consumed/unreffed. Maybe I am still a bit unfamiliar with GstBus. How is this usually done?
Comment 7 Sebastian Dröge (slomo) 2018-05-17 07:35:13 UTC
(In reply to Stephan Hesse from comment #6)
> Understood, makes sense.
> 
> So then the idea would be: 
> 
> - position, duration and other values of arbitrary/transient character can
> be queried from the API at any moment
> 
> - messages get created with data that is queried on creation of the message
> from within the pipeline, using the exact same mechanism as the
> corresponding API methods (to read position/duration etc)
> 
> Will add some custom functions to create those position/duration messages
> with the values obtained from the pipeline instead of caching them (as for
> error and warning).
> 
> We may still be able to use the current _from_state method for messages that
> actually can get created from the state solely.

It's probably simpler not to have a separate function for this but to just create the structures ad hoc where they're needed, just like in e.g. gstmessage.c. If you get rid of the GValue juggling, it's one line per message creation :)

> And our next step would now be (re)moving the signal stuff from this file
> and building a proxy object from signal-based API as you had suggested. 

Yes, and get rid of the signal dispatcher

> We probably also need to think about how to deal with an application that
> would not consume from the bus, how to avoid filling memory with more and
> more data structures that are not being consumed/unrefed at some point.
> Maybe I am still a bit unfamiliar with GstBus. How is this usually done?

They accumulate, and the application will have to handle messages from the bus or set the bus to flushing.
Comment 8 Stephan Hesse 2018-05-28 22:30:56 UTC
FYI, our branch now has fixes for the urgent functionality-wise issues: a getter method for the bus, fixing the log leak, registering a GType for the player-message enum, and of course unreffing the bus on disposal.

We thus currently have the API message bus in a functional state, while keeping backwards compatibility with the current API.

We should definitely use gst_structure_set instead of the current implementation when creating data structures, in order to avoid all the verbosity around the GValue stuff, as you suggested. Also, we should create everything ad hoc instead of caching transient values in the player state. Is there anything else left in order to have the API message bus itself ready? 

Before continuing on this: we were discussing that, as we remove the signal dispatcher and signals of GstPlayer, we would create a proxy object that has the same signals, which would get emitted as it consumes messages from the bus. So, first of all, I assume that we would consume from the bus in sync mode, and then emit the signals corresponding to player messages directly from the thread from which the player message got posted, i.e. just connect to sync-message and re-emit in the same thread? Or should the proxy object dispatch the emission onto some injected context, like the current main-context signal dispatcher?

Also: what would you want to call this object? GstPlayerSignalProxy? GstPlayerMessageProxy? :)
Comment 9 Sebastian Dröge (slomo) 2018-05-30 14:35:26 UTC
The signal proxy thing would use the normal GstBus API for integrating into a main context, that is, it would use the thread-default main context at creation time.

If someone needs sync signal emission, there could be a property for that too, I guess.
Comment 10 Philippe Normand 2018-08-13 10:36:16 UTC
I see the branch got some new commits in June. What's the status of it now?
Comment 11 Stephan Hesse 2018-08-14 10:28:21 UTC
The status of it now is the current branch that you saw.

Our plan is to have this finished before the next hackfest in October. There isn't much work left to do here. 

Do you have a specific need related to this?
Comment 12 Philippe Normand 2018-08-14 10:49:09 UTC
I was just wondering, out of curiosity :)
Comment 13 Stephan Hesse 2018-08-14 11:23:52 UTC
Cool. I guess our schedule for this task just became official then :) Let me know if you have any input on this!
Comment 14 rland 2018-08-17 17:01:50 UTC
What seems to be left, boring unit tests? :)
Comment 15 Stephan Hesse 2018-08-27 12:17:20 UTC
Yes. And finishing the proxy (from messages to signals) for backwards compat of apps using signals instead of the message bus.
Comment 16 rland 2018-08-27 14:03:44 UTC
BTW, it seems that we need to consider the thread safety of the api-bus, because we expose internal variables in the form of API. A user program that has no GLib main loop context is most likely to call gst_bus_timed_pop() or gst_bus_pull() in a separate thread. We don't want the object to be destroyed while api_bus is being used.
For people who are not familiar with GstBus, it is not an easy task to use the bus API correctly. :(
Comment 17 Sebastian Dröge (slomo) 2018-08-27 14:29:10 UTC
Can you describe in more detail what problems you see? GstBus usage seems quite straightforward: you pop() on one side and post() on the other side(s), and all sides need to have a strong reference to the bus for this to work.

Also, GstBus has nothing to do with the GLib main loop; it can be integrated into basically any kind of event loop, via an fd or direct notifications for example.
Comment 18 rland 2018-08-27 16:34:23 UTC
> and all sides need to have a strong reference to the bus for this to work.
> 

Perhaps this is the problem. If I remember correctly, the current implementation simply returns the self->api_bus variable directly. Instead, we should return a strong reference and increase the ref count. But I am not sure if this is still reliable. Consider an imaginary scenario: we start a GstPlayer instance in a service process and, in order to process asynchronous messages without blocking the main thread, we may choose to create a separate thread and call gst_bus_timed_pop() in a loop to handle messages. That thread may have a longer lifecycle than the api_bus or even the GstPlayer instance, depending on the exit condition of the thread.
Comment 19 Stephan Hesse 2018-08-27 16:51:12 UTC
The reason why it is done like this currently is that I wasn't expecting anyone to actually store that bus reference. That's also why we're not increasing the ref count when we return it. I wasn't thinking of it as a transferable thing. You would always have to own a player ref to access it. I would also recommend that in your multi-threaded scenario the threads share the player instance rather than just the bus instance (why would you have to do the latter?). Obviously, the current design is error-prone because it allows you to do so.

So what we could do to ensure that only the player ref is shared is to add methods on the GstPlayer API that allow popping from the bus, instead of returning a reference to the bus, and keep the bus ref private.

Because ref-counted or not, in your scenario, if the player was destroyed and someone kept a reference to the bus, that would not be sane. Wouldn't adding ref-counting every time someone "gets" the bus ref also require us to validate that the ref count on it is zero before we destroy the player?

Therefore we should probably rather encapsulate all the bus methods needed by API users, i.e. wrap the pop call into a gst_player instance method. That prevents using the bus at all if the instance is gone, and per se requires using only the player instance ref.

It is also much simpler for API users to keep one type of ref-counted object around (the player instance), rather than several (player instance and bus).

Does that make sense to you?
Comment 20 Stephan Hesse 2018-08-27 16:58:28 UTC
Or do you actually see the need to keep the bus reference after the player was destroyed? 

In that case one would obviously have to do it the way you propose, returning it with an increased ref count. What I dislike about that is that you'd have to tell API users to take care of unreffing the bus however they use it. So they'd have to manage two objects (player and bus) instead of one. That seems unnecessarily complex for an API that is supposed to be more convenient?

Obviously, both needs can also be catered for, with one method wrapping "pop" and another returning a strong ref to the bus, for people who want to manage the bus lifecycle themselves, if for some reason they want to desynchronize the player instance and bus-reading-process lifecycles.
Comment 21 rland 2018-08-28 03:26:01 UTC
> for people who want to
> manage the bus life-cycle themselves, if for some reason they want to
> desynchronize player instance and bus-reading-process lifecycles.

Managing two references is perhaps not a sensible approach; it was just to illustrate a "problematic snapshot" that is not prevented by the API design. I can't think of any scenario that really needs to hold references to both objects at the same time. :)


> Therefore we should probably rather encapsulate all the bus methods needed by
> API users. So wrap the pop call into a gst_player instance method. That will
> avoid to be able using the bus if the instance is gone in the first place, and
> per se rather require to use the player instance ref only.

Yeah, maybe this will make the user's life better. :)
After all, the real purpose of the user is to safely get asynchronous messages from the api_bus and handle them, without thinking about further multi-thread synchronization across modules.
Comment 22 Sebastian Dröge (slomo) 2018-08-28 06:01:16 UTC
There's not actually a problem here if refcounting is done properly. Even if you only get a borrowed reference to the bus from the GstPlayer API: you're only supposed to keep that reference as long as you have a reference to the GstPlayer; otherwise you need to take a strong reference yourself (call ref() and, later, when you don't need it anymore, unref()).

The situations you describe here are simply a result of not doing reference counting correctly.


That said, it would be better to return a strong reference from the GstPlayer API for consistency reasons.
Comment 23 Stephan Hesse 2018-08-28 16:15:29 UTC
> There's not actually a problem here if refcounting is done properly.

Obviously. 

Also, I row back on my previous statement that it wouldn't be "sane" to have a bus ref if the player instance is already freed. That would just be OK; the bus just wouldn't get any new messages, but it might still have some to be read.  

There might be a scenario where one thread only reads from the bus and another deals with the player API itself, and the player might get destroyed while the other thread is still reading messages. This of course wouldn't be an issue at all if we give out a strong ref, or if the ref count is increased appropriately somewhere else, i.e. by the app. The same goes for the player instance ref itself, since it's a GObject just as much as the GstBus is.

> That said, it would be better to return a strong reference from the GstPlayer API for consistency reasons.

Agreed.
Comment 24 Sebastian Dröge (slomo) 2018-11-01 11:05:09 UTC
Stephan, what's the current status here? Can you give an update? :)
Comment 25 Stephan Hesse 2018-11-01 18:34:11 UTC
Yes :) 

The main work on the signal adapter thingie is done, as in "should work". I still need to do the slightly boring work of mapping all signal data to all messages now.

Here is the current diff from my feature branch against master: https://github.com/GStreamer/gst-plugins-bad/compare/master...tchakabam:feature/player-api-message-bus?expand=1

Finally, I still need to integrate all of your comments fully. Those are mainly removing the transient-state vars and instead building all message data in place in the respective internal callbacks, and also the ref-counting issue on the bus getter.

You can already take a look at the existing patch, since it is already quite a bit to review, and this *should* be almost finished.

We should also create a patch that updates gst-examples using this API, so we can validate fully that the current signal-adapter approach works.
Comment 26 Sebastian Dröge (slomo) 2018-11-02 08:28:17 UTC
Let's wait until we've moved to GitLab (soon!) before I start reviewing this. It's going to be a disaster to review this in Bugzilla, and it's not possible to comment on lines in the "compare" view on GitHub :)

Can you send a MR with whatever you have at that time so I can start reviewing the parts that exist already?

(In reply to Stephan Hesse from comment #25)

> We should also create a patch that updates gst-examples using this API, so
> we can validate fully that the current signal-adapter approach works.

Yeah, otherwise it will fail to compile :)
Comment 27 GStreamer system administrator 2018-11-03 13:51:50 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/394.