GNOME Bugzilla – Bug 755162
Update actor under pointer(s) after relayouts
Last modified: 2015-09-17 15:07:42 UTC
Currently, a relayout (size or stacking change) doesn't trigger calls to _clutter_input_device_update(), which means clutter_input_device_get_pointer_actor() will yield a wrong result until the next motion event arrives and fixes the situation. The effect is visible in Wayland mutter: these are frequent operations on window actors, and currently every operation that changes the window under the pointer (alt-tab, workspace changes, etc.) results in the wrong enter/leave events being sent, the wrong pointer cursor being set, and so on. I'm attaching a patch that adds calls to _clutter_device_manager_update_devices() after processing the relayout for the stage and all its children, so this is done once per relayout. I know the underlying glReadPixels is generally deemed slow, but I noticed no delays in animations or responsiveness during heavy relayouts (e.g. shaking a dragged window).
Created attachment 311554 [details] [review] actor: Update input devices after relayouts on the stage The relayout might have affected the actor below the pointers, so we must pick again after the relayout has happened, so that clutter_input_device_get_pointer_actor() is consistent right afterwards. Otherwise the reported actor will stay stale until the next pointer motion arrives.
Review of attachment 311554 [details] [review]: No, I don't think this is a good idea at all. It flies in the face of the whole frame clock, and it just generates a ton of picking (which is a paint operation) for no good reason. At most, the pointer should be updated every time the scene is painted — we tried it years ago and got a performance hit.
This is really a duplicate of bug 684246. *** This bug has been marked as a duplicate of bug 684246 ***
(In reply to Emmanuele Bassi (:ebassi) from comment #2) > Review of attachment 311554 [details] [review] [review]: > > No, I don't think this is a good idea at all. It flies in the face of the > whole frame clock, and it just generates a ton of picking (which is a paint > operation) for no good reason. > > At most, the pointer should be updated every time the scene is painted — we > tried it years ago and got a performance hit. We still need to do this whenever actors move, resize, are lowered/raised, hidden, or shown. Are there places where we could queue this and then update before paint (and send Wayland events as a result of that)? Otherwise it'd be hard to use this for pointer/touch focus in mutter.
Every time an actor is changed, it will queue a redraw; we update the state at the beginning of each frame, and that would be the point where we can update the actor underneath the pointer. It's doable — we've been testing that since before the Clutter 1.0 release — but, as I said in bug 684246, this causes a performance hit because we need to paint the scene to determine where the pointer is, and then paint again to update the frame buffer on the screen. This means twice the traversal, plus a read-back from the GPU in between.
So in order to avoid a paint operation during pointer handling, we must do a pointer operation during painting :P. I feared this kind of response; the biggest problem is that there is no easy way to do this outside of Clutter (sure, we can emulate a pointer event that triggers all of this, as is already done in a few places in gnome-shell, but that's very far from ideal...). I guess another way to avoid hitting the GPU pipeline is to do this purely from mutter's stacking/position information, but that means goodbye to fancy transforms...
(In reply to Carlos Garnacho from comment #6) > So in order to avoid a paint operation during pointer handling, we must do a > pointer operation during painting :P. What I rejected was the approach that violated all the layering inside ClutterActor — by checking for a ClutterStage and updating the input device state at the end of a size allocation. :-) The proper way to do it is to do it during the stage update in the master clock. Any check of the actor underneath the pointer of each input device *will* end up causing a pick draw — that's just how Clutter works. In order to avoid that, we'd have to break API, and do client-side hit testing via transformation matrices. One way to do that would be to phase out the picking API; every time we add a new actor class to the scene, we check if the class overrides the pick() implementation, and if it does, we fall back to the pick-drawing method. Otherwise, we do a simple "invert the actor's modelview matrix and see if the untransformed coordinates fall into the actor's allocation" check.