GNOME Bugzilla – Bug 788420
Wayland: Too wide windows/subsurfaces cause exit
Last modified: 2018-05-02 19:11:08 UTC
Created attachment 360752: Test case

See the attached test case: the Wayland backend will try to create a surface that is too large, causing the client to exit.
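For illustration, a minimal reproducer in the same spirit (a hedged sketch, not attachment 360752 itself; GTK 3 API):

  #include <gtk/gtk.h>

  /* Sketch: request a window far wider than any common GPU texture
   * limit, so the Wayland backend ends up committing an oversized
   * buffer and the compositor raises a fatal protocol error. */
  int
  main (int argc, char **argv)
  {
    gtk_init (&argc, &argv);

    GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
    /* 65536 px comfortably exceeds typical GL_MAX_TEXTURE_SIZE
     * values (often 8192 or 16384). */
    gtk_window_set_default_size (GTK_WINDOW (window), 65536, 100);
    gtk_widget_show_all (window);

    gtk_main ();
    return 0;
  }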
The requested texture size exceeds the supported texture size, which is hardware/driver dependent. You can see this easily using the WAYLAND_DEBUG environment variable:

  wl_display@1.error(wl_surface@24, 2, "Failed to create a texture for surface 24: Failed to create texture 2d due to size/format constraints")
  Gdk-Message: Error 71 (Protocol error) dispatching to Wayland display.
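For reference, the per-dimension limit the compositor's GL renderer is bound by can be queried like this (sketch only; assumes a GL context is already current):

  #include <GL/gl.h>
  #include <stdio.h>

  /* Query the largest texture dimension the driver supports
   * (commonly 8192 or 16384 on desktop hardware). */
  void
  print_max_texture_size (void)
  {
    GLint max_size = 0;
    glGetIntegerv (GL_MAX_TEXTURE_SIZE, &max_size);
    printf ("GL_MAX_TEXTURE_SIZE: %d\n", max_size);
  }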
So basically, the client asks for a size not supported by the Wayland compositor (because the underlying hardware doesn't support textures of that size), which has no choice but to raise a protocol error that kills the client.
I knew that already. Can the compositor not cap the surface at the maximum supported size and let the client handle the smaller size?
It would be nice to have some protocol in place for a client to ask the compositor how big a window can be; this way the toolkit would be able to catch the issue and avoid sending nonsensical values.

Further mitigation: GtkTooltip should probably use a GtkLabel that has an ellipsization rule, and try to constrain the text width to something reasonable (see the sketch below).

I also wonder if it's possible, at the Wayland compositor level, to map multiple textures to the same surface, to allow getting over the texture size limit. Cogl already allows this internally with "sliced" textures, with limitations on what the textures can do.
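The ellipsization part would look something like this (hedged sketch of the idea, GTK 3 API; the 70-character cap is an arbitrary illustrative value):

  #include <gtk/gtk.h>

  /* Sketch: constrain a tooltip-style label so pathological text
   * cannot force an absurdly wide surface; overlong text gets
   * ellipsized instead. */
  static void
  constrain_tooltip_label (GtkLabel *label)
  {
    gtk_label_set_ellipsize (label, PANGO_ELLIPSIZE_END);
    gtk_label_set_max_width_chars (label, 70);
  }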
(In reply to Emmanuele Bassi (:ebassi) from comment #4)
> I also wonder if it's possible, at the Wayland compositor level, to map
> multiple textures to the same surface, to allow getting over the texture
> size limit. Cogl already allows this internally with "sliced" textures,
> with limitations on what the textures can do.

Depends. For SHM surfaces, the compositor already has all the information it needs to use multiple textures from a single SHM buffer. (Let's call that 'shattering' rather than slicing, since that's what the proposal to do the same for X11 was called, and it avoids a terminology collision with mipmap/texture-array slices.)

As for client GL/Vulkan textures, though, not really. Firstly, hardware has stride limits on buffers (confusingly called 'surfaces' in, e.g., Intel hardware), and that's not exposed through the API, so you'd need to kinda guess. Secondly, there's currently no way to expose that wl_buffers are secretly shattered: we'd need extra tokens in eglQueryWaylandBufferWL to discover the number of subsurfaces and their offsets, and an extra attribute for EGLImage EGL_WAYLAND_BUFFER_WL imports specifying the offset to apply. The client EGLSurface implementation would then need to shatter its surfaces internally, which I can't really see flying for EGL, and definitely not for Vulkan.

tl;dr: maybe just use subsurfaces smaller than the max texture dimensions instead (with null input regions so input handling isn't a pain).
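A rough sketch of that subsurface approach (hedged, illustrative only; assumes the wl_compositor and wl_subcompositor globals are already bound, and leaves out buffer tiling and error handling):

  #include <wayland-client.h>

  /* Sketch: carve one tile of an oversized window out as a
   * wl_subsurface, with an empty input region so all input still
   * lands on the parent surface. */
  static struct wl_surface *
  add_tile (struct wl_compositor *compositor,
            struct wl_subcompositor *subcompositor,
            struct wl_surface *parent,
            int32_t x, int32_t y)
  {
    struct wl_surface *tile = wl_compositor_create_surface (compositor);
    struct wl_subsurface *sub =
      wl_subcompositor_get_subsurface (subcompositor, tile, parent);
    wl_subsurface_set_position (sub, x, y);

    /* An empty region means no input is delivered to this tile.
     * (The wl_subsurface handle would be kept for later
     * repositioning; dropped here for brevity.) */
    struct wl_region *empty = wl_compositor_create_region (compositor);
    wl_surface_set_input_region (tile, empty);
    wl_region_destroy (empty);

    return tile;
  }

Each tile would then get its own buffer no larger than the max texture dimensions.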
We still need some protocol to learn those max texture dimensions, right?
Wouldn't that be leaking a compositor/backend implementation limit onto the clients? (For example, the same test case works with "weston --use-pixman".)
-- GitLab Migration Automatic Message --

This bug has been migrated to GNOME's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug via this link to our GitLab instance: https://gitlab.gnome.org/GNOME/gtk/issues/931.