Bug 704387 - Support dual-gpu systems
Summary: Support dual-gpu systems
Status: RESOLVED DUPLICATE of bug 773117
Product: gnome-shell
Classification: Core
Component: extensions
Version: unspecified
Hardware: Other
OS: Linux
Priority: Normal
Severity: normal
Target Milestone: ---
Assigned To: gnome-shell-maint
QA Contact: gnome-shell-maint
Depends on:
Blocks:
Reported: 2013-07-17 11:39 UTC by Matthias Clasen
Modified: 2016-10-21 19:12 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Matthias Clasen 2013-07-17 11:39:26 UTC
On systems that have switchable graphics ('Optimus', mostly nvidia/intel combinations), it would be nice to have a context menu item in the application launcher that says

 [ ] Use external GPU

To implement this, the app needs to be launched with DRI_PRIME=1 in the environment.

I think this could be done as a shell extension.
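
As a rough illustration of the mechanism only (not a proposed implementation), the environment variable can be set for a single child process before it starts; the sketch below does this from Python and uses glxinfo, if installed, to show which GPU ends up rendering:

    # Sketch: run one program with DRI_PRIME=1 so Mesa renders it on the
    # secondary ("offload") GPU instead of the default one.
    import os
    import subprocess

    env = dict(os.environ, DRI_PRIME="1")
    result = subprocess.run(["glxinfo", "-B"], env=env,
                            capture_output=True, text=True)
    # The "OpenGL renderer string" line should now name the offload GPU.
    print(result.stdout)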
Comment 1 drago01 2013-07-17 12:05:06 UTC
(In reply to comment #0)
> On systems that have switchable graphics ('Optimus', mostly nvidia/intel
> combinations), it would be nice to have a context menu item in the application
> launcher that says
> 
>  [ ] Use external GPU
> 
> To implement this, the app needs to be launched with DRI_PRIME=1 in the
> environment.
> 
> I think this could be done as a shell extension.

Wouldn't it be better to have this somehow encoded in the desktop file and set it without user interaction?

Could be set for 3D games, blender, ...
Comment 2 Matthias Clasen 2013-07-17 13:20:34 UTC
Possibly, X-UseExternalGPU=true. But we probably still want a way to set it for apps that lack it; that would go in alacarte then. I'll point out that DRI_PRIME=1 is just the simplest instance of this GPU-choice feature. There are more complicated scenarios, like hotpluggable GPUs, or GPUs in a dock, where the app should use that GPU whenever it displays on the monitors plugged into the dock...
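
A possible sketch of how a launcher could honour such a key, assuming the hypothetical X-UseExternalGPU name from this comment (the Gio calls themselves are existing API; "blender.desktop" is just an example id):

    # Sketch: if the (hypothetical) X-UseExternalGPU key is set in the desktop
    # file, add DRI_PRIME=1 to the app's environment before launching it.
    import gi
    gi.require_version("Gio", "2.0")
    from gi.repository import Gio

    app = Gio.DesktopAppInfo.new("blender.desktop")   # example desktop id
    context = Gio.AppLaunchContext()
    if app.get_boolean("X-UseExternalGPU"):           # hypothetical key
        context.setenv("DRI_PRIME", "1")
    app.launch([], context)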
Comment 3 Allan Day 2013-07-17 13:30:38 UTC
I don't really want to rely on extensions for enabling people to take advantage of their hardware - it's something we should really do out of the box.

My suggestion would be to add a section to the Details settings panel, which allows you to specify a list of applications that should use the prime GPU.
Comment 4 Matthias Clasen 2013-07-23 12:59:00 UTC
Here is some more information about use cases and technologies, from Dave Airlie:

The randr provider mechanism we have now controls three scenarios:

a) hotplugged display device like USB - creates a new provider; ideally GNOME would automagically bind the
provider. The X server does this at the moment, but I guess, like input devices, the policy should be in the DE.
Once providers are connected up this just looks like an extra randr crtc/output.

b) secondary GPU offload: a secondary GPU exists, again DE should probably control provider bindings,
need a way to launch things on the "good" GPU. DRI_PRIME=1 env var from the launcher.

c) secondary GPU outputs: some laptops only have certain outputs on the secondary GPUs, we try to expose
this by listing all the connectors on the device and using the primary device to render and copy the results,
again the provider level objects exist, but once connected up we should just see normal randr crtc/outputs.

So of those 3 we need UI for b, either the ability to launch on the secondary GPU or apps magically running there, but
giving the user some way to override.

Now the future scenario, which we haven't quite tackled yet; I think there are two stages for it:
a) starting the X server with the secondary GPU as the main renderer, and the primary as an output slave.
Patches to add a command line option to X to do this were submitted upstream; I haven't reviewed them,
but we'd need to see how gdm could integrate, maybe some gdm UI to launch the session on the other GPU. I think that is
what Canonical did in lightdm land.

b) dynamic GPU switching - having gnome-shell be able to transition from one GPU to the other at runtime,
like OSX and Windows do. This involves a lot more work in the X server, GL stack, then cogl/clutter/mutter/gnome-shell,
then some sort of mechanism to decide when to switch the compositor to the other GPU. (UI is probably least of our worries).
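
For reference, the provider objects described in (a)-(c) can already be inspected and wired up by hand with xrandr; a rough sketch of what a desktop policy agent would automate (provider names such as "Intel" and "radeon" are examples and vary per machine):

    # Sketch: inspect RandR providers and set up the two common PRIME roles.
    import subprocess

    # List the GPUs/providers the running X server knows about.
    subprocess.run(["xrandr", "--listproviders"], check=True)

    # Render offload (scenario b): let "radeon" render for the "Intel" sink;
    # individual apps then opt in per launch via DRI_PRIME=1.
    subprocess.run(["xrandr", "--setprovideroffloadsink", "radeon", "Intel"],
                   check=True)

    # Output slaving (scenarios a and c): outputs that are only reachable
    # through "radeon" get their images rendered by "Intel".
    subprocess.run(["xrandr", "--setprovideroutputsource", "radeon", "Intel"],
                   check=True)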
Comment 5 Allan Day 2013-08-29 17:48:47 UTC
Some ideas:

 * Have a place to configure the GPU per app. This could happen in Settings > Details, a separate Settings panel (under hardware), or in Software.

 * Add a GPU option to the session menu in the login screen.

 * If we can do dynamic GPU switching, add an option to the system menu.
Comment 6 drago01 2013-08-29 17:58:21 UTC
(In reply to comment #5)
> Some ideas:
> 
>  * Have a place to configure the GPU per app. This could happen in Settings >
> Details, a separate Settings panel (under hardware), or in Software.

The first one makes no sense to me. "Details" should not be a dumping place for stuff that does not have its own panel. The other two make sense though (one of them).

>  * Add a GPU option to the session menu in the login screen.

Having to re-login when you want to play a game and re-login again when you want to save battery is really inconvenient.

>  * If we can do dynamic GPU switching, add an option to the system menu.

We can't now but this makes sense.
Comment 7 Allan Day 2013-08-29 18:17:59 UTC
(In reply to comment #6)
> >  * Have a place to configure the GPU per app. This could happen in Settings >
> > Details, a separate Settings panel (under hardware), or in Software.
> 
> The first one makes no sense to me. "Details" should not be a dumping place for
> stuff that does not have its own panel. The other two make sense though (one of
> them).

It's a detail in the sense that it is a fairly technical thing. It really depends on how interested people with dual GPU systems will be in these configuration options though, and I don't pretend to be very informed about that.

> >  * Add a GPU option to the session menu in the login screen.
> 
> Having to re-login when you want to play a game and re-login again when you
> want to save battery is really inconvenient.
...

This would be for running the shell on the prime GPU. You could have something like:

Graphics Card

 (*) Secondary
 ( ) Prime

Session:

 (*) GNOME 3
 ( ) GNOME Classic
Comment 8 drago01 2013-08-29 18:48:19 UTC
(In reply to comment #7)
> (In reply to comment #6)
> > >  * Have a place to configure the GPU per app. This could happen in Settings >
> > > Details, a separate Settings panel (under hardware), or in Software.
> > 
> > The first one makes no sense to me. "Details" should not be a dumping place for
> > stuff that does not have its own panel. The other two make sense though (one of
> > them).
> 
> It's a detail in the sense that it is a fairly technical thing. It really
> depends on how interested people with dual GPU systems will be in these
> configuration options though, and I don't pretend to be very informed about
> that.

It is not just a question of being interested or not. If you want to use an app that needs your prime (more powerful) GPU, you have no choice other than to use it somehow (either we do it via some heuristics or by offering the user a visible option), or it will simply be slow, resulting in a crap UX.

> > >  * Add a GPU option to the session menu in the login screen.
> > 
> > Having to re-login when you want to play a game and re-login again when you
> > want to save battery is really inconvenient.
> ...
> 
> This would be for running the shell on the prime GPU. You could have
> something like:
> 
> Graphics Card
> 
>  (*) Secondary
>  ( ) Prime

That shouldn't be needed. If we require the prime GPU to run the shell at acceptable speed we have fucked up completely.
Comment 9 Allan Day 2013-08-29 18:59:54 UTC
(In reply to comment #8)
...
> That shouldn't be needed. If we require the prime GPU to run the shell at
> acceptable speed we have fucked up completely.

I suggested this because it was mentioned as a requirement in Matthias's last comment. I agree that we shouldn't require the prime GPU to run the shell.
Comment 10 drago01 2013-08-29 19:31:09 UTC
(In reply to comment #9)
> (In reply to comment #8)
> ...
> > That shouldn't be needed. If we require the prime GPU to run the shell at
> > acceptable speed we have fucked up completely.
> 
> I suggested this because it was mentioned as a requirement in Matthias's last
> comment. I agree that we shouldn't require the prime GPU to run the shell.

Oh... Matthias, why do you think this is needed?
Comment 11 Matthias Clasen 2013-08-30 21:47:45 UTC
(In reply to comment #10)
 
> Oh... Matthias, why do you think this is needed?

What I said in that comment came straight from Dave Airlie, I have no further insight.
Comment 12 Federico Mena Quintero 2014-02-14 19:50:34 UTC
It's likely that I'll be implementing some of these features.  So, let's see, for each of Matthias's cases:

a) Hotplugged USB consoles - I don't have one yet, but I may have one soon.  Does anyone have one and can report how/if it works now?

b) If you run an app with DRI_PRIME=1, does Mesa render it on the fat GPU and does the compositing get done correctly (even if the secondary GPU is driving the display)?

I can see why you would want to specify this option graphically for "legacy" programs.  But could a "new" program select that option itself before initializing Mesa or whatever?  I.e. an i_am_a_fat_game() API?

c) Outputs hooked only to one GPU.  This needs to be made to Just Work by virtue of gnome-settings-daemon, right?

[I'll rename the extra two cases (d) and (e)]

d) Starting the X server with the secondary GPU as the main renderer, and the
primary as an output slave.  This could very well go in the Details or Display control panels, right?  I don't really know why you would want your whole session to run on the fat GPU (assuming gnome-shell indeed runs well on the slow GPU), but if there is a need for the option, we could put it there.

e) Dynamic GPU switching.  When do Windows/OSX do this?  (If you can run a fat app in the fat GPU, and the rest in the thin GPU, why do you need this?)
Comment 13 Dave Airlie 2014-02-14 22:42:10 UTC
(In reply to comment #12)
> It's likely that I'll be implementing some of these features.  So, let's see,
> for each of Matthias's cases:
> 
> a) Hotplugged USB consoles - I don't have one yet, but I may have one soon. 
> Does anyone have one and can report how/if it works now?

The X server in Fedora at least carries a patch to auto-configure these; however, like input hotplug, this should really be DE policy, not hardcoded in the server. These aren't USB console devices, just USB output devices, i.e. a VGA or DVI output; USB seat devices are handled separately by systemd.

> b) If you run an app with DRI_PRIME=1, does Mesa render it on the fat GPU and
> does the compositing get done correctly (even if the secondary GPU is driving
> the display)?

Yes, except for bugs: there are some strange compositing bugs with PRIME that I still haven't gotten around to fixing, some sort of race conditions that end up with black windows.

> 
> I can see why you would want to specify this option graphically for "legacy"
> programs.  But could a "new" program select that option itself before
> initializing Mesa or whatever?  I.e. an i_am_a_fat_game() API?

In theory there are GL extensions; in practice they aren't implemented, and this covers more than just GL: vdpau can also use PRIME, as can any DRI2-using layer. So no, it needs to happen at the launcher level, before the app starts.

> 
> c) Outputs hooked only to one GPU.  This needs to be made to Just Work by
> virtue of gnome-settings-daemon, right?
> 
> [I'll rename the extra two cases (d) and (e)]
> 
> d) Starting the X server with the secondary GPU as the main renderer, and the
> primary as an output slave.  This could very well go in the Details or Display
> control panels, right?  I don't really know why you would want your whole
> session to run on the fat GPU (assuming gnome-shell indeed runs well on the
> slow GPU), but if there is a need for the option, we could put it there.

The main reason for this is that when the external outputs are on the secondary GPU, it has a lot more overhead than swapping GPUs. I'll try to summarise other OSes on the mailing list, as I'm not sure it's appropriate in here.

> 
> e) Dynamic GPU switching.  When do Windows/OSX do this?  (If you can run a fat
> app in the fat GPU, and the rest in the thin GPU, why do you need this?)
Comment 14 Bastien Nocera 2015-10-21 15:24:17 UTC
I've put up a design page for GNOME in particular:
https://wiki.gnome.org/Design/OS/DualGPU

Please review the design goals and let me know whether there's anything amiss.
Comment 15 Alexander E. Patrakov 2015-10-21 15:30:25 UTC
The Goals section does not explicitly list making outputs attached to the discrete GPU available. This should be either added explicitly, or moved to non-goals with the intention to add another wiki page for that goal.

P.S. I have a laptop with a hot-pluggable AMD GPU in its docking station, and thus can test virtually all scenarios (outputs, offloading, hotplug).
Comment 16 Bastien Nocera 2015-10-21 15:45:05 UTC
(In reply to Alexander E. Patrakov from comment #15)
> The Goals section does not explicitly list making outputs attached to the
> discrete GPU available.

In the non-goals it says:
"Supporting external/hot-plugged GPUs beyond making their outputs available"
which links to bug 734346.

> This should be either added explicitly, or moved to
> non-goals with the intention to add another wiki page for that goal.
>
> P.S. I have a laptop with a hot-pluggable AMD GPU in its docking station,
> and thus can test virtually all scenarios (outputs, offloading, hotplug).

Problem is, it seems that not very many people have that type of device. Given that Sony, the maker of your Vaio laptop, has stopped making laptops[1], it would be hard to support it even if we wanted to.

I know of no other laptop makers that offer this sort of hotpluggable dock, so the hotpluggable GPU is a niche within a (bigger) niche.

[1]: http://www.sony.com/electronics/computers/vaio-laptops
Comment 17 Alexander E. Patrakov 2015-10-21 16:59:18 UTC
Of course, a hot-pluggable GPU is a rare feature. The other manufacturer that does this is MSI, see http://www.anandtech.com/show/8817/msi-announces-gs30-shadow-laptop-and-gpu-expansion-dock

However, outputs not connected to the integrated GPU are not so rare. See e.g. Lenovo T430.
Comment 18 drago01 2015-10-22 05:55:30 UTC
(In reply to Bastien Nocera from comment #14)
> I've put up a design page for GNOME in particular:
> https://wiki.gnome.org/Design/OS/DualGPU
> 
> Please review the design goals and let me know whether there's anything
> amiss.

> Always starting the machine and the desktop interface using the integrated GPU

Should already be working (we don't do anything to guarantee it though).

> Allow applications telling the OS that they would prefer using the discrete GPU

Easiest would be stating that in the desktop file. (i.e add a key to it).
We would then simply pass DRI_PRIME=1 before launching the app.

> Allow forcing an application to use the discrete GPU
> No configuration UI necessary for users 

They somewhat contradict each other... how would a user force the app to use the discrete GPU without some kind of configuration?

We could add a "Run on discrete GPU" menu entry to the app right click menu in the overview.
Comment 19 Bastien Nocera 2015-10-22 12:42:35 UTC
(In reply to Alexander E. Patrakov from comment #17)
> However, outputs not connected to the integrated GPU are not so rare. See
> e.g. Lenovo T430.

I can't find any information about that.

(In reply to drago01 from comment #18)
> (In reply to Bastien Nocera from comment #14)
> > I've put up a design page for GNOME in particular:
> > https://wiki.gnome.org/Design/OS/DualGPU
> > 
> > Please review the design goals and let me know whether there's anything
> > amiss.
> 
> > Always starting the machine and the desktop interface using the integrated GPU
> 
> Should already be working (we don't do anything to guarantee it though).

That's not GNOME-related per se, but should be something that the kernel does. On a (now slightly old) MacBook Pro, the Intel GPU will be used by default, but the Radeon card will always be powered at full tilt, which gives you atrocious battery life.

> > Allow applications telling the OS that they would prefer using the discrete GPU
> 
> Easiest would be stating that in the desktop file. (i.e add a key to it).
> We would then simply pass DRI_PRIME=1 before launching the app.

Right, that might require a change in GIO to handle that.

> > Allow forcing an application to use the discrete GPU
> > No configuration UI necessary for users 
> 
> They somewhat contradict each other... how would a user force the app to use
> the discrete GPU without some kind of configuration?
> 
> We could add a "Run on discrete GPU" menu entry to the app right click menu
> in the overview.

Selecting "run on discrete GPU" is not configuration though. To implement that, gnome-shell would need to know how many GPUs there are, and if we're in an "integrated/discreet" case. Bug 756914 could be used to implement that, as it needs to export the card name under Wayland.
Comment 20 Alexander E. Patrakov 2015-10-22 12:49:28 UTC
(In reply to Bastien Nocera from comment #19)
> (In reply to Alexander E. Patrakov from comment #17)
> > However, outputs not connected to the integrated GPU are not so rare. See
> > e.g. Lenovo T430.
> 
> I can't find any information about that.

See here: https://dchambers.github.io/articles/driving-multiple-monitors-on-an-optimus-laptop/
Comment 21 Bastien Nocera 2015-10-22 12:54:04 UTC
(In reply to Alexander E. Patrakov from comment #20)
> (In reply to Bastien Nocera from comment #19)
> > (In reply to Alexander E. Patrakov from comment #17)
> > > However, outputs not connected to the integrated GPU are not so rare. See
> > > e.g. Lenovo T430.
> > 
> > I can't find any information about that.
> 
> See here:
> https://dchambers.github.io/articles/driving-multiple-monitors-on-an-optimus-
> laptop/

That falls under this non-goal:
* Forcing using the discrete GPU when it has support for higher resolutions than the integrated one

You can already drive 2 displays (1 internal + 1 external, or 2 external) with the default configuration on the machine.

If that goal isn't defined clearly enough, I'm all ears for suggestions on how to clarify it.
Comment 22 Alexander E. Patrakov 2015-10-22 12:57:52 UTC
Another instance of a Lenovo laptop with two video cards, with the DVI port reachable only from the nvidia card: https://bugs.freedesktop.org/show_bug.cgi?id=70389 (Lenovo W530, in Optimus mode in BIOS)
Comment 23 Alexander E. Patrakov 2015-10-22 13:02:03 UTC
I think this wording would be clearer:

"""
Goals

 * Always starting the machine and the desktop interface using the integrated GPU
 * Allow forcing an application to use the discrete GPU
 * Allow applications telling the OS that they would prefer using the discrete GPU
 * No configuration UI necessary for users 

Non-goals

 * Supporting dual-GPU setups of similar capabilities on desktops (NVidia's SLI, AMD's CrossFireX)
 * Supporting external/hot-plugged GPUs
 * Making outputs reachable only via secondary GPUs available at all
 * Forcing using the discrete GPU when it has support for higher resolutions than the integrated one
 * Forcing running the whole session on the discrete GPU
"""

I.e. focus this design item on render-offloading, but not on output-offloading. For output-offloading, we already have a separate bug, so it seems logical to use a separate design item.
Comment 24 Alexander E. Patrakov 2015-10-22 13:19:53 UTC
Well, if the above understanding is wrong and both render-offloading and output-offloading are in scope, then the clarified text would be:

"""
Goals

 * Always starting the machine and the desktop interface using the integrated GPU
 * Allow forcing an application to use the discrete GPU for rendering
 * Allow applications telling the OS that they would prefer using the discrete GPU
 * Making both outputs connected to the integrated GPU and outputs connected to the discrete GPU available for the GNOME session
 * No new configuration UI necessary for users 

Non-goals

 * Supporting dual-GPU setups of similar capabilities on desktops (NVidia's SLI, AMD's CrossFireX)
 * Supporting external/hot-plugged GPUs beyond making their outputs available for the GNOME session
 * Forcing using the discrete GPU when it has support for higher resolutions than the integrated one
 * Forcing running the whole session on the discrete GPU
"""

I don't have any preference which of the two texts is put on the wiki page; I'm just a bit dissatisfied with a situation where "making their outputs available" is neither a goal nor a non-goal.
Comment 25 Bastien Nocera 2015-10-22 15:19:09 UTC
(In reply to Alexander E. Patrakov from comment #24)
> Well, if the above understanding is wrong and both render-offloading and
> output-offloading is in scope, then the clarified text would be:
<snip>

I don't see the difference between those and what we have. Please add your changes to the comment section of the wiki page.

> I don't have any preference which of the two texts is put on the wiki page,
> just a bit dissatisfied with a situation where "making their outputs
> available" is neither a goal nor a non-goal.

Making the outputs of what available?
Comment 26 Alexander E. Patrakov 2015-10-22 15:29:00 UTC
Making the outputs of discrete GPUs available
Comment 27 Luya Tshimbalanga 2015-10-29 18:18:41 UTC
I have a recent 2015 ASUS X550ZE laptop that uses dual Radeon graphics:
AMD Radeon® R5 M230 + Radeon® R7 M265 DX Dual Graphics with 2GB DDR3 VRAM, built into an A10-7400P.
It is a Kaveri APU where the R5 M230 is left unused the whole time when running Fedora 23. I am willing to try it out.

For those looking to get that kind of laptop, Amazon provides them at
http://www.amazon.com/X550ZE-DB10-15-6-Inch-Graphics-Windows-Upgrade/dp/B00YR6BJLC/ref=cm_cr_pr_product_top?ie=UTF8 giving the opportunity to test the dual Radeon setup.
Comment 28 Bastien Nocera 2016-10-21 19:12:10 UTC
I'll close this as fixed, as the initial request was implemented in bug 773117. There's plenty more that could be done, including support for hotpluggable graphics cards (which would happen in switcheroo-control and mutter), ensuring outputs are made available (mutter for Wayland, X.org for X11), and letting applications indicate that they'd prefer a discrete GPU if available (freedesktop.org, and plenty of applications).
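
For context, switcheroo-control exposes the dual-GPU state over the system D-Bus; a sketch of querying it from Python (the service, path and HasDualGpu property names are written from memory and may differ between versions):

    # Sketch: ask switcheroo-control whether this machine has two GPUs.
    # Bus/interface/property names are assumptions and may vary by version.
    import gi
    gi.require_version("Gio", "2.0")
    from gi.repository import Gio

    proxy = Gio.DBusProxy.new_for_bus_sync(
        Gio.BusType.SYSTEM, Gio.DBusProxyFlags.NONE, None,
        "net.hadess.SwitcherooControl",    # service name
        "/net/hadess/SwitcherooControl",   # object path
        "net.hadess.SwitcherooControl",    # interface
        None)
    prop = proxy.get_cached_property("HasDualGpu")
    print(prop.get_boolean() if prop is not None else "property not available")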

If you have a particular problem, please file a separate bug for it.

*** This bug has been marked as a duplicate of bug 773117 ***