Bug 751602 - /etc/gdm/Init/Default script does not run
Status: RESOLVED DUPLICATE of bug 748297
Product: gdm
Classification: Core
Component: general
Version: 3.16.x
OS: Other Linux
Priority: Normal
Severity: normal
Assigned To: GDM maintainers
Reported: 2015-06-28 00:24 UTC by Alexandre Rostovtsev
Modified: 2016-01-11 19:19 UTC


Attachments
journalctl output with debug/Enable=true (124.23 KB, text/plain), 2015-06-28 00:24 UTC, Alexandre Rostovtsev

Description Alexandre Rostovtsev 2015-06-28 00:24:56 UTC
Created attachment 306231 [details]
journalctl output with debug/Enable=true

(reported downstream at https://bugs.gentoo.org/show_bug.cgi?id=553446)

In gdm-3.16.1.1, /etc/gdm/Init/Default appears not to run (tested by adding

    echo "Ran Init `date`" >> /tmp/gdm.debug.txt

at the beginning of the script).

By contrast, PostLogin/Default and PreSession/Default do run.

gdm was configured with the following options:

--prefix=/usr --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib --disable-dependency-tracking --disable-silent-rules --libdir=/usr/lib64 --docdir=/usr/share/doc/gdm-3.16.1.1 --enable-compile-warnings=minimum --disable-schemas-compile --disable-maintainer-mode --enable-debug=yes --with-run-dir=/run/gdm --localstatedir=/var --disable-static --with-xdmcp=yes --enable-authentication-scheme=pam --with-default-pam-config=exherbo --with-at-spi-registryd-directory=/usr/libexec --with-consolekit-directory=/usr/lib/ConsoleKit --without-xevie --without-libaudit --enable-ipv6 --without-plymouth --without-selinux --with-systemd --without-console-kit --enable-systemd-journal --with-systemdsystemunitdir=/usr/lib/systemd/system --with-tcp-wrappers --disable-wayland-support --with-xinerama ITSTOOL=/bin/true --with-initial-vt=7

Debug journalctl output is attached.
Comment 1 Hakim Zulkufli 2015-08-31 05:14:32 UTC
It seems that I'm not the only one with this problem. I've seen the same issue reported all the way back in 2008. Are the developers simply ignoring this, or did they never even realise it hasn't worked all these years?

http://ubuntuforums.org/showthread.php?t=813018
http://ubuntuforums.org/showthread.php?t=1306696

I'm trying to get GDM to work with Nvidia Optimus, and it's very annoying that two simple commands just won't run because of this.
Comment 2 Ray Strode [halfline] 2015-08-31 12:29:38 UTC
We don't run Init anymore since it used to get run by the code that started X as the root user. We now start the X server as the logged in user, implicitly as part of the session. We don't have a good place to run it. You could run it as part of the session using a desktop autostart file though.
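A sketch of such an autostart entry (the filename and entry name here are invented for illustration; the Exec path is the old Init script):

```
# /etc/xdg/autostart/gdm-init-compat.desktop -- hypothetical example
[Desktop Entry]
Type=Application
Name=Run legacy GDM Init script
# Points at the old Init script; note it now runs as the logged-in
# user during the session, not as root before X starts.
Exec=/etc/gdm/Init/Default
OnlyShowIn=GNOME;
```

Anything in the script that relied on running as root will no longer work this way.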

The optimus situation with proprietary nvidia is rough... Someone dropped a ThinkPad W541 in my lap a few months ago and I did a write-up of what was necessary to get it working with Fedora (I'll post it below).

Those xrandr commands you want to run shouldn't be necessary for much longer. I talked to Dave Airlie in June about potentially exposing them as X configuration, and he said they shouldn't be necessary at all at some point in the near future. I'd give more details, but I don't remember them offhand and my IRC logs are on an unplugged machine far away at the moment.

anyway, the writeup is here:

-----8<----------------------------

So I spent some time looking into the W541 optimus laptop that had the failwhale after you installed the proprietary nvidia drivers.  Initially I thought the problem was that the packaged driver version didn't support the video hardware yet. I then tried upgrading to the latest driver version from the nvidia website to see if that would resolve the problem. In fact, it didn't, so I did some more digging with a debug build of the X server to figure out what was going on.

In actuality, the driver wasn't getting associated with the video hardware for a different reason: the X server only associates the primary video card and cards explicitly mapped to drivers in the Xorg configuration. The primary video card is the one used at boot (the integrated video), and of course there were no explicit mappings in the X configuration. I was eventually able to get things to work after significantly altering the xorg config:

1) assigning the nvidia driver to a specific PCI device (using the BusID directive).
2) associating the onboard video with the modesetting driver, since the modesetting driver supports PRIME, which is needed to get optimus working in a dual configuration mode.
3) setting the modesetting driver to be inactive by default (since it's going to serve as a provider sink/frontend for the nvidia driver).
4) disabling GL usage in the modesetting driver by setting the AccelMethod option to "none", since nvidia's GL is loaded into the server and isn't compatible with the glamor used by the modesetting driver.
5) setting a configuration option to allow the nvidia driver to proceed even though the video card isn't lighting up any monitors, using the AllowEmptyInitialConfiguration option set to "yes" or the UseDisplayDevice option set to "none" (since the nvidia card is a provider source/backend that's disconnected from monitors).
6) running some xrandr commands after X starts to activate the modesetting driver with the nvidia driver as its source.
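As an illustrative sketch only (the BusID value and the section identifiers are made-up examples, not taken from the report; find your actual BusID with lspci), the xorg.conf changes in steps 1)-5) might look like:

```
# Hypothetical xorg.conf sketch for steps 1)-5) above.
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"              # step 3: modesetting inactive by default
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"             # step 1: bind nvidia to a specific PCI device
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration" "yes"  # step 5: no monitors attached
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"          # step 2: PRIME-capable driver for onboard video
    Option "AccelMethod" "none"   # step 4: avoid glamor vs. nvidia GL conflict
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection
```

Step 6) is the xrandr invocation shown later in this thread (comment 13).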

The bottom line is that the nvidia drivers, as packaged, aren't really set up for optimus; they're just set up for standalone nvidia machines. We could probably improve them to handle optimus better and replace that stuff with xorg configuration, although I don't see a way to get around running the xrandr commands. Dave, is there a way to do the equivalent of "xrandr --setprovideroutputsource" from xorg.conf?

Now the question remains: what to do about the user experience? Actually, we're sort of "lucky" that we got a failwhale. What was going on was: nvidia's libGL was getting used, and nvidia's version of the X server GLX module was getting loaded, but the nvidia driver itself wasn't getting used (because of the configuration changes needed above, and because the nvidia kernel module wasn't built). In this scenario, the X server proceeded with the intel driver despite not having working GLX, and hobbled along in a way where applications couldn't use GL but could still use other X APIs. Our GL checks failed and we showed the fail whale.

We might be able to detect this particular case (for instance by checking whether it's a local X connection and the GLX extension is missing), but it's not clear to me what message we could show that would be helpful. We certainly shouldn't explain how to configure the drivers correctly; the steps for getting them configured correctly are insane at the moment. We need to fix that first, and once we do, users shouldn't see the failwhale at all. We could say to uninstall the nvidia drivers, but if the failwhale happens right after the user installs them, they probably already know the nvidia drivers are to blame, and that uninstalling them will fix things. Also, if they're explicitly installing them, suggesting to uninstall is telling them to do something we know they don't want to do. So I think the best tack is to make it work out of the box rather than show the failwhale.

It would also be good if nvidia didn't need its own GLX module, or if nvidia's GLX module only got loaded when the nvidia driver configured a screen, but that's stuff nvidia would have to fix, I guess (along with the vendor-neutral GL dispatch work to make sure the right GL gets loaded).

-------------------------->8-------
Comment 3 bwcknr 2015-08-31 17:30:06 UTC
I expected gdm to execute that script, since it is documented behaviour -- see https://help.gnome.org/admin/gdm/3.16/configuration.html.en#scripting -- and furthermore that file has been distributed and installed in every release for as long as I can remember.

Currently I'm on vacation and can't test the autostart solution using desktop files. Yes, that's also documented -- see https://help.gnome.org/admin/gdm/3.16/configuration.html.en#autostart -- but back in the day I had bad experiences with those docs regarding localization of gdm.
Comment 4 Ray Strode [halfline] 2015-08-31 18:01:43 UTC
The documentation in gdm is way out of date. We should probably stop shipping it, or fix it, or something.
Comment 5 Lubosz Sarnecki 2015-10-16 09:19:04 UTC
> 6) running some xrandr commands after X starts to activate the
> modesetting driver with the nvidia driver as its source

So is there a way to execute the xrandr commands for gdm?
Running gnome-session from .xinitrc works fine with the xrandr commands.
But adding them to PreSession/Default does not fix the gdm startup.
The screen remains black. So black, the backlight turns off when I start gdm.

Also, /etc/gdm/Init/Default is obviously shipped unintentionally and confuses users. Please don't package it when it's not loaded at all.

I would love to see gdm running on Optimus, thanks for your effort.
Comment 6 Lubosz Sarnecki 2015-10-16 09:31:16 UTC
I also get a segfault in mutter when starting gdm, which I do not get when starting with startx.
Comment 7 Rongcui Dong 2015-10-21 21:48:35 UTC
I also have this problem, and I get a segfault in mutter. I am on Arch Linux amd64.
Comment 8 Matthias Clasen 2015-10-22 11:08:50 UTC
Maybe this is pointing out the obvious: executing xrandr commands won't help if the login screen is using wayland.
Comment 9 Lubosz Sarnecki 2015-10-22 11:18:11 UTC
I don't think that gdm is using Wayland on my Nvidia only (without Optimus) installation, since it works fine there.
Is the usage of Wayland a config option I got wrong on my Optimus setup?
Comment 10 Ray Strode [halfline] 2015-10-22 12:29:09 UTC
It should automatically fall back to Xorg when proprietary nvidia drivers are installed.

You should be able to create an autostart file in /usr/share/gdm/greeter/autostart to run the xrandr command.
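Such a file might look like the following sketch (the filename is invented; the provider name NVIDIA-0 can be checked with `xrandr --listproviders` and may differ on your system):

```
# /usr/share/gdm/greeter/autostart/xrandr-prime.desktop -- hypothetical name
[Desktop Entry]
Type=Application
Name=Set PRIME output source
# Route the nvidia provider's output through the modesetting driver,
# then let xrandr pick a mode for the now-visible outputs.
Exec=sh -c "xrandr --setprovideroutputsource modesetting NVIDIA-0; xrandr --auto"
```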

I've been hesitant to come up with a better fix for this on the GNOME side, since Dave Airlie said these commands wouldn't be needed in the near future, but that was a while ago. I need to ping him again, I guess.

We may just want to cry uncle and put the equivalent of the command somewhere in gnome-desktop and/or gnome-settings-daemon and/or mutter.
Comment 11 Rongcui Dong 2015-10-22 13:26:09 UTC
I have disabled Wayland in GDM, but it does not help. However, it seems that /var/log/Xorg.0.log is not updated, so I don't even know if X is started
Comment 12 Rongcui Dong 2015-10-22 13:27:59 UTC
(In reply to Rongcui Dong from comment #11)
> I have disabled Wayland in GDM, but it does not help. However, it seems that
> /var/log/Xorg.0.log is not updated, so I don't even know if X is started

Actually, Xorg.0.log is touched, so probably X was started
Comment 13 Rongcui Dong 2015-10-22 13:34:55 UTC
OK, I've hacked around it for now by placing another ".desktop" file (I named it "XRandR.desktop") under /usr/share/gdm/greeter/autostart, with

    Exec=sh -c "xrandr --setprovideroutputsource modesetting NVIDIA-0; xrandr --auto"

(I copied it from screen, so expect typos.)

It mostly works out... I see the login screen, but then it goes dark after I hit log in.
Comment 14 Ray Strode [halfline] 2015-10-22 13:46:50 UTC
copy that same file to /etc/xdg/autostart to get things working post login.
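For instance (a sketch; "XRandR.desktop" is the filename from comment 13, so adjust to whatever you named yours):

```shell
# Reuse the greeter autostart entry in the user session too, so the
# xrandr provider setup also runs after login. Paths as discussed above;
# copying into /etc/xdg/autostart requires root.
cp /usr/share/gdm/greeter/autostart/XRandR.desktop /etc/xdg/autostart/
```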
Comment 15 Rongcui Dong 2015-10-22 13:57:44 UTC
No, I cannot log in. I chose GNOME (not GNOME on Wayland), but then I get a black screen. Later, I switched to a tty and back, and I was logged off. Also, GDM can somehow only be started once per boot; doing a "systemctl restart gdm" will cause it to die, throwing messages like

    "GdmDisplay: display lasted .... seconds"
    "Child process ... was already dead"

lots of it, and

    "GdmLocalDisplayFactory: maximum of X display failure reached: check X server log for errors"

But I do not have X errors. I have a kernel error saying 

    vgaarb: this pci device is not a vga device

I have 

    _XSERVTransSocketUNIXCreateListener: ...SocketCreateListener() failed
    _XSERVTransMakeAllCOTSServerListeners: server already running

in my `journalctl -b` output. I have

    CRITICAL: gsm_manager_set_phase: assertion 'GSM_IS_MANAGER (manager)' failed
Comment 16 Rongcui Dong 2015-10-22 14:01:17 UTC
(In reply to Ray Strode [halfline] from comment #14)
> copy that same file to /etc/xdg/autostart to get things working post login.

I see graphics now, but then there is the "Oh no! Something is wrong" window, which is not quite helpful. journalctl still emits some assertion failures and the vgaarb error.
Comment 17 Rongcui Dong 2015-10-22 14:16:14 UTC
I get

    gnome-shell[805]: segfault at 14 ip 00007fa98de472c5 sp 00007fff15381a20 error 4 in mutter.so.0.0.0.0[7fa98de03000+fc000]

And another one. If desired I can copy the whole log, it's just that it is on my other computer.

The vgaarb error is probably unrelated, since the same error is present when using SDDM, yet GNOME starts fine that way.
Comment 18 Michael Catanzaro 2016-01-11 19:19:49 UTC
Guys, if you are getting segfaults, please report bugs with stack traces (get coredumps out of coredumpctl). We can't debug or investigate crashes without stack traces, and we can't keep track of your crash if you report it in seemingly-unrelated bug reports.

The original report in this bug is the same as bug #748297. If you have a different issue, please file a new bug. If it's a crasher, be sure to include a stacktrace and set importance=critical. Thanks!

*** This bug has been marked as a duplicate of bug 748297 ***