GNOME Bugzilla – Bug 686666
introduce G_TYPE_ENUM_UINT and friends, deprecate G_TYPE_ENUM
Last modified: 2012-10-25 19:43:09 UTC
See https://bugzilla.redhat.com/864196 for the downstream discussion. The executive summary: for enums in C, the compiler will choose a storage type (unsigned int, long long, ...) based on what the enum contains. For GCC, if the enum's values fit in the range 0..G_MAXUINT, unsigned int is chosen; this fits "most" enums. If you have negative numbers but the range fits into G_MAXINT, you get an int, unsurprisingly.

A case that's not handled at all by G_TYPE_ENUM is an enum whose values exceed G_MAXINT: GCC will choose a wider storage type (unsigned int, or even guint64), but the public enum API is only 32 bits and signed. A corner case here is that enums which fit in G_MAXUINT but not G_MAXINT will fail only on some architectures such as PPC64, which is the issue linked above.

Now, what we need to do is add types so that library users can explicitly specify the size of the enum. This is going to be tedious and painful, but I suspect most library users will just need to do s/G_TYPE_ENUM/G_TYPE_ENUM_UINT/. Still to be done for this is an analysis of which modules in GNOME need patching.
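To make the storage-type rules above concrete, here is a small standalone C sketch (the enum names and values are made up for illustration, not taken from any GNOME module) showing how GCC widens the underlying type as the values grow; enumerator values above INT_MAX rely on a GCC extension:

/* Standalone sketch, not GLib code: invented enums purely to show
 * how GCC chooses the storage type. */
#include <stdio.h>

/* Fits in int: storage is int-sized. */
typedef enum {
  SMALL_A = 0,
  SMALL_B = 1
} SmallEnum;

/* Fits in unsigned int but not int: GCC picks unsigned int.
 * This is the "corner case" that breaks on e.g. PPC64 when pushed
 * through the signed 32-bit G_TYPE_ENUM machinery. */
typedef enum {
  UNSIGNED_A = 0x80000000u
} UnsignedEnum;

/* Does not fit in 32 bits at all: GCC picks a 64-bit type,
 * which G_TYPE_ENUM cannot represent. */
typedef enum {
  HUGE_A = 0x100000000ull
} HugeEnum;

int
main (void)
{
  printf ("sizeof (SmallEnum)    = %zu\n", sizeof (SmallEnum));    /* typically 4 */
  printf ("sizeof (UnsignedEnum) = %zu\n", sizeof (UnsignedEnum)); /* typically 4 */
  printf ("sizeof (HugeEnum)     = %zu\n", sizeof (HugeEnum));     /* typically 8 */
  return 0;
}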
*** This bug has been marked as a duplicate of bug 686662 ***
Created attachment 227299 [details] [review]
tests/signals: Disable large enumeration value test that is failing on PPC64

Basically, due to the semantics of va_args around signed versus unsigned ints, this test case fails on ppc64. At the moment, we have yet to find any real-world consumer with such a large enumeration value. Unfortunately, the possible fixes for this are extremely invasive; we would have to define a new enum API. Given both of these facts, we believe it makes the most sense at the current time to simply not test this. If we later determine that such a real-world consumer exists, we can look at making the necessary fixes.
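To illustrate what is meant by the va_args signed/unsigned issue, here is a hypothetical standalone sketch; it is not the GLib signal/GValue collection code itself, just the underlying C behaviour in isolation:

/* Hypothetical sketch of a signed/unsigned va_arg mismatch. */
#include <stdarg.h>
#include <stdio.h>

static void
collect_as_int (int dummy, ...)
{
  va_list args;
  int value;

  va_start (args, dummy);
  /* The caller passed an unsigned int.  Reading it back as a signed int
   * is only guaranteed by the C standard when the value is representable
   * in both types; 0x80000000 is not, so the result is at the mercy of
   * the platform ABI (which is why ppc64 behaves differently). */
  value = va_arg (args, int);
  va_end (args);

  printf ("collected 0x%x\n", (unsigned int) value);
}

int
main (void)
{
  collect_as_int (0, 0x80000000u);  /* > G_MAXINT, <= G_MAXUINT */
  return 0;
}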
Comment on attachment 227299 [details] [review]
tests/signals: Disable large enumeration value test that is failing on PPC64

Wrong bug