GNOME Bugzilla – Bug 684920
Port to Google Drive API v2
Last modified: 2018-09-21 16:23:51 UTC
Just as soon as we get the Google Documents API v3 working, Google go and retire it in favour of the new Google Drive API. http://googleappsdeveloper.blogspot.co.uk/2012/09/retiring-google-documents-list-api-v3.html There’s still a year until the old API goes away completely.
Deadline: April 20th, 2015.
WIP branch here: https://git.gnome.org/browse/libgdata/log/?h=wip/rishi/drive
Created attachment 301651 [details] [review] documents: Point the request uri at the Drive v2 API
Created attachment 301652 [details] [review] documents: Add support for parsing Drive v2 file lists
Created attachment 301653 [details] [review] documents: Add support for parsing Drive v2 files
Created attachment 301654 [details] [review] core: Parse alternateLink from JSON
Created attachment 301655 [details] [review] core: Parse description from JSON
Created attachment 301656 [details] [review] core: Parse owners from JSON
Created attachment 301657 [details] [review] core: Parse createdDate from JSON
Created attachment 301667 [details] [review] documents: Add support for parsing Drive v2 files Adds a GDATA_LINK_ACCESS_CONTROL_LIST link to prevent callers of gdata_access_handler_get_rules from crashing. Note that the feed returned by the server can't be parsed yet, but at least it won't crash.
Created attachment 301668 [details] Sample test program to test the new protocol implementation
Review of attachment 301651 [details] [review]: a-c_n with that modification. ::: gdata/services/documents/gdata-documents-service.c @@ +479,3 @@ /* If we want to query for documents contained in a folder, the URI is different. * The "/[folder:id]" suffix is added by the GDataQuery later. */ + return g_strconcat (_gdata_service_get_scheme (), "://www.googleapis.com/drive/v2/files", NULL); _gdata_service_get_scheme() is essentially deprecated, so you might as well just use a literal ‘https’.
Review of attachment 301652 [details] [review]: ::: gdata/services/documents/gdata-documents-feed.c @@ +152,3 @@ + } + } else { + g_warning ("%s files are not handled yet", kind); Does this mean you plan to deprecate all the subclasses of GDataDocumentsDocument? If so, a patch to do that would be great.
Review of attachment 301667 [details] [review]: ++
Review of attachment 301654 [details] [review]: Are you sure the alternateLink key will be consistently used across all JSON-using services — not just in the Drive API? If it’s specific to Drive, it should be in GDataDocumentsDocument.
Review of attachment 301655 [details] [review]: ++
(In reply to Philip Withnall from comment #15) > If it’s specific to Drive, > it should be in GDataDocumentsDocument. er, GDataDocumentsEntry
Review of attachment 301656 [details] [review]: No, this is definitely specific to the Drive API and hence should be implemented in GDataDocumentsEntry. ::: gdata/gdata-entry.c @@ +728,3 @@ + get_kind_email_and_name (reader, &kind, &email, &name); + if (kind == NULL) { + g_warning ("Error parsing JSON: Failed to find ‘kind’"); I’m not sure I like the pervasiveness of g_warning() calls here (and in similar bits of code in the last few patches). This should set the GError instead.
Review of attachment 301657 [details] [review]: This, too, is definitely specific to the Drive API and should be implemented in GDataDocumentsEntry. I have a local patch to add a _gdata_entry_set_published() internal API which would allow you to do this; I will push it shortly.
(In reply to Debarshi Ray from comment #11) > Created attachment 301668 [details] > Sample test program to test the new protocol implementation Could you possibly fix up the command line interface and plumb this into the build system as a demo, similar to the freebase-cli and youtube-cli demos? (I just committed youtube-cli and it would serve as a nice template to copy: https://git.gnome.org/browse/libgdata/commit/?id=d6ddcbfe28f236f3f6a17fd980f03b2306135ec8) That would mean the effort of writing the sample program in the first place has not been wasted. :-)
Created attachment 301737 [details] [review] documents: Point the request uri at the Drive v2 API
Created attachment 301738 [details] [review] documents: Add support for parsing Drive v2 file lists
Created attachment 301739 [details] [review] documents: Add support for parsing Drive v2 files
Created attachment 301740 [details] [review] core: Add support for JSON to GDataAccessHandler
Created attachment 301741 [details] [review] core: Add support for JSON to GDataAccessRule
(In reply to Philip Withnall from comment #12) > Review of attachment 301651 [details] [review] [review]: > > a-c_n with that modification. > > ::: gdata/services/documents/gdata-documents-service.c > @@ +479,3 @@ > /* If we want to query for documents contained in a folder, the URI is > different. > * The "/[folder:id]" suffix is added by the GDataQuery later. */ > + return g_strconcat (_gdata_service_get_scheme (), > "://www.googleapis.com/drive/v2/files", NULL); > > _gdata_service_get_scheme() is essentially deprecated, so you might as well > just use a literal ‘https’. Done.
(In reply to Philip Withnall from comment #13) > Review of attachment 301652 [details] [review] [review]: > > ::: gdata/services/documents/gdata-documents-feed.c > @@ +152,3 @@ > + } > + } else { > + g_warning ("%s files are not handled yet", kind); > > Does this mean you plan to deprecate all the subclasses of > GDataDocumentsDocument? Yes. How about also marking the document-id property as deprecated?
(In reply to Philip Withnall from comment #15) > Review of attachment 301654 [details] [review] [review]: > > Are you sure the alternateLink key will be consistently used across all > JSON-using services — not just in the Drive API? If it’s specific to Drive, > it should be in GDataDocumentsDocument. Yes, it probably should be in GDataDocumentsEntry. I just stuck it in there because selfLink was already there. I will move it.
(In reply to Philip Withnall from comment #18) > Review of attachment 301656 [details] [review] [review]: > > No, this is definitely specific to the Drive API and hence should be > implemented in GDataDocumentsEntry. Done. > ::: gdata/gdata-entry.c > @@ +728,3 @@ > + get_kind_email_and_name (reader, &kind, &email, &name); > + if (kind == NULL) { > + g_warning ("Error parsing JSON: Failed to find ‘kind’"); > > I’m not sure I like the pervasiveness of g_warning() calls here (and in > similar bits of code in the last few patches). This should set the GError > instead. I am now setting the error when 'kind' is missing.
(In reply to Philip Withnall from comment #19) > Review of attachment 301657 [details] [review] [review]: > > This, too, is definitely specific to the Drive API and should be implemented > in GDataDocumentsEntry. I have a local patch to add a > _gdata_entry_set_published() internal API which would allow you to do this; > I will push it shortly. Done. Thanks for the internal API.
Created attachment 301753 [details] Sample test program to test the new protocol implementation Extended it to cover access handler / rules, single_query and download.
Review of attachment 301737 [details] [review]: ++
Review of attachment 301738 [details] [review]: ++ with this one tweak. ::: gdata/services/documents/gdata-documents-feed.c @@ +121,3 @@ + /* Translators: the parameter is an error message */ + _("Error parsing JSON: %s"), + _("JSON node ‘items’ is not an array.")); If there’s not an existing suitable translated string, don’t mark this for translation. I think I’m going to change libgdata’s translation policy. Strings like this are only useful to the developer as debug info, and are just a waste of translators’ time to translate. No end user is going to appreciate seeing that string.
Review of attachment 301739 [details] [review]: ++ with this tweak. ::: gdata/services/documents/gdata-documents-entry.c @@ +563,3 @@ + /* Translators: the parameter is an error message */ + _("Error parsing JSON: %s"), + _("JSON node ‘labels’ is not an array.")); Same as in the previous review: unless there’s a translated string which matches this pattern which you can re-use, there’s no point in marking error messages like this for translation.
Review of attachment 301740 [details] [review]: ++
Review of attachment 301741 [details] [review]: ++, with this small fix. ::: gdata/gdata-access-rule.c @@ +529,3 @@ + /* Translators: the parameter is an error message */ + _("Error parsing JSON: %s"), + _("Permission type ‘group’ or ‘user’ needs an ‘emailAddress’ property.")); Same comment about the translatable strings as in reviews of the other patches.
(In reply to Philip Withnall from comment #33) > Review of attachment 301738 [details] [review] [review]: > > ++ with this one tweak. > > ::: gdata/services/documents/gdata-documents-feed.c > @@ +121,3 @@ > + /* Translators: the parameter is an error message */ > + _("Error parsing JSON: %s"), > + _("JSON node ‘items’ is not an array.")); > > If there’s not an existing suitable translated string, don’t mark this for > translation. "Error parsing JSON: %s" does exist already, but not the other one. So should I only unmark the second string? Or should I unmark both?
(In reply to Debarshi Ray from comment #37) > (In reply to Philip Withnall from comment #33) > > Review of attachment 301738 [details] [review] [review] [review]: > > > > ++ with this one tweak. > > > > ::: gdata/services/documents/gdata-documents-feed.c > > @@ +121,3 @@ > > + /* Translators: the parameter is an error message */ > > + _("Error parsing JSON: %s"), > > + _("JSON node ‘items’ is not an array.")); > > > > If there’s not an existing suitable translated string, don’t mark this for > > translation. > > "Error parsing JSON: %s" does exist already, but not the other one. So > should I only unmark the second string? Or should I unmark both? Unmark the second string.
Created attachment 301809 [details] [review] core: Add support for JSON to GDataAccessRule Pushed after unmarking the error strings.
Created attachment 301810 [details] [review] documents: Add support for parsing Drive v2 file lists Unmark the error strings and set the error if kind == NULL.
Created attachment 301811 [details] [review] documents: Add support for parsing Drive v2 files Unmark the error strings.
Created attachment 301813 [details] [review] documents: Point the entry URI at the Drive v2 API
Created attachment 301814 [details] [review] core: Get the content type from the response instead of hard coding it
Created attachment 301815 [details] [review] documents: Parse downloadUrl from JSON
Created attachment 301816 [details] [review] documents: Add support for downloading and exporting using Drive v2
Created attachment 301817 [details] [review] documents: Use Drive v2 scopes
Review of attachment 301810 [details] [review]: ++ ::: gdata/services/documents/gdata-documents-feed.c @@ +83,3 @@ + gdata_parser_string_from_json_member (reader, "kind", P_REQUIRED | P_NON_EMPTY, &kind, &success, &error); + if (!success && error != NULL) { + g_warning ("Error parsing JSON: Trying to find ‘kind’: %s", error->message); When I mentioned using GError instead of g_warning(), I meant it for all the g_warning() instances which report parse errors. This could be fixed as a follow-up commit, though, since it is a code path which will basically never be hit.
Review of attachment 301811 [details] [review]: Missing deprecation of document-id? Also, see my comment #47 about g_warning().
Review of attachment 301813 [details] [review]: ++
Review of attachment 301814 [details] [review]: Nice cleanup!
Review of attachment 301815 [details] [review]: ++, though a link to the documentation describing the JSON format would be useful in a comment at the top of parse_json().
Review of attachment 301816 [details] [review]: ::: gdata/services/documents/gdata-documents-document.c @@ +215,3 @@ +struct _GDataDocumentsDocumentPrivate { + GHashTable *export_links; Please put a comment of the form: /* owned string → owned string */ so that it’s obvious how to use this hash table. @@ +270,3 @@ + /* Translators: the parameter is an error message */ + _("Error parsing JSON: %s"), + _("JSON node ‘exportLinks’ is not an object.")); Unmark the second string as translatable. Might be worthwhile checking that the .pot file for the project doesn’t change between master and HEAD of your branch.
Review of attachment 301817 [details] [review]: ++
(In reply to Philip Withnall from comment #51) > Review of attachment 301815 [details] [review] [review]: > > ++, though a link to the documentation describing the JSON format would be > useful in a comment at the top of parse_json(). Done. Should I do that for all the other implementations of parse_json too?
(In reply to Debarshi Ray from comment #54) > (In reply to Philip Withnall from comment #51) > > Review of attachment 301815 [details] [review] [review] [review]: > > > > ++, though a link to the documentation describing the JSON format would be > > useful in a comment at the top of parse_json(). > > Done. Should I do that for all the other implementations of parse_json too? That would be extremely useful. :-)
(In reply to Philip Withnall from comment #55) > (In reply to Debarshi Ray from comment #54) > > (In reply to Philip Withnall from comment #51) > > > Review of attachment 301815 [details] [review] [review] [review] [review]: > > > > > > ++, though a link to the documentation describing the JSON format would be > > > useful in a comment at the top of parse_json(). > > > > Done. Should I do that for all the other implementations of parse_json too? > > That would be extremely useful. :-) Ok, done.
Created attachment 301998 [details] [review] documents: Add support for parsing Drive v2 file lists + Create GDataDocumentsEntry sub-classes based on MIME types to retain compatibility + Added links to JSON format and list of MIME types
Created attachment 301999 [details] [review] documents: Add support for parsing Drive v2 files
Created attachment 302000 [details] [review] documents: Support thumbnails and downloads using Drive v2 Added support for thumbnails and squashed the downloadUrl and export patches together since they were logically similar.
Created attachment 302001 [details] [review] documents: Deprecate document-id
Created attachment 302006 [details] [review] documents: Add support for parsing Drive v2 files Move alternateLink to GDataDocumentsEntry.
Review of attachment 301998 [details] [review]: ::: gdata/services/documents/gdata-documents-feed.c @@ +83,3 @@ + gdata_parser_string_from_json_member (reader, "kind", P_REQUIRED | P_NON_EMPTY, &kind, &success, &error); + if (!success && error != NULL) { + g_warning ("Error parsing JSON: Trying to find ‘kind’: %s", error->message); Again, this would be better as a propagated GError which is reported by the caller. Not a fan of g_warning() on parse errors. @@ +137,3 @@ + + if (json_reader_is_object (reader) == FALSE) { + g_warning ("Error parsing JSON: JSON node inside ‘items’ is not an object"); g_warning() => GError
Review of attachment 302000 [details] [review]: Ready to commit with the following minor change: ::: gdata/services/documents/gdata-documents-document.c @@ +423,3 @@ g_return_val_if_fail (export_format != NULL && *export_format != '\0', NULL); + if (g_strcmp0 (export_format, "html") == 0) A link to something documenting the magic strings assigned to @format would be useful.
Review of attachment 302001 [details] [review]: Minor tweak to the commit message: GDataDocumentsEntry:document-id was actually already deprecated (in the gtk-doc comment); it just wasn’t marked with G_PARAM_DEPRECATED.
Review of attachment 302006 [details] [review]: ::: gdata/services/documents/gdata-documents-entry.c @@ +543,2 @@ + g_free (alternate_uri); + return success; Looks good to me. (I have not re-reviewed the rest of the patch; only the alternateLink bit.)
Created attachment 302070 [details] [review] documents: Deprecate document-id Improve the commit message.
Review of attachment 302070 [details] [review]: ++
These patches break gnome-documents due to its faulty handling of GDataEntry:id. See bug 748253 Right now, I am leaning towards fixing gnome-documents and gnome-online-miners, instead of adding some band-aid in libgdata.
(In reply to Philip Withnall from comment #62) > Review of attachment 301998 [details] [review] [review]: > > ::: gdata/services/documents/gdata-documents-feed.c > @@ +83,3 @@ > + gdata_parser_string_from_json_member (reader, "kind", P_REQUIRED | > P_NON_EMPTY, &kind, &success, &error); > + if (!success && error != NULL) { > + g_warning ("Error parsing JSON: Trying to find ‘kind’: %s", > error->message); > > Again, this would be better as a propagated GError which is reported by the > caller. Not a fan of g_warning() on parse errors. Done. > @@ +137,3 @@ > + > + if (json_reader_is_object (reader) == FALSE) { > + g_warning ("Error parsing JSON: JSON node inside ‘items’ is not an > object"); > > g_warning() => GError Done.
Created attachment 302161 [details] [review] documents: Add support for parsing Drive v2 file lists Use GError instead of g_warning for parsing issues.
(In reply to Philip Withnall from comment #63) > Review of attachment 302000 [details] [review] [review]: > > Ready to commit with the following minor change: > > ::: gdata/services/documents/gdata-documents-document.c > @@ +423,3 @@ > g_return_val_if_fail (export_format != NULL && *export_format != '\0', > NULL); > > + if (g_strcmp0 (export_format, "html") == 0) > > A link to something documenting the magic strings assigned to @format would > be useful. I cannot locate any reference for the list of supported export formats. exportLinks is mentioned here: https://developers.google.com/drive/v2/reference/files , and the list of MIME types for the native Google Drive formats is here: https://developers.google.com/drive/web/mime-types , but neither of those has what we want. I wrote the code by going through all the exportLinks for my own Drive. :/
*** Bug 748317 has been marked as a duplicate of this bug. ***
Created attachment 302171 [details] [review] documents: Add support for parsing Drive v2 files Change the remaining g_warnings to GErrors.
Review of attachment 302161 [details] [review]: ::: gdata/services/documents/gdata-documents-feed.c @@ +97,3 @@ + /* Translators: the parameter is an error message */ + _("Error parsing JSON: %s"), + "Failed to find ‘mimeType’."); Not sure it's necessary to prefix the error here — gdata_parser_string_from_json_member() should be returning an appropriate error message.
Review of attachment 302171 [details] [review]: ::: gdata/services/documents/gdata-documents-entry.c @@ +441,3 @@ + /* Translators: the parameter is an error message */ + _("Error parsing JSON: %s"), + "Failed to find ‘kind’."); Same here: not sure it's necessary to prefix the error. Same for the other similar places in this file.
(In reply to Philip Withnall from comment #74) > Review of attachment 302161 [details] [review] [review]: > > ::: gdata/services/documents/gdata-documents-feed.c > @@ +97,3 @@ > + /* Translators: the parameter is an error > message */ > + _("Error parsing JSON: %s"), > + "Failed to find ‘mimeType’."); > > Not sure it's necessary to prefix the error here — > gdata_parser_string_from_json_member() should be returning an appropriate > error message. I was looking at gdata_parser_error_from_json_error which doesn't mention the element that we were trying to parse, and skimming through the json-glib sources it doesn't look like the reader mentions it either. Up to you. Also, do you want to change the error domain (and code) to GDATA_PARSER_ERROR? The errors from the parser use GDATA_SERVICE_ERROR.
Created attachment 302212 [details] [review] documents: Deprecate GDataDocumentsEntry:edited
(In reply to Debarshi Ray from comment #76) > (In reply to Philip Withnall from comment #74) > > Review of attachment 302161 [details] [review] [review] [review]: > > > > ::: gdata/services/documents/gdata-documents-feed.c > > @@ +97,3 @@ > > + /* Translators: the parameter is an error > > message */ > > + _("Error parsing JSON: %s"), > > + "Failed to find ‘mimeType’."); > > > > Not sure it's necessary to prefix the error here — > > gdata_parser_string_from_json_member() should be returning an appropriate > > error message. > > I was looking at gdata_parser_error_from_json_error which doesn't mention > the element that we were trying to parse, and skimming through the json-glib > sources it doesn't look like the reader mentions it either. Up to you. Fair enough, leave it as it is in the patch. :-) > Also, do you want to change the error domain (and code) to > GDATA_PARSER_ERROR? The errors from the parser use GDATA_SERVICE_ERROR. The error domain needs to stay the same as in the parse_xml() code.
Review of attachment 302212 [details] [review]: Do the unit tests not need G_GNUC_BEGIN_IGNORE_DEPRECATIONS? Otherwise, looks good to me.
(In reply to Philip Withnall from comment #79) > Review of attachment 302212 [details] [review] [review]: > > Do the unit tests not need G_GNUC_BEGIN_IGNORE_DEPRECATIONS? Added it. I was not running the tests since they would be failing.
Created attachment 302214 [details] [review] documents: Deprecate GDataDocumentsEntry:edited
Created attachment 302215 [details] [review] documents: Parse lastViewedByMeDate as GDataDocumentsEntry:last-viewed Not sure what to do with this. There is also a markedViewedByMeDate. We could add a new property for it, or coalesce both into last-viewed.
(In reply to Philip Withnall from comment #78) > (In reply to Debarshi Ray from comment #76) >> Also, do you want to change the error domain (and code) to >> GDATA_PARSER_ERROR? The errors from the parser use GDATA_SERVICE_ERROR. > > The error domain needs to stay the same as in the parse_xml() code. I read through the older XML code again, and I think it should be the other way round. We used to throw GDATA_PARSER_ERROR only when the response cannot be parsed at all. For every other semantic mismatch we forwarded the GDATA_SERVICE_ERROR thrown by gdata-parser.c. So, I think we should switch to GDATA_SERVICE_ERROR here.
Created attachment 302216 [details] [review] core: Parse selfLink from JSON for GDataFeed
Created attachment 302226 [details] [review] core: Expose _gdata_feed_add_link() internally
Created attachment 302227 [details] [review] documents: Add support for paging through Drive v2 file lists Works for the simple cases. Possibly missing features.
Review of attachment 302214 [details] [review]: ++
Review of attachment 302215 [details] [review]: ++
Review of attachment 302216 [details] [review]: Is this sufficiently generic to go in GDataFeed, or does selfLink only come from Drive?
Review of attachment 302226 [details] [review]: Some justification for why it's needed internally would be good to have in the commit message.
Review of attachment 302227 [details] [review]: ::: gdata/services/documents/gdata-documents-query.c @@ +359,3 @@ + if (max_results > 0) { + APPEND_SEP + max_results = max_results > 1000 ? 1000 : max_results; A link to the docs which presumably state there's a 1000 limit would be good here.
Created attachment 302238 [details] [review] documents: Add support for parsing Drive v2 file lists Pushed after fixing the errors.
(In reply to Philip Withnall from comment #89) > Review of attachment 302216 [details] [review] [review]: > > Is this sufficiently generic to go in GDataFeed, or does selfLink only come > from Drive? selfLink is parsed in GDataEntry, so I assumed that it is generic enough for GDataFeed also.
Created attachment 302239 [details] [review] documents: Add support for parsing Drive v2 files Pushed after fixing the errors.
Created attachment 302240 [details] Sample test program to test the new protocol implementation Latest version of the test program.
Review of attachment 302216 [details] [review]: (In reply to Debarshi Ray from comment #93) > (In reply to Philip Withnall from comment #89) > > Review of attachment 302216 [details] [review] [review] [review]: > > > > Is this sufficiently generic to go in GDataFeed, or does selfLink only come > > from Drive? > > selfLink is parsed in GDataEntry, so I assumed that it is generic enough for > GDataFeed also. OK, please commit then.
Created attachment 302294 [details] [review] core: Expose _gdata_feed_add_link() internally Add some justification in the commit message.
Created attachment 302295 [details] [review] core: Add internal helper API for adding query strings
Created attachment 302296 [details] [review] documents: Add support for paging through Drive v2 file lists Split the maxResults bit into a separate patch about query properties.
Created attachment 302297 [details] [review] documents: Add support for Drive v2 query properties Add a comment about the 1000 limit on maxResults, and support a few more properties. I am wondering whether we should deprecate the GDataDocumentsQuery methods for all those properties which do not have a separate parameter in the Drive v2 API, and encourage users to construct their own q strings.
Review of attachment 302294 [details] [review]: ++
Review of attachment 302295 [details] [review]: ::: gdata/gdata-query.c @@ +570,3 @@ +void +_gdata_query_add_q_internal (GDataQuery *self, const gchar *q) This definitely needs a documentation comment, explaining similarly to what’s in the commit message. @@ +580,3 @@ + str = g_string_new (priv->q_internal); + + if (priv->q_internal != NULL && priv->q_internal[0] != '\0') It might be simpler to check (str->len > 0) here. @@ +581,3 @@ + + if (priv->q_internal != NULL && priv->q_internal[0] != '\0') + g_string_append (str, " and "); Do you have an API reference link for the ‘ and ’ syntax?
Review of attachment 302296 [details] [review]: ++
Review of attachment 302297 [details] [review]: ::: gdata/services/documents/gdata-documents-query.c @@ +310,3 @@ } + /* Search parameters: https://developers.google.com/drive/web/ */ Is there nothing more specific than this?
(In reply to Philip Withnall from comment #104) > Review of attachment 302297 [details] [review] [review]: > > ::: gdata/services/documents/gdata-documents-query.c > @@ +310,3 @@ > } > > + /* Search parameters: https://developers.google.com/drive/web/ */ > > Is there nothing more specific than this? Oops. That is a copy paste error. The URL is https://developers.google.com/drive/web/search-parameters I will fix it.
(In reply to Debarshi Ray from comment #105) > (In reply to Philip Withnall from comment #104) > > Review of attachment 302297 [details] [review] [review] [review]: > > > > ::: gdata/services/documents/gdata-documents-query.c > > @@ +310,3 @@ > > } > > > > + /* Search parameters: https://developers.google.com/drive/web/ */ > > > > Is there nothing more specific than this? > > Oops. That is a copy paste error. The URL is > https://developers.google.com/drive/web/search-parameters I will fix it. Thought so. :-) Please commit once that's fixed.
(In reply to Philip Withnall from comment #102) > Review of attachment 302295 [details] [review] [review]: > > ::: gdata/gdata-query.c > @@ +570,3 @@ > > +void > +_gdata_query_add_q_internal (GDataQuery *self, const gchar *q) > > This definitely needs a documentation comment, explaining similarly to > what’s in the commit message. Ok, I have added some now. > @@ +580,3 @@ > + str = g_string_new (priv->q_internal); > + > + if (priv->q_internal != NULL && priv->q_internal[0] != '\0') > > It might be simpler to check (str->len > 0) here. Ok. > @@ +581,3 @@ > + > + if (priv->q_internal != NULL && priv->q_internal[0] != '\0') > + g_string_append (str, " and "); > > Do you have an API reference link for the ‘ and ’ syntax? I added a comment with https://developers.google.com/drive/web/search-parameters
Created attachment 302445 [details] [review] core: Add internal helper API for adding query strings
Created attachment 302447 [details] [review] documents: Add support for Drive v2 query properties
(In reply to Philip Withnall from comment #106) > (In reply to Debarshi Ray from comment #105) > > (In reply to Philip Withnall from comment #104) > > > Review of attachment 302297 [details] [review] [review] [review] [review]: > > > > > > ::: gdata/services/documents/gdata-documents-query.c > > > @@ +310,3 @@ > > > } > > > > > > + /* Search parameters: https://developers.google.com/drive/web/ */ > > > > > > Is there nothing more specific than this? > > > > Oops. That is a copy paste error. The URL is > > https://developers.google.com/drive/web/search-parameters I will fix it. > > Thought so. :-) Please commit once that's fixed. Ok. It depends on the other patch, so I will wait for that to be cleared.
Review of attachment 302445 [details] [review]: ++
Review of attachment 302447 [details] [review]: ++
Created attachment 302464 [details] [review] core: Set description in JSON
Created attachment 302465 [details] [review] core: Set parsed GDataAccessRule fields in JSON
Review of attachment 302464 [details] [review]: ++
Review of attachment 302465 [details] [review]: ::: gdata/gdata-access-rule.c @@ +552,3 @@ +static void +get_json (GDataParsable *parsable, JsonBuilder *builder) Could do with a comment linking to the Drive reference documentation (as _one_ example of the access control API). @@ +559,3 @@ + + json_builder_set_member_name (builder, "kind"); + json_builder_add_string_value (builder, "drive#permission"); Unfortunately, GDataAccessRule has to work for any service, not just Google Drive. Is it really necessary to set the kind here?
(In reply to Philip Withnall from comment #116) > Review of attachment 302465 [details] [review] [review]: > > ::: gdata/gdata-access-rule.c > @@ +552,3 @@ > > +static void > +get_json (GDataParsable *parsable, JsonBuilder *builder) > > Could do with a comment linking to the Drive reference documentation (as > _one_ example of the access control API). Ok, I will add a link to https://developers.google.com/drive/v2/reference/permissions > @@ +559,3 @@ > + > + json_builder_set_member_name (builder, "kind"); > + json_builder_add_string_value (builder, "drive#permission"); > > Unfortunately, GDataAccessRule has to work for any service, not just Google > Drive. Is it really necessary to set the kind here? I have not checked whether it will work if we don't specify the 'kind'. However, it is so widespread in the Drive v2 API, that specifying it might be the right thing to do. How about a GDataDocumentsAccessRule to solve this?
(In reply to Debarshi Ray from comment #117) > (In reply to Philip Withnall from comment #116) > > @@ +559,3 @@ > > + > > + json_builder_set_member_name (builder, "kind"); > > + json_builder_add_string_value (builder, "drive#permission"); > > > > Unfortunately, GDataAccessRule has to work for any service, not just Google > > Drive. Is it really necessary to set the kind here? > > I have not checked whether it will work if we don't specify the 'kind'. > However, it is so widespread in the Drive v2 API, that specifying it might > be the right thing to do. How about a GDataDocumentsAccessRule to solve this? I would prefer to use GDataAccessRule without a kind, if possible, but it’s probable that we will need to subclass and add the kind that way, yes. :-( Would be worth testing this if it can avoid us adding another subclass though.
Debarshi, apart from • the pending GDataAccessRule patch, • re-enabling and updating the unit tests, and • upstreaming gdata-query.c (attachment #302240 [details]) as a demo utility in demos/; is there anything left to do on this bug? What features are missing? See also: bug #656974, bug #654652, bug #607272.
(In reply to Philip Withnall from comment #119) > Debarshi, apart from > • the pending GDataAccessRule patch, > • re-enabling and updating the unit tests, and > • upstreaming gdata-query.c (attachment #302240 [details]) as a demo > utility in demos/; > is there anything left to do on this bug? What features are missing? Uploading is missing.
(In reply to Debarshi Ray from comment #117) > > @@ +559,3 @@ > > + > > + json_builder_set_member_name (builder, "kind"); > > + json_builder_add_string_value (builder, "drive#permission"); > > > > Unfortunately, GDataAccessRule has to work for any service, not just Google > > Drive. Is it really necessary to set the kind here? > > I have not checked whether it will work if we don't specify the 'kind'. > However, it is so widespread in the Drive v2 API, that specifying it might > be the right thing to do. How about a GDataDocumentsAccessRule to solve this? It does work without that part, so I am taking it out. However, we will need a sub-class to make it use JSON, not XML, for Drive.
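For reference, a minimal sketch of what the JSON body for inserting a Drive v2 permission might look like once the "kind" member is dropped. The member names follow the v2 permissions reference linked above; the address and role here are placeholders, and in v2 (unlike v3) the grantee's email address goes in the "value" member:

```json
{
  "role": "reader",
  "type": "user",
  "value": "user@example.com"
}
```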
Created attachment 304643 [details] [review] core: Set parsed GDataAccessRule fields in JSON
Created attachment 304646 [details] [review] documents: Use JSON when editing ACLs
Created attachment 304647 [details] [review] core: Add a comment pointing to the Drive permissions API
Created attachment 304648 [details] Sample test program for adding new ACLs
Review of attachment 304643 [details] [review]: No, this JSON schema is specific to Google Drive, so should not be in GDataAccessRule. GDataAccessRule should ideally be abstract, but it’s a bit late for that now. For example, this JSON doesn’t really match what Google Calendar does: https://developers.google.com/google-apps/calendar/v3/reference/acl#scope Please move it to GDataDocumentsAccessRule.
Review of attachment 304646 [details] [review]: This looks good, but please move the get_json() implementation from attachment #304643 [details] into it.
Review of attachment 304647 [details] [review]: Nope, and the parse_json() implementation from here should be moved out into GDataDocumentsAccessRule too.
Review of attachment 304646 [details] [review]: This also means updating all the Documents unit tests to use GDATA_TYPE_DOCUMENTS_ACCESS_RULE, and overriding the get_rules() vfunc in all the Documents implementations of GDataAccessHandler.
(In reply to Debarshi Ray from comment #125) > Created attachment 304648 [details] > Sample test program for adding new ACLs That looks like a good idea. Could you please integrate it with the other sample program (attachment #302240 [details]) as a new CLI demo to go in the demos/ folder?
(In reply to Philip Withnall from comment #130) > (In reply to Debarshi Ray from comment #125) > > Created attachment 304648 [details] > > Sample test program for adding new ACLs > > That looks like a good idea. Could you please integrate it with the other > sample program (attachment #302240 [details]) as a new CLI demo to go in the > demos/ folder? Yes, I will. Let me first finish the actual port. :)
(In reply to Philip Withnall from comment #129) > Review of attachment 304646 [details] [review] [review]: > > This also means updating all the Documents unit tests to use > GDATA_TYPE_DOCUMENTS_ACCESS_RULE, and overriding the get_rules() vfunc in > all the Documents implementations of GDataAccessHandler. How about a new GDataAccessHandler vfunc (say get_rule_type) to plug in the GType from sub-classes? That would avoid duplicating the code. Although, given that it won't help GDataCalendarAccessHandler, it is probably not worth it.
(In reply to Debarshi Ray from comment #131) > (In reply to Philip Withnall from comment #130) > > That looks like a good idea. Could you please integrate it with the other > > sample program (attachment #302240 [details]) as a new CLI demo to go in the > > demos/ folder? > > Yes, I will. Let me first finish the actual port. :) *nod* :-) (In reply to Debarshi Ray from comment #132) > How about a new GDataAccessHandler vfunc (say get_rule_type) to plug in the > GType from sub-classes? That would avoid duplicating the code. Although, > given that it won't help GDataCalendarAccessHandler, it is probably not > worth it. I thought about that when I added the get_rules() vfunc, but decided that overall, the vfuncs should be as flexible as possible. We can always add private abstract utility functions which the vfuncs pass a GType to, which do the bulk of the work (i.e. share code between the services). We only have a limited number of vfunc slots. Do you think that reasoning makes sense? https://git.gnome.org/browse/libgdata/commit/gdata/gdata-access-handler.h?id=746739fab55dbc0493ea4a29d68cf545e3f151dc
(In reply to Philip Withnall from comment #133) > (In reply to Debarshi Ray from comment #132) > > How about a new GDataAccessHandler vfunc (say get_rule_type) to plug in the > > GType from sub-classes? That would avoid duplicating the code. Although, > > given that it won't help GDataCalendarAccessHandler, it is probably not > > worth it. > > I thought about that when I added the get_rules() vfunc, but decided that > overall, the vfuncs should be as flexible as possible. We can always add > private abstract utility functions which the vfuncs pass a GType to, which > do the bulk of the work (i.e. share code between the services). We only have > a limited number of vfunc slots. Makes sense.
Created attachment 304969 [details] [review] core: Read-only properties are not going to be set via the public API
Created attachment 304970 [details] [review] core: Expose _gdata_access_rule_set_key() internally
Created attachment 304971 [details] [review] documents: Add support for editing ACLs using Drive v2
Created attachment 304972 [details] [review] calendar: Fix a path in the GDataCalendarAccessRule documentation
Review of attachment 304969 [details] [review]: ::: gdata/gdata-access-rule.c @@ -355,3 @@ - self->priv->key = g_value_dup_string (value); - g_object_notify (object, "key"); - break; Instead of removing it entirely, which should cause -Wswitch-enums warnings, please change it to: case PROP_KEY: /* Read-only; fall through */
Review of attachment 304970 [details] [review]: ++
Review of attachment 304971 [details] [review]: ++
Review of attachment 304972 [details] [review]: Whoops. :-(
(In reply to Philip Withnall from comment #139) > Review of attachment 304969 [details] [review] [review]: > > ::: gdata/gdata-access-rule.c > @@ -355,3 @@ > - self->priv->key = g_value_dup_string (value); > - g_object_notify (object, "key"); > - break; > > Instead of removing it entirely, which should cause -Wswitch-enums warnings, > please change it to: > > case PROP_KEY: > /* Read-only; fall through */ It doesn't warn because we are directly using a guint instead of an enumerated type.
Created attachment 304993 [details] [review] core: Read-only properties are not going to be set via the public API
Review of attachment 304993 [details] [review]: ++
(In reply to Debarshi Ray from comment #143) > (In reply to Philip Withnall from comment #139) > > Review of attachment 304969 [details] [review] [review] [review]: > > > > ::: gdata/gdata-access-rule.c > > @@ -355,3 @@ > > - self->priv->key = g_value_dup_string (value); > > - g_object_notify (object, "key"); > > - break; > > > > Instead of removing it entirely, which should cause -Wswitch-enums warnings, > > please change it to: > > > > case PROP_KEY: > > /* Read-only; fall through */ > > It doesn't warn because we are directly using a guint instead of an > enumerated type. Yeah, ideally I would go through and change all the switch (property_id) to switch ((FooBarProperty) property_id) and add typedef FooBarProperty to all the PROP_* enums. Maybe in some ideal future.
Created attachment 305076 [details] [review] documents: Support searching for readers using Drive v2 queries
Review of attachment 305076 [details] [review]: ++
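For context, searching by reader in Drive v2 goes through the files.list q parameter. An illustrative, unescaped request might look like the following (the email address is a placeholder, and in a real request the q value must be URL-encoded):

```
GET https://www.googleapis.com/drive/v2/files?q='user@example.com' in readers
```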
Created attachment 305153 [details] [review] documents: Split out the code to map a content type to a GType
Created attachment 305154 [details] [review] documents: Add support for uploads using Drive v2
Review of attachment 305153 [details] [review]: ++ with this minor change. ::: gdata/services/documents/gdata-documents-utils.c @@ +35,3 @@ + * + * Return value: a #GType corresponding to @content_type + */ ‘Since: UNRELEASED’ please.
Review of attachment 305154 [details] [review]: ++
(In reply to Philip Withnall from comment #151) > Review of attachment 305153 [details] [review] [review]: > > ++ with this minor change. > > ::: gdata/services/documents/gdata-documents-utils.c > @@ +35,3 @@ > + * > + * Return value: a #GType corresponding to @content_type > + */ > > ‘Since: UNRELEASED’ please. Even though this is an internal API? The documentation comment isn't supposed to be picked up by gtk-doc.
(In reply to Debarshi Ray from comment #153) > (In reply to Philip Withnall from comment #151) > > Review of attachment 305153 [details] [review] [review] [review]: > > > > ++ with this minor change. > > > > ::: gdata/services/documents/gdata-documents-utils.c > > @@ +35,3 @@ > > + * > > + * Return value: a #GType corresponding to @content_type > > + */ > > > > ‘Since: UNRELEASED’ please. > > Even though this is an internal API? The documentation comment isn't > supposed to be picked up by gtk-doc. Yeah, for general interest and consistency.
Created attachment 305300 [details] [review] documents: Split out the code to map a content type to a GType Pushed after adding 'Since: UNRELEASED' as suggested.
Created attachment 305301 [details] [review] documents: Add support for uploads using Drive v2 Removed comments that don't apply any more, added a link to https://developers.google.com/drive/web/manage-uploads and added "?uploadType=multipart" to the URI for (non-resumable) uploads as mentioned in the Drive documentation.
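As a sketch of what such a non-resumable upload looks like on the wire (boundary string, title, and content type are placeholders; see the manage-uploads page linked above): the multipart variant sends the JSON metadata and the media bytes together in one multipart/related body:

```
POST /upload/drive/v2/files?uploadType=multipart HTTP/1.1
Host: www.googleapis.com
Content-Type: multipart/related; boundary=END_OF_PART

--END_OF_PART
Content-Type: application/json; charset=UTF-8

{ "title": "Temporary Document" }

--END_OF_PART
Content-Type: application/vnd.oasis.opendocument.text

...file bytes...
--END_OF_PART--
```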
Created attachment 305302 [details] [review] tests: Use GDataDocumentsAccessRule for documents ACLs
Created attachment 305303 [details] [review] core: Use constructed instead of constructor
Created attachment 305305 [details] [review] documents: Support uploads to a specific location using Drive v2
Review of attachment 305301 [details] [review]: ++
Review of attachment 305302 [details] [review]: ++
Review of attachment 305303 [details] [review]: A bit of justification in the commit message would be nice.
Review of attachment 305305 [details] [review]: ::: gdata/services/documents/gdata-documents-utils.h @@ +26,3 @@ G_BEGIN_DECLS +#define GDATA_DOCUMENTS_URI_PREFIX "https://www.googleapis.com/drive/v2/files/" Please add a comment describing what this is.
Created attachment 305390 [details] [review] core: Use constructed instead of constructor Added some explanation in favour of constructed.
Created attachment 305391 [details] [review] documents: Support uploads to a specific location using Drive v2
Created attachment 305399 [details] [review] core: Use constructed instead of constructor
Review of attachment 305391 [details] [review]: ++
Review of attachment 305399 [details] [review]: ++
Created attachment 305410 [details] Sample test program to create a new folder, upload/delete/copy files
Created attachment 305411 [details] [review] documents: Set mimeType in JSON
Created attachment 305412 [details] [review] documents: Split out the code add the parent folder's link to an entry
Created attachment 305413 [details] [review] documents: Support creating folders and copying files using Drive v2
Review of attachment 305411 [details] [review]: ++
Review of attachment 305412 [details] [review]: With this change to the commit message: s/Split out the code add/Split out the code to add/
Review of attachment 305413 [details] [review]: Please document this clearly in the documentation comments for GDataDocumentsService (the overall section) and gdata_documents_service_add_entry_to_folder().
(In reply to Philip Withnall from comment #174) > Review of attachment 305412 [details] [review] [review]: > > With this change to the commit message: s/Split out the code add/Split out > the code to add/ Sorry for the typo. Fixed. I changed it to "Split out the code to add a parent folder's link to an entry" - ie. "a parent folder's" instead of "the parent folder's". Otherwise it wouldn't fit the 72 character limit. It is not entirely garbage since an entry can have multiple parents.
Created attachment 306072 [details] [review] documents: Support creating folders and copying files using Drive v2 As discussed over email, I changed the logic to put the copy only in the folder specified in the method's arguments, instead of also putting it where the source entry currently is. This makes sense to me because the copy will have a separate ID anyway, so putting it also in the source's location when a location was explicitly specified would be strange. This works by creating a new GDataEntry of the same type as the source entry and copying over the relevant metadata from the source. Currently it only copies the title, which should be enough for a good percentage of use-cases. It might make sense to copy the summary, authors and ACLs too? I have only updated the method's documentation for the time being. If this is OK, I will add an example at the top of the section too.
Review of attachment 306072 [details] [review]: “One must now use gdata_documents_service_add_entry_to_folder for creating folders” For *creating* folders? How? ::: gdata/services/documents/gdata-documents-service.c @@ +1167,1 @@ /* NOTE: adding a document to a folder doesn't have server-side ETag support (throws "noPostConcurrency" error) */ Is this comment still relevant? We should make sure that we’re using ETags wherever possible, or people _will_ encounter race conditions and potentially lose data as a result.
Created attachment 306371 [details] Sample test program to create a new folder, upload/delete/copy files
(In reply to Philip Withnall from comment #178) > Review of attachment 306072 [details] [review] [review]: > > “One must now use gdata_documents_service_add_entry_to_folder for creating > folders” > > For *creating* folders? How? Like this: * gdata_documents_folder_new * gdata_entry_set_title * gdata_documents_service_add_entry_to_folder See attachment 306371 [details] for the actual code. I did notice one limitation with folders though. They can't be copied like other entries (except maps). > ::: gdata/services/documents/gdata-documents-service.c > @@ +1167,1 @@ > /* NOTE: adding a document to a folder doesn't have server-side ETag > support (throws "noPostConcurrency" error) */ > > Is this comment still relevant? Good question. No, it isn't. Copying an existing entry works if I copy the entry's ETag to local_entry. Or was this comment about some other scenario?
Created attachment 306375 [details] [review] documents: Support creating folders and copying files using Drive v2 Document the inability to copy existing folders, and use the source entry's ETag.
Created attachment 306399 [details] [review] documents: Add support for updating content using Drive v2
(In reply to Debarshi Ray from comment #180) > (In reply to Philip Withnall from comment #178) > > Review of attachment 306072 [details] [review] [review] [review]: > > > > “One must now use gdata_documents_service_add_entry_to_folder for creating > > folders” > > > > For *creating* folders? How? > > Like this: > * gdata_documents_folder_new > * gdata_entry_set_title > * gdata_documents_service_add_entry_to_folder What about creating a top-level folder? That’s where the semantics of ‘add entry to folder’ grate a little for me. > > ::: gdata/services/documents/gdata-documents-service.c > > @@ +1167,1 @@ > > /* NOTE: adding a document to a folder doesn't have server-side ETag > > support (throws "noPostConcurrency" error) */ > > > > Is this comment still relevant? > > Good question. No, it isn't. Copying an existing entry works if I copy > entry's ETag to local_entry. Or was this comment about some other scenario? No, it was about this scenario. Glad that it works now.
Review of attachment 306375 [details] [review]: ++
Review of attachment 306399 [details] [review]: Sure.
Review of attachment 306375 [details] [review]: Actually, I just spotted an example at the top of gdata-documents-folder.c which needs to be updated.
(In reply to Philip Withnall from comment #183) > What about creating a top-level folder? That’s where the semantics of ‘add > entry to folder’ grate a little for me. I updated the GDataDocumentsFolder example to illustrate this.
Created attachment 306482 [details] [review] documents: Support creating folders and copying files using Drive v2
Review of attachment 306482 [details] [review]: Actually ++ this time!
rishi, we still need that sample program to be spruced up and committed, and all the unit tests to be updated. When will you get time for that?
(In reply to Philip Withnall from comment #190) > rishi, we still need that sample program to be spruced up and committed, and > all the unit tests to be updated. When will you get time for that? Of course. I am trying to wrap up the backend right now. Would it be fine if I got the tests and sample program in order before the 3.17.92 release?
(In reply to Debarshi Ray from comment #191) > (In reply to Philip Withnall from comment #190) > > rishi, we still need that sample program to be spruced up and committed, and > > all the unit tests to be updated. When will you get time for that? > > Of course. I am trying to wrap up the backend right now. Would it be fine if > I got the tests and sample program in order before the 3.17.92 release? Yup, please.
Created attachment 310532 [details] [review] documents: Parse quotaBytesUsed as GDataDocumentsEntry:quota-used I am having a feeling of déjà vu about the "long represented as string" part, but I can't remember where I did this.
Created attachment 310534 [details] [review] documents: Parse quotaBytesUsed as GDataDocumentsEntry:quota-used Add an explicit cast from gint64 to goffset. As far as I can see, it is always typedef'ed to gint64 in current glib, but it doesn't hurt to make this explicit.
Review of attachment 310534 [details] [review]: ::: gdata/services/documents/gdata-documents-entry.c @@ +618,3 @@ + */ + errno = 0; + val = g_ascii_strtoll (quota_used, &end_ptr, 10); Why signed and not unsigned? @@ +620,3 @@ + val = g_ascii_strtoll (quota_used, &end_ptr, 10); + if (errno == 0 && quota_used != end_ptr) + priv->quota_used = (goffset) val; I think it’s safe to just check whether `*end_ptr == '\0'`. If you check `quota_used != end_ptr` it would erroneously accept the string ‘100garbage’. And `errno` is just yucky.
(In reply to Philip Withnall from comment #195) > Review of attachment 310534 [details] [review] [review]: > > ::: gdata/services/documents/gdata-documents-entry.c > @@ +618,3 @@ > + */ > + errno = 0; > + val = g_ascii_strtoll (quota_used, &end_ptr, 10); > > Why signed and not unsigned? To reduce casting back and forth between signed and unsigned, and all the little things that can go wrong when doing it, because the final value is supposed to go into a goffset (i.e. gint64). I am still having to do some casting because I am putting this value in G_FILE_ATTRIBUTE_STANDARD_SIZE, which takes a guint64. Whatever you prefer. > @@ +620,3 @@ > + val = g_ascii_strtoll (quota_used, &end_ptr, 10); > + if (errno == 0 && quota_used != end_ptr) > + priv->quota_used = (goffset) val; > > I think it’s safe to just check whether `*end_ptr == '\0'`. If you check > `quota_used != end_ptr` it would erroneously accept the string ‘100garbage’. > And `errno` is just yucky. I stole this block from gdata_parser_int64_from_element for consistency, but I can change it if you want.
(In reply to Debarshi Ray from comment #196) > (In reply to Philip Withnall from comment #195) > > Review of attachment 310534 [details] [review] [review] [review]: > > > > ::: gdata/services/documents/gdata-documents-entry.c > > @@ +618,3 @@ > > + */ > > + errno = 0; > > + val = g_ascii_strtoll (quota_used, &end_ptr, 10); > > > > Why signed and not unsigned? > > To reduce casting back and forth between signed and unsigned and all the > little things that can go wrong when doing it because the final value is > supposed to go into a goffset (ie. gint64). I am still having to do some > casting because I am putting this value in G_FILE_ATTRIBUTE_STANDARD_SIZE, > which takes a guint64. > > Whatever you prefer. I would prefer the data type to match whatever makes sense for the semantics of the data. So if we’re never going to receive negative values from the server, let’s go unsigned. > > @@ +620,3 @@ > > + val = g_ascii_strtoll (quota_used, &end_ptr, 10); > > + if (errno == 0 && quota_used != end_ptr) > > + priv->quota_used = (goffset) val; > > > > I think it’s safe to just check whether `*end_ptr == '\0'`. If you check > > `quota_used != end_ptr` it would erroneously accept the string ‘100garbage’. > > And `errno` is just yucky. > > I stole this block from gdata_parser_int64_from_element for consistency, but > I can change it if you want. Ah, I’m to blame for the errno then. :-( Please do change it.
Created attachment 310647 [details] [review] documents: Parse quotaBytesUsed as GDataDocumentsEntry:quota-used
Created attachment 310650 [details] [review] core: Ignore overflows, but not alphamerics when parsing int64 from XML
Review of attachment 310650 [details] [review]: ++
Review of attachment 310647 [details] [review]: ++
Created attachment 310665 [details] [review] tests: Remove redundant if-else branches
Created attachment 310666 [details] [review] tests: Don't leak the authorizer
Created attachment 310667 [details] [review] documents: Port gdata_documents_service_copy_document to Drive v2 The tests use gdata_documents_service_copy_document, so let's make it work.
Review of attachment 310665 [details] [review]: ++ if the tests still pass/fail as before in online and offline mode.
Review of attachment 310666 [details] [review]: Good catch.
Review of attachment 310667 [details] [review]: ::: gdata/services/documents/gdata-documents-service.c @@ +988,3 @@ } + parent_folders_list = gdata_entry_look_up_links (GDATA_ENTRY (document), GDATA_LINK_PARENT); parent_folders_list is leaked. @@ +1006,3 @@ + break; + } + } This is the same hack as in gdata-documents-entry.c, right? Please factor it out into gdata-documents-utils.c.
(In reply to Philip Withnall from comment #205) > Review of attachment 310665 [details] [review] [review]: > > ++ if the tests still pass/fail as before in online and offline mode. make check is clean. I see some failures when running the test binaries with -c, but these are not affected by the presence of this patch. I will file bugs for those.
Still waiting for those sample programs and unit tests. Debarshi, any ETA on those? I only allowed the Drive stuff to land because I was promised unit tests (https://bugzilla.gnome.org/show_bug.cgi?id=684920#c190).
(In reply to Philip Withnall from comment #207) > Review of attachment 310667 [details] [review] [review]: > > [...] > > @@ +1006,3 @@ > + break; > + } > + } > > This is the same hack as in gdata-documents-entry.c, right? Please factor it > out into gdata-documents-utils.c. It is the same hack, but the logic is not identical. In gdata-documents-entry, we iterate over all the parents, extract the ID from each and do something with it. In this patch, we find the first parent from which we can extract an ID and break the loop.
(In reply to Philip Withnall from comment #209) > Still waiting for those sample programs and unit tests. Debarshi, any ETA on > those? I only allowed the Drive stuff to land because I was promised unit > tests (https://bugzilla.gnome.org/show_bug.cgi?id=684920#c190). I got buried under an avalanche of pending work that had accumulated while I was working on Drive. I'll finish this one now.
(In reply to Debarshi Ray from comment #210) > (In reply to Philip Withnall from comment #207) > > Review of attachment 310667 [details] [review] [review] [review]: > > > > [...] > > > > @@ +1006,3 @@ > > + break; > > + } > > + } > > > > This is the same hack as in gdata-documents-entry.c, right? Please factor it > > out into gdata-documents-utils.c. > > It is the same hack, but the logic is not identical. In > gdata-documents-entry, we iterate over all the parents, extract the ID from > each and do something with it. In this patch, we find the first parent from > which we can extract an ID and break the loop. OK. I think the leak in comment #207 still needs fixing.
Created attachment 315959 [details] [review] documents: Port gdata_documents_service_copy_document to Drive v2
Created attachment 315961 [details] [review] tests: Remove outdated async authentication tests from Documents
Review of attachment 315959 [details] [review]: ++
Review of attachment 315961 [details] [review]: ++
Comment on attachment 315959 [details] [review] documents: Port gdata_documents_service_copy_document to Drive v2 Pushed to master.
Comment on attachment 315961 [details] [review] tests: Remove outdated async authentication tests from Documents Pushed to master.
Created attachment 315977 [details] [review] tests: Set the right expectations when deleting using Drive v2
Created attachment 315980 [details] [review] documents: Fix a typo in the documentation comment
Created attachment 315981 [details] [review] tests: Make folder creation work with Drive v2
Review of attachment 315977 [details] [review]: ++
Review of attachment 315980 [details] [review]: ++
Review of attachment 315981 [details] [review]: ++
Comment on attachment 315977 [details] [review] tests: Set the right expectations when deleting using Drive v2 Thanks for the reviews, Philip. Pushed to master.
Created attachment 316564 [details] [review] tests: Align
Created attachment 316565 [details] [review] tests: Make metadata-only uploads work with Drive v2
Review of attachment 316564 [details] [review]: ++
Review of attachment 316565 [details] [review]: ++
Created attachment 319119 [details] [review] documents: Update URL to Drive v2 API documentation
Review of attachment 319119 [details] [review]: ++
Created attachment 319139 [details] [review] tests: Ensure that the test data for downloads is created with Drive v2
Review of attachment 319139 [details] [review]: ++
Created attachment 335997 [details] [review] documents: Drop unnecessary instance variable
Created attachment 336009 [details] [review] documents: Split out the code to add a content type to an entry
Created attachment 336010 [details] [review] documents: Retain the content type when adding an entry to a folder
Review of attachment 335997 [details] [review]: ++
Review of attachment 336009 [details] [review]: ++ ::: gdata/services/documents/gdata-documents-entry.c @@ +588,3 @@ return success; } else if (gdata_parser_string_from_json_member (reader, "mimeType", P_DEFAULT, &mime_type, &success, error) == TRUE) { + if (success && mime_type != NULL && mime_type[0] != '\0') Nitpick: Could drop the `mime_type != NULL && mime_type[0] != '\0'` here.
Review of attachment 336010 [details] [review]: ++
Created attachment 336016 [details] [review] documents: Split out the code to add a content type to an entry Dropped the `mime_type != NULL && mime_type[0] != '\0'` check.
Created attachment 336017 [details] [review] documents: Set the content type for GDataDocumentsDocument sub-classes
Created attachment 336018 [details] [review] tests: GDataDocumentsSpreadsheet is not meant for ODS files in Drive v2
Created attachment 336019 [details] [review] tests: ODTs are represented by GDataDocumentsDocument in Drive v2
Review of attachment 336016 [details] [review]: ++
Review of attachment 336017 [details] [review]: ++
Review of attachment 336018 [details] [review]: eh, how not?
Review of attachment 336019 [details] [review]: I'm not sure why this is necessary either.
(In reply to Philip Withnall from comment #246) > Review of attachment 336018 [details] [review] [review]: > > eh, how not? This is needed to make the test_delete_document test work. With attachment 336017 [details] [review], GDataDocumentsSpreadsheet has a specific MIME type - application/vnd.google-apps.spreadsheet (also see below). If we use GDataDocumentsSpreadsheet to represent an ODS, then gdata_service_delete_entry fails because it thinks that we don't have the latest revision of the entry. I think we can solve this some other way by doing a 'single query' and using an updated entry, but this seems more correct to me. (In reply to Philip Withnall from comment #247) > Review of attachment 336019 [details] [review] [review]: > > I'm not sure why this is necessary either. This is needed to avoid hitting the g_assert_not_reached in _test_download_document in gdata/tests/documents.c with the "Temporary Arbitrary Document". We create a GDataDocumentsDocument to represent a test.odt file, upload it with a GDataUploadStream in _set_up_temp_document, then query it and cast it to a GDataDocumentsDocument. I suspect that underneath that cast, the result of the query is actually a GDataDocumentsText. Otherwise, _test_download_document would have failed to work before. This means that, somehow, the GDataDocumentsDocument that we started with has become a GDataDocumentsText, which looks a bit bizarre to me. :) I am not familiar with the old Documents v3 API, but to me Drive v2 seems more POSIX-y - everything is a file (including folders) with a MIME type. That is why the libgdata code (e.g. gdata_documents_utils_get_type_from_content_type) assumes that everything is a GDataDocumentsDocument, unless it is a folder, or one of the Drive-specific MIME types for which we have a GDataDocumentsDocument subclass. Therefore, the GDataDocumentsDocument to GDataDocumentsText conversion doesn't happen in the current libgdata code, and we hit the assertion.
Created attachment 336162 [details] [review] tests: ODTs are represented by GDataDocumentsDocument in Drive v2 Expanded the commit message to clarify why this is required.
Created attachment 336163 [details] [review] tests: ODTs are represented by GDataDocumentsDocument in Drive v2
Created attachment 336164 [details] [review] tests: Make folder creation work with Drive v2
Created attachment 336165 [details] [review] tests: GDATA_LINK_PARENT links don't have a title in Drive v2
These, plus the other hacky commits in libgdata:wip/rishi/drive gets us through most of the test cases. Right now /documents/folders/remove_from_folder is failing with: # > DELETE /feeds/default/private/full/document%3A0B9J_DkaiBDRFSU1PTDRMbGlobGc/contents/document%3A0B9J_DkaiBDRFQXN2REFBb0hLV0U HTTP/1.1 # > Soup-Debug-Timestamp: 1474643489 # > Soup-Debug: SoupSession 1 (0xac5240), SoupMessage 156 (0xaec370), SoupSocket 78 (0xd5b500) # > Host: docs.google.com # > Authorization: Bearer ya29.Ci9nA_RTK2UzpswYLHIqydAJyR2Bpzk4hrz1XcnsJUlbGCRMYyHKgGFfOa3HsWIJ1Q # > GData-Version: 3 # > If-Match: "4Hh092tvAgcmaH9ZBXiK3CtxXlw/MTQ3NDY0MzQ4MjkxMQ" # > Accept-Encoding: gzip, deflate # > User-Agent: libgdata/0.17.7 - gzip # > Connection: Keep-Alive # # < HTTP/1.1 401 Token invalid - AuthSub token has wrong scope # < Soup-Debug-Timestamp: 1474643489 # < Soup-Debug: SoupMessage 156 (0xaec370) # < WWW-Authenticate: AuthSub realm="http://www.google.com/accounts/AuthSubRequest" # < Content-Type: text/html; charset=UTF-8 # < Content-Encoding: gzip # < Date: Fri, 23 Sep 2016 15:11:29 GMT # < Expires: Fri, 23 Sep 2016 15:11:29 GMT # < Cache-Control: private, max-age=0 # < X-Content-Type-Options: nosniff # < X-Frame-Options: SAMEORIGIN # < X-XSS-Protection: 1; mode=block # < Server: GSE # < Alt-Svc: quic=":443"; ma=2592000; v="36,35,34,33,32" # < Transfer-Encoding: chunked # < Entity: line 1: parser error : Premature end of data in tag HTML line 1 Entity: line 1: parser error : Premature end of data in tag HEAD line 1 # <?xml version="1.0"?> # <TITLE>Token invalid - AuthSub token has wrong scope</TITLE> # Entity: line 1: parser error : StartTag: invalid element name </HEAD> ^ Entity: line 1: parser error : Extra content at the end of the document </HEAD> ^ Entity: line 1: parser error : Premature end of data in tag BODY line 1 # <?xml version="1.0"?> # <H1>Token invalid - AuthSub token has wrong scope</H1> # # <?xml version="1.0"?> # <H2>Error 401</H2> # Entity: line 1: parser error : 
StartTag: invalid element name </BODY> ^ Entity: line 1: parser error : Extra content at the end of the document </BODY> ^ Entity: line 1: parser error : StartTag: invalid element name </HTML> ^ Entity: line 1: parser error : Extra content at the end of the document </HTML> ^ # # > POST /o/oauth2/token HTTP/1.1 # > Soup-Debug-Timestamp: 1474643489 # > Soup-Debug: SoupSession 1 (0xac5110), SoupMessage 4 (0xaec190), SoupSocket 4 (0xab4b40) # > Host: accounts.google.com # > Content-Type: application/x-www-form-urlencoded # > Accept-Encoding: gzip, deflate # > User-Agent: libgdata/0.17.7 - gzip # > Connection: Keep-Alive # > # > client_id=352818697630-nqu2cmt5quqd6lr17ouoqmb684u84l1f.apps.googleusercontent.com&client_secret=-fA4pHQJxR3zJ-FyAMPQsikg&refresh_token=1%2FMkl_VW4AsGQ7JICOvxO4EwPXf39Rxt2eKdcbIi-9LnY&grant_type=refresh_token # # < HTTP/1.1 200 OK # < Soup-Debug-Timestamp: 1474643489 # < Soup-Debug: SoupMessage 4 (0xaec190) # < Content-Type: application/json; charset=utf-8 # < X-Content-Type-Options: nosniff # < Cache-Control: no-cache, no-store, max-age=0, must-revalidate # < Pragma: no-cache # < Expires: Mon, 01 Jan 1990 00:00:00 GMT # < Date: Fri, 23 Sep 2016 15:11:29 GMT # < Content-Disposition: attachment; filename="json.txt"; filename*=UTF-8''json.txt # < Content-Encoding: gzip # < X-Frame-Options: SAMEORIGIN # < X-XSS-Protection: 1; mode=block # < Server: GSE # < Alt-Svc: quic=":443"; ma=2592000; v="36,35,34,33,32" # < Transfer-Encoding: chunked # < # < { # < "access_token" : "ya29.Ci9nA9hXjVRk1XDBsl2RN8bwrOzVIfa-H1n7ePj5SnSkWB1-FmQeSiPfwpXFBLHrqg", # < "token_type" : "Bearer", # < "expires_in" : 3600 # < } # # > DELETE /feeds/default/private/full/document%3A0B9J_DkaiBDRFSU1PTDRMbGlobGc/contents/document%3A0B9J_DkaiBDRFQXN2REFBb0hLV0U HTTP/1.1 # > Soup-Debug-Timestamp: 1474643489 # > Soup-Debug: SoupSession 1 (0xac5240), SoupMessage 156 (0xaec370), SoupSocket 78 (0xd5b500), restarted # > Host: docs.google.com # > Authorization: Bearer 
ya29.Ci9nA_RTK2UzpswYLHIqydAJyR2Bpzk4hrz1XcnsJUlbGCRMYyHKgGFfOa3HsWIJ1Q # > GData-Version: 3 # > If-Match: "4Hh092tvAgcmaH9ZBXiK3CtxXlw/MTQ3NDY0MzQ4MjkxMQ" # > Accept-Encoding: gzip, deflate # > Connection: Keep-Alive # > Authorization: Bearer ya29.Ci9nA9hXjVRk1XDBsl2RN8bwrOzVIfa-H1n7ePj5SnSkWB1-FmQeSiPfwpXFBLHrqg # > User-Agent: libgdata/0.17.7 - gzip # # < HTTP/1.1 401 Token invalid - AuthSub token has wrong scope # < Soup-Debug-Timestamp: 1474643489 # < Soup-Debug: SoupMessage 156 (0xaec370) # < WWW-Authenticate: AuthSub realm="http://www.google.com/accounts/AuthSubRequest" # < Content-Type: text/html; charset=UTF-8 # < Content-Encoding: gzip # < Date: Fri, 23 Sep 2016 15:11:29 GMT # < Expires: Fri, 23 Sep 2016 15:11:29 GMT # < Cache-Control: private, max-age=0 # < X-Content-Type-Options: nosniff # < X-Frame-Options: SAMEORIGIN # < X-XSS-Protection: 1; mode=block # < Server: GSE # < Alt-Svc: quic=":443"; ma=2592000; v="36,35,34,33,32" # < Transfer-Encoding: chunked # < Entity: line 1: parser error : Premature end of data in tag HTML line 1 Entity: line 1: parser error : Premature end of data in tag HEAD line 1 # <?xml version="1.0"?> # <TITLE>Token invalid - AuthSub token has wrong scope</TITLE> # Entity: line 1: parser error : StartTag: invalid element name </HEAD> ^ Entity: line 1: parser error : Extra content at the end of the document </HEAD> ^ Entity: line 1: parser error : Premature end of data in tag BODY line 1 # <?xml version="1.0"?> # <H1>Token invalid - AuthSub token has wrong scope</H1> # # <?xml version="1.0"?> # <H2>Error 401</H2> # Entity: line 1: parser error : StartTag: invalid element name </BODY> ^ Entity: line 1: parser error : Extra content at the end of the document </BODY> ^ Entity: line 1: parser error : StartTag: invalid element name </HTML> ^ Entity: line 1: parser error : Extra content at the end of the document </HTML> ^ # ** libgdata:ERROR:documents.c:1370:test_folders_remove_from_folder: assertion failed (error == NULL): 
Authentication required: <HTML> <HEAD> <TITLE>Token invalid - AuthSub token has wrong scope</TITLE> </HEAD> <BODY BGCOLOR="#FFFFFF" TEXT="#000000"> <H1>Token invalid - AuthSub token has wrong scope</H1> <H2>Error 401</H2> </BODY> </HTML> (gdata-service-error-quark, 4) Slowly getting there!
(In reply to Debarshi Ray from comment #253) > These, plus the other hacky commits in libgdata:wip/rishi/drive gets us > through most of the test cases. Right now > /documents/folders/remove_from_folder is failing with: That's because gdata_documents_service_remove_entry_from_folder doesn't work with Drive v2.
(In reply to Debarshi Ray from comment #248) > (In reply to Philip Withnall from comment #246) > > Review of attachment 336018 [details] [review] [review] [review]: > > > > eh, how not? > > This is needed to make the test_delete_document test work. > > With attachment 336017 [details] [review] [review], GDataDocumentsSpreadsheet has a > specific MIME type - application/vnd.google-apps.spreadsheet (also see > below). If we use GDataDocumentsSpreadsheet to represent an ODS, then > gdata_service_delete_entry fails because it thinks that we don't have the > latest revision of the entry. I think we can solve this some other way by > doing a 'single query' and using an updated entry, but this seems more > correct to me. It’s normal to need to do a single query to refresh an entry before deleting it; it seems that Google do some asynchronous server-side updating of entries after upload, which means that the entry we’re sent in response to an upload is sometimes outdated a few seconds later. :-( (I haven’t had a chance to check through the code to see if this fits though, so if I’m talking rubbish please say.) > (In reply to Philip Withnall from comment #247) > > Review of attachment 336019 [details] [review] [review] [review]: > > > > I'm not sure why this is necessary either. > > This is needed to avoid hitting the g_assert_not_reached in > _test_download_document in gdata/tests/documents.c with the "Temporary > Arbitrary Document". > *snip* > Therefore, the GDataDocumentsDocument to GDataDocumentsText conversion > doesn't happen in the current libgdata code, and we hit the assertion. That’s what seems a bit odd to me — we’re uploading an ODT file, so you’d expect it to be handled as a GDataDocumentsText object, because it’s a text file. Maybe we should just deprecate the GDataDocumentsDocument subclasses?
Review of attachment 336164 [details] [review]: ++
Review of attachment 336165 [details] [review]: ++
(In reply to Philip Withnall from comment #255)
> (In reply to Debarshi Ray from comment #248)
> > With attachment 336017 [details] [review] [review] [review], GDataDocumentsSpreadsheet has a
> > specific MIME type - application/vnd.google-apps.spreadsheet (also see
> > below). If we use GDataDocumentsSpreadsheet to represent an ODS, then
> > gdata_service_delete_entry fails because it thinks that we don't have the
> > latest revision of the entry. I think we can solve this some other way by
> > doing a 'single query' and using an updated entry, but this seems more
> > correct to me.
>
> It’s normal to need to do a single query to refresh an entry before deleting
> it; it seems that Google do some asynchronous server-side updating of
> entries after upload, which means that the entry we’re sent in response to
> an upload is sometimes outdated a few seconds later. :-(
>
> (I haven’t had a chance to check through the code to see if this fits
> though, so if I’m talking rubbish please say.)

Yes, I think you are right. Just changing GDataDocumentsSpreadsheet to GDataDocumentsDocument didn't fix the error where it thinks the entry changed behind our backs. For a while it seemed that it did, but it was a false positive. So, yes, I think we need to add a single query here.

However, I still think, even if for purely pedantic reasons, that we shouldn't use a GDataDocumentsSpreadsheet to represent an ODS with Drive v2. More on that below.

> > Therefore, the GDataDocumentsDocument to GDataDocumentsText conversion
> > doesn't happen in the current libgdata code, and we hit the assertion.
>
> That’s what seems a bit odd to me — we’re uploading an ODT file, so you’d
> expect it to be handled as a GDataDocumentsText object, because it’s a text
> file.

It boils down to whether each GDataDocumentsDocument subclass should represent a single content type or multiple ones. Things like gdata_documents_utils_get_type_from_content_type and commit 85c99df27bda expect the former.
It does make the internals a bit simpler. E.g., when you create an instance with gdata_documents_document_new, we can bind it to application/vnd.google-apps.document. However, if we also want it to represent ODTs, then we need to change the content type to that of an ODT when we start uploading via GDataUploadStream (see the explanation in attachment 336163 [details] [review]). It's not the end of the world, just a tad more complicated.

See below for another justification for this.

> Maybe we should just deprecate the GDataDocumentsDocument subclasses?

Yes, I thought about this earlier, but now I am not so sure. The Drive-specific content types seem to be a bit special. We can't really upload a Google Document from GNOME; we can only create an empty one. On the other hand, we can upload an ODT, and for all practical purposes it is just a regular file with a content type. Hence, it is fine for gdata_documents_document_new to represent a new/empty file as an entry that's an octet stream until any data has been uploaded. Once any data is uploaded, the server will figure out the content type, and we will get an updated GDataDocumentsDocument of the same type.

However, it is handy to be able to create a GDataDocumentsDocument to make it clear that it's an empty Google Document. If we were to do that with a generic class, then the application would have to explicitly add the content type (i.e., the GDataCategory) to the entry, which is error-prone.
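To make the "one content type per subclass" rule concrete, here is a sketch in plain C. All the names below (DocKind, doc_kind_from_content_type) are made up for illustration; the real gdata_documents_utils_get_type_from_content_type presumably returns a GType for a GDataDocumentsDocument subclass.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch: only the Drive-specific content types map to a
 * concrete subclass; everything else (ODT, ODS, octet streams) stays a
 * generic document, so an uploaded ODS is *not* represented as a
 * GDataDocumentsSpreadsheet. */
typedef enum {
    DOC_KIND_DOCUMENT,     /* like GDataDocumentsText */
    DOC_KIND_SPREADSHEET,  /* like GDataDocumentsSpreadsheet */
    DOC_KIND_PRESENTATION, /* like GDataDocumentsPresentation */
    DOC_KIND_GENERIC       /* plain GDataDocumentsDocument */
} DocKind;

static DocKind
doc_kind_from_content_type (const char *content_type)
{
    if (strcmp (content_type, "application/vnd.google-apps.document") == 0)
        return DOC_KIND_DOCUMENT;
    if (strcmp (content_type, "application/vnd.google-apps.spreadsheet") == 0)
        return DOC_KIND_SPREADSHEET;
    if (strcmp (content_type, "application/vnd.google-apps.presentation") == 0)
        return DOC_KIND_PRESENTATION;
    return DOC_KIND_GENERIC;
}
```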
Created attachment 336619 [details] [review] documents: Split out the code to get an ID from a GDataLink
Created attachment 336620 [details] [review] Port gdata_documents_service_remove_entry_from_folder to Drive v2
Created attachment 336621 [details] [review] tests: Fix the set up for /documents/folders/remove_from_folder
It isn't clear to me how the asynchronous cancellation tests were running earlier. First, libgdata doesn't use g_simple_async_result_set_check_cancellable. That means that if the cancellation happens between the GSimpleAsyncThreadFunc completing successfully and the _finish function being called, then the GError won't be set. This is racy, no? Second, in case of /documents/folders/remove_from_folder/async/cancellation, the entry can still be removed from the folder if the cancellation occurred while the message was in flight. It can happen if the request had already reached the server, but we were waiting for the response. However, now that the entry has been removed, the next iteration of the cancellation timeout loop will fail with a "parent not found" error, which will fail the test. How was this working so far? Third, I am seeing this strange behaviour where a PUT message seems to be sent twice for no reason. It doesn't happen for the synchronous variant of the method. URL: https://www.googleapis.com/drive/v2/files/0B9J_DkaiBDRFWjgxSHR0dHp1dzQ The chain of function calls is: gdata_documents_service_remove_entry_from_folder gdata_service_update_entry _gdata_service_actually_send_message Logs: # > PUT /drive/v2/files/0B9J_DkaiBDRFWjgxSHR0dHp1dzQ HTTP/1.1 # > Soup-Debug-Timestamp: 1475233953 # > Soup-Debug: SoupSession 1 (0x1728240), SoupMessage 6 (0x1a35730), SoupSocket 3 (0x1717a70) # > Host: www.googleapis.com # > Authorization: Bearer ya29.Ci9uA9kcOEXRGCq-D0GpeqAeeQsCkj9IO1gU7tVxJE5uA2lnaCozQoDl0MzDzgqz9w # > GData-Version: 3 # > If-Match: "FXFlDpx3iVh8-JZ3_98ya-W0yh4/MTQ3NTIzMzk0NzY0Nw" # > Content-Type: application/json # > Accept-Encoding: gzip, deflate # > User-Agent: libgdata/0.17.7 - gzip # > Connection: Keep-Alive # > # > 
{"title":"add_file_folder_move_text","id":"0B9J_DkaiBDRFWjgxSHR0dHp1dzQ","updated":"2016-09-30T11:12:27Z","kind":"http://schemas.google.com/docs/2007#file","etag":"\"FXFlDpx3iVh8-JZ3_98ya-W0yh4/MTQ3NTIzMzk0NzY0Nw\"","selfLink":"https://www.googleapis.com/drive/v2/files/0B9J_DkaiBDRFWjgxSHR0dHp1dzQ","mimeType":"application/vnd.oasis.opendocument.text","parents":[{"kind":"drive#fileLink","id":"0ANJ_DkaiBDRFUk9PVA"}],"userPermission":{"kind":"drive#permission","etag":"\"FXFlDpx3iVh8-JZ3_98ya-W0yh4/gvbSY42k45Ty7JHS7I6X-Q_I-TI\"","id":"me","selfLink":"https://www.googleapis.com/drive/v2/files/0B9J_DkaiBDRFWjgxSHR0dHp1dzQ/permissions/me","role":"owner","type":"user"},"fileSize":"8033","explicitlyTrashed":false,"modifiedByMeDate":"2016-09-30T11:12:27.647Z","ownerNames":["Debarshi Ray"],"originalFilename":"add_file_folder_move_text","webContentLink":"https://docs.google.com/uc?id=0B9J_DkaiBDRFWjgxSHR0dHp1dzQ&export=download","md5Checksum":"95439a6d6456a79d43980c05c9f25c7d","writersCanShare":true,"headRevisionId":"0B9J_DkaiBDRFbUh5TFc0UDd4ODhsc2RLQjJwaVNCamR2K2lvPQ","lastModifyingUser":{"kind":"drive#user","displayName":"Debarshi Ray","picture":{"url":"https://lh3.googleusercontent.com/-9eTkLDDUs_0/AAAAAAAAAAI/AAAAAAAAAEo/umY9GnmATq4/s64/photo.jpg"},"isAuthenticatedUser":true,"permissionId":"08252394466501353579","emailAddress":"debarshi.ray@gmail.com"},"markedViewedByMeDate":"1970-01-01T00:00:00.000Z","version":"36174","lastModifyingUserName":"Debarshi Ray","appDataContents":false,"editable":true,"spaces":["drive"],"iconLink":"https://ssl.gstatic.com/docs/doclist/images/icon_10_word_list.png","fileExtension":"","copyable":true} # # > PUT /drive/v2/files/0B9J_DkaiBDRFWjgxSHR0dHp1dzQ HTTP/1.1 # > Soup-Debug-Timestamp: 1475233954 # > Soup-Debug: SoupSession 1 (0x1728240), SoupMessage 6 (0x1a35730), SoupSocket 4 (0x1717a70), restarted # > Host: www.googleapis.com # > Authorization: Bearer ya29.Ci9uA9kcOEXRGCq-D0GpeqAeeQsCkj9IO1gU7tVxJE5uA2lnaCozQoDl0MzDzgqz9w # > 
GData-Version: 3 # > If-Match: "FXFlDpx3iVh8-JZ3_98ya-W0yh4/MTQ3NTIzMzk0NzY0Nw" # > Content-Type: application/json # > Accept-Encoding: gzip, deflate # > Connection: Keep-Alive # > Content-Length: 1647 # > User-Agent: libgdata/0.17.7 - gzip # > # > {"title":"add_file_folder_move_text","id":"0B9J_DkaiBDRFWjgxSHR0dHp1dzQ","updated":"2016-09-30T11:12:27Z","kind":"http://schemas.google.com/docs/2007#file","etag":"\"FXFlDpx3iVh8-JZ3_98ya-W0yh4/MTQ3NTIzMzk0NzY0Nw\"","selfLink":"https://www.googleapis.com/drive/v2/files/0B9J_DkaiBDRFWjgxSHR0dHp1dzQ","mimeType":"application/vnd.oasis.opendocument.text","parents":[{"kind":"drive#fileLink","id":"0ANJ_DkaiBDRFUk9PVA"}],"userPermission":{"kind":"drive#permission","etag":"\"FXFlDpx3iVh8-JZ3_98ya-W0yh4/gvbSY42k45Ty7JHS7I6X-Q_I-TI\"","id":"me","selfLink":"https://www.googleapis.com/drive/v2/files/0B9J_DkaiBDRFWjgxSHR0dHp1dzQ/permissions/me","role":"owner","type":"user"},"fileSize":"8033","explicitlyTrashed":false,"modifiedByMeDate":"2016-09-30T11:12:27.647Z","ownerNames":["Debarshi Ray"],"originalFilename":"add_file_folder_move_text","webContentLink":"https://docs.google.com/uc?id=0B9J_DkaiBDRFWjgxSHR0dHp1dzQ&export=download","md5Checksum":"95439a6d6456a79d43980c05c9f25c7d","writersCanShare":true,"headRevisionId":"0B9J_DkaiBDRFbUh5TFc0UDd4ODhsc2RLQjJwaVNCamR2K2lvPQ","lastModifyingUser":{"kind":"drive#user","displayName":"Debarshi Ray","picture":{"url":"https://lh3.googleusercontent.com/-9eTkLDDUs_0/AAAAAAAAAAI/AAAAAAAAAEo/umY9GnmATq4/s64/photo.jpg"},"isAuthenticatedUser":true,"permissionId":"08252394466501353579","emailAddress":"debarshi.ray@gmail.com"},"markedViewedByMeDate":"1970-01-01T00:00:00.000Z","version":"36174","lastModifyingUserName":"Debarshi Ray","appDataContents":false,"editable":true,"spaces":["drive"],"iconLink":"https://ssl.gstatic.com/docs/doclist/images/icon_10_word_list.png","fileExtension":"","copyable":true} # # < HTTP/1.1 412 Precondition Failed # < Soup-Debug-Timestamp: 1475233954 # < Soup-Debug: 
SoupMessage 6 (0x1a35730) # < Vary: Origin # < Vary: X-Origin # < Content-Type: application/json; charset=UTF-8 # < Content-Encoding: gzip # < Date: Fri, 30 Sep 2016 11:12:34 GMT # < Expires: Fri, 30 Sep 2016 11:12:34 GMT # < Cache-Control: private, max-age=0 # < X-Content-Type-Options: nosniff # < X-Frame-Options: SAMEORIGIN # < X-XSS-Protection: 1; mode=block # < Server: GSE # < Alt-Svc: quic=":443"; ma=2592000; v="36,35,34,33,32" # < Transfer-Encoding: chunked # < # < { # < "error": { # < "errors": [ # < { # < "domain": "global", # < "reason": "conditionNotMet", # < "message": "Precondition Failed", # < "locationType": "header", # < "location": "If-Match" # < } # < ], # < "code": 412, # < "message": "Precondition Failed" # < } # < } #
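As an aside, the 412 at the end of that log is the server refusing a conditional request whose If-Match ETag has gone stale, which is exactly the "refresh with a single query first" situation from comment #255. A toy model in plain C (all names and ETag strings made up) of why a refresh before the retry fixes it:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical model: the server holds the current ETag; Google's
 * asynchronous server-side updates mean the ETag we got back from the
 * upload response can already be outdated by the time we send a PUT. */
struct server_file {
    const char *etag;   /* current server-side ETag */
};

/* Conditional update: honoured only if the client's If-Match ETag is
 * current, otherwise 412 Precondition Failed, as in the log above. */
static int
server_conditional_update (struct server_file *f, const char *if_match)
{
    if (strcmp (f->etag, if_match) != 0)
        return 412;
    f->etag = "etag-after-update";   /* update succeeds, ETag changes */
    return 200;
}

/* A 'single query' just fetches the entry, including its current ETag. */
static const char *
server_query_etag (const struct server_file *f)
{
    return f->etag;
}
```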
Latest code is in libgdata:wip/rishi/drive, in case you want to try it out.
Review of attachment 336619 [details] [review]: ++
Review of attachment 336620 [details] [review]: ++
Review of attachment 336621 [details] [review]: ++
(In reply to Debarshi Ray from comment #262) > It isn't clear to me how the asynchronous cancellation tests were running > earlier. > > First, libgdata doesn't use g_simple_async_result_set_check_cancellable. > That means that if the cancellation happens between the > GSimpleAsyncThreadFunc completing successfully and the _finish function > being called, then the GError won't be set. This is racy, no? The thinking there is that cancellation is inherently racy — if you cancel sometime after sending a request but before receiving a response, you have no way of knowing what state changes take effect on the server, and there’s no way to send a subsequent request to the server saying ”actually, cancel that last thing”. So libgdata takes the approach that cancellation can cause an early return before a request is sent, but it would be more helpful to return the server’s actual response if that is received, rather than simply returning G_IO_ERROR_CANCELLED. The calling code should then be ready to handle various server responses including success, or errors due to cancelling the request part-way through. > Second, in case of /documents/folders/remove_from_folder/async/cancellation, > the entry can still be removed from the folder if the cancellation occurred > while the message was in flight. It can happen if the request had already > reached the server, but we were waiting for the response. However, now that > the entry has been removed, the next iteration of the cancellation timeout > loop will fail with a "parent not found" error, which will fail the test. > How was this working so far? Unclear. 
> Logs: > # > PUT /drive/v2/files/0B9J_DkaiBDRFWjgxSHR0dHp1dzQ HTTP/1.1 > # > Soup-Debug-Timestamp: 1475233953 > # > Soup-Debug: SoupSession 1 (0x1728240), SoupMessage 6 (0x1a35730), > SoupSocket 3 (0x1717a70) > # > Host: www.googleapis.com > # > Authorization: Bearer > ya29.Ci9uA9kcOEXRGCq-D0GpeqAeeQsCkj9IO1gU7tVxJE5uA2lnaCozQoDl0MzDzgqz9w > # > GData-Version: 3 > # > If-Match: "FXFlDpx3iVh8-JZ3_98ya-W0yh4/MTQ3NTIzMzk0NzY0Nw" > # > Content-Type: application/json > # > Accept-Encoding: gzip, deflate > # > User-Agent: libgdata/0.17.7 - gzip > # > Connection: Keep-Alive > # > > # > > *snip* > # > # > PUT /drive/v2/files/0B9J_DkaiBDRFWjgxSHR0dHp1dzQ HTTP/1.1 > # > Soup-Debug-Timestamp: 1475233954 > # > Soup-Debug: SoupSession 1 (0x1728240), SoupMessage 6 (0x1a35730), > SoupSocket 4 (0x1717a70), restarted Could the ‘restarted’ be a clue?
Rishi, any idea when you’ll get time to finish this off? Google have now released v3 of the Drive API, and we’re going to have to port to that at some point. :-)
(In reply to Philip Withnall from comment #268) > Rishi, any idea when you’ll get time to finish this off? I am going to push this forward a bit today. > Google have now > released v3 of the Drive API, and we’re going to have to port to that at > some point. :-) Can we skip v3 and directly move to whatever comes after that?
Thankfully I can't find a deprecation notice in https://developers.google.com/drive/v2/reference/ and the delta with v3 seems small.
(In reply to Debarshi Ray from comment #269) > (In reply to Philip Withnall from comment #268) > > Google have now > > released v3 of the Drive API, and we’re going to have to port to that at > > some point. :-) > > Can we skip v3 and directly move to whatever comes after that? Give me your time travel machine, yesterday. (In reply to Debarshi Ray from comment #270) > Thankfully I can't find a deprecation notice in > https://developers.google.com/drive/v2/reference/ and the delta with v3 > seems small. Yeah, hopefully it will be fairly painless. Let’s open a new bug about that one (bug #786041).
(In reply to Philip Withnall from comment #267)
> (In reply to Debarshi Ray from comment #262)
> > It isn't clear to me how the asynchronous cancellation tests were running
> > earlier.
> >
> > First, libgdata doesn't use g_simple_async_result_set_check_cancellable.
> > That means that if the cancellation happens between the
> > GSimpleAsyncThreadFunc completing successfully and the _finish function
> > being called, then the GError won't be set. This is racy, no?
>
> The thinking there is that cancellation is inherently racy — if you cancel
> sometime after sending a request but before receiving a response, you have
> no way of knowing what state changes take effect on the server, and there’s
> no way to send a subsequent request to the server saying ”actually, cancel
> that last thing”. So libgdata takes the approach that cancellation can cause
> an early return before a request is sent, but it would be more helpful to
> return the server’s actual response if that is received, rather than simply
> returning G_IO_ERROR_CANCELLED. The calling code should then be ready to
> handle various server responses including success, or errors due to
> cancelling the request part-way through.

This sounds bad.

It is a pretty widespread convention that GAsyncResult-based API will always throw G_IO_ERROR_CANCELLED if the GCancellable was triggered. Not doing that can lead to memory errors. Imagine an instance which does some asynchronous operations internally, and passes itself around as user_data.

static void
foo_bar_async_op_cb (GObject *source_object, GAsyncResult *res, gpointer user_data)
{
  MyObject *self;
  FooBar *bar = FOO_BAR (source_object);
  GError *error = NULL;

  if (!foo_bar_async_op_finish (bar, res, &error))
    {
      if (g_error_matches (error, G_IO_ERROR, G_IO_ERROR_CANCELLED))
        {
          /* self was disposed */
          ...
          return;
        }
    }

  self = MY_OBJECT (user_data);
  ...
}

static void
my_object_some_method (MyObject *self, ...)
{
  ...
  foo_bar_async_op (bar, ..., self->cancellable, foo_bar_async_op_cb, self);
  ...
}

static void
my_object_dispose (GObject *object)
{
  MyObject *self = MY_OBJECT (object);

  if (self->cancellable != NULL)
    {
      g_cancellable_cancel (self->cancellable);
      g_clear_object (&self->cancellable);
    }

  ...
}

Yes, one can also keep a reference on 'self', but that's non-ideal. It keeps self alive even though there are no external users of it. If 'self' goes on to emit a signal, then it can surprise any past users who didn't bother to use g_signal_connect_object because they thought they were the sole users of it.
(In reply to Philip Withnall from comment #271) > (In reply to Debarshi Ray from comment #269) > > (In reply to Philip Withnall from comment #268) > > > Google have now > > > released v3 of the Drive API, and we’re going to have to port to that at > > > some point. :-) > > > > Can we skip v3 and directly move to whatever comes after that? > > Give me your time travel machine, yesterday. The number of people lining up to keep our SDKs updated versus the rate at which online service providers churn out new versions paints a very sorry picture. Right now, I am looking at dealing with libzapojit / Microsoft, libgfbgraph / Facebook, and figuring out what to do with libgdata / Google Photos. Plus, keeping gnome-online-accounts working across whatever tweaks these people do to their OAuth flows. So, while I'd like to implement the latest version, realistically, we might have to skip a version if the one that we support is not going to be deprecated any time soon.
(In reply to Debarshi Ray from comment #272) > (In reply to Philip Withnall from comment #267) > > (In reply to Debarshi Ray from comment #262) > > > It isn't clear to me how the asynchronous cancellation tests were running > > > earlier. > > > > > > First, libgdata doesn't use g_simple_async_result_set_check_cancellable. > > > That means that if the cancellation happens between the > > > GSimpleAsyncThreadFunc completing successfully and the _finish function > > > being called, then the GError won't be set. This is racy, no? > > > > The thinking there is that cancellation is inherently racy — if you cancel > > sometime after sending a request but before receiving a response, you have > > no way of knowing what state changes take effect on the server, and there’s > > no way to send a subsequent request to the server saying ”actually, cancel > > that last thing”. So libgdata takes the approach that cancellation can cause > > an early return before a request is sent, but it would be more helpful to > > return the server’s actual response if that is received, rather than simply > > returning G_IO_ERROR_CANCELLED. The calling code should then be ready to > > handle various server responses including success, or errors due to > > cancelling the request part-way through. > > This sounds bad. > > It is a pretty widespread convention that GAsyncResult based API will always > throw G_IO_ERROR_CANCELLED if the GCancellable was triggered. Not doing that > can lead to memory errors. Imagine an instance which does a some > asynchronous operations internally, and passes itself around as user_data. By the way, GTask's check-cancellable flag is TRUE by default, which is good as far as the conventions are concerned. Should we add a mechanism to indicate how far the operation managed to proceed before getting cancelled?
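For reference, the check-cancellable semantics under discussion can be modelled in plain C (all names below are hypothetical; this is a sketch, not GTask's implementation). The worker finishes successfully, and cancellation arrives before _finish() is called: with the check enabled (GTask's default), _finish() reports CANCELLED anyway; with it disabled, the successful result leaks through, which is the racy behaviour described in comment #262.

```c
#include <assert.h>
#include <stdbool.h>

enum { OP_OK, OP_CANCELLED };

struct op {
    bool thread_func_succeeded;   /* worker ran to completion */
    bool cancelled;               /* the GCancellable was triggered */
    bool check_cancellable;       /* g_task_set_check_cancellable() analogue */
};

static int
op_finish (const struct op *op)
{
    if (op->check_cancellable && op->cancelled)
        return OP_CANCELLED;      /* convention: cancellation always wins */
    return op->thread_func_succeeded ? OP_OK : OP_CANCELLED;
}
```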
Created attachment 357281 [details] [review] documents: Silence a CRITICAL when querying a feed of entries
(In reply to Debarshi Ray from comment #272) > This sounds bad. > > It is a pretty widespread convention that GAsyncResult based API will always > throw G_IO_ERROR_CANCELLED if the GCancellable was triggered. Not doing that > can lead to memory errors. Imagine an instance which does a some > asynchronous operations internally, and passes itself around as user_data. > > Yes, one can also keep a reference on 'self', but that's non-ideal. It keeps > self alive even though there are no external users of it. If 'self' goes on > to emit a signal then it can surprise any past users who didn't bother to > use g_signal_connect_object because they thought they are the sole users of > it. eh? If you keep a reference on `self`, then drop that reference at the end of foo_bar_async_op_cb(), you’re only keeping `self` alive for the duration of the asynchronous operation. You can still use that in conjunction with a `self->cancellable` to cancel pending operations when you want to destroy `self`. Typically, I would trigger that cancellable with an explicit `my_object_stop()` method. If you only want the cancellable to be triggered on `my_object_dispose()`, then use a weak-ref to `self` for the async operation, rather than a strong ref. Tying the lifecycle of an object to the error code returned by a method of a different object seems esoteric to me. libgdata is not going to change its behaviour in this respect. I think it makes sense to return G_IO_ERROR_CANCELLED only if server-side changes have not been made. If they’ve been made, we’re not going to return significantly faster by aborting the network request, and we are going to return useful state change information to the user. 
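The "hold a reference for the duration of the operation" pattern above can be sketched in plain C (hypothetical names, a bare ref count instead of GObject): the operation takes a reference when it starts and drops it in the callback, so `self` cannot be freed while the operation is in flight, yet no reference outlives the callback.

```c
#include <assert.h>

struct my_object {
    int ref_count;
};

static void my_object_ref (struct my_object *self)   { self->ref_count++; }
static void my_object_unref (struct my_object *self) { self->ref_count--; }

static void
async_op_start (struct my_object *self)
{
    my_object_ref (self);    /* keep self alive until the callback runs */
}

static void
async_op_cb (struct my_object *self)
{
    /* use self safely: the operation's own ref guarantees it is alive */
    my_object_unref (self);  /* drop the ref taken in async_op_start() */
}
```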
(In reply to Debarshi Ray from comment #273) > (In reply to Philip Withnall from comment #271) > > (In reply to Debarshi Ray from comment #269) > > > (In reply to Philip Withnall from comment #268) > > > > Google have now > > > > released v3 of the Drive API, and we’re going to have to port to that at > > > > some point. :-) > > > > > > Can we skip v3 and directly move to whatever comes after that? > > > > Give me your time travel machine, yesterday. > > The number of people lining up to keep our SDKs updated versus the rate at > which online service providers churn out new versions paints a very sorry > picture. Right now, I am looking at dealing with libzapojit / Microsoft, > libgfbgraph / Facebook, and figuring out what to do with libgdata / Google > Photos. Plus, keeping gnome-online-accounts working across whatever tweaks > these people do to their OAuth flows. > > So, while I'd like to implement the latest version, realistically, we might > have to skip a version if the one that we support is not going to be > deprecated any time soon. I was making a joke; sorry if it wasn’t clear. I agree that v2 is the priority, and since v2 is going to be supported for a while yet, moving to v3 is not a priority at all. I appreciate the work you’re doing on this (and the other online APIs) and am sorry that I can’t help out more. (In reply to Debarshi Ray from comment #274) > By the way, GTask's check-cancellable flag is TRUE by default, which is good > as far as the conventions are concerned. Should we add a mechanism to > indicate how far the operation managed to proceed before getting cancelled? That’s an idea, but I don’t think it would be useful for the majority of cases: a request to add a new entry on the server is atomic; the server won’t add half the entry then return due to cancellation. I can’t immediately think of any async APIs in libgdata which operate at a higher level of atomicity and could potentially benefit from returning progress information. 
GDataProgressCallback is not really a candidate for this, since it’s called once for each entry in a results feed; and we receive and parse the results feed from the network in a single blob, so we either have all the JSON (and can parse all the entries), or have none of it. I guess some kind of progress notification would help for downloads and uploads, but that’s already present in the form of the stream offset. (Also, downloading or uploading half a picture or document is about as useful as downloading or uploading none of it.)
Review of attachment 357281 [details] [review]: ++
(In reply to Philip Withnall from comment #276)
> (In reply to Debarshi Ray from comment #272)
> > This sounds bad.
> >
> > It is a pretty widespread convention that GAsyncResult based API will always
> > throw G_IO_ERROR_CANCELLED if the GCancellable was triggered. Not doing that
> > can lead to memory errors. Imagine an instance which does a some
> > asynchronous operations internally, and passes itself around as user_data.
> >
> > Yes, one can also keep a reference on 'self', but that's non-ideal. It keeps
> > self alive even though there are no external users of it. If 'self' goes on
> > to emit a signal then it can surprise any past users who didn't bother to
> > use g_signal_connect_object because they thought they are the sole users of
> > it.
>
> eh? If you keep a reference on `self`, then drop that reference at the end
> of foo_bar_async_op_cb(), you’re only keeping `self` alive for the duration
> of the asynchronous operation.

And what if self/MyObject emits a signal from the callback?

> You can still use that in conjunction with a
> `self->cancellable` to cancel pending operations when you want to destroy
> `self`. Typically, I would trigger that cancellable with an explicit
> `my_object_stop()` method.

Expecting callers to use a my_object_stop sounds like leaking the implementation details of MyObject. What if MyObject stops using asynchronous operations internally in a future iteration of the code? Or, even worse, every object that doesn't have internal asynchronous operations would have to grow a stop method just in case it ever starts doing so.

> If you only want the cancellable to be triggered
> on `my_object_dispose()`, then use a weak-ref to `self` for the async
> operation, rather than a strong ref. Tying the lifecycle of an object to the
> error code returned by a method of a different object seems esoteric to me.

Yet, it is a pretty widespread pattern.
:)

Yes, a GWeakRef is an option, but it's not surprising to want to cancel pending/ongoing internal operations when there are no users of MyObject.

Note that whether the operation actually got cancelled or not is not the point I am arguing. One is still expected to return G_IO_ERROR_CANCELLED even if the operation ran to completion and the GCancellable was triggered just before the user's callback was invoked. E.g., see:
https://developer.gnome.org/gio/stable/GTask.html#g-task-set-check-cancellable

Note that it is a documented expectation that GAsyncResult-based asynchronous operations should return G_IO_ERROR_CANCELLED on cancellation:
https://developer.gnome.org/gio/stable/GAsyncResult.html

> libgdata is not going to change its behaviour in this respect.

Unless you use g_task_set_check_cancellable to turn the check off, GTask already enforces the behaviour conforming to the GAsyncResult conventions. This means that current libgdata.git master will always return G_IO_ERROR_CANCELLED on cancellation.

Is the libgdata-specific deviation from the convention documented? I flipped through a few APIs (e.g., gdata_service_delete_entry_async and gdata_service_query_async) and it doesn't seem obvious that cancellation may not return G_IO_ERROR_CANCELLED.

> I think it
> makes sense to return G_IO_ERROR_CANCELLED only if server-side changes have
> not been made. If they’ve been made, we’re not going to return significantly
> faster by aborting the network request, and we are going to return useful
> state change information to the user.

The question is not about returning "immediately" from foo_bar_async_op on cancellation. Returning G_IO_ERROR_CANCELLED means the GCancellable was triggered. It doesn't mean that the ongoing operation was interrupted; e.g., it may have been triggered between the completion of the operation and the invocation of the callback.

If foo_bar_async_op involves steps that shouldn't be cancelled for the sake of consistency, then the implementation should simply make those steps uncancellable.
However, MyObject should not have to worry about that. It should be able to trigger the GCancellable in the hope that needless operations are avoided on a best-effort basis, and it should be able to rely on the documented GAsyncResult behaviour. Maybe libgdata's _finish functions should have two GError ** parameters? The second GError * would reflect errors that might have been masked by a G_IO_ERROR_CANCELLED.
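As a rough illustration of that two-error idea, here is a sketch in plain C. The names (OpError, OpResult, op_complete) are made up for the example and are not real libgdata or GLib API; the point is only that the conventional error slot can still honour the G_IO_ERROR_CANCELLED convention while a second slot preserves the outcome of the network step.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for GError codes; OP_ERROR_CANCELLED plays the role of
 * G_IO_ERROR_CANCELLED. All names here are illustrative. */
typedef enum { OP_OK, OP_ERROR_CANCELLED, OP_ERROR_SERVER } OpError;

/* Result of an operation: the conventional error slot that _finish()
 * would propagate, plus a second slot holding what the server actually
 * reported, so the real outcome survives being masked by cancellation. */
typedef struct {
    OpError error;         /* reported to the caller by _finish() */
    OpError server_status; /* unmasked outcome of the network step */
} OpResult;

/* Complete an operation: if the cancellable fired at any point before
 * the callback runs, report CANCELLED, but keep the server outcome. */
static void
op_complete (OpResult *res, bool cancelled, OpError server_status)
{
    res->server_status = server_status;
    res->error = cancelled ? OP_ERROR_CANCELLED : server_status;
}
```

So an operation that ran to completion but whose cancellable was triggered just before the callback still reports OP_ERROR_CANCELLED, while the masked success (or server error) stays retrievable.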
(In reply to Debarshi Ray from comment #278) > (In reply to Philip Withnall from comment #276) > > (In reply to Debarshi Ray from comment #272) > > > This sounds bad. > > > > > > It is a pretty widespread convention that GAsyncResult based API will always > > > throw G_IO_ERROR_CANCELLED if the GCancellable was triggered. Not doing that > > > can lead to memory errors. Imagine an instance which does a some > > > asynchronous operations internally, and passes itself around as user_data. > > > > > > Yes, one can also keep a reference on 'self', but that's non-ideal. It keeps > > > self alive even though there are no external users of it. If 'self' goes on > > > to emit a signal then it can surprise any past users who didn't bother to > > > use g_signal_connect_object because they thought they are the sole users of > > > it. > > > > eh? If you keep a reference on `self`, then drop that reference at the end > > of foo_bar_async_op_cb(), you’re only keeping `self` alive for the duration > > of the asynchronous operation. > > And what if self/MyObject emits a signal from the callback? Signal invocations are synchronous; if any of the signal handlers want to keep a reference to `self` after they return, they need to add their own strong ref. That’s the same in all signal emission situations. > > You can still use that in conjunction with a > > `self->cancellable` to cancel pending operations when you want to destroy > > `self`. Typically, I would trigger that cancellable with an explicit > > `my_object_stop()` method. > > Expecting callers to use a my_object_stop sounds like leaking the > implementation details of MyObject. Depends on what MyObject looks like, I guess. Classes which have asynchronous operations going on in the background which are not the direct result of the user calling a method on them are generally servers, which it makes sense to have some kind of stop/cleanup method for. For example, GDBusServer or GSocketService. 
> What if MyObject stops using asynchronous operations internally in a future > iteration of the code? Or, even worse, every object that doesn't have > internal asynchronous operation should grow a stop method just in case they > ever start doing so. Kind of unlikely, but yes. > > If you only want the cancellable to be triggered > > on `my_object_dispose()`, then use a weak-ref to `self` for the async > > operation, rather than a strong ref. Tying the lifecycle of an object to the > > error code returned by a method of a different object seems esoteric to me. > > Yet, it is a pretty widespread pattern. :) I’ve never come across it. There are a lot of cases where people omit (weak/strong) refs when they shouldn’t, and it’s only due to various implicit guarantees about other bits of the code that what they write works. > Yes, a GWeakRef is an option, but it's not surprising to want to cancel > pending/ongoing internal operations when there are no users of MyObject. > > Note that whether the operation actually got cancelled or not is not the > point I am arguing. One is still expected to return G_IO_ERROR_CANCELLED > even if the operation ran to completion and the GCancellable was triggered > just before the user's callback was invoked. eg., see: Yes, and that’s the point I’m arguing. When an operation results in a state change on the server, returning G_IO_ERROR_CANCELLED rather than the new server state (which is non-trivial to find out in retrospect) is stupid if the GCancellable was triggered just before the user’s callback. The idea is that the GCancellable is normally triggered by the user getting bored waiting for a timeout; so returning G_IO_ERROR_CANCELLED rather than useful results is pointless given they both happen in the same time frame (after all the network activity has happened). 
> Note that it is a documented expectation that GAsyncResult-based > asynchronous operations should return G_IO_ERROR_CANCELLED on cancellation: > https://developer.gnome.org/gio/stable/GAsyncResult.html That documentation is generally applicable. You could also read it as ‘if a method does support being cancelled, it should return G_IO_ERROR_CANCELLED rather than any other error code to indicate cancellation’. > > libgdata is not going to change its behaviour in this respect. > > Unless you use g_task_set_check_cancellable, GTask already enforces the > behaviour conforming to GAsyncResult conventions. This means that current > libgdata.git master will always return G_IO_ERROR_CANCELLED on cancellation. Good spot. :-( Filed as bug #786282. > Is the libgdata-specific deviation from the convention documented? I flipped > through a few APIs (eg., gdata_service_delete_entry_async and > gdata_service_query_async) and it doesn't seem obvious that cancellation may > not return G_IO_ERROR_CANCELLED. https://developer.gnome.org/gdata/unstable/gdata-overview.html#cancellable-support > > I think it > > makes sense to return G_IO_ERROR_CANCELLED only if server-side changes have > > not been made. If they’ve been made, we’re not going to return significantly > > faster by aborting the network request, and we are going to return useful > > state change information to the user. > > The question is not about returning "immediately" from foo_bar_async_op on > cancellation. Returning G_IO_ERROR_CANCELLED means the GCancellable was > triggered. It doesn't mean that the ongoing operation was interrupted. eg., > if its was triggered between completion of the operation and invoking the > callbacl. If you want to find out whether a GCancellable was triggered, use g_cancellable_is_cancelled(). If you want to find out whether an ongoing operation was interrupted, look for G_IO_ERROR_CANCELLED. > Maybe libgdata's _finish functions should have two GError ** parameters? 
> The second GError * would reflect errors that might have been masked by a > G_IO_ERROR_CANCELLED. No, that would be horrific to use and would break almost all GObject conventions around error handling. It would be impossible to use from most bindings (which convert GError** into exceptions).
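The strong-ref discipline described above can be modelled without GLib at all. This is a toy sketch with manual ref-counting (MyObject, foo_bar_async_op and the callback names are illustrative, not real libgdata or GObject API): the async machinery takes its own strong ref on `self` and drops it in the callback, so `self` lives exactly as long as the pending operation needs it.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy stand-in for a GObject with manual ref-counting. */
typedef struct {
    int refs;
    bool op_pending;
} MyObject;

static MyObject *
my_object_ref (MyObject *self)
{
    self->refs++;
    return self;
}

static void
my_object_unref (MyObject *self)
{
    if (--self->refs == 0)
        free (self);
}

/* Starting the operation takes a strong ref on self for its duration. */
static void
foo_bar_async_op (MyObject *self)
{
    my_object_ref (self);
    self->op_pending = true;
}

/* The callback drops the operation's ref; if the caller already released
 * its own ref, self is freed here and not before. */
static void
foo_bar_async_op_cb (MyObject *self)
{
    self->op_pending = false;
    my_object_unref (self);
}
```

The weak-ref variant discussed above differs only in that the operation would not take this extra ref, and the callback would have to check whether `self` still exists.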
(In reply to Philip Withnall from comment #279) > Signal invocations are synchronous; if any of the signal handlers want to > keep a reference to `self` after they return, they need to add their own > strong ref. That’s the same in all signal emission situations. Umm... I don't think that's what I meant. See the last paragraph in comment 272 >>> If you only want the cancellable to be triggered >>> on `my_object_dispose()`, then use a weak-ref to `self` for the async >>> operation, rather than a strong ref. Tying the lifecycle of an object to the >>> error code returned by a method of a different object seems esoteric to me. >> >> Yet, it is a pretty widespread pattern. :) > > I’ve never come across it. There are a lot of cases where people omit > (weak/strong) refs when they shouldn’t, and it’s only due to various > implicit guarantees about other bits of the code that what they write works. Implicit? Returning G_IO_ERROR_CANCELLED is a documented expectation of GAsyncResult-based APIs. >> Note that whether the operation actually got cancelled or not is not the >> point I am arguing. One is still expected to return G_IO_ERROR_CANCELLED >> even if the operation ran to completion and the GCancellable was triggered >> just before the user's callback was invoked. eg., see: > > Yes, and that’s the point I’m arguing. When an operation results in a state > change on the server, returning G_IO_ERROR_CANCELLED rather than the new > server state (which is non-trivial to find out in retrospect) is stupid if > the GCancellable was triggered just before the user’s callback. The idea is > that the GCancellable is normally triggered by the user getting bored > waiting for a timeout; so returning G_IO_ERROR_CANCELLED rather than useful > results is pointless given they both happen in the same time frame (after > all the network activity has happened). I am not saying that the actual error must be masked. There can be alternative ways to return it. 
But not returning the G_IO_ERROR_CANCELLED is wrong. >> Note that it is a documented expectation that GAsyncResult-based >> asynchronous operations should return G_IO_ERROR_CANCELLED on cancellation: >> https://developer.gnome.org/gio/stable/GAsyncResult.html > > That documentation is generally applicable. You could also read it as ‘if a > method does support being cancelled, it should return G_IO_ERROR_CANCELLED > rather than any other error code to indicate cancellation’. If a method accepts a GCancellable then it supports cancellation. How it internally implements cancellation, whether every sub-operation is cancellable or not, and how it ensures consistency of the data is not something that the caller needs to always worry about. >> Is the libgdata-specific deviation from the convention documented? I flipped >> through a few APIs (eg., gdata_service_delete_entry_async and >> gdata_service_query_async) and it doesn't seem obvious that cancellation may >> not return G_IO_ERROR_CANCELLED. > > https://developer.gnome.org/gdata/unstable/gdata-overview.html#cancellable-support Can we at least refer to that page from every cancellable API if we are going to keep violating GAsyncResult behaviour? This text seems surprising: "However, if the operation has finished its network activity, libgdata does not guarantee that it will return with an error — it may return successfully. There is no way to fix this, as it is an inherent race condition between checking for cancellation for the last time, and returning the successful result." The "inherent race condition" is already solved by both GSimpleAsyncResult and GTask. To avoid the G_IO_ERROR_CANCELLED from overwriting any errors from the server, I'd just carry around a second GError *, and let the user have access to it afterwards. > If you want to find out whether a GCancellable was triggered, use > g_cancellable_is_cancelled().
g_cancellable_is_cancelled can't be used from the user's callback in C unless we add a g_async_result_get_cancellable. >> Maybe libgdata's _finish functions should have two GError ** parameters? The >> second GError * would reflect errors that might have been masked by a >> G_IO_ERROR_CANCELLED. > > No, that would be horrific to use and would break almost all GObject > conventions around error handling. It would be impossible to use from most > bindings (which convert GError** into exceptions). Obviously that wasn't meant to be taken literally. It can be wired in ways that work with bindings. eg., libgdata could have its own custom GAsyncResult implementation [*] that can track 2 errors. Then, for any operation, you could have something like this to get the masked error: void gdata_async_result_get_server_status (GDataAsyncResult *res, GError **error); It sounds odd to have a getter for an error, but I can't think of a better name. We do have some oddities like this in our platform, though. eg., gtk_print_operation_get_error, gtk_gl_area_get_error, spice_channel_get_error, etc. [*] Most of it would be a wrapper around GTask.
(This is the last comment I’m going to write here about cancellation, since this bug is rapidly heading off topic and it doesn’t look like we’re going to agree on things. If you want to continue this, let’s do so on bug #786282.) (In reply to Debarshi Ray from comment #280) > (In reply to Philip Withnall from comment #279) > > Signal invocations are synchronous; if any of the signal handlers want to > > keep a reference to `self` after they return, they need to add their own > > strong ref. That’s the same in all signal emission situations. > > Umm... I don't think that's what I meant. See the last paragraph in comment > 272 Ah yes. Technically that is still the fault of the code which installed the signal handler: by leaving the signal handler in place rather than removing it when dropping its last reference to the object, the code is relying on the assumption that the object is not running any background operations and keeping itself alive. > >>> If you only want the cancellable to be triggered > >>> on `my_object_dispose()`, then use a weak-ref to `self` for the async > >>> operation, rather than a strong ref. Tying the lifecycle of an object to the > >>> error code returned by a method of a different object seems esoteric to me. > >> > >> Yet, it is a pretty widespread pattern. :) > > > > I’ve never come across it. There are a lot of cases where people omit > > (weak/strong) refs when they shouldn’t, and it’s only due to various > > implicit guarantees about other bits of the code that what they write works. > > Implicit? Returning G_IO_ERROR_CANCELLED is a documented expectation of > GAsyncResult-based APIs. I was speaking more generally than just about error codes. People regularly omit refs when they think that an object’s lifetime is guaranteed to be longer than what they need it for, even if that guarantee is provided by some unrelated code which might change. 
> >> Note that whether the operation actually got cancelled or not is not the > >> point I am arguing. One is still expected to return G_IO_ERROR_CANCELLED > >> even if the operation ran to completion and the GCancellable was triggered > >> just before the user's callback was invoked. eg., see: > > > > Yes, and that’s the point I’m arguing. When an operation results in a state > > change on the server, returning G_IO_ERROR_CANCELLED rather than the new > > server state (which is non-trivial to find out in retrospect) is stupid if > > the GCancellable was triggered just before the user’s callback. The idea is > > that the GCancellable is normally triggered by the user getting bored > > waiting for a timeout; so returning G_IO_ERROR_CANCELLED rather than useful > > results is pointless given they both happen in the same time frame (after > > all the network activity has happened). > > I am not saying that the actual error must be masked. There can be > alternative ways to return it. But not returning the G_IO_ERROR_CANCELLED is > wrong. > > >> Note that it is a documented expectation that GAsyncResult-based > >> asynchronous operations should return G_IO_ERROR_CANCELLED on cancellation: > >> https://developer.gnome.org/gio/stable/GAsyncResult.html > > > > That documentation is generally applicable. You could also read it as ‘if a > > method does support being cancelled, it should return G_IO_ERROR_CANCELLED > > rather than any other error code to indicate cancellation’. > > If a method accepts a GCancellable then it supports cancellation. How it > internally implements cancellation, whether every sub-operation is > cancellable or not, and how it ensures consistency of the data is not > something that the caller needs to always worry about. > > >> Is the libgdata-specific deviation from the convention documented? 
> >> I flipped through a few APIs (eg., gdata_service_delete_entry_async and > >> gdata_service_query_async) and it doesn't seem obvious that cancellation may > >> not return G_IO_ERROR_CANCELLED. > > > > https://developer.gnome.org/gdata/unstable/gdata-overview.html#cancellable-support > > Can we at least refer to that page from every cancellable API if we are > going to keep violating GAsyncResult behaviour? Technically we should, but I have no motivation to add that link everywhere at the moment. I’ll mention it in bug #786282. > This text seems surprising: > "However, if the operation has finished its network activity, libgdata does > not guarantee that it will return with an error — it may return > successfully. There is no way to fix this, as it is an inherent race > condition between checking for cancellation for the last time, and returning > the successful result." > > The "inherent race condition" is already solved by both GSimpleAsyncResult > and GTask. To avoid the G_IO_ERROR_CANCELLED from overwriting any errors > from the server, I'd just carry around a second GError *, and let the user > have access to it afterwards. No, there’s always still a race condition, since a GCancellable can be cancelled from any thread, which means it can be cancelled right while g_task_propagate_*() is being called. There is a mutex in GTask to resolve the race safely, but it is still a race in the sense that the caller of g_cancellable_cancel() in the second thread can never be sure of whether they cancelled it ‘in time’. > > If you want to find out whether a GCancellable was triggered, use > > g_cancellable_is_cancelled(). > > g_cancellable_is_cancelled can't be used from the user's callback in C > unless we add a g_async_result_get_cancellable. That seems like a better approach than returning two GErrors or returning G_IO_ERROR_CANCELLED when the operation hasn’t effectively been cancelled.
> >> Maybe libgdata's _finish functions should have two GError ** parameters? The > >> second GError * would reflect errors that might have been masked by a > >> G_IO_ERROR_CANCELLED. > > > > No, that would be horrific to use and would break almost all GObject > > conventions around error handling. It would be impossible to use from most > > bindings (which convert GError** into exceptions). > > Obviously that wasn't meant to be taken literally. It can be wired in ways > that works with bindings. eg., libgdata could have its custom GAsyncResult > implementation [*] that can track 2 errors. Then, for any operation, you > could have something like this to get the masked error: > void gdata_async_result_get_server_status (GDataAsyncResult *res, > GError **error); > > It sounds odd to have a getter for an error, but I can't think of a better > name. We do have some oddities like this in our platform, though. eg., > gtk_print_operation_get_error, gtk_gl_area_get_error, > spice_channel_get_error, etc.. > > [*] Most of it would be a wrapper around GTask. Ouch, no. I don’t think it makes sense to return G_IO_ERROR_CANCELLED if the operation itself was not cancelled. The error code reflects the state of the operation, not the state of its GCancellable argument. Hence it should not return G_IO_ERROR_CANCELLED unless the operation was actually cancelled (server-side).
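The "resolved safely, but still a race" point can be made concrete in plain C. GTask itself uses a mutex for this arbitration; in the sketch below a single atomic state transition plays the same role (this is not GTask's real implementation, and all names are made up): exactly one of cancel/complete wins, the result is cleanly one or the other, but the canceller can never know in advance which side will win.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Model of the cancel-vs-complete arbitration: the operation's final
 * state is decided by one atomic transition out of OP_PENDING, so the
 * outcome is cleanly CANCELLED or cleanly COMPLETED, never a mixture. */
typedef enum { OP_PENDING, OP_CANCELLED, OP_COMPLETED } OpState;

typedef struct {
    _Atomic OpState state;
} RacyOp;

/* Returns true if cancellation won, i.e. the op had not yet completed. */
static bool
racy_op_cancel (RacyOp *op)
{
    OpState expected = OP_PENDING;
    return atomic_compare_exchange_strong (&op->state, &expected,
                                           OP_CANCELLED);
}

/* Returns true if completion won, i.e. the op had not been cancelled. */
static bool
racy_op_complete (RacyOp *op)
{
    OpState expected = OP_PENDING;
    return atomic_compare_exchange_strong (&op->state, &expected,
                                           OP_COMPLETED);
}
```

Whichever of the two calls runs first wins the transition; the loser observes that and can back off. The caller of racy_op_cancel() only learns whether it was "in time" from the return value, after the fact.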
Created attachment 360046 [details] [review] tests: Use OAuth2 for authentication instead of ClientLogin Split out from the big "Port the tests" lump in libgdata:wip/rishi/drive.
Created attachment 360047 [details] [review] tests: Resumable uploads haven't been ported to Drive v2
Review of attachment 360046 [details] [review]: Looks good apart from the FIXME. ::: gdata/tests/documents.c @@ +158,3 @@ + } else { + /* Hard coded, extracted from the trace file. */ + authorisation_code = g_strdup ("FIXME: to be updated from the trace file"); FIXME comment needs fixing.
(In reply to Philip Withnall from comment #284) > Review of attachment 360046 [details] [review] [review]: > > Looks good apart from the FIXME. > > ::: gdata/tests/documents.c > @@ +158,3 @@ > + } else { > + /* Hard coded, extracted from the trace file. */ > + authorisation_code = g_strdup ("FIXME: to be updated from the trace > file"); > > FIXME comment needs fixing. Right now I am always running the tests with -w -t, so the trace files keep changing. So, I was planning to insert the right string once everything is in place and we are ready to enable the tests in Makefile.am. How do you want to handle this?
(In reply to Debarshi Ray from comment #285) > (In reply to Philip Withnall from comment #284) > > Review of attachment 360046 [details] [review] [review] [review]: > > > > Looks good apart from the FIXME. > > > > ::: gdata/tests/documents.c > > @@ +158,3 @@ > > + } else { > > + /* Hard coded, extracted from the trace file. */ > > + authorisation_code = g_strdup ("FIXME: to be updated from the trace > > file"); > > > > FIXME comment needs fixing. > > Right now I am always running the tests with -w -t, so the trace files keep > changing. So, I was planning to insert the right string once everything is > in place and we are ready to enable the tests in Makefile.am. > > How do you want to handle this? OK, that suits me. a_c-n+
Review of attachment 360047 [details] [review]: ++
Created attachment 360075 [details] [review] tests: Make folder creation work with Drive v2
Created attachment 360077 [details] [review] tests: Resumable updates haven't been ported to Drive v2
Review of attachment 360075 [details] [review]: ++
Review of attachment 360077 [details] [review]: ++
Created attachment 360199 [details] [review] documents: Expose _gdata_documents_entry_set_resource_id internally
Created attachment 360200 [details] [review] documents: Correct the prefix used for resource IDs of folders
ping Other than these last two, we also need to decide the collective fates of attachment 336018 [details] [review], attachment 336162 [details] [review] and attachment 336163 [details] [review]. I think that we should aim to deprecate the GDataDocumentsDocument sub-classes. We discussed this before in comment 13 and comment 255, but I am not sure off-hand if there are use-cases that might still need them. We could, maybe, keep them for the Google-specific web formats, but not more than that. Maintaining the MIME to sub-class mapping seems onerous because Drive can have all sorts of files.
Review of attachment 360199 [details] [review]: ++, sorry, not sure how I missed this before
Review of attachment 360200 [details] [review]: ++
(In reply to Debarshi Ray from comment #294) > I think that we should aim to > deprecate the GDataDocumentsDocument sub-classes. We discussed this before > in comment 13 and comment 255, but I am not sure off-hand if there are > use-cases that might still need them. > > We could, maybe, keep them for the Google-specific web formats, but not more > than that. Maintaining the MIME to sub-class mapping seems onerous because > Drive can have all sorts of files. I agree we should just drop them. The only use case which would get dropped is downloading a specific sheet of a spreadsheet (gdata_documents_spreadsheet_get_download_uri()), but that doesn’t seem to be supported by the web API any more. In any case, it would be better served by being in the Google Sheets API (which libgdata doesn’t currently implement). See https://developers.google.com/sheets/api/reference/rest/. (I’m not suggesting we implement Google Sheets — I think we should deprecate spreadsheet_get_download_uri() with no direct replacement, just with document_get_download_uri() as the replacement.)
Pushed the two a_c-n patches. Attachment 360199 [details] pushed as d513445 - documents: Expose _gdata_documents_entry_set_resource_id internally Attachment 360200 [details] pushed as 6d01589 - documents: Correct the prefix used for resource IDs of folders
Created attachment 373309 [details] [review] tests: Skip UPLOAD_ODT_CONVERT
Review of attachment 373309 [details] [review]: OK.
Comment on attachment 373309 [details] [review] tests: Skip UPLOAD_ODT_CONVERT Pushed to master, thanks.
-- GitLab Migration Automatic Message -- This bug has been migrated to GNOME's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.gnome.org/GNOME/libgdata/issues/12.