GNOME Bugzilla – Bug 676188
Limit use of house_arrest clients to a maximum of 5 to fix access to apps' Documents folders
Last modified: 2016-06-10 06:45:37 UTC
Currently we spawn as many house_arrest clients for an iDevice as needed while browsing the "Documents" folders of installed apps, in order to expose the "iTunes File Sharing" functionality to users. However, the number of house_arrest daemons that lockdownd allows to run simultaneously on the device is limited to 5 instances. The problem already arises if you have more than 5 apps installed and attempt to access all of their Documents folders: at some point the functionality stops working and yields errors, because no further house_arrest clients/connections can be spawned. The gvfs code must be refactored to:
- Use as few simultaneous house_arrest connections as possible
- Use a maximum of 5 house_arrest connections per device (and block operations?)
Some issues come to mind, like multiple open windows/tabs, that might make things a bit more complex.
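For illustration only, a minimal sketch of such a per-device cap; DeviceCtx, MAX_HOUSE_ARREST_CLIENTS and can_open_house_arrest() are made-up names, not existing gvfs API:

#include <glib.h>
#include <libimobiledevice/house_arrest.h>

#define MAX_HOUSE_ARREST_CLIENTS 5  /* limit enforced by lockdownd on the device */

typedef struct {
  GHashTable *clients;   /* app id (char *) -> house_arrest_client_t */
} DeviceCtx;

/* Returns TRUE if one more house_arrest connection may be opened,
 * FALSE if the caller must first tear one down (or block). */
static gboolean
can_open_house_arrest (DeviceCtx *ctx)
{
  return g_hash_table_size (ctx->clients) < MAX_HOUSE_ARREST_CLIENTS;
}

A caller that gets FALSE would either disconnect an idle client first or block the operation, as suggested above.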
Got the same problem here: 'Unhandled Apple File Control error (7)'.
I am also being affected by this issue:

$ uname -a
Linux mac17 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

$ gvfs-ls
com.tencent.xin
com.burbn.instagram
com.ilegendsoft.MBrowser
com.stormyimaging.Filterstorm
com.goodiware.GoodReader
com.getdropbox.Dropbox
com.realvnc.VNCViewer
com.evernote.iPhone.Evernote
com.olimsoft.oplayer
com.10baseT.Scanner
dev.nemustech.RotateImage

$ gvfs-ls afc://9d5fed639c4f614da6cbb732057ff76105ebf6ac:3/com.olimsoft.oplayer/
Error: Unhandled Apple File Control error (7)

$ ls -lh
total 0
drwx------ 1 bruno bruno 0 Jan 1 1970 apphome.com.systemiphone
drwx------ 1 bruno bruno 0 Jan 1 1970 au.com.cineplex
drwx------ 1 bruno bruno 0 Jan 1 1970 au.com.commbank.commbank
drwx------ 1 bruno bruno 0 Jan 1 1970 au.com.dominos.Dominos
drwx------ 1 bruno bruno 0 Jan 1 1970 au.com.eBroadcast.TVAndMovieGuide.iPhone
drwx------ 1 bruno bruno 0 Jan 1 1970 au.com.fairfaxdigital.Domain
drwx------ 1 bruno bruno 0 Jan 1 1970 au.com.fairfaxdigital.Drive
drwx------ 1 bruno bruno 0 Jan 1 1970 au.edu.qut.qutmobile
drwx------ 1 bruno bruno 0 Jan 1 1970 ch.reeder
drwx------ 1 bruno bruno 0 Jan 1 1970 com.10baseT.Scanner
drwx------ 1 bruno bruno 0 Jan 1 1970 com.ageet.AGEphone.Brastel
drwx------ 1 bruno bruno 0 Jan 1 1970 com.ageet.AGEphone.iPhone
drwx------ 1 bruno bruno 0 Jan 1 1970 com.ajnaware.ozweather
drwx------ 1 bruno bruno 0 Jan 1 1970 com.antair.EasterSneeze
drwx------ 1 bruno bruno 0 Jan 1 1970 com.appshopper.app
drwx------ 1 bruno bruno 0 Jan 1 1970 com.appzap.LogicLand
drwx------ 1 bruno bruno 0 Jan 1 1970 com.atebits.Tweetie2
drwx------ 1 bruno bruno 0 Jan 1 1970 com.bigbluebubble.PaperMunchers
drwx------ 1 bruno bruno 0 Jan 1 1970 com.brastel.brastel-light
drwx------ 1 bruno bruno 0 Jan 1 1970 com.burbn.instagram
drwx------ 1 bruno bruno 0 Jan 1 1970 com.chillingo.cogs
drwx------ 1 bruno bruno 0 Jan 1 1970 com.chillingo.cuttherope
drwx------ 1 bruno bruno 0 Jan 1 1970 com.clickgamer.AngryBirds
drwx------ 1 bruno bruno 0 Jan 1 1970 com.davaconsulting.ruler
drwx------ 1 bruno bruno 0 Jan 1 1970 com.doclouisjones.iScopaTRIAL
drwx------ 1 bruno bruno 0 Jan 1 1970 com.domain.wififreeaustralia
drwx------ 1 bruno bruno 0 Jan 1 1970 com.donutgames.catphysics
drwx------ 1 bruno bruno 0 Jan 1 1970 com.donutgames.cavebowling
drwx------ 1 bruno bruno 0 Jan 1 1970 com.evernote.iPhone.Evernote
drwx------ 1 bruno bruno 0 Jan 1 1970 com.facebook.Facebook
drwx------ 1 bruno bruno 0 Jan 1 1970 com.foodspotting.iphone.lite
drwx------ 1 bruno bruno 0 Jan 1 1970 com.getdropbox.Dropbox
drwx------ 1 bruno bruno 0 Jan 1 1970 com.goodiware.GoodReader
drwx------ 1 bruno bruno 0 Jan 1 1970 com.google.Gmail
drwx------ 1 bruno bruno 0 Jan 1 1970 com.google.GoogleLatitude
drwx------ 1 bruno bruno 0 Jan 1 1970 com.google.GooglePlus
drwx------ 1 bruno bruno 0 Jan 1 1970 com.google.Translate
drwx------ 1 bruno bruno 0 Jan 1 1970 com.Halfbrick.Fruit
drwx------ 1 bruno bruno 0 Jan 1 1970 com.ilegendsoft.MBrowser
drwx------ 1 bruno bruno 0 Jan 1 1970 com.imangi.templerun
drwx------ 1 bruno bruno 0 Jan 1 1970 com.intersectworld.Australia
drwx------ 1 bruno bruno 0 Jan 1 1970 com.jeffreygrossman.currencyapp
drwx------ 1 bruno bruno 0 Jan 1 1970 com.kolordesign.aussieslang
drwx------ 1 bruno bruno 0 Jan 1 1970 com.Larsa.RestaurantDeals
drwx------ 1 bruno bruno 0 Jan 1 1970 com.linkedin.LinkedIn
drwx------ 1 bruno bruno 0 Jan 1 1970 com.livingsocial.deals
drwx------ 1 bruno bruno 0 Jan 1 1970 com.lonelyplanet.cityguide.brisbane
drwx------ 1 bruno bruno 0 Jan 1 1970 com.MirageLabs.FlexiCorder
drwx------ 1 bruno bruno 0 Jan 1 1970 com.mobiledataforce.hp15c
drwx------ 1 bruno bruno 0 Jan 1 1970 com.moonrabbit.sudokuad
drwx------ 1 bruno bruno 0 Jan 1 1970 com.naveenium.foursquare
drwx------ 1 bruno bruno 0 Jan 1 1970 com.olimsoft.oplayer
drwx------ 1 bruno bruno 0 Jan 1 1970 com.omgpop.dmtmobile
drwx------ 1 bruno bruno 0 Jan 1 1970 com.Palringo.PalringoPremium
drwx------ 1 bruno bruno 0 Jan 1 1970 com.punchbox.fishingjoyiphone
drwx------ 1 bruno bruno 0 Jan 1 1970 com.realvnc.VNCViewer
drwx------ 1 bruno bruno 0 Jan 1 1970 com.reference.dictionary.dictionary
drwx------ 1 bruno bruno 0 Jan 1 1970 com.rovio.AngryBirdsHalloween
drwx------ 1 bruno bruno 0 Jan 1 1970 com.rovio.angrybirdsrio
drwx------ 1 bruno bruno 0 Jan 1 1970 com.shazam.Shazam
drwx------ 1 bruno bruno 0 Jan 1 1970 com.skype.skype
drwx------ 1 bruno bruno 0 Jan 1 1970 com.spica.minesweeperq
drwx------ 1 bruno bruno 0 Jan 1 1970 com.stormyimaging.Filterstorm
drwx------ 1 bruno bruno 0 Jan 1 1970 com.TapMediaLtd.QRReader
drwx------ 1 bruno bruno 0 Jan 1 1970 com.tencent.xin
drwx------ 1 bruno bruno 0 Jan 1 1970 com.toggl.toggl
drwx------ 1 bruno bruno 0 Jan 1 1970 com.tomtom.Australia
drwx------ 1 bruno bruno 0 Jan 1 1970 com.yourcompany.DoodleJump
drwx------ 1 bruno bruno 0 Jan 1 1970 cz.acrobits.softphone.own
drwx------ 1 bruno bruno 0 Jan 1 1970 dev.nemustech.RotateImage
drwx------ 1 bruno bruno 0 Jan 1 1970 dk.denfalskezebra.blocksfree
drwx------ 1 bruno bruno 0 Jan 1 1970 dk.mochasoft.vnc
drwx------ 1 bruno bruno 0 Jan 1 1970 net.whatsapp.WhatsApp
drwx------ 1 bruno bruno 0 Jan 1 1970 us.digitalsmoke.solitairecitylite
In g_vfs_backend_setup_afc_for_app() we need to keep track of the number of apps set up, and disconnect from apps that haven't been used for a while. That means we need to start tracking the last-used apps in g_vfs_backend_parse_house_arrest_path(). It also means we need to track open files on each backend, unless there is a way for libimobiledevice to tell us that a particular house arrest backend is being used. Martin?
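A rough sketch of that bookkeeping, assuming an illustrative AppInfo record (the layout and field names are assumptions, not the actual gvfs structures); the path-parsing code would call something like app_info_touch() on every access:

#include <glib.h>
#include <libimobiledevice/afc.h>
#include <libimobiledevice/house_arrest.h>

typedef struct {
  char *id;                             /* app bundle id */
  house_arrest_client_t house_arrest;   /* NULL when disconnected */
  afc_client_t afc_cli;                 /* NULL when disconnected */
  guint num_users;                      /* open file handles on this client */
  gint64 last_used;                     /* g_get_monotonic_time () stamp */
} AppInfo;

/* Called each time a path inside this app's Documents folder is touched,
 * so idle clients can later be picked for disconnection. */
static void
app_info_touch (AppInfo *info)
{
  info->last_used = g_get_monotonic_time ();
}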
I am also affected by this, and it is very annoying!

$ uname -a
Linux EliteBook 3.13.0-30-generic #55-Ubuntu SMP Fri Jul 4 21:40:53 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Is there any other way to cancel/disconnect/reset the number of house_arrest connections than restarting the iOS device?
@Bastien: Well, limd is not able to tell you how many house_arrest clients you have running. However, I am just adding a new lockdown error code LOCKDOWN_E_SERVICE_LIMIT to handle the case when the service limit is reached. There is no API in place to manage or compare individual service clients, so I assume the consumer (here gvfs-afc) has to manage the list of house_arrest clients in use itself. It is handy that they already seem to be tracked in a GHashTable. From a code logic standpoint, I think there are two key concepts to apply: one is a limit on the number of house_arrest clients in use (blocking any further action by the user), the other is releasing house_arrest clients that have not been actively used for a specific time.
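For illustration, a consumer might react to that error code roughly like this. The sketch assumes a libimobiledevice new enough to define LOCKDOWN_E_SERVICE_LIMIT; start_house_arrest() and its hit_limit convention are hypothetical, only the libimobiledevice calls are real API:

#include <glib.h>
#include <libimobiledevice/libimobiledevice.h>
#include <libimobiledevice/lockdown.h>
#include <libimobiledevice/house_arrest.h>

static gboolean
start_house_arrest (idevice_t device,
                    lockdownd_client_t lockdown,
                    house_arrest_client_t *client_out,
                    gboolean *hit_limit)
{
  lockdownd_service_descriptor_t service = NULL;
  lockdownd_error_t lerr;

  *hit_limit = FALSE;

  lerr = lockdownd_start_service (lockdown, HOUSE_ARREST_SERVICE_NAME, &service);
  if (lerr == LOCKDOWN_E_SERVICE_LIMIT)
    {
      /* Device refused: the 5 allowed house_arrest instances are already
       * running. The caller should release an idle client and retry. */
      *hit_limit = TRUE;
      return FALSE;
    }
  if (lerr != LOCKDOWN_E_SUCCESS || service == NULL)
    return FALSE;

  if (house_arrest_client_new (device, service, client_out) != HOUSE_ARREST_E_SUCCESS)
    {
      lockdownd_service_descriptor_free (service);
      return FALSE;
    }
  lockdownd_service_descriptor_free (service);
  return TRUE;
}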
I'm coming back to this. The situation seems even worse now: where it was mildly annoying before, iOS now appears to restrict us to fewer than 5 house arrest clients, and they don't seem to be cleaned up after unplugging the phone. Martin, do you have any insights into those device-side changes? I'll add support for limiting the number of clients to 5 in gvfs, but I'd need to know what to do with lingering house arrest services. You'll want the patches in this bug to test it out: https://bugzilla.gnome.org/show_bug.cgi?id=763606
(In reply to Martin Szulecki from comment #5)
> @Bastien: Well, limd is not able to tell you how many house_arrest clients
> you have running. However, I am just adding a new lockdown error code
> LOCKDOWN_E_SERVICE_LIMIT to handle the case when the service limit is
> reached.

In which version was this error added?
Created attachment 324613 [details] [review]
afc: Better debug error message again

For starting the HouseArrest lockdownd service this time.
Created attachment 324614 [details] [review]
afc: Lock apps mutex when setting up HouseArrest

So that we don't have cases where the AppInfo disappears from under us.
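The locking pattern is roughly as follows; this is an illustrative sketch reusing the hypothetical AppInfo from the earlier sketch, not the patch itself. Every access to the apps table happens under the same mutex, so an AppInfo cannot be freed while another operation still holds it:

#include <glib.h>

/* Reuses the illustrative AppInfo defined in the earlier sketch. */
static AppInfo *
lookup_app_pinned (GMutex *apps_lock, GHashTable *apps, const char *app)
{
  AppInfo *info;

  g_mutex_lock (apps_lock);
  info = g_hash_table_lookup (apps, app);
  if (info != NULL)
    info->num_users++;   /* pin; the caller drops it under the same lock */
  g_mutex_unlock (apps_lock);

  return info;
}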
Created attachment 324615 [details] [review]
afc: Indicate whether to retry to setup HouseArrest for an app

If setting up the HouseArrest service for an app fails, we should clean up unused services, and try again. This tells us whether we should try again in the first place.
Created attachment 324616 [details] [review]
afc: Limit the number of HouseArrest services we start

To avoid being unable to navigate within applications because we ran out of HouseArrest services, keep track of file operations within each application, and disconnect from unused ones when trying to access an unconnected one. The tracking only covers file operations, and not enumerations, and the garbage collection algorithm is quite naive, but could easily be extended.
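The garbage collection could look roughly like this; an illustrative reconstruction (again reusing the hypothetical AppInfo sketch above), not the committed code. It walks the table and disconnects clients that are idle, skipping the app we are about to connect to:

#include <glib.h>
#include <libimobiledevice/afc.h>
#include <libimobiledevice/house_arrest.h>

static void
gc_house_arrest_clients (GMutex *apps_lock, GHashTable *apps, const char *app)
{
  GHashTableIter iter;
  gpointer key, value;

  g_mutex_lock (apps_lock);
  g_hash_table_iter_init (&iter, apps);
  while (g_hash_table_iter_next (&iter, &key, &value))
    {
      AppInfo *info = value;

      /* Never tear down the app we are about to connect to. */
      if (g_strcmp0 (info->id, app) == 0)
        continue;
      /* Nothing to tear down if this app never got a client. */
      if (info->afc_cli == NULL)
        continue;
      /* Still in use: files are open on this client. */
      if (info->num_users > 0)
        continue;

      afc_client_free (info->afc_cli);
      info->afc_cli = NULL;
      if (info->house_arrest != NULL)
        {
          house_arrest_client_free (info->house_arrest);
          info->house_arrest = NULL;
        }
    }
  g_mutex_unlock (apps_lock);
}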
Review of attachment 324613 [details] [review]:

Looks good!
Review of attachment 324615 [details] [review]:

Looks good!
Review of attachment 324614 [details] [review]:

I am not really sure why we need the lock at all... From a brief look, it seems that the whole backend runs in one thread, doesn't it? Only _idevice_event_cb is called from another thread, but it doesn't use the apps-related data. Can you explain a bit more why the lock is needed?
Review of attachment 324616 [details] [review]:

::: daemon/gvfsbackendafc.c
@@ +875,3 @@
+      return handle;
+
+  g_mutex_unlock (&self->apps_lock);

g_mutex_lock?

@@ +897,3 @@
+    goto out;
+
+  g_mutex_unlock (&self->apps_lock);

g_mutex_lock?

@@ +899,3 @@
+  g_mutex_unlock (&self->apps_lock);
+  info = g_hash_table_lookup (self->apps, fh->app);
+  g_atomic_int_dec_and_test (&info->num_users);

Is g_atomic needed when there is a mutex?
(In reply to Ondrej Holy from comment #15)
> Review of attachment 324616 [details] [review] [review]:
>
> ::: daemon/gvfsbackendafc.c
> @@ +875,3 @@
> +      return handle;
> +
> +  g_mutex_unlock (&self->apps_lock);
>
> g_mutex_lock?

Fixed.

> @@ +897,3 @@
> +    goto out;
> +
> +  g_mutex_unlock (&self->apps_lock);
>
> g_mutex_lock?

Fixed.

> @@ +899,3 @@
> +  g_mutex_unlock (&self->apps_lock);
> +  info = g_hash_table_lookup (self->apps, fh->app);
> +  g_atomic_int_dec_and_test (&info->num_users);
>
> Is g_atomic needed when there is a mutex?

Made it a uint, and added an assertion to make sure we don't go below 0.
I'm not sure we need the locking, but I'm not 100% confident the backend is single-threaded. Taking the lock seems easier than not, and it documents where the hot path is; if the backend really is single-threaded, there won't be any contention, so the locks won't cost anything.
Created attachment 329209 [details] [review]
afc: Limit the number of HouseArrest services we start

To avoid being unable to navigate within applications because we ran out of HouseArrest services, keep track of file operations within each application, and disconnect from unused ones when trying to access an unconnected one. The tracking only covers file operations, and not enumerations, and the garbage collection algorithm is quite naive, but could easily be extended.
Review of attachment 329209 [details] [review]:

::: daemon/gvfsbackendafc.c
@@ +932,3 @@
+   * connect to the service */
+  if (g_strcmp0 (info->id, app) == 0)
+    continue;

It seems to me that this shouldn't happen; however, if it does, we can return immediately, can't we?

@@ +934,3 @@
+    continue;
+
+  if (info->afc_cli == NULL ||

Shouldn't this be != ?

@@ +997,3 @@
+      g_debug ("Ran out of HouseArrest clients for app '%s', trying again", app);
+      g_vfs_backend_gc_house_arrest (self, app);
+      g_vfs_backend_setup_afc_for_app (self, app);

What happens if this fails again?

@@ +1073,3 @@
   handle->afc_cli = afc_cli;
+  g_free (app);

You should also add g_free (app) to the if-statements before it.

@@ +1138,3 @@
   fh->afc_cli = afc_cli;
+  g_free (app);

ditto

@@ +1221,3 @@
   fh->afc_cli = afc_cli;
+  g_free (app);

ditto

@@ +1299,3 @@
   fh->afc_cli = afc_cli;
+  g_free (app);

ditto
Review of attachment 329209 [details] [review]:

::: daemon/gvfsbackendafc.c
@@ +934,3 @@
+    continue;
+
+  if (info->afc_cli == NULL ||

No, if there's no AFC client for that app, we can't tear it down. If it were != NULL, we'd exit when the app has a client, and we would never tear it down.

@@ +997,3 @@
+      g_debug ("Ran out of HouseArrest clients for app '%s', trying again", app);
+      g_vfs_backend_gc_house_arrest (self, app);
+      g_vfs_backend_setup_afc_for_app (self, app);

g_vfs_backend_parse_house_arrest_path() callers have a NULL afc_cli for that app in self->apps, and should error out as expected. For example, g_vfs_backend_afc_open_for_read() will say "Can't open directory".

@@ +1073,3 @@
   handle->afc_cli = afc_cli;
+  g_free (app);

Done.

@@ +1138,3 @@
   fh->afc_cli = afc_cli;
+  g_free (app);

Done.

@@ +1221,3 @@
   fh->afc_cli = afc_cli;
+  g_free (app);

Done.

@@ +1299,3 @@
   fh->afc_cli = afc_cli;
+  g_free (app);

Done.
Created attachment 329374 [details] [review]
afc: Limit the number of HouseArrest services we start

To avoid being unable to navigate within applications because we ran out of HouseArrest services, keep track of file operations within each application, and disconnect from unused ones when trying to access an unconnected one. The tracking only covers file operations, and not enumerations, and the garbage collection algorithm is quite naive, but could easily be extended.
(In reply to Bastien Nocera from comment #20)
> Review of attachment 329209 [details] [review] [review]:
>
> ::: daemon/gvfsbackendafc.c
> @@ +934,3 @@
> +    continue;
> +
> +  if (info->afc_cli == NULL ||
>
> No, if there's no AFC client for that app, we can't tear it down.
> If it were != NULL, we'd exit when the app has a client, and we would never
> tear it down.

You are right, my fault!

> @@ +997,3 @@
> +      g_debug ("Ran out of HouseArrest clients for app '%s', trying again", app);
> +      g_vfs_backend_gc_house_arrest (self, app);
> +      g_vfs_backend_setup_afc_for_app (self, app);
>
> g_vfs_backend_parse_house_arrest_path() callers have a NULL afc_cli for that
> app in self->apps, and should error out as expected. For example,
> g_vfs_backend_afc_open_for_read() will say "Can't open directory".

Shouldn't we rather return something like G_IO_ERROR_FAILED saying "Ran out of HouseArrest clients"?
Review of attachment 329374 [details] [review]:

::: daemon/gvfsbackendafc.c
@@ +999,3 @@
       setup_afc) {
+      if (!g_vfs_backend_setup_afc_for_app (self, app))

g_vfs_backend_setup_afc_for_app() returns TRUE for retry, so the condition is wrong; but maybe we should rather change the behaviour of g_vfs_backend_setup_afc_for_app()...
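Under the TRUE-means-retry convention described here, the intended call shape would be roughly as follows (hypothetical names and prototypes, not the patch itself):

#include <glib.h>

typedef struct _Backend Backend;

/* Hypothetical prototypes mirroring the convention under discussion. */
gboolean setup_afc_for_app (Backend *self, const char *app);  /* TRUE = worth retrying */
void     gc_house_arrest   (Backend *self, const char *app);

static void
ensure_app_connected (Backend *self, const char *app)
{
  if (setup_afc_for_app (self, app))   /* first attempt ran out of clients */
    {
      gc_house_arrest (self, app);     /* disconnect an idle client */
      setup_afc_for_app (self, app);   /* second and final attempt */
    }
}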
(In reply to Ondrej Holy from comment #22)
> Shouldn't we rather return something like G_IO_ERROR_FAILED saying "Ran out
> of HouseArrest clients"?

I don't know of a way to check whether the job has already failed, and I can't think of an API that would fit this use. Overall, I prefer the calls to fail as if the files weren't there. Note that it shouldn't happen unless we get into corner cases.
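A hypothetical reduction of that failure mode (AppEntry and app_path_exists() are made up for illustration): a NULL afc_cli simply makes the path look nonexistent to callers.

#include <glib.h>
#include <libimobiledevice/afc.h>

typedef struct {
  afc_client_t afc_cli;   /* stays NULL when no HouseArrest client could be obtained */
} AppEntry;

/* Returns FALSE when the app has no usable client, so callers behave as
 * if the path did not exist (e.g. "Can't open directory"). */
static gboolean
app_path_exists (GHashTable *apps, const char *app)
{
  AppEntry *entry = g_hash_table_lookup (apps, app);

  return entry != NULL && entry->afc_cli != NULL;
}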
Created attachment 329444 [details] [review]
afc: Limit the number of HouseArrest services we start

To avoid being unable to navigate within applications because we ran out of HouseArrest services, keep track of file operations within each application, and disconnect from unused ones when trying to access an unconnected one. The tracking only covers file operations, and not enumerations, and the garbage collection algorithm is quite naive, but could easily be extended.
Created attachment 329445 [details] [review]
afc: Limit the number of HouseArrest services we start

To avoid being unable to navigate within applications because we ran out of HouseArrest services, keep track of file operations within each application, and disconnect from unused ones when trying to access an unconnected one. The tracking only covers file operations, and not enumerations, and the garbage collection algorithm is quite naive, but could easily be extended.
Created attachment 329446 [details] [review]
afc: Indicate whether to retry to setup HouseArrest for an app

If setting up the HouseArrest service for an app fails, we should clean up unused services, and try again. This tells us whether we should try again in the first place.
Attachment 324613 [details] pushed as 50d7671 - afc: Better debug error message again
Attachment 324614 [details] pushed as 6fe89eb - afc: Lock apps mutex when setting up HouseArrest
Attachment 329445 [details] pushed as 0404528 - afc: Limit the number of HouseArrest services we start
Attachment 329446 [details] pushed as 0420332 - afc: Indicate whether to retry to setup HouseArrest for an app
(In reply to Bastien Nocera from comment #24)
> I don't know of a way to check whether the job has already failed, and I
> can't think of an API that would fit this use. Overall, I prefer the calls
> to fail as if the files weren't there. Note that it shouldn't happen unless
> we get into corner cases.

OK, it was just an idea...