
Device or resource busy #48

Closed
brihuega opened this issue Jan 30, 2014 · 14 comments

@brihuega

First, thanks for this great software! Very useful.

I have an issue when copying large numbers of files to the drive (with commands such as "cp -R"). It copies some dozens of files and then starts returning "device or resource busy" errors ("Dispositivo o recurso ocupado" in my locale).
If I interrupt the command, every access to the drive gives the same error. I have to unmount the drive and mount it again to regain control.

I'm running:
google-drive-ocamlfuse, version 0.5.2
Ubuntu Server 12.04.3 LTS with: Linux 3.8.0-29-generic #42~precise1-Ubuntu SMP x86_64 x86_64 x86_64 GNU/Linux

Thanks!

@astrada

astrada commented Jan 31, 2014

You should try to reproduce the problem while running the program in debug mode:

$ google-drive-ocamlfuse -debug [mountpoint]

Then check the log files gdfuse.log and curl.log (you will find them in ~/.gdfuse/default) and look for 400/500 errors returned by Google Drive.
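A quick way to spot those responses, sketched below. The 'header in: HTTP/…' format is an assumption based on the curl.log excerpts later in this thread, and the sample line lets the snippet run anywhere; against a real log, point the grep at ~/.gdfuse/default/curl.log (or ~/.gdfuse/&lt;label&gt;/curl.log):

```shell
# Sketch: locate 4xx and 5xx responses in curl.log, demonstrated on a
# sample line; in practice run the grep directly against
# ~/.gdfuse/default/curl.log (or ~/.gdfuse/<label>/curl.log).
LOG=$(mktemp)
printf '%s\n' '[4039.270979] curl: header in: HTTP/1.1 500 Internal Server Error' > "$LOG"
grep -nE 'header in: HTTP/1\.[01] [45][0-9]{2} ' "$LOG"
# -> 1:[4039.270979] curl: header in: HTTP/1.1 500 Internal Server Error
```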

@brihuega

brihuega commented Feb 5, 2014

I couldn't reproduce the problem exactly in the debug session. While copying a large number of files it threw a couple of "resource busy" errors, and then the process stopped with the message:

Error: cannot close sqlite db.
Please restart the program

At the same time, the "cp" command issues a lot of messages (one for every file left to copy):

Cannot 'stat': el otro extremo de la conexión no esta conectado. (That is, "Transport endpoint is not connected" in my locale.)

The logs show "500 Internal Server Error" responses where the "resource busy" errors occurred:

----gdfuse.log---
Loading resource /Mis imágenes/100504 (plantar un arbol - caleta)
(trashed=false) from db...found
Getting resource 100504 020.jpg (in folder 0BwglN0eAuhZ_THdFelo2OGNGTVE)
from server...ServiceError
Exception:Failure("{"error":{"errors":[{"domain":"global","reason":"internalError","message":"Internal Error"}],"code":500,"message":"Internal Error"}}")
Backtrace:

[4039.294670] TID=0: getattr /Mis imágenes

----curl.log-----
[4033.976285] curl: info: About to connect() to www.googleapis.com port 443
(#0)
[4033.976315] curl: info: Trying 173.194.78.95...
[4034.012024] curl: info: connected
[4034.062611] curl: info: found 153 certificates in
/etc/ssl/certs/ca-certificates.crt
[4034.228901] curl: info: server certificate verification OK
[4034.229061] curl: info: common name: *.googleapis.com (matched)
[4034.229073] curl: info: server certificate expiration date OK
[4034.229082] curl: info: server certificate activation date OK
[4034.229097] curl: info: certificate public key: RSA
[4034.229107] curl: info: certificate version: #3
[4034.229179] curl: info: subject: C=US,ST=California,L=Mountain View,O=Google Inc,CN=*.googleapis.com
[4034.229194] curl: info: start date: Wed, 15 Jan 2014 14:37:40 GMT
[4034.229205] curl: info: expire date: Thu, 15 May 2014 00:00:00 GMT
[4034.229243] curl: info: issuer: C=US,O=Google Inc,CN=Google
Internet Authority G2
[4034.229261] curl: info: compression: NULL
[4034.229271] curl: info: cipher: ARCFOUR-128
[4034.229279] curl: info: MAC: SHA1
[4034.229368] curl: header out: GET /drive/v2/files?fields=items%28alternateLink%2CcreatedDate%2CdownloadUrl%2Ceditable%2Cetag%2CexplicitlyTrashed%2CexportLinks%2CfileExtension%2CfileSize%2Cid%2Clabels%2ClastViewedByMeDate%2Cmd5Checksum%2CmimeType%2CmodifiedDate%2Cparents%2Ctitle%29%2CnextPageToken&maxResults=1&q=title+%3D+%27100504+020.jpg%27+and+%270BwglN0eAuhZ_THdFelo2OGNGTVE%27+in+parents+and+trashed+%3D+false HTTP/1.1
User-Agent: google-drive-ocamlfuse (0.5.2) gapi-ocaml/0.2.1/Unix
Host: www.googleapis.com
Accept: */*
Accept-Encoding: identity
Authorization: Bearer
ya29.1.AADtN_Wz-w4X8Acsn7ASfhRAdxZtK0a6qtj1RKZCYyZlfAUwMezZbyYS9NnanA

[4039.270979] curl: header in: HTTP/1.1 500 Internal Server Error
[4039.271031] curl: header in: Content-Type: application/json; charset=UTF-8
[4039.271049] curl: header in: Date: Fri, 31 Jan 2014 10:25:32 GMT
[4039.271071] curl: header in: Expires: Fri, 31 Jan 2014 10:25:32 GMT
[4039.271080] curl: header in: Cache-Control: private, max-age=0
[4039.271089] curl: header in: X-Content-Type-Options: nosniff
[4039.271098] curl: header in: X-Frame-Options: SAMEORIGIN
[4039.271106] curl: header in: X-XSS-Protection: 1; mode=block
[4039.271114] curl: header in: Server: GSE
[4039.271123] curl: header in: Alternate-Protocol: 443:quic
[4039.271132] curl: header in: Transfer-Encoding: chunked
[4039.271141] curl: header in:
[4039.271150] curl: data in: b4
{
"error": {
"errors": [
{
"domain": "global",
"reason": "internalError",
"message": "Internal Error"
}
],
"code": 500,
"message": "Internal Error"
}
}
[4039.271170] curl: info: Connection #0 to host www.googleapis.com left
intact
[4039.271234] curl: info: Closing connection #0


On the final lines of gdfuse.log:

Thread id=0: Error: cannot close db
[10660.184127] TID=0: Exiting.
CURL cleanup...done

Clearing context...done


@piccaso

piccaso commented Feb 6, 2014

I guess this is related...

I wanted to make a backup of my files stored in gdrive - so it's the opposite direction.
First I tried tar, but tar complained about files changing while being read.
So I tried rsync, but it did not copy a single file in 5 minutes.
Then I used plain old cp, which worked in the beginning.
Later FUSE complained about a disconnected endpoint, and it did not stop even after remounting.
So I turned on the -verbose log and saw some authentication errors.
After that I deleted the $HOME/.gdfuse/ folder and started from scratch.
(I did not keep the log file, but I will get back to this point later.)

I mounted the folder with:
google-drive-ocamlfuse -m -cc -verbose -label
and started cp with:
cp -avr
After 584 files were copied I got 'Transport endpoint is not connected'; here is a tail from the log:

[1420.815159] TID=9735: read <somefilename> buf 655360 0
Getting metadata from context...valid
Loading resource <somefilename> (trashed=false) from db...found
[1420.815371] TID=9736: read <somefilename> buf 786432 0
Getting metadata from context...valid
Loading resource <somefilename> (trashed=false) from db...found
Thread id=9734: Error: cannot close db
[1425.810249] TID=9734: Exiting.
CURL cleanup...done
Clearing context...done

I remounted with:
google-drive-ocamlfuse -verbose -label
and started the copy again:
cp -avru
A couple of minutes later I got 'endpoint is not connected' again, and the log tail looks like this:

Removing file (/root/.gdfuse/<label>/cache/0B9ulBR3Y_MjrUTd2Zs0hqXA2RVE: resource 584) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0B9ulBR3Y_MjrM2xLadjRG012VkU: resource 585) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0B9ulBR3Y_MjrRHhodVdffUtMQ1U: resource 586) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0B0O_abtEZsTVVnhOWXRjYsFFSGs: resource 587) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0B0O_abtEZsTVSE4wWsstS0prWHc: resource 588) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0B0O_abtEZsTVM3Nwbk5xxWlQc28: resource 589) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0B0O_abtEZsTVa3A3aUZLRaryYWs: resource 590) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0B2weBPlqIflIVUxYMzlKRVgasdE: resource 630) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0AtulBR3Y_MjrdEo3zZVQTM1OGadxUmVNQ2w1RFFwREE: resource 633) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0AsEO6-ojjER0dFhP0cHBqSa1aahESUdXQTluX3VOeEE: resource 634) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0AsEO6-ojjER0dDFQvdkVCNaaajhCRjBNNEljdm1CdFE: resource 635) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0AtulBR3Y_MjrdHNwW1remhKabzVlTmhUOXZlbEc0RFE: resource 636) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0AsEO6-ojjER0dE0jVsV1pXbXaka3azJLc1A5SFhwdlE: resource 637) from cache...done
Removing file (/root/.gdfuse/<label>/cache/1Xgp-iLqZX4Lg4dd8sx_xNI1uo2pBEUSMGSehq7AClDc: resource 638) from cache...done
Removing file (/root/.gdfuse/<label>/cache/1FPjpObOz6Z31-tAkdVmkfS6EaC5xrdpJh8pJ8vBoT7o: resource 640) from cache...done
Removing file (/root/.gdfuse/<label>/cache/1fLhnb1gIRGkLFWXfydEgMl9DDk3v4OZ1km86NEbflNo: resource 641) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0B9ulBR3Y_MjrRUd1QldEY2dFMDA: resource 644) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0ArlfH8xmiYlddHFWSXZdbFpOV1dVUkVOYnBlVnNvNnc: resource 645) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0ArlfH8xmiYlddFpicGNndVVPRjlOcXZfVzVQUUFkR3c: resource 646) from cache...done
Removing file (/root/.gdfuse/<label>/cache/1hwKFodpHIDntUWKCd3kcqdCV5uRninsegyIStHSdv6o: resource 649) from cache...done
Removing file (/root/.gdfuse/<label>/cache/0B9ulBR3Y_MjrLWVIUWgyNnFqNlk: resource 672) from cache...done
Refreshing access token...fail (error_code=Exception)
Error refreshing access token (try=0):
Exception:GaeProxy.ServerError("error_code Exception")
Backtrace:
fail (error_code=Exception)
Error refreshing access token (try=1):
Exception:GaeProxy.ServerError("error_code Exception")
Backtrace:
fail (error_code=Exception)
Error refreshing access token (try=2):
Exception:GaeProxy.ServerError("error_code Exception")
Backtrace:
fail (error_code=Exception)
Error refreshing access token (try=3):
Exception:GaeProxy.ServerError("error_code Exception")
Backtrace:
fail (error_code=Exception)
Error refreshing access token (try=4):
Exception:GaeProxy.ServerError("error_code Exception")
Backtrace:
fail (error_code=Exception)
Error refreshing access token (try=5):
Exception:GaeProxy.ServerError("error_code Exception")
Backtrace:
[373.701048] TID=0: Exiting.
CURL cleanup...done
Clearing context...done

After remounting I get similar output, even with -cc.

I also noticed some files were corrupted; every time, I found this in the logs:

read <some-filename>not valid

I'm using google-drive-ocamlfuse version 0.5.3 on Ubuntu 13.10 'headless'.

So...

  • Is there a better way to recover the mount in that state?
  • If I delete the folder I need to authenticate again, and that can't be automated (right?)
  • What would you recommend for copying large amounts of data off the drive? Are there alternatives to cp which could work better?
  • Could you please add a parsable log entry for possibly corrupted files?
  • Any chance of getting more detail out of 'Error: cannot close db'?
  • And of course, any idea how to fix this?

Thanks!
Flo

@piccaso

piccaso commented Feb 7, 2014

Setting 'sqlite3_busy_timeout=5000' helped, but when it was time to refresh the access token, things went wrong again.

Removing file (/root/.gdfuse/laserbox/cache/0B9ulBR3Y_MjrY085N3dqc29aMGs: resource 1752) from cache...done
Refreshing access token...fail (error_code=Exception)
Error refreshing access token (try=0):
Exception:GaeProxy.ServerError("error_code Exception")
Backtrace:
fail (error_code=Exception)
Error refreshing access token (try=1):
Exception:GaeProxy.ServerError("error_code Exception")
Backtrace:
fail (error_code=Exception)
Error refreshing access token (try=2):
Exception:GaeProxy.ServerError("error_code Exception")
Backtrace:
fail (error_code=Exception)
Error refreshing access token (try=3):
Exception:GaeProxy.ServerError("error_code Exception")
Backtrace:
fail (error_code=Exception)
Error refreshing access token (try=4):
Exception:GaeProxy.ServerError("error_code Exception")
Backtrace:
fail (error_code=Exception)
Error refreshing access token (try=5):
Exception:GaeProxy.ServerError("error_code Exception")
Backtrace:
[3120.002309] TID=22027: Exiting.
CURL cleanup...done
Clearing context...done
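As an aside, for anyone wanting to try the same busy-timeout tweak: the option goes in the per-label config file (a sketch; the label-less `default` path is an assumption, and the value is the busy timeout in milliseconds):

```ini
# ~/.gdfuse/default/config (fragment); use ~/.gdfuse/<label>/config
# when mounting with -label
sqlite3_busy_timeout=5000
```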

My test system is a pretty old computer and is under full load while doing this; maybe this is some kind of race condition... I'll try again without the -m mount option.

Cheers
Flo

@astrada

astrada commented Feb 7, 2014

@piccaso: thanks for your feedback. There were some problems with the Drive backend on Feb 6. Maybe they are related to your issues.

Is there a better way to recover the mount in that state?

No, I've never experienced errors during token refresh. But I never use the -m option, because the FUSE binding I'm using is too unstable with that option turned on.

If I delete the folder I need to authenticate again, and that can't be automated (right?)

Yes, that's the way OAuth works.

What would you recommend for copying large amounts of data off the drive? Are there alternatives to cp which could work better?

No, I've experienced some of your issues when uploading a large number of files, but downloading usually works with both cp and rsync.

Could you please add a parsable log entry for possibly corrupted files?

OK, I'm putting that on my TODO list.

Any chance of getting more detail out of 'Error: cannot close db'? And of course, any idea how to fix this?

No, close_db just returns a boolean. And after a failure, I wasn't able to reopen the DB, so I decided to make the program exit.

My test system is a pretty old computer and is under full load while doing this; maybe this is some kind of race condition... I'll try again without the -m mount option.

I'm thinking about removing that option, because it usually causes more problems than it solves.

@brihuega: thanks for your feedback too.

I will try to improve stability by retrying requests on every error, and try to implement a better error-handling mechanism. But it's not easy to get right.

@piccaso

piccaso commented Feb 8, 2014

Thanks for your response @astrada!

I don't think it's related to the Feb 6 issue; I still have the same problems (even without -m).
Here is my workaround for now:

Large timeouts and cache - just clear the cache on demand.
In my case that's no problem, because the content does not change atm.
metadata_cache_time=6000
sqlite3_busy_timeout=50000
max_cache_size_mb=5120

Using rsync with -W (no delta operations - copy the Whole file) speeds it up and handles corrupted files better than cp.

When the connection fails I just umount and delete the state file - so the cache is not lost - redo the authentication part, mount, and then restart rsync.
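The cache-preserving recovery above can be sketched like this, on a mock ~/.gdfuse tree so it runs anywhere. The state filename `state`, the label `default`, and the blob name are all assumptions; check your own ~/.gdfuse/&lt;label&gt;/ layout before deleting anything:

```shell
# Sketch of the recovery: unmount, delete only the auth state, keep the
# cache, re-authenticate, remount, resume the copy. The filenames here
# are assumptions; verify them against your own ~/.gdfuse/<label>/ tree.
GDFUSE="$(mktemp -d)/gdfuse/default"
mkdir -p "$GDFUSE/cache"
touch "$GDFUSE/config" "$GDFUSE/state" "$GDFUSE/cache/0B_example_blob"

# fusermount -u <mountpoint>            # 1. unmount (not run in this mock)
rm "$GDFUSE/state"                      # 2. drop tokens; keep cache + config
# google-drive-ocamlfuse <mountpoint>   # 3. re-authenticate and remount
# rsync -avW <mountpoint>/ backup/      # 4. resume the copy

ls "$GDFUSE"                            # lists: cache, config (state is gone)
```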

Too bad I can't wrap my head around OCaml, but it looks like I can only code in languages that have curly braces :/

@brihuega

brihuega commented Feb 8, 2014

Thanks for your efforts making this great software. I will give you more feedback if I find anything.


@astrada astrada added the bug label Feb 10, 2014
@brihuega

I did another test with similar results, but I got another message repeated several times:

Warning: Unexpected leaf: name=messsage data_type=Scalar in GapiService.RequestError.parse

@astrada

astrada commented Feb 15, 2014

Is it "messsage" with 3 s's? If so, then it's a bug on Google's side and my application cannot parse the error.

@brihuega

No, I copied it by hand and misspelled it.

@ghost

ghost commented May 19, 2014

I'm seeing the same errors when issuing commands like git init in a directory, or when trying to cp/mv a directory into gdrive...

PS: what's the right way to stop/unmount a running ocamlfuse mount?

@astrada

astrada commented May 20, 2014

If fusermount -u ... does not work, you should kill the process (with -9 if it is in uninterruptible sleep). This should be enough to force an unmount.
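A minimal sketch of that sequence. Since no FUSE mount can be set up here, a stand-in `sleep` process takes the place of the google-drive-ocamlfuse pid, and the mountpoint name is illustrative:

```shell
# Sketch of the fallback: with a real mount you would first try
#   fusermount -u <mountpoint>
# and only then kill the process. A 'sleep' stands in for the FUSE pid.
sleep 300 &
pid=$!                                       # stand-in for the FUSE process
kill "$pid" 2>/dev/null || kill -9 "$pid"    # SIGTERM first, SIGKILL if stuck
wait "$pid" 2>/dev/null || true              # reap it
if kill -0 "$pid" 2>/dev/null; then
    echo "still running"
else
    echo "process gone"
fi
# -> process gone
```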

@ghost

ghost commented May 20, 2014

Thank you! The errors I reported seem to have been solved by using the -o big_writes option.

@astrada astrada added this to the 0.5.4 milestone Aug 27, 2014
@astrada

astrada commented Aug 27, 2014

The issue that @piccaso ran into was solved by fixing issue #80 (client_id and client_secret were not saved in the configuration file, so refreshing the token was not possible).

@astrada astrada closed this as completed Aug 27, 2014
@brancomat brancomat mentioned this issue Feb 26, 2020