Is there a limitation of users, calendars, file size or parallel server access? #276
Comments
+1 |
Radicale can only handle one connection at a time. Trying to work around this by using e.g. gunicorn will result in data races and data loss. Given that, I don't think there would be much of a difference between filesystem and sqlite storage. I'd recommend calendarserver (Apple) or Baikal. Both of them handle your use case pretty well, with Baikal being relatively easy to set up (although Radicale is still on top of that game) and calendarserver being pretty fast. |
Radicale seems to support storing data in database, it could be the solution, don't you think? |
The database is the second-most stable storage type; multifilesystem is utterly broken. In any case, you're not going to experience the performance speedups you're probably hoping for (see above). |
As said before, Radicale doesn't work with parallel user access. Using a database storage isn't enough. But using Radicale for 15 employees is definitely OK. I currently use it in production for about 3000 requests a day with no problem (behind uwsgi+nginx, file storage). Someone has recently made a couple of changes to fix the main bottlenecks he had found for his use, and now serves "~5000 contacts to ~5000 contacts" (#270). |
Serving with Gunicorn makes it possible for Radicale to have race conditions (due to the removal of single-threadedness) and makes data corruption possible (two requests writing to the same calendar at the same time). |
Concretely, run this test script against a Radicale server: https://gist.github.com/untitaker/95b3a7d5676c6e92b496 Test it once with:
and once with:
and a
You'll see that a lot of items go missing. |
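Since the exact commands did not survive the copy above, here is a minimal sketch of that kind of concurrency test (an illustration, not the linked gist itself): it PUTs many events into one calendar in parallel and then counts how many survived. The base URL, credentials and collection path are assumptions to adjust for your setup.

# A minimal sketch of this kind of concurrency test (not the linked gist):
# PUT many events into one calendar in parallel, then count how many survived.
# BASE, AUTH and the collection path are assumptions -- adjust to your setup.
import concurrent.futures
import uuid

import requests

BASE = "http://localhost:5232/user/calendar.ics/"  # assumed collection URL
AUTH = ("user", "password")                        # assumed credentials
COUNT = 100

def put_event(i):
    uid = uuid.uuid4().hex
    ics = ("BEGIN:VCALENDAR\r\nVERSION:2.0\r\n"
           "BEGIN:VEVENT\r\n"
           "UID:{uid}\r\n"
           "DTSTART:20160101T120000Z\r\n"
           "SUMMARY:event {i}\r\n"
           "END:VEVENT\r\nEND:VCALENDAR\r\n").format(uid=uid, i=i)
    requests.put(BASE + uid + ".ics", data=ics, auth=AUTH,
                 headers={"Content-Type": "text/calendar"})

with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    list(pool.map(put_event, range(COUNT)))

# Download the whole collection and count the events that actually made it in.
body = requests.get(BASE, auth=AUTH).text
print("stored {} of {} events".format(body.count("BEGIN:VEVENT"), COUNT))

Run it once against Radicale's built-in single-process server and once against a multi-worker gunicorn or uwsgi setup; in the latter case the printed count will typically be lower than COUNT, which is the data loss described above.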
@untitaker I guess using the SQL backend would avoid that race condition - though it's still labeled as not-ready-for-production. |
@hobarrera no, it won't. For example, Radicale doesn't lock the storage between reading and writing during a single PUT request, so data corruption can always happen if another process changes the collection between reading and writing. And that's just one simple example. |
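To make that read-modify-write window concrete, here is a toy sketch of the pattern (an illustration only, not Radicale's actual code; the path is a throwaway assumption): every worker reads the whole collection, appends its item in memory, and writes everything back without holding a lock.

# Toy illustration of the lost-update problem (not Radicale's code).
import threading

PATH = "/tmp/toy-collection.txt"  # assumed scratch file

with open(PATH, "w") as f:
    f.write("")                   # start with an empty "collection"

def unsafe_put(item):
    with open(PATH) as f:         # 1. read the current collection
        data = f.read()
    data += item + "\n"           # 2. add the new item in memory
    with open(PATH, "w") as f:    # 3. write the whole collection back;
        f.write(data)             #    anything written between 1 and 3 by
                                  #    another worker is silently overwritten

threads = [threading.Thread(target=unsafe_put, args=("item-%d" % i,))
           for i in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

with open(PATH) as f:
    print("%d of 50 items survived" % len(f.read().splitlines()))

Depending on timing, fewer than 50 items survive, because steps 1 and 3 of different workers interleave; that is exactly the window a per-collection lock would close.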
I'm not sure if sqlite already does that automatically. |
Ah, no, the test script fails just as well. |
Hello, if I understand this discussion correctly: whatever storage (FS or DB) is used, Radicale can't handle more than one connection at a time, and doing so can result in data loss. As this is known by the project, is there any plan to change this situation and have a multi-user cal/card-dav server, or is this an 'assumed' position with no desire to enhance it? |
@daks you're right, Radicale is not able to handle concurrent requests, and the storage is not the only problem to solve. The 1.0.x branch won't change anything about that, but that's a main goal of the future 2.0 version. You can check the dream features here: http://librelist.com/browser//radicale/2015/8/21/radicale-1-0-is-coming-what-s-next/ |
I know that this is on the roadmap for 2.0, but I'm still running 1.1.1; I want to move it behind nginx+uwsgi because of the hanging issue (and better authentication). Is there anything I need to do to ensure I don't run into data loss through race conditions? I'm unfamiliar with uwsgi and at least @liZe seems to have it configured well. Quite frankly, could you maybe just share your uwsgi (and if relevant, nginx) configuration for this setup? I'm unable to find decent documentation on the matter. |
@MacGyverNL Here is one of my configurations for Radicale 2.0, it's almost the same for 1.x:

uWSGI configuration

[uwsgi]
chdir = /usr/share/radicale/
socket = /tmp/%n.socket
wsgi-file = radicale.wsgi
callable = application
plugins = python35
processes = 1

nginx configuration

...
server {
    listen 127.0.0.1:5232;
    server_name localhost;
    access_log /var/log/nginx/radicale.access_log main;
    error_log /var/log/nginx/radicale.error_log info;

    location / {
        uwsgi_pass unix:///tmp/radicale.socket;
        include uwsgi_params;
    }
}
... |
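For completeness, the wsgi-file = radicale.wsgi referenced above is a small Python module exposing the application callable that uWSGI loads. A sketch of what it might contain for the 1.x series follows (an assumption; prefer the radicale.wsgi shipped with your package if one is provided):

# Sketch of a radicale.wsgi for the 1.x series (an assumption -- the file
# shipped with your distribution may differ).
import radicale

radicale.log.start()                  # start Radicale's own logging
application = radicale.Application()  # the WSGI callable named by `callable = application`

uWSGI is then pointed at the ini file, for example with uwsgi --ini radicale.ini (the file name is an example), and nginx talks to it over the unix socket as shown.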
Thanks a lot! Looks like uwsgi may have already been defaulting to processes = 1 but I just set it explicitly now :) |
I don't know, that's my Radicale 2.0 config and I have |
Parallel access works now. The file system storage uses locking and all operations are atomic. |
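For readers wondering what "locking and all operations are atomic" amounts to on a filesystem storage, here is a sketch of the general pattern (an illustration of the technique, not Radicale's actual implementation): writers take an exclusive lock, and files are replaced via an atomic rename so readers only ever see a complete old or new file.

# Sketch of the lock-then-atomic-rename pattern (an illustration, not
# Radicale's code). Unix-only because of fcntl.
import fcntl
import os
import tempfile

def atomic_write(path, data, lock_path):
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)          # one writer at a time
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
        try:
            with os.fdopen(fd, "w") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())              # make sure the bytes are on disk
            os.replace(tmp, path)                 # atomic swap: old or new, never half-written
        finally:
            if os.path.exists(tmp):               # clean up if the replace never happened
                os.unlink(tmp)
    # the flock is released when `lock` is closed

atomic_write("/tmp/collection.ics",
             "BEGIN:VCALENDAR\r\nEND:VCALENDAR\r\n",
             "/tmp/collection.lock")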
Hi,
I have been running a Radicale server since version 0.7 on my Linux server for me and my wife. I use Radicale with nginx and files as the storage medium.
Now I am thinking about deploying Radicale on the Linux server for a small company with 15 employees. Does anybody have experience with that many users? Are there any limitations on users, calendars, file size or parallel access?
concerns
My concern is that too much parallel user access will kill the Radicale server or mess up the collection files. Can somebody share their experience?
switch to sqlite or mysql
I have already managed to run Radicale with SQLite and SQLAlchemy. But from reading the table structure, this does not really look like a good alternative to the proven file storage. Is it possible to use Radicale with file-based storage and more than 10 employees in parallel?
Best regards
Christoph