
Is there a limitation of users, calendars, file size or parallel server access? #276

Closed
christophdb opened this issue Mar 26, 2015 · 18 comments

@christophdb

Hi,

I have been running the Radicale server since version 0.7 on my Linux server for me and my wife. I use Radicale with nginx and files as the storage medium.
Now I am thinking about deploying Radicale on a Linux server for a small company with 15 employees. Does anybody have experience with that many users? Are there any limitations on the number of users or calendars, file size, or parallel access?

  1. Concerns
    My concern is that too many parallel user accesses will kill the Radicale server or mess up the collection files. Can somebody share his/her experience?

  2. Switch to SQLite or MySQL
    I have already managed to run Radicale with SQLite and SQLAlchemy, but judging from the table structure, this is not really a good alternative to the proven file storage system. Is it possible to use Radicale with file-based storage and more than 10 employees in parallel?

Best regards
Christoph

@Jaikant

Jaikant commented May 3, 2015

+1

@untitaker
Contributor

Radicale can only handle one connection at a time. Trying to work around this by using e.g. Gunicorn will result in data races and data loss.

Given that, I don't think there would be much of a difference between filesystem and SQLite storage. I'd recommend calendarserver (Apple) or Baikal. Both of them handle your use case pretty well, with Baikal being relatively easy to set up (although Radicale is still on top of that game) and calendarserver being pretty fast.

@daks

daks commented Jul 2, 2015

Radicale seems to support storing data in a database; that could be the solution, don't you think?
(I use it with file storage at home too, so I have never tested it.)

@untitaker
Contributor

untitaker commented Jul 2, 2015 via email

@liZe
Member

liZe commented Aug 21, 2015

As said before, Radicale doesn't work with parallel user access, and switching to database storage isn't enough to fix that.

But using Radicale for 15 employees is definitely OK. I currently use it in production for about 3000 requests a day with no problem (behind uwsgi+nginx, file storage). Someone has recently made a couple of changes to fix the main bottlenecks he had found for his use, and now serves "~5000 contacts to ~5000 contacts" (#270).

@untitaker
Contributor

Serving with Gunicorn makes it possible for Radicale to have race conditions (due to the removal of single-threadedness) and makes data corruption possible (two requests writing to the same calendar at the same time).

@untitaker
Contributor

Concretely, run this test script against a radicale server: https://gist.github.com/untitaker/95b3a7d5676c6e92b496

Test it once with:

python radicale.py

and once with:

gunicorn -w 8 -k sync -b 127.0.0.1:5232 gunicorn_app:app

and a gunicorn_app.py:

# WSGI entry point: Gunicorn imports this module and calls `app` for each request
from radicale import Application
app = Application()

You'll see that a lot of items go missing.
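
A minimal sketch along the same lines (not the gist itself), assuming a Radicale instance on 127.0.0.1:5232 with authentication disabled and a hypothetical collection at /test/calendar.ics/, could look roughly like this:

# Rough sketch only, not the linked gist: fire many PUTs at one collection in
# parallel, then check how many of the created items the server can still find.
# The base URL and collection path are assumptions -- adjust them to your setup.
from concurrent.futures import ThreadPoolExecutor

import requests

BASE = "http://127.0.0.1:5232/test/calendar.ics/"

EVENT = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:{uid}
DTSTART:20150101T120000Z
DTEND:20150101T130000Z
SUMMARY:{uid}
END:VEVENT
END:VCALENDAR
"""

def put_item(i):
    uid = "race-test-{}".format(i)
    url = BASE + uid + ".ics"
    requests.put(url, data=EVENT.format(uid=uid),
                 headers={"Content-Type": "text/calendar"})
    return url

with ThreadPoolExecutor(max_workers=20) as pool:
    urls = list(pool.map(put_item, range(200)))

missing = sum(1 for url in urls if requests.get(url).status_code == 404)
print("{} of {} items went missing".format(missing, len(urls)))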

@WhyNotHugo

@untitaker I guess using the SQL backend would avoid that race condition - though it's still labeled as not-ready-for-production.

@liZe
Member

liZe commented Aug 21, 2015

@hobarrera no, it won't.

For example, Radicale doesn't lock the storage between reading and writing during a single PUT request, so data corruption can always happen if another process changes the collection between the read and the write. And that's just one simple example.
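
To illustrate the general pattern (this is not Radicale's storage code, just a bare-bones demo of an unlocked read-modify-write), two processes that each read a shared file, add their own entry and write everything back will regularly lose one of the entries:

# Generic lost-update demo: without a lock around the read-modify-write,
# one of the two entries usually vanishes. The file path is arbitrary.
import multiprocessing
import time

PATH = "/tmp/collection-demo.txt"

def add_entry(name):
    data = open(PATH).read()      # read the whole "collection"
    time.sleep(0.1)               # widen the race window
    with open(PATH, "w") as f:    # write back what we saw, plus our own entry
        f.write(data + name + "\n")

if __name__ == "__main__":
    open(PATH, "w").close()
    workers = [multiprocessing.Process(target=add_entry, args=(n,))
               for n in ("alice.ics", "bob.ics")]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(open(PATH).read())      # usually only one of the two names survives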

@untitaker
Contributor

I'm not sure if sqlite already does that automatically.

@untitaker
Contributor

Ah, no, the test script fails just as well.

@daks

daks commented Oct 12, 2015

Hello,

If I understand this discussion correctly: whatever storage is used (FS or DB), Radicale can't handle more than one connection at a time, and trying to do so can result in data loss.

As this is known by the project, is there any plan to change this situation and make Radicale a multi-user CalDAV/CardDAV server, or is this an 'assumed' position with no desire to enhance it?

@liZe
Member

liZe commented Oct 12, 2015

@daks you're right, Radicale is not able to handle concurrent requests, and the storage is not the only problem to solve.

The 1.0.x branch won't change anything about that, but that's a main goal of the future 2.0 version. You can check the dream features here: http://librelist.com/browser//radicale/2015/8/21/radicale-1-0-is-coming-what-s-next/

@MacGyverNL

I know that this is on the roadmap for 2.0, but I'm still running 1.1.1; I want to move it behind nginx+uwsgi because of the hanging issue (and better authentication). Is there anything I need to do to ensure I don't run into data loss through race conditions? I'm unfamiliar with uwsgi and at least @liZe seems to have it configured well.

Quite frankly, could you maybe just share your uwsgi (and if relevant, nginx) configuration for this setup? I'm unable to find decent documentation on the matter.

@liZe
Member

liZe commented Jan 26, 2017

@MacGyverNL Here is one of my configurations for Radicale 2.0, it's almost the same for 1.x:

uWSGI configuration

[uwsgi]
chdir = /usr/share/radicale/
socket = /tmp/%n.socket
wsgi-file = radicale.wsgi
callable = application
plugins = python35
# a single worker keeps requests strictly serialized
processes = 1

nginx configuration

...
        server {
                listen 127.0.0.1:5232;
                server_name localhost;
                access_log /var/log/nginx/radicale.access_log main;
                error_log /var/log/nginx/radicale.error_log info;
                location / {
                        uwsgi_pass unix:///tmp/radicale.socket;
                        include uwsgi_params;
                }
        }
...

@MacGyverNL

Thanks a lot! Looks like uwsgi may have already been defaulting to processes = 1 but I just set it explicitly now :)

@liZe
Member

liZe commented Jan 26, 2017

Looks like uwsgi may have already been defaulting to processes = 1

I don't know, that's my Radicale 2.0 config and I have processes = 4, I've just replaced 4 by 1 here 😉.

liZe modified the milestones: 2.0.0, 2.0.x on May 27, 2017
@Unrud
Collaborator

Unrud commented Jun 9, 2017

Parallel access works now. The file system storage uses locking and all operations are atomic.
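
A sketch of the general idea only (not Radicale's actual storage module): holding an exclusive lock for the whole read-modify-write serializes concurrent writers, so the lost-update scenario sketched earlier can no longer occur.

# Illustration of the locking idea, not Radicale's implementation.
# fcntl.flock is Unix-only; the lock is released when the file is closed.
import fcntl

def add_entry_locked(path, name):
    # "a+" creates the file if needed; on POSIX, writes always go to the end
    with open(path, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # block until we hold the lock exclusively
        f.seek(0)
        data = f.read()
        f.seek(0)
        f.truncate()                   # file is now empty...
        f.write(data + name + "\n")    # ...and is rebuilt while we still hold the lock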

Unrud closed this as completed Jun 9, 2017