
libcurl backend #641

Closed
wants to merge 4 commits into from

Conversation

cgwalters
Member

@cgwalters cgwalters commented Jan 4, 2017

TODO:

@cgwalters cgwalters added the WIP label Jan 4, 2017
@cgwalters cgwalters force-pushed the libcurl branch 2 times, most recently from d8a591f to 525a91a Compare January 12, 2017 15:32
@cgwalters
Member Author

OK, so things seem to be in a state useful to look at from an architecture perspective. The biggest outstanding thing is a URL parser (yes, this is very sad, leaning towards importing a copy of libsoup's).

@cgwalters
Member Author

OK, squashed and rebased. Still WIP though.

@cgwalters cgwalters changed the title from "WIP: libcurl backend" to "libcurl backend" Jan 18, 2017
@cgwalters
Member Author

Moving cleanup dependencies to #651

@cgwalters cgwalters force-pushed the libcurl branch 2 times, most recently from dbc0b7e to a9aa678 Compare January 23, 2017 22:33
@cgwalters
Member Author

Split out, cleaned up prep patches in #654

For rpm-ostree, we already link to libcurl indirectly via librepo, and
only having one HTTP library in process makes sense.

Further, libcurl is (I think) more popular in the embedded space.  It
also supports HTTP/2.0 today, which is a *very* nice to have for OSTree.

This seems to be working fairly well for me in my local testing, but it's
obviously brand new nontrivial code, so it's going to need some soak time.

The ugliest part of this is having to vendor in the soup-url code. With
Oxidation we could follow the path of Firefox and use the
[Servo URL parser](https://github.com/servo/rust-url).  Having to redo
cookie parsing also sucked, and that would also be a good oxidation target.

But that's for the future.
@cgwalters
Member Author

Rebased 🏄. Removing the WIP label. I think we can land this and just note that it's still experimental.

return FALSE;
}
}
#else
Member

Is there any practical reason not to use the agnostic version for both?

Member Author

Mmm...it's net-new code that affects the libsoup path too. But it seems fairly safe, though.

Member

It'd be a great way to have it battle-tested. :) My original intent was just to minimize conditional paths wherever possible. That said, if you'd prefer keeping them separate for now, that's fine (it would still be nice to factor them out into functions, though).

@@ -69,6 +70,52 @@ ot_remote_builtin_delete_cookie (int argc, char **argv, GCancellable *cancellabl
cookie_file = g_strdup_printf ("%s.cookies.txt", remote_name);
jar_path = g_build_filename (gs_file_get_path_cached (repo->repodir), cookie_file, NULL);

#ifdef HAVE_LIBCURL
Member

Can we put these implementations in their own functions in ot-remote-cookie-util.c?
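
For illustration only, here is a minimal sketch of what a shared helper in ot-remote-cookie-util.c could look like on the libsoup side (assuming libsoup 2.x; the function name and signature are hypothetical, not the PR's actual API):

    #include <libsoup/soup.h>

    /* Hypothetical helper: delete a matching cookie from a text-format jar.
     * A libcurl backend would provide an equivalent behind HAVE_LIBCURL. */
    static gboolean
    delete_cookie_soup (const char *jar_path,
                        const char *domain,
                        const char *path,
                        const char *name)
    {
      gboolean found = FALSE;
      SoupCookieJar *jar = soup_cookie_jar_text_new (jar_path, FALSE);
      GSList *cookies = soup_cookie_jar_all_cookies (jar);

      for (GSList *l = cookies; l != NULL; l = l->next)
        {
          SoupCookie *cookie = l->data;
          if (g_str_equal (soup_cookie_get_domain (cookie), domain) &&
              g_str_equal (soup_cookie_get_path (cookie), path) &&
              g_str_equal (soup_cookie_get_name (cookie), name))
            {
              /* Removes the cookie and syncs the backing cookies.txt file. */
              soup_cookie_jar_delete_cookie (jar, cookie);
              found = TRUE;
            }
        }

      soup_cookies_free (cookies);
      g_object_unref (jar);
      return found;
    }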

.redhat-ci.yml Outdated
--libdir=/usr/lib64
--enable-installed-tests
--enable-gtk-doc
--enable-curl
Member

This should be --with-curl, right?

Member

Output from the curl tester:

    libOSTree 2017.1
    ===============


    introspection:                                yes
    Rust (internal oxidation):                    no
    rofiles-fuse:                                 yes
    HTTP backend:                                 libsoup
    ...
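
Assuming the switch is indeed --with-curl, the .redhat-ci.yml stanza would presumably become:

    --libdir=/usr/lib64
    --enable-installed-tests
    --enable-gtk-doc
    --with-curl

with the configure summary then expected to report "HTTP backend: libcurl" instead of libsoup.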

@cgwalters
Member Author

OK, centralized the cookie 🍪 bits more.

@jlebon
Member

jlebon commented Feb 8, 2017

OK, no performance regressions to note when pulling from a remote server. However, pulling from a local httpd, I easily get Too many open files:

...<snip>...
> GET /fedora-25/releng-mirror/objects/84/b9b90978fa923ab5de229a1a07c315d7ef830a7a47804dabf1c6c3d6d6fda3.filez HTTP/1.1
Host: jlebon.usersys.redhat.com:35000
User-Agent: ostree
Accept: */*

< HTTP/1.1 200 OK
< Server: ostree-httpd libsoup/2.54.1
< Date: Wed, 08 Feb 2017 21:22:03 GMT
< Content-Length: 9645
<
* Connection #9 to host jlebon.usersys.redhat.com left intact
* Found bundle for host jlebon.usersys.redhat.com: 0x1ed3740 [can pipeline]
* Re-using existing connection! (#9) with host jlebon.usersys.redhat.com
* Connected to jlebon.usersys.redhat.com (10.15.17.98) port 35000 (#9)
> GET /fedora-25/releng-mirror/objects/02/c8b58fe2db1c0b5eda852d5ae9550ad472f2cca9f33a9c42d09fd8cbf0edae.filez HTTP/1.1
Host: jlebon.usersys.redhat.com:35000
User-Agent: ostree
Accept: */*

< HTTP/1.1 200 OK
< Server: ostree-httpd libsoup/2.54.1
< Date: Wed, 08 Feb 2017 21:22:03 GMT
< Content-Length: 5427
<
* Failed writing body (18446744073709551615 != 5427)
* Closing connection 9
error: open(O_TMPFILE): Too many open files

It looks like it's pipelining requests, which is nice. But the requests are being sent faster than we can write them to disk, so we eventually hit our open fd resource limit.

This worked for me:

diff --git a/src/libostree/ostree-repo-private.h b/src/libostree/ostree-repo-private.h
index 73e0244..48de3ba 100644
--- a/src/libostree/ostree-repo-private.h
+++ b/src/libostree/ostree-repo-private.h
@@ -35,6 +35,16 @@ G_BEGIN_DECLS
 #define _OSTREE_MAX_OUTSTANDING_FETCHER_REQUESTS 8
 #define _OSTREE_MAX_OUTSTANDING_DELTAPART_REQUESTS 2

+/* In most cases, writing to disk should be much faster than
+ * fetching from the network, so we shouldn't actually hit
+ * this. But if using pipelining and e.g. pulling over LAN
+ * (or writing to slow media), we can have a runaway
+ * situation towards EMFILE.
+ * */
+#define _OSTREE_MAX_OUTSTANDING_WRITE_REQUESTS \
+    (_OSTREE_MAX_OUTSTANDING_FETCHER_REQUESTS * 2)
+
+
 typedef enum {
   OSTREE_REPO_TEST_ERROR_PRE_COMMIT = (1 << 0)
 } OstreeRepoTestErrorFlags;
diff --git a/src/libostree/ostree-repo-pull.c b/src/libostree/ostree-repo-pull.c
index 295973e..1323a8e 100644
--- a/src/libostree/ostree-repo-pull.c
+++ b/src/libostree/ostree-repo-pull.c
@@ -353,7 +353,10 @@ fetcher_queue_is_full (OtPullData *pull_data)
   return (pull_data->n_outstanding_metadata_fetches +
           pull_data->n_outstanding_content_fetches +
           pull_data->n_outstanding_deltapart_fetches) == _OSTREE_MAX_OUTSTANDING_FETCHER_REQUESTS ||
-    pull_data->n_outstanding_deltapart_fetches == _OSTREE_MAX_OUTSTANDING_DELTAPART_REQUESTS;
+    pull_data->n_outstanding_deltapart_fetches == _OSTREE_MAX_OUTSTANDING_DELTAPART_REQUESTS ||
+    (pull_data->n_outstanding_metadata_write_requests +
+     pull_data->n_outstanding_content_write_requests +
+     pull_data->n_outstanding_deltapart_write_requests) >= _OSTREE_MAX_OUTSTANDING_WRITE_REQUESTS;
 }

 static gboolean

It's not a tight max, since we always want to start the write when a fetch completes (hence the >=), but it does the trick.

@cgwalters
Member Author

Can you convert this to a commit and submit as a PR? I think it looks good, though can we do something like:

const gboolean fetch_full = (pull_data->n_outstanding_metadata_fetches +
                             pull_data->n_outstanding_content_fetches +
                             pull_data->n_outstanding_deltapart_fetches) == _OSTREE_MAX_OUTSTANDING_FETCHER_REQUESTS;
const gboolean deltas_full = pull_data->n_outstanding_deltapart_fetches == _OSTREE_MAX_OUTSTANDING_DELTAPART_REQUESTS;
const gboolean writes_full = (pull_data->n_outstanding_metadata_write_requests +
                              pull_data->n_outstanding_content_write_requests +
                              pull_data->n_outstanding_deltapart_write_requests) >= _OSTREE_MAX_OUTSTANDING_WRITE_REQUESTS;
return fetch_full || deltas_full || writes_full;

?

@cgwalters
Member Author

I don't see a strong connection between "max outstanding local writes" and "max http requests" - let's just hardcode it like #define _OSTREE_MAX_OUTSTANDING_WRITE_REQUESTS 16 or so?

See also https://bugzilla.gnome.org/show_bug.cgi?id=687223
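
Combining the hardcoded limit with the refactor suggested above, the end state could look something like this sketch (the write-request counters come from the diff earlier in the thread; 16 is just the number floated above, not a measured value):

    /* Sketch only: not derived from the fetcher limit, just a generous
     * cap on concurrent O_TMPFILE writes. */
    #define _OSTREE_MAX_OUTSTANDING_WRITE_REQUESTS 16

    static gboolean
    fetcher_queue_is_full (OtPullData *pull_data)
    {
      const gboolean fetches_full =
        (pull_data->n_outstanding_metadata_fetches +
         pull_data->n_outstanding_content_fetches +
         pull_data->n_outstanding_deltapart_fetches) == _OSTREE_MAX_OUTSTANDING_FETCHER_REQUESTS;
      const gboolean deltas_full =
        pull_data->n_outstanding_deltapart_fetches == _OSTREE_MAX_OUTSTANDING_DELTAPART_REQUESTS;
      const gboolean writes_full =
        (pull_data->n_outstanding_metadata_write_requests +
         pull_data->n_outstanding_content_write_requests +
         pull_data->n_outstanding_deltapart_write_requests) >= _OSTREE_MAX_OUTSTANDING_WRITE_REQUESTS;
      return fetches_full || deltas_full || writes_full;
    }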

@jlebon
Member

jlebon commented Feb 9, 2017

Sure, that works. My reasoning was that each outstanding request will eventually become an outstanding write, so you can end up with _OSTREE_MAX_OUTSTANDING_FETCHER_REQUESTS outstanding writes. I made it * 2 so we can deal with e.g. transient storage hiccups.

@jlebon
Member

jlebon commented Feb 9, 2017

OK, I opened #675.
Apart from that, LGTM. 👍

@rh-atomic-bot r+ a29e2a0

@rh-atomic-bot

⌛ Testing commit a29e2a0 with merge 669b241...

rh-atomic-bot pushed a commit that referenced this pull request Feb 9, 2017

Closes: #641
Approved by: jlebon
@jlebon
Member

jlebon commented Feb 9, 2017

@rh-atomic-bot retry

@rh-atomic-bot

⌛ Testing commit a29e2a0 with merge 361aa44...

@rh-atomic-bot

☀️ Test successful - status-atomicjenkins
Approved by: jlebon
Pushing 361aa44 to master...
