go-ipfs changelog v0.5

v0.5.1 2020-05-08

Hot on the heels of 0.5.0 is 0.5.1 with some important but small bug fixes. This release:

  1. Removes the 1 minute timeout for IPNS publishes (fixes #7244).
  2. Backports a DHT fix to reduce CPU usage for canceled requests.
  3. Fixes some timer leaks in the QUIC transport (ipfs/go-ipfs#2515).

Changelog

  • github.com/ipfs/go-ipfs:
  • github.com/libp2p/go-libp2p-core (v0.5.2 -> v0.5.3):
  • github.com/libp2p/go-libp2p-kad-dht (v0.7.10 -> v0.7.11):
  • github.com/libp2p/go-libp2p-routing-helpers (v0.2.2 -> v0.2.3):
  • github.com/lucas-clemente/quic-go (v0.15.5 -> v0.15.7):
    • reset the PTO when dropping a packet number space
    • move deadlineTimer declaration out of the Read loop
    • stop the deadline timer in Stream.Read and Write
    • fix buffer use after it was released when sending an INVALID_TOKEN error
    • create the session timer at the beginning of the run loop
    • stop the timer when the session's run loop returns

Contributors

| Contributor | Commits | Lines ± | Files Changed |
|---|---|---|---|
| Marten Seemann | 10 | +81/-62 | 19 |
| Steven Allen | 5 | +42/-18 | 10 |
| Adin Schmahmann | 1 | +2/-8 | 1 |
| dependabot-preview[bot] | 2 | +6/-2 | 4 |

v0.5.0 2020-04-28

We're excited to announce go-ipfs 0.5.0! This is by far the largest go-ipfs release with ~2500 commits, 98 contributors, and over 650 PRs across ipfs, libp2p, and multiformats.

Highlights

Content Routing

The primary focus of this release was on improving content routing. That is, advertising and finding content. To that end, this release heavily focuses on improving the DHT.

Improved DHT

The distributed hash table (DHT) is how IPFS nodes keep track of who has what data. The DHT implementation has been almost completely rewritten in this release. Providing, finding content, and resolving IPNS records are now all much faster. However, there are risks involved with this update due to the significant amount of changes that have gone into this feature.

The current DHT suffers from three core issues addressed in this release:

  • Most peers in the DHT cannot be dialed (e.g., due to firewalls and NATs). Much of a DHT query's time is wasted trying to connect to peers that cannot be reached.
  • The DHT query logic doesn't properly terminate when it hits the end of the query and, instead, aggressively keeps on searching.
  • The routing tables are poorly maintained. This can cause search performance to slow down linearly with network size, instead of logarithmically as expected.

Reachability

We have addressed the problem of undialable nodes by having nodes wait to join the DHT as server nodes until they've confirmed that they are reachable from the public internet.

To ensure that nodes which are not publicly reachable (e.g., behind VPNs, on offline LANs, etc.) can still coordinate and share data, go-ipfs 0.5 will run two DHTs: one for private networks and one for the public internet. Every node will participate in a LAN DHT and a public WAN DHT. See Dual DHT for more details.

Dual DHT

All IPFS nodes will now run two DHTs: one for the public internet WAN, and one for their local network LAN.

  1. When connected to the public internet, IPFS will use both DHTs for finding peers, content, and IPNS records. Nodes only publish provider and IPNS records to the WAN DHT to avoid flooding the local network.
  2. When not connected to the public internet, nodes publish provider and IPNS records to the LAN DHT.

The WAN DHT includes all peers with at least one public IP address. This release will only consider an IPv6 address public if it is in the public internet range 2000::/3.

This feature should not have any noticeable impact on go-ipfs, performance or otherwise. Everything should continue to work in all the currently supported network configurations: VPNs, disconnected LANs, public internet, etc.

Query Logic

We've improved the DHT query logic to more closely follow Kademlia. This should significantly speed up:

  • Publishing IPNS & provider records.
  • Resolving IPNS addresses.

Previously, nodes would continue searching until they timed out or ran out of peers before stopping (putting or returning the data found). Now, nodes stop as soon as they find the closest peers.

Routing Tables

Finally, we've addressed the poorly maintained routing tables by:

  • Reducing the likelihood that the connection manager will kill connections to peers in the routing table.
  • Keeping peers in the routing table, even if we get disconnected from them.
  • Actively and frequently querying the DHT to keep our routing table full.
  • Prioritizing useful peers that respond to queries quickly.

Testing

The DHT rewrite was made possible by Testground, our new testing framework. Testground allows us to spin up multi-thousand node tests with simulated real-world network conditions. By combining Testground and some custom analysis tools, we were able to gain confidence that the new DHT implementation behaves correctly.

Provider Record Changes

When you add content to your IPFS node, you advertise this content to the network by announcing it in the DHT. We call this providing.

However, go-ipfs has multiple ways to address the same underlying bytes. Specifically, we address content by content ID (CID) and the same underlying bytes can be addressed using (a) two different versions of CIDs (CIDv0 and CIDv1) and (b) with different codecs depending on how we're interpreting the data.

Prior to go-ipfs 0.5.0, we used the content ID (CID) in the DHT when sending out provider records for content. Unfortunately, this meant that users trying to find data announced using one CID wouldn't find nodes providing the same content under a different CID.

In go-ipfs 0.5.0, we're announcing data by multihash, not CID. This way, regardless of the CID version used by the peer adding the content, the peer trying to download the content should still be able to find it.
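
To see how the two CID versions relate to the multihash that 0.5.0 now announces, here is a minimal Go sketch using the go-cid library (not code from go-ipfs itself; the CID is one that appears later in these notes): it decodes a CIDv0, re-wraps the same multihash as a CIDv1, and checks that both carry the same multihash.

package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
)

func main() {
	// A CIDv0 (base58, "Qm..."): the "welcome to IPFS" directory used elsewhere in these notes.
	v0, err := cid.Decode("QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc")
	if err != nil {
		panic(err)
	}

	// The same underlying bytes addressed as a CIDv1 (base32, "bafy...") with the dag-pb codec.
	v1 := cid.NewCidV1(cid.DagProtobuf, v0.Hash())

	fmt.Println("CIDv0:", v0)
	fmt.Println("CIDv1:", v1)
	// Both CIDs wrap the same multihash, which is what provider records now use.
	fmt.Println("same multihash:", v0.Hash().B58String() == v1.Hash().B58String())
}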

Warning: while the network upgrades, this change could impact finding content added with CIDv1. Because go-ipfs 0.5.0 announces and searches for content using the bare multihash (equivalent to the v0 CID), go-ipfs 0.5.0 will be unable to find CIDv1 content published by nodes running earlier releases, and vice versa. As CIDv1 is not enabled by default, we believe this will have minimal impact; however, users are strongly encouraged to upgrade as soon as possible.

Content Transfer

A secondary focus in this release was improving content transfer, our data exchange protocols.

Refactored Bitswap

This release includes a major Bitswap refactor, running a new and backward compatible Bitswap protocol. We expect these changes to improve performance significantly.

With the refactored Bitswap, we expect:

  • Few to no duplicate blocks when fetching data from other nodes speaking the new protocol.
  • Better parallelism when fetching from multiple peers.

The new Bitswap won't magically make downloading content any faster until both seeds and leeches have updated. If you're one of the first to upgrade to 0.5.0 and try downloading from peers that haven't upgraded, you're unlikely to see much of a performance improvement.

Server-Side Graphsync Support (Experimental)

Graphsync is a new exchange protocol that operates at the IPLD Graph layer instead of the Block layer like bitswap.

For example, to download "/ipfs/QmFoo/index.html":

  • Bitswap would download QmFoo, look up "index.html" in the directory named by QmFoo, resolving it to the CID QmIndex. Finally, Bitswap would download QmIndex.
  • Graphsync would ask peers for "/ipfs/QmFoo/index.html". Specifically, it would ask for the child named "index.html" of the object named by "QmFoo".

This saves us round-trips in exchange for some extra protocol complexity. Moreover, this protocol allows specifying more powerful queries like "give me everything under QmFoo". This can be used to quickly download a large amount of data with few round-trips.

At the moment, go-ipfs cannot use this protocol to download content from other peers. However, if enabled, go-ipfs can serve content to other peers over this protocol. This may be useful for pinning services that wish to quickly replicate client data.

To enable, run:

> ipfs config --json Experimental.GraphsyncEnabled true

Datastores

Continuing with the theme of improving our core data handling subsystems, both of the datastores used in go-ipfs, Badger and flatfs, have received important updates in this release:

Badger

Badger has been in go-ipfs for over a year as an experimental feature, and we're promoting it to stable (but not default). For this release, we've switched from writing to disk synchronously to explicitly syncing where appropriate, significantly increasing write throughput.

The current and default datastore used by go-ipfs is FlatFS. FlatFS essentially stores blocks of data as individual files on your file system. However, there are lots of optimizations a specialized database can make that a standard file system cannot.

The benefit of Badger is that adding and fetching data is significantly faster than with the default FlatFS datastore. In some tests, adding data to Badger was 32x faster than adding it to FlatFS (in this release).

Enable Badger

In this release, we're marking the badger datastore as stable. However, we're not yet enabling it by default. You can enable it at initialization by running:

> ipfs init --profile=badgerds

Issues with Badger

While Badger is a great solution, there are some issues you should consider before enabling it.

Badger is complicated. FlatFS pushes all the complexity down into the filesystem itself. That means FlatFS is only likely to lose your data if your underlying filesystem gets corrupted, while Badger itself has more opportunities to become corrupted.

Badger can use a lot of memory. In this release, we've tuned Badger to use ~20MB of memory by default. However, it can still spike to around 1GiB of memory usage when garbage collecting.

Finally, Badger isn't very aggressive when it comes to garbage collection, and we're still investigating ways to get it to more aggressively clean up after itself.

We suggest you use Badger if:

  • Performance is your main requirement.
  • You rarely delete anything.
  • You have some memory to spare.

Flatfs

In the flatfs datastore, we've fixed an issue where temporary files could be left behind in some cases. While this release will avoid leaving behind temporary files, you may want to remove any left behind by previous releases:

> rm ~/.ipfs/blocks/*/put-*
> rm ~/.ipfs/blocks/du-*

We've also hardened several edge-cases in flatfs to reduce the impact of file descriptor limits, spurious crashes, etc.

Libp2p

Many improvements and bug fixes were made to libp2p over the course of this release. These release notes only include the most important and those most relevant to the content routing improvements.

Improved Backoff Logic

When we fail to connect to a peer, we "backoff" and refuse to re-connect to that peer for a short period of time. This prevents us from wasting resources repeatedly failing to connect to the same unreachable peer.

Unfortunately, the old backoff logic was flawed: if we failed to connect to a peer and entered the "backoff" state, we wouldn't try to re-connect to that peer even if we learned new, potentially working addresses for it. We've fixed this by applying backoff to each address instead of to the peer as a whole. This achieves the same result, as we'll stop repeatedly trying to connect to the peer at known-bad addresses, but it allows us to reach the peer if we later learn about a good address.

AutoNAT

This release uses Automatic NAT Detection (AutoNAT) - determining if the node is reachable from the public internet - to make decisions about how to participate in IPFS. This subsystem is used to determine if the node should store some of the public DHT, and if it needs to use relays to be reached by others. In short:

  1. An AutoNAT client asks a node running an AutoNAT service if it can be reached at one of a set of guessed addresses.
  2. The AutoNAT service attempts to dial back those addresses, with some restrictions. We won't dial back to a different IP address, for example.
  3. If the AutoNAT service succeeds, it reports back the address it successfully dialed, and the AutoNAT client knows that it is reachable from the public internet.

All nodes act as AutoNAT clients to determine if they should switch into DHT server mode. As of this release, nodes will by default run the service side of AutoNAT - verifying connectivity - for up to 30 peers every minute. This service should have minimal overhead and will be disabled for nodes in the lowpower configuration profile, and those which believe they are not publicly reachable.

In addition to enabling the AutoNAT service by default, this release changes the AutoNAT config options:

  1. The Swarm.EnableAutoNATService option has been removed.
  2. A new AutoNAT section has been added to the config. This section is empty by default.
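
For example, assuming the new section exposes a ServiceMode field (an assumption here; check docs/config.md for the authoritative schema), the service could be disabled explicitly with:

> ipfs config AutoNAT.ServiceMode disabled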

IPFS/Libp2p Address Format

If you've ever run a command like ipfs swarm peers, you've likely seen paths that look like /ip4/193.45.1.24/tcp/4001/ipfs/QmSomePeerID. These paths are not file paths, they're multiaddrs; addresses of peers on the network.

Unfortunately, /ipfs/Qm... is also the same path format we use for files. This release changes the multiaddr format from /ip4/193.45.1.24/tcp/4001/ipfs/QmSomePeerID to /ip4/193.45.1.24/tcp/4001/p2p/QmSomePeerID to make the distinction clear.

What this means for users:

  • Old-style multiaddrs will still be accepted as inputs to IPFS.
  • If you were using a multiaddr library (go, js, etc.) to name files because /ipfs/QmSomePeerID looks like /ipfs/QmSomeFile, your tool may break if you upgrade this library.
  • If you're manually parsing multiaddrs and are searching for the string /ipfs/..., you'll need to search for /p2p/....
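
In Go programs, the safest way to handle both spellings is to parse the address and extract the component by protocol code rather than matching on the /ipfs/ or /p2p/ string. Below is a minimal sketch using go-multiaddr (the peer ID is just a syntactically valid multihash borrowed from elsewhere in these notes):

package main

import (
	"fmt"

	ma "github.com/multiformats/go-multiaddr"
)

func main() {
	for _, s := range []string{
		// old-style spelling
		"/ip4/193.45.1.24/tcp/4001/ipfs/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc",
		// new-style spelling
		"/ip4/193.45.1.24/tcp/4001/p2p/QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc",
	} {
		addr, err := ma.NewMultiaddr(s)
		if err != nil {
			panic(err)
		}
		// Both spellings parse to the same p2p component, so look it up by
		// protocol code instead of searching for a substring.
		pid, err := addr.ValueForProtocol(ma.P_P2P)
		if err != nil {
			panic(err)
		}
		fmt.Println(pid)
	}
}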

Minimum RSA Key Size

Previously, IPFS did not enforce a minimum RSA key size. In this release, we've introduced a minimum 2048 bit RSA key size. IPFS generates 2048 bit RSA keys by default so this shouldn't be an issue for anyone in practice. However, users who explicitly chose a smaller key size will not be able to communicate with new nodes.

Unfortunately, some of the bootstrap peers intentionally generated 1024 bit RSA keys so they'd have vanity peer addresses (starting with QmSoL, for "solar net"). All IPFS nodes should have peers with >= 2048 bit RSA keys in their bootstrap list, so we've introduced a migration to ensure this.

We implemented this change to follow security best practices and to remove a potential foot-gun. However, in practice, the security impact of allowing insecure RSA keys should have been next to none because IPFS doesn't trust other peers on the network anyways.

TLS By Default

In this release, we're switching TLS to be the default transport. This means we'll try to encrypt the connection with TLS before re-trying with SECIO.

Contrary to the announcement in the go-ipfs 0.4.23 release notes, this release does not remove SECIO support to maintain compatibility with js-ipfs.

Note: The Experimental.PreferTLS configuration option is now ignored.

SECIO Deprecation Notice

SECIO should be considered to be well on the way to deprecation and will be completely disabled in either the next release (0.6.0, ~mid May) or the one following that (0.7.0, ~end of June). Before SECIO is disabled, support will be added for the NOISE transport for compatibility with other IPFS implementations.

QUIC Upgrade

If you've been using the experimental QUIC support, this release upgrades to a new and incompatible version of the QUIC protocol (draft 27). Old and new go-ipfs nodes will still interoperate, but not over the QUIC transport.

We intend to standardize on this draft of the QUIC protocol and enable QUIC by default in the next release if all goes well.

NOTE: QUIC does not yet support private networks.

Gateway

In addition to a bunch of bug fixes, we've made two improvements to the gateway.

You can play with both of these features by visiting:

http://bafybeia6po64b6tfqq73lckadrhpihg2oubaxgqaoushquhcek46y3zumm.ipfs.localhost:8080

Subdomain Gateway

First up, we've changed how URLs in the IPFS gateway work for better browser security. The gateway will now redirect from http://localhost:8080/ipfs/CID/... to http://CID.ipfs.localhost:8080/... by default. This:

  • Ensures that every dapp gets its own browser origin.
  • Makes it easier to write websites that "just work" with IPFS because absolute paths will now work (though you should still use relative links because they're better).

Paths addressing the gateway by IP address (http://127.0.0.1:8080/ipfs/CID) will not be altered as IP addresses can't have subdomains.

Note: cURL doesn't follow redirects by default. To avoid breaking cURL and other clients that don't support redirects, go-ipfs will return the requested file along with the redirect. Browsers will follow the redirect and abort the download while cURL will ignore the redirect and finish the download.
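
You can see this behaviour by requesting a path-style URL and inspecting the response headers: the redirect to the subdomain form shows up in the Location header while the body still contains the requested content. For example, using the CID from above:

> curl -i "http://localhost:8080/ipfs/bafybeia6po64b6tfqq73lckadrhpihg2oubaxgqaoushquhcek46y3zumm/"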

Directory Listing

The second feature is a face-lift to the directory listing theme and color palette.

http://bafybeia6po64b6tfqq73lckadrhpihg2oubaxgqaoushquhcek46y3zumm.ipfs.localhost:8080

IPNS

This release includes several new IPNS and IPNS-related features.

ENS

IPFS now resolves ENS names (e.g., /ipns/ipfs.eth) via DNSLink provided by the https://eth.link service.

IPNS over PubSub

IPFS has had experimental support for resolving IPNS over pubsub for a while. However, in the past, this feature was passive. When resolving an IPNS name, one would join a pubsub topic for the IPNS name and subscribe to future updates. Unfortunately, this wouldn't speed up initial IPNS lookups.

In this release, we've introduced a new "record fetch" protocol to speed up the initial lookup. Now, after subscribing to the pubsub topic for the IPNS key, nodes will use this new protocol to "fetch" the last-seen IPNS record from all peers subscribed to the topic.

This feature will be enabled by default in 0.6.0.
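
Until then, IPNS over pubsub remains opt-in and can be enabled when starting the daemon:

> ipfs daemon --enable-namesys-pubsub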

IPNS with base32 PIDs

IPNS names can now be expressed as special multibase CIDs. E.g.,

/ipns/bafzbeibxfjp4gaxc4cdn57257cyvc7jfa4rlp4e5min6geg44m57g6nx7e

Importantly, this allows IPNS names to appear in subdomains in the new subdomain gateway feature.

PubSub

We have made two major changes to the pubsub subsystem in this release:

  1. Pubsub now more aggressively finds and connects to other peers subscribing to the same topic.
  2. Go-ipfs has switched its default pubsub router from "floodsub", an inefficient but simple "flooding" pubsub implementation, to "gossipsub".

PubSub will be stabilized in go-ipfs 0.6.0.

CLI & API

The IPFS CLI and API have a couple of new features and changes.

POST Only

IPFS has two HTTP APIs: the main API on port 5001 and a read-only subset exposed on the gateway port (8080).

As of this release, the main IPFS API (port 5001) will only accept POST requests. This change is necessary to tighten cross origin security in browsers.

If you're using the go-ipfs API in your application, you may need to change GET calls to POST calls or upgrade your libraries and tools.

  • go - go-ipfs-api - v0.0.3
  • js-ipfs-http-api - v0.41.1
  • orbit-db - v0.24.0 (unreleased)
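
If you talk to the HTTP API directly, the change amounts to switching the request method. Below is a minimal Go sketch against the default API address (the /api/v0/id endpoint is used purely as an example):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// As of go-ipfs 0.5.0 the main API only accepts POST; a GET to the same
	// endpoint will be rejected.
	resp, err := http.Post("http://127.0.0.1:5001/api/v0/id", "", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}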

RIP "Error: api not running"

If you've ever seen the error:

Error: api not running

when trying to run a command without the daemon running, we have good news! You should never see this error again. The ipfs command now correctly detects that the daemon is not, in fact, running, and directly opens the IPFS repo.

RIP ipfs repo fsck

The ipfs repo fsck command now does nothing but print an error message. Previously, it was used to clean up some lock files: the "api" file that caused the aforementioned "api not running" error and the repo lock. However, this is no longer necessary.

Init with config

It's now possible to initialize an IPFS node with an existing IPFS config by running:

> ipfs init /path/to/existing/config

This will re-use the existing configuration in its entirety (including the private key) and can be useful when:

  • Migrating a node's identity between machines without keeping the data.
  • Resetting the datastore.

Ignoring Files

Files can now be ignored on add by passing the --ignore and/or --ignore-rules-path flags.

  • --ignore=PATTERN will ignore all files matching the gitignore rule PATTERN.
  • --ignore-rules-path=FILENAME will apply the gitignore rules from the specified file.

For example, to add a git repo while ignoring all files git would ignore, you could run:

> cd path/to/some/repo
> ipfs add -r --hidden=false --ignore=.git --ignore-rules-path=.gitignore .

Named Pipes

It's now possible to add data directly from a named pipe:

> mkfifo foo
> echo -n "hello " > foo &
> echo -n "world" > bar &
> ipfs add foo bar

This can be useful when adding data from multiple streaming sources.

NOTE: To avoid surprising users, IPFS will only add data from FIFOs directly named on the command line, not FIFOs in a recursively added directory. Otherwise, ipfs add would halt whenever it encountered a FIFO with no data to be read, leading to difficult-to-debug stalls.

DAG import/export (.car)

IPFS now allows rapid reading and writing of blocks in .car format. The functionality is accessible via the experimental dag import and dag export commands:

~$ ipfs dag export QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc \
| xz > welcome_to_ipfs.car.xz

 0s  6.73 KiB / ? [-------=-------------------------------------] 5.16 MiB/s 0s

Then on another ipfs instance, not even connected to the network:

~$ xz -dc welcome_to_ipfs.car.xz | ipfs dag import

Pinned root	QmQPeNsJPyVWPFDVHb77w8G42Fvo15z4bG2X8D2GhfbSXc	success

Pins

We've made two minor changes to the pinning subsystem:

  1. ipfs pin ls --stream allows streaming a pin listing.
  2. ipfs pin update no longer holds the global pin lock while fetching files from the network. This should hopefully make it significantly more useful.

Daemon

Zap Logging

The go-ipfs daemon has switched to using Uber's Zap. Unlike our previous logging system, Zap supports structured logging which can make parsing, filtering, and analyzing go-ipfs logs much simpler.

To enable structured logging, set the IPFS_LOGGING_FMT environment variable to "json".
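
For example, to run the daemon with JSON log output:

> IPFS_LOGGING_FMT=json ipfs daemon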

Note: while we've switched to using Zap as the logging backend, most of go-ipfs still logs strings.

Systemd Support

For Linux users, this release includes support for two systemd features: socket activation and startup/shutdown notifications. This makes it possible to:

  • Start IPFS on demand on first use.
  • Wait for IPFS to finish starting before starting services that depend on it.

You can find the new systemd units in the go-ipfs repo under misc/systemd.

IPFS API Over Unix Domain Sockets

This release supports exposing the IPFS API over a unix domain socket in the filesystem. To use this feature, run:

> ipfs config Addresses.API "/unix/path/to/socket/location"
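
Clients then need to dial the socket instead of TCP. Below is a minimal Go sketch using a custom http.Transport (the socket path and the /api/v0/id endpoint are illustrative):

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// The filesystem path portion of the /unix/... multiaddr above (hypothetical).
	sock := "/path/to/socket/location"

	client := &http.Client{
		Transport: &http.Transport{
			// Dial the unix socket instead of TCP; the URL's host is then ignored.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", sock)
			},
		},
	}

	resp, err := client.Post("http://unix/api/v0/id", "", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}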

Docker

We've made a few improvements to our docker image in this release:

  • It can now be cross-built for multiple architectures.
  • It now builds go-ipfs with OpenSSL support by default for faster libp2p handshakes.
  • A private-network "swarm" key can now be passed in to a docker image via either the IPFS_SWARM_KEY=<inline key> or IPFS_SWARM_KEY_FILE=<path/to/key/file> docker variables. Check out the Docker section of the README for more information.
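
For example, a private-network node could be started roughly like this (a sketch only; the image tag and key handling are assumptions, so see the README's Docker section for the supported invocation):

> docker run -d -e IPFS_SWARM_KEY="$(cat swarm.key)" ipfs/go-ipfs:v0.5.0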

Plugins

go-ipfs plugins allow users to extend go-ipfs without modifying the original source-code. This release includes a few important changes.

See docs/plugins.md for details.

MacOS Support

Plugins are now supported on MacOS, in addition to Linux. Unfortunately, Go still doesn't support plugins on Windows.

New Plugin Type: InternalPlugin

This release introduces a new InternalPlugin plugin type. When started, this plugin will be passed a raw *IpfsNode object, giving it access to all go-ipfs internals.

This plugin interface is permanently unstable as it has access to internals that can change frequently. However, it should allow power-users to develop deeply integrated extensions to go-ipfs, out-of-tree.

Plugin Config

BREAKING

Plugins can now be configured and/or disabled via the ipfs config file.

To make this possible, the plugin interface has changed. The Init function now takes an *Environment object. Specifically, the plugin signature has changed from:

type Plugin interface {
	Name() string
	Version() string
	Init() error
}

to

type Environment struct {
	// Path to the IPFS repo.
	Repo string

	// The plugin's config, if specified.
	Config interface{}
}

type Plugin interface {
	Name() string
	Version() string
	Init(env *Environment) error
}
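
For plugin authors, the change mostly means accepting the new argument. Below is a minimal sketch of a plugin written against the new interface; it assumes the types live in the go-ipfs plugin package and that the loader looks for an exported Plugins variable, as described in docs/plugins.md:

package main

import (
	"github.com/ipfs/go-ipfs/plugin"
)

// myPlugin is a hypothetical plugin demonstrating the new Init signature.
type myPlugin struct{}

func (*myPlugin) Name() string    { return "my-plugin" }
func (*myPlugin) Version() string { return "0.1.0" }

// Init now receives the environment, including this plugin's section of the
// ipfs config file (if any).
func (*myPlugin) Init(env *plugin.Environment) error {
	_ = env.Repo   // path to the IPFS repo
	_ = env.Config // this plugin's config, if specified
	return nil
}

// Plugins is the exported symbol the go-ipfs plugin loader looks for.
var Plugins = []plugin.Plugin{&myPlugin{}}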

Repo Migrations

IPFS uses repo migrations to make structural changes to the "repo" (the config, data storage, etc.) on upgrade.

This release includes two very simple repo migrations: a config migration to ensure that the config contains working bootstrap nodes and a keystore migration to base32 encode all key filenames.

In general, migrations should not require significant manual intervention. However, you should be aware of migrations and plan for them.

  • If you update go-ipfs with ipfs update, ipfs update will run the migration for you. Note: ipfs update will refuse to run the migrations while ipfs itself is running.
  • If you start the ipfs daemon with ipfs daemon --migrate, ipfs will migrate your repo for you on start.

Otherwise, if you want more control over the repo migration process, you can manually install and run the repo migration tool.

Bootstrap Peer Changes

AUTOMATIC MIGRATION REQUIRED

The first migration will update the bootstrap peer list to:

  1. Replace the old bootstrap nodes (ones with peer IDs starting with QmSoL), with new bootstrap nodes (ones with addresses that start with /dnsaddr/bootstrap.libp2p.io).
  2. Rewrite the address format from /ipfs/QmPeerID to /p2p/QmPeerID.

We're migrating addresses for a few reasons:

  1. We're using DNS to address the new bootstrap nodes so we can change the underlying IP addresses as necessary.
  2. The new bootstrap nodes use 2048 bit keys while the old bootstrap nodes use 1024 bit keys.
  3. We're normalizing the address format to /p2p/Qm....

Note: This migration won't add the new bootstrap peers to your config if you've explicitly removed the old bootstrap peers. It will also leave custom entries in the list alone. In other words, if you've customized your bootstrap list, this migration won't clobber your changes.

Keystore Changes

AUTOMATIC MIGRATION REQUIRED

go-ipfs stores additional keys (i.e., all keys other than the "identity" key) in the keystore. You can list these keys with ipfs key list.

Currently, the keystore stores keys as regular files, named after the key itself. Unfortunately, filename restrictions and case-insensitivity are platform specific. To avoid platform specific issues, we're base32 encoding all key names and renaming all keys on-disk.

Windows

As usual, this release contains several Windows specific fixes and improvements:

  • Double-clicking ipfs.exe will now start the daemon inside a console window.
  • ipfs add -r now correctly recognizes and ignores hidden files on Windows.
  • The default datastore, flatfs, now takes extra precautions to avoid "file in use" errors caused by both go-ipfs and external programs like anti-viruses. If you've ever seen go-ipfs print out an "access denied" or "file in use" error on Windows, this issue was likely the cause.

Changelog

Contributors

| Contributor | Commits | Lines ± | Files Changed |
|---|---|---|---|
| Steven Allen | 858 | +27833/-15919 | 1906 |
| Dirk McCormick | 134 | +18058/-8347 | 282 |
| Aarsh Shah | 83 | +13458/-11883 | 241 |
| Adin Schmahmann | 144 | +11878/-6236 | 397 |
| Raúl Kripalani | 94 | +6894/-10214 | 598 |
| vyzo | 60 | +8923/-1160 | 102 |
| Will Scott | 79 | +3776/-1467 | 175 |
| Michael Muré | 29 | +1734/-3290 | 104 |
| dependabot[bot] | 365 | +3419/-361 | 728 |
| Hector Sanjuan | 64 | +2053/-1321 | 132 |
| Marten Seemann | 52 | +1922/-1268 | 147 |
| Michael Avila | 29 | +828/-1733 | 70 |
| Peter Rabbitson | 53 | +1073/-1197 | 100 |
| Yusef Napora | 36 | +1610/-378 | 57 |
| hannahhoward | 16 | +1342/-559 | 61 |
| Łukasz Magiera | 9 | +277/-1623 | 41 |
| Marcin Rataj | 9 | +1686/-99 | 32 |
| Will | 7 | +936/-709 | 34 |
| Alex Browne | 27 | +1019/-503 | 46 |
| David Dias | 30 | +987/-431 | 43 |
| Jakub Sztandera | 43 | +912/-436 | 77 |
| Cole Brown | 21 | +646/-398 | 57 |
| Oli Evans | 29 | +488/-466 | 43 |
| Cornelius Toole | 3 | +827/-60 | 20 |
| Hlib | 15 | +331/-185 | 28 |
| Adrian Lanzafame | 9 | +123/-334 | 18 |
| Petar Maymounkov | 1 | +385/-48 | 5 |
| Alan Shaw | 18 | +262/-146 | 35 |
| lnykww | 1 | +303/-52 | 6 |
| Hannah Howard | 1 | +198/-27 | 3 |
| Dominic Della Valle | 9 | +163/-52 | 14 |
| Adam Uhlir | 1 | +211/-2 | 3 |
| Dimitris Apostolou | 1 | +105/-105 | 64 |
| Frrist | 1 | +186/-18 | 5 |
| Henrique Dias | 22 | +119/-28 | 22 |
| Gergely Tabiczky | 5 | +74/-60 | 7 |
| Matt Joiner | 2 | +63/-62 | 4 |
| @RubenKelevra | 12 | +46/-55 | 12 |
| whyrusleeping | 6 | +87/-11 | 7 |
| deepakgarg | 4 | +42/-43 | 4 |
| protolambda | 2 | +49/-17 | 9 |
| hucg | 2 | +47/-11 | 3 |
| Arber Avdullahu | 3 | +31/-27 | 3 |
| Sameer Puri | 1 | +46/-4 | 2 |
| Hucg | 3 | +17/-33 | 3 |
| Guilhem Fanton | 2 | +29/-10 | 7 |
| Christian Muehlhaeuser | 6 | +20/-19 | 14 |
| Djalil Dreamski | 3 | +27/-9 | 3 |
| Caian | 2 | +36/-0 | 2 |
| Topper Bowers | 2 | +31/-4 | 4 |
| flowed | 1 | +16/-16 | 11 |
| Vibhav Pant | 4 | +21/-10 | 5 |
| frrist | 1 | +26/-4 | 1 |
| Hlib Kanunnikov | 1 | +25/-3 | 1 |
| george xie | 3 | +12/-15 | 11 |
| optman | 1 | +13/-9 | 1 |
| Roman Proskuryakov | 1 | +11/-11 | 2 |
| Vasco Santos | 1 | +10/-10 | 5 |
| Pretty Please Mark Darkly | 2 | +16/-2 | 2 |
| Piotr Dyraga | 2 | +15/-2 | 2 |
| Andrew Nesbitt | 1 | +5/-11 | 5 |
| postables | 4 | +19/-8 | 4 |
| Jim McDonald | 2 | +13/-1 | 2 |
| PoorPockets McNewHold | 1 | +12/-0 | 1 |
| Henri S | 1 | +6/-6 | 1 |
| Igor Velkov | 1 | +8/-3 | 1 |
| swedneck | 4 | +7/-3 | 4 |
| Devin | 2 | +5/-5 | 4 |
| iulianpascalau | 1 | +5/-3 | 2 |
| MollyM | 3 | +7/-1 | 3 |
| Jorropo | 2 | +5/-3 | 3 |
| lukesolo | 1 | +6/-1 | 2 |
| Wes Morgan | 1 | +3/-3 | 1 |
| Kishan Mohanbhai Sagathiya | 1 | +3/-3 | 2 |
| songjiayang | 1 | +4/-0 | 1 |
| Terry Ding | 1 | +2/-2 | 1 |
| Preston Van Loon | 2 | +3/-1 | 2 |
| Jim Pick | 2 | +2/-2 | 2 |
| Jakub Kaczmarzyk | 1 | +2/-2 | 1 |
| Simon Menke | 2 | +2/-1 | 2 |
| Jessica Schilling | 2 | +1/-2 | 2 |
| Edgar Aroutiounian | 1 | +2/-1 | 1 |
| hikerpig | 1 | +1/-1 | 1 |
| ZenGround0 | 1 | +1/-1 | 1 |
| Thomas Preindl | 1 | +1/-1 | 1 |
| Sander Pick | 1 | +1/-1 | 1 |
| Ronsor | 1 | +1/-1 | 1 |
| Roman Khafizianov | 1 | +1/-1 | 1 |
| Rod Vagg | 1 | +1/-1 | 1 |
| Max Inden | 1 | +1/-1 | 1 |
| Leo Arias | 1 | +1/-1 | 1 |
| Kuro1 | 1 | +1/-1 | 1 |
| Kirill Goncharov | 1 | +1/-1 | 1 |
| John B Nelson | 1 | +1/-1 | 1 |
| George Masgras | 1 | +1/-1 | 1 |
| Aliabbas Merchant | 1 | +1/-1 | 1 |
| Lorenzo Setale | 1 | +1/-0 | 1 |
| Boris Mann | 1 | +1/-0 | 1 |