Ivan Dmitriesvkii committed May 1, 2021
0 parents commit f68e687
Showing 37 changed files with 2,560 additions and 0 deletions.
234 changes: 234 additions & 0 deletions .gitignore
@@ -0,0 +1,234 @@
### JetBrains+all ###
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839

# User-specific stuff
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/**/usage.statistics.xml
.idea/**/dictionaries
.idea/**/shelf

# Generated files
.idea/**/contentModel.xml

# Sensitive or high-churn files
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml
.idea/**/dbnavigator.xml

# Gradle
.idea/**/gradle.xml
.idea/**/libraries

# Gradle and Maven with auto-import
# When using Gradle or Maven with auto-import, you should exclude module files,
# since they will be recreated, and may cause churn. Uncomment if using
# auto-import.
# .idea/artifacts
# .idea/compiler.xml
# .idea/jarRepositories.xml
# .idea/modules.xml
# .idea/*.iml
# .idea/modules
# *.iml
# *.ipr

# CMake
cmake-build-*/

# Mongo Explorer plugin
.idea/**/mongoSettings.xml

# File-based project format
*.iws

# IntelliJ
out/

# mpeltonen/sbt-idea plugin
.idea_modules/

# JIRA plugin
atlassian-ide-plugin.xml

# Cursive Clojure plugin
.idea/replstate.xml

# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties

# Editor-based Rest Client
.idea/httpRequests

# Android studio 3.1+ serialized cache file
.idea/caches/build_file_checksums.ser

### JetBrains+all Patch ###
# Ignores the whole .idea folder and all .iml files
# See https://github.com/joeblau/gitignore.io/issues/186 and https://github.com/joeblau/gitignore.io/issues/360

.idea/

# Reason: https://github.com/joeblau/gitignore.io/issues/186#issuecomment-249601023

*.iml
modules.xml
.idea/misc.xml
*.ipr

# Sonarlint plugin
.idea/sonarlint

### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
pytestdebug.log

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/
doc/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
#poetry.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
# .env
.env/
.venv/
env/
venv/
ENV/
env.bak/
venv.bak/
pythonenv*

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# operating system-related files
# file properties cache/storage on macOS
*.DS_Store
# thumbnail cache on Windows
Thumbs.db

# profiling data
*.prof
5 changes: 5 additions & 0 deletions README.md
@@ -0,0 +1,5 @@
<div align="center">
<h1>eventual</h1>
<p><strong>a framework for message-driven systems</strong></p>
</div>
<br>
Empty file added docs/guarantees.md
Empty file.
17 changes: 17 additions & 0 deletions docs/index.md
@@ -0,0 +1,17 @@
# Welcome to MkDocs

For full documentation visit [mkdocs.org](https://www.mkdocs.org).

## Commands

* `mkdocs new [dir-name]` - Create a new project.
* `mkdocs serve` - Start the live-reloading docs server.
* `mkdocs build` - Build the documentation site.
* `mkdocs -h` - Print help message and exit.

## Project layout

mkdocs.yml # The configuration file.
docs/
index.md # The documentation homepage.
... # Other markdown pages, images and other files.
44 changes: 44 additions & 0 deletions docs/retry_behavior.md
@@ -0,0 +1,44 @@
# Thoughts on retrying

Suppose a service called A produces a message M. A service called B
wants to send an email for every such message M that it gets.

B uses a REST API to submit the emails to be sent. What happens if that API is unavailable at the moment B receives M and tries to handle it?

In the client-server model B doesn't get a message from A; instead, A sends a request to B. If B can't handle the request, it indicates failure to A with an appropriate HTTP response code. In that case it's up to A to retry whatever it's trying to achieve via B at a later time. But how do we deal with these kinds of failures when messages are involved?

One of the great advantages of message-driven systems is that A doesn't have to know anything about B. So it would be unwise to involve A in any way if B fails to handle M.

Furthermore, one of the design goals of this particular library is to never lose a message, which means B can't just drop M and move on. M has to be stored somewhere that B (and only B, because other services might not have failed to handle M) has access to, and then M has to be retrieved and retried by B.

Taking into account that B may actually be deployed as multiple pods, there are two such places:

- B's database
- B's queue

One of the primary concerns in designing the retry flow is making sure that it's not possible to overwhelm the system. If a retry operation can lead to a cascade of errors, the entire system can collapse.

It's also important to actually delay a retry attempt, to give the system a chance to recover. Retrying immediately, over and over, can also lead to a collapse.
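
To make the delay concrete, here is a minimal sketch of an exponential backoff policy with jitter. It only illustrates the kind of delay schedule argued for above; the function name and the base and cap values are assumptions, not part of the library:

```python
import random


def retry_delay(attempt: int, base: float = 1.0, cap: float = 300.0) -> float:
    """Seconds to wait before retry number `attempt` (1-based).

    Exponential backoff with full jitter: the delay grows with each failed
    attempt but is capped, and it is randomized so that many handlers that
    failed on the same dependency don't all retry at the same moment.
    """
    upper = min(cap, base * 2 ** (attempt - 1))
    return random.uniform(0.0, upper)
```

With these numbers the fifth attempt waits at most 16 seconds, and no attempt ever waits more than five minutes.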

## Queue

Let's explore the queue option. There are two points of interest:

1. The dispatcher checking whether M was already handled by B,
2. B marking M as handled with respect to a particular guarantee.

If M was already handled, the dispatcher acknowledges the message as if it had just been handled (because it was) and doesn't in fact dispatch it.
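
A rough sketch of points 1 and 2 might look as follows; the objects and method names (`storage`, `broker`, `is_handled`, `mark_handled`) are hypothetical stand-ins, not the library's actual API:

```python
def dispatch(message, handler, storage, broker) -> None:
    """Dispatch `message` to `handler` at most once per guarantee."""
    key = (message.id, handler.guarantee)

    # Point 1: if this service already handled M, acknowledge it
    # without dispatching.
    if storage.is_handled(*key):
        broker.ack(message)
        return

    handler.handle(message)

    # Point 2: record that M was handled under this guarantee,
    # then acknowledge it on the queue.
    storage.mark_handled(*key)
    broker.ack(message)
```

If `handler.handle` raises, M is never marked as handled, which is where the retry scheduling described next comes in.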

If B raises an exception during the handling of M, M is scheduled to be sent by B at some point in the future. It will be sent as-is to everyone, as if A had sent it again.

If there are other services besides B that failed to handle M, they will resend it too. The case of *several* services failing to handle a message is actually common; it happens when a shared dependency is unavailable.

There is always a possibility that M will be sent to the queue multiple times; this is not directly related to retries. We have idempotency keys and different guarantees exactly for that: every service that handled M successfully the first time will ignore it. But we have a situation in which a failure can turn one M message into as many M messages as there are services that failed, so we have to be careful to rule out any possibility that the number of failures grows exponentially.

### Sequential processing

Suppose that B gets M two times (M1 and M2) sequentially, meaning it first runs points 1 and 2 for M1 and only then does the same for M2. In that case the dispatcher will silently acknowledge M2 and move on.

### Concurrent processing

Suppose now that B gets M two times (M1 and M2) concurrently, meaning that before it runs point 2 for M1 it runs point 1 for M2. Handling M1 and M2 becomes a race, won by the thread that first marks M as handled in the database.
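
As a minimal sketch of how that race can be decided atomically, assume the handled-message store enforces a uniqueness constraint on (message id, guarantee). The schema and names below are illustrative, and sqlite3 is used only to keep the example self-contained; it says nothing about the storage backend the library actually uses:

```python
import sqlite3

conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS handled_message (
        message_id TEXT NOT NULL,
        guarantee  TEXT NOT NULL,
        UNIQUE (message_id, guarantee)
    )
    """
)


def try_mark_handled(message_id: str, guarantee: str) -> bool:
    """Return True only for the thread that wins the race for this key."""
    with conn:  # commit on success, roll back on error
        cursor = conn.execute(
            "INSERT OR IGNORE INTO handled_message (message_id, guarantee)"
            " VALUES (?, ?)",
            (message_id, guarantee),
        )
    # rowcount is 1 if this call inserted the row, 0 if some other
    # handler already marked M as handled under this guarantee.
    return cursor.rowcount == 1


assert try_mark_handled("m-1", "some-guarantee") is True
assert try_mark_handled("m-1", "some-guarantee") is False  # duplicate delivery
```

The losing thread simply sees `False` and acknowledges M without running the handler's side effects a second time.
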
1 change: 1 addition & 0 deletions eventual/__init__.py
@@ -0,0 +1 @@
__version__ = "0.1.0"
Empty file added eventual/infra/__init__.py
Empty file.
3 changes: 3 additions & 0 deletions eventual/infra/exchange/__init__.py
@@ -0,0 +1,3 @@
from .amqp import AmqpMessage, AmqpMessageExchange
from .concurrent_dispatch import ConcurrentMessageDispatcher
from .relational_storage import RelationalEventStorage