FI-2921: Support SQLAlchemy database result backend #3
Conversation
@@ -5,6 +5,7 @@ RUN apk add --no-cache ca-certificates tzdata && update-ca-certificates

 # Install the required packages
 RUN pip install --no-cache-dir redis flower
+RUN pip install --no-cache-dir "SQLAlchemy>=1.4,<2" "psycopg2-binary>=2.9,<3"
I don't love that we're choosing a version of these libraries to support when building the docker image; that seems like a task that should be left up to individual services to do. For example, what if one microservice wants to use SQLAlchemy 2.x.x, but another wants to use SQLAlchemy 1.4.x? Because Flower interacts with the same database as gets written by the microservice's own Celery worker, it seems like the service should be managing the SQLAlchemy/Postgres library versions. If we want to adopt this, then we may need to shift our pattern away from building a final production Flower image, and instead having a "base" Flower image that other repos will extend in order to render their own results. Does that sound right?
In the meantime, I don't think we're deviating too far from existing Flower patterns by putting this here. The base branch of Flower already does `pip install ... redis ...`. This introduces the exact same potential for dependency conflicts as I described above, but for Redis library versions instead of SQLAlchemy library versions.
Does that sound right?
Doesn't sound right to me. From what I have seen, Flower is designed to run as a standalone application. This means that whatever service's Celery backend happens to be writing to a Postgres database, Flower happens to be reading from that same Postgres database. In this scenario it shouldn't even matter whether or not Flower is using SQLAlchemy. What matters is that the schemas Flower is reading from are the same schemas that Celery is writing to.
it seems like the service should be managing the SQLAlchemy/Postgres library versions
It's more that the service and Flower need to agree on the underlying Postgres schema representation, which likely means they need to coordinate on the same version of Celery.
ship
I'm going to cut this comment short. I want to touch on matrix testing: certainly I would like to know with confidence that the shipped code is going to work with the known supported target versions. This gets into a broader discussion of "how do we organize the manpower required to support a rolling release", though, especially because our customers (other devs) are largely going to expect the latest versions of libx, liby from PyPI, enforced by dependabot even.
For now, this is ok as is; maybe add a TODO comment to make sure we verify it works on versions matching celery's latest, stable, and oldstable versions?
I know it's a dead conversation, but the phrase
happens to be writing to a Postgres database and Flower happens to be reading from that same Postgres database
screams red flag to me. Please remind me: why are we not adding API methods to celery that do the database reads, and reading that new API in flower?
why are we not adding api methods to celery that do the database reads? and reading that new api in flower?
We are using the celery API methods through the Celery result backend.
these points are exactly why this change is a red-flag
FYI @MaxwellPayne, @dylan-shipwell and I talked about this offline: we're going to move forward here, but we need to write up the tradeoffs we're making and have a diagram of how the Flower deployable with these customizations interacts with the Celery result backend. I was going to take a first stab at that, and the three of us can review to make sure we have the risks and issues fully documented and identified, and objectively assess as we continue using Celery to ensure the tradeoffs we are making now are still legit.
Agreed. I think in this situation, there is no "right answer." So we should disagree and commit, and write down the reasons why we're making the choices. Do you have a ticket for this that we can reference for posterity?
@MaxwellPayne I created https://shipwell.atlassian.net/browse/SHARE-2822, isn't there an epic you have going somewhere for these celery tasks already?
Thanks. 🙏 Just updated the ticket with the relevant epic.
@@ -14,12 +14,12 @@ def __init__(
     *,
     task_id: str,
     status: str,
-    date_done: str,
+    date_done: str | datetime.datetime,
Convenience change to `__init__()` so that different backends can more easily initialize a `Result(...)` instance despite the backend subclasses representing timestamps using different data types.
-    args: Sequence[Any] | None = None,
-    kwargs: dict[str, Any] | None = None,
+    args: Any = None,
+    kwargs: Any = None,
In theory, `args` should always be a Sequence and `kwargs` should always be a dict (or at least a Mapping). But I ran into some complexities when it comes to deserializing args/kwargs from binary columns in SQLAlchemy records.
Because results are long-lived, it's possible that a result database table has records that, at the time of creation, used different encoding schemes (e.g. JSON, YAML, pickle, custom). Our result data store makes a best-effort attempt to deserialize these binary values, but ultimately it's very possible that some records could completely fail to deserialize. For example, if someone created a task result row using a "custom" serializer that has since been deleted from the user's Celery codebase, it will be impossible for our result store to deserialize.
To avoid 500 errors, our result stores can gracefully handle these failures by simply returning the raw `args` or `kwargs` data as it looked when extracted from the data store. Even if we fail to deserialize and you get something ugly like `b'some_kwarg : some_value'`, it's still better than nothing. We don't do much manipulation of `args` or `kwargs` here anyway, so we can afford to sacrifice type specificity for the sake of resilience.
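The graceful-fallback behavior described above can be sketched roughly like this. The helper name is hypothetical, and stdlib JSON stands in for whatever decoders the real store tries:

```python
import json
from typing import Any


def best_effort_decode(raw: bytes) -> Any:
    """Try to deserialize a binary column value; fall back to the raw bytes.

    Hypothetical helper illustrating the fallback: if decoding fails, the
    original value is returned so list views can still render something.
    """
    try:
        return json.loads(raw)
    except (ValueError, UnicodeDecodeError):
        # Couldn't deserialize; return the value as extracted from the store.
        return raw
```

Even the failure case stays informative for human-readable formats, since the raw byte string is often legible.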
I followed some of this but I need to see it live to really internalize it. But what you're saying makes sense and sounds like you've gone through a good evaluation of the edge cases 👍
from flower.utils.results.stores import AbstractBackendResultsStore


class DatabaseBackendResultsStore(AbstractBackendResultsStore[DatabaseBackend]):
    """
Changes to this class are the meat of the PR.
@@ -23,7 +22,7 @@ def results_by_timestamp(self, limit: int | None = None, reverse: bool = True) -
     for key in self.backend.client.scan_iter(
         match=task_key_prefix + ("*" if isinstance(task_key_prefix, str) else b"*")
     ):
-        result_data: dict[str, Any] = json.loads(self.backend.client.get(key))
+        result_data: dict[str, Any] = self.backend.decode_result(self.backend.client.get(key))
This fixes an earlier issue with our Redis result store. Simply invoking the `json.loads()` method is incomplete, because it will not use the custom decoding logic known to Celery's deserialization methods. So we should instead use something within Celery-land to perform decoding.
Cool, makes sense!
""" | ||
Extension of `BackendDependentTestCase` that sets a default value for `self.app.conf.database_url` based on the | ||
`TEST_DATABASE_CELERY_RESULT_BACKEND_CONNECTION_STRING` environment variable. If no such environment variable | ||
exists, the setup will assume a localhost connection to Postgres. | ||
""" |
I struggle with how to structure our unit tests here, particularly for test classes that require a live backend data store in order to work properly. The more backends we support, the more dependencies on external processes we will encounter. For example, I could see us needing Postgres, MongoDB, Redis, Memcached, RabbitMQ, and on and on in order to run a comprehensive unit test suite. I know that in other projects, we've used tools like embedded RabbitMQ or embedded Postgres so that our unit test suite can run with complete autonomy and no dependencies on docker containers running those dependencies alongside the test process. But installing those seemed like overkill here, given that upstream Flower currently does not have a solution to that issue.
As long as we're ok with adding a bunch of additional containers to `docker-compose.yml`, we should be able to run these tests successfully. However, I'm not convinced that the upstream Flower project will want to have all those dependencies. Just something to think about for future work.
As long as we're ok with adding a bunch of additional containers to docker-compose.yml, we should be able to run these tests successfully. However, I'm not convinced that the upstream Flower project will want to have all those dependencies. Just something to think about for future work.
The more sustainable approach is to use docker-py; here's an example of what I mean.
If you go this route, it might be better to look into python-on-whales; the docker-py library is kind of a shit show, it languished for literal years. Like a full 12 months with a very-high-severity security vulnerability active before they fixed it.
this is at the heart of why I prefer to say "automated tests", not "unit tests".
I want integration tests to be automated, which often means lots of nasty scripting, scripting that falls outside the realm of what unittest.TestCase is good at.
A great place to start is by documenting the manual steps taken to perform an integration test by hand.
For scripting against docker-compose or docker, docker:dind works totally great; I can pair up and give a demo of that any time. It's a bit technical, but it makes integrating against docker with predictable state possible.
-    result='all good',
+    result=Decimal('22.41'),
Modifying this unit test to use a more complicated type for `result` exposed an issue with our Redis JSON deserialization, which I later fixed.
I have to ask now: is there a test case verifying the serialization/deserialization of all supported types?
For the record, I am 100% comfortable with only supporting strings/(non-null) byte sequences, as these are some of the only large types CFFI can exchange safely.
I'm actually not sure where to find "all supported types." I figured that I'd trust Celery with that, and by using its own decoders I should have the same coverage that their codebase does.
Over time, we may encounter some types that don't work. But since this has a failsafe to just return the original bytestring, we should be able to detect that and add support when needed.
This file contains the unit tests for `DatabaseBackendResultsStore`. A lot of these tests are very similar to those we run against the `RedisBackendResultsStore`, which kind of makes sense, because we would expect these implementations to produce similar outputs despite their differing data stores.
Great stuff and great self-review!
Because we want to support both extended and non-extended tasks, we need a way to figure out whether the
provided task was _actually_ extended or not at the time it was saved. We can do this by looking at the `name`
field. When a task is saved under `result_extended=True`, then it will have a name referencing the name of the
function. Otherwise, that field will be null and we know that it was `result_extended=False`.
Clever... how stable do you think this check is?
in general I'd like to advocate against choosing to have the semi-predicate problem; if we can include a bit for is_result_extended, I would much prefer using it, especially when we're in python and have tuples/dicts given to us for free.
I think it's pretty stable. I've dug deep into the Celery implementation, and it seems like when you have extended=True, you get `name`, `args`, and `kwargs`. I figured that `name` is the best of those to use, since it may be possible for `args` or `kwargs` to be null for some reason.
to be null for some reason
I assume the obvious would be something like:
@task
def my_function():
    pass

my_function.delay()
Yes, although I'm not sure if Celery would store NULLs for a no-arg task, or if it would store `args=()` and `kwargs={}`. My big fear was that this might vary in different scenarios, so I figured `name` was a safer choice.
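The `name`-based heuristic discussed above could be sketched like this. The helper name and the dict row shape are hypothetical (a real row would come from the Celery result table):

```python
def was_result_extended(row: dict) -> bool:
    """Heuristic: a task saved with result_extended=True carries a non-null
    `name`; otherwise the column is NULL. Row shape is illustrative only."""
    return row.get("name") is not None
```

This avoids relying on `args`/`kwargs`, which may be null even for extended results.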
# couldn't deserialize; just fall back to the original byte string
return value
This feels less useful to me than a default value like "could not deserialize content type whatever/whatever", or some sort of in-your-face informational method that clearly describes the problem.
Hm. I kind of like it because, with human-readable serialization methods like JSON, you get something pretty readable. Instead of the properly-deserialized `{"key": "value"}`, you would see `b'{"key": "value"}'`. As a user I'd rather run into that than completely lack visibility into what the value is.
It's a difference of whether or not you want to paper over the problem. My advice is don't paper over it to start, in order to psychologically influence fixes. If you intentionally hamper it, it becomes a "bug" vs an "enhancement", a bug being more likely to get resolved and easier to diagnose.
it becomes a "bug" vs "enhancement", a bug being more likely to get resolved and easier to diagnose
Yeah, this is a really good point. I'll make the change to still return gracefully, but with some `<could not deserialize ...>` error string. I want to make sure that our list views can still render even if there's one bad apple, so raising an exception would be too heavy-handed. But I suppose an error string would force us to fix the bug.
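The proposed compromise, returning gracefully but with a loud error marker, might look something like this. The helper name is hypothetical, and JSON stands in for the configured decoders:

```python
import json
from typing import Any


def decode_or_flag(raw: bytes) -> Any:
    """Return decoded data, or a loud '<could not deserialize ...>' marker.

    Never raises, so list views still render, but the failure surfaces as
    a visible bug rather than being papered over.
    """
    try:
        return json.loads(raw)
    except (ValueError, UnicodeDecodeError):
        return f"<could not deserialize: {raw!r}>"
```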
Typing out that last comment made me think of something. Because of the way we have Flower set up as a standalone image, it won't be able to unpickle custom classes. If you have a class `app.utils.MyThing()` in tempus, and Flower encounters this in `args`, it will fail unpickling because `app.utils.MyThing()` isn't known to Flower.
Perhaps this is another argument in favor of more tightly coupling Flower with the application that's using it. The way Flower gets invoked is `celery -A ... flower`, with the `flower` portion being a subcommand. We could pretty easily include `shipwell/flower` as a dependency of tempus, then have a tempus ECS task very similar to our celery worker, but that instead runs `celery -A ... flower`. It would have access to the tempus codebase, so it could unpickle without issue.
great call out
[...] it will fail unpickling because `app.utils.MyThing()` isn't known to Flower.
so, is pickle-as-a-serializer an implementation detail of celery/flower that we don't control?
Because if it's an implementation detail, that would suggest to me that we should only be passing stdlib types like strings to celery, serializing our sw data first to a string-safe format like json/xml/toml/yaml/sql/whatever.
Either way:
- we should adapt sw to use celery-safe types, using the neutral interface of stdlib types only (dicts/tuples/lists of str and int, or one giant string like a json payload)
- or we should adapt celery to be transparent about sw types, by using our own celery middleware interface that will, for example, pickle-in-pickle sw classes so celery never needs to know about our implementation details end-to-end
The former is probably easier to get visibility on, because humans can mostly read json but mostly can't read pickle (or pickle-in-pickle) with ease, but the latter might be nicer to make celery transparent to shipwell's existing code.
🤷
[... maybe] include `shipwell/flower` as a dependency to tempus, then have a tempus ECS task very similar to our celery worker, but that instead runs `celery -A ... flower`. It would have access to the tempus codebase, so it could unpickle without issue.
I would avoid cohering (oops, I mean coupling) shipwell's/tempus's interfaces and celery's interfaces together. I'd rather these be independent things that are not in scope of each other when it comes to library upgrades and code development/rollouts.
It's a little bit funny because it's a round-trip scenario: celery is only the data carrier providing carriage of a customer payload. It happens to be that both celery and shipwell use python, so our interfaces could be intermixed and coupled together tightly, but putting flower in scope of needing to understand shipwell types sounds unrealistic to me. We'd never ask AWS or rollbar or sw tenants to be able to deserialize a Shipment class pickle; at best we get strings and json-in-strings as a closest fit. If we ever moved to managed celery, this would be a blocker for us. Go for high cohesion, low coupling for all network-transparent services like celery/flower/rest/logs/sql etc.
Going on too long...
Drawing experience from other binary formats I've worked with, even something "still-binary" but stdlib-safe to decode, like base64-encoded-gzip-of-cpickle-with-discrimination-headers, would work well. If I had to make one up, this kind of thing would probably be forward-safe to operate:
pickle.dumps((
    # "SW_BIN_1" is the eight-byte file type discriminator.
    # (dylang) well technically, it's double-encoded, so on py 3.11 the leading
    # pickle bytes are b'\x80\x04\x95...', but these will be version-dependent
    # bytes like any magic number:
    # https://github.com/python/cpython/blob/b14f0ab51cb4851b25935279617e388456dcf716/PC/launcher.c#L1271
    ("magic", struct.pack('!ll', int.from_bytes(b'SW_B', 'big'), int.from_bytes(b'IN_1', 'big'))),
    # bin compat headers:
    ("sw_build_id", SW_META.BUILD_ARTIFACT_ID),
    ("sw_release_version", SW_META.RELEASE_ID),
    # data payload
    ("canonical_class_name", f"{klass.__module__}.{klass.__name__}"),
    ("base64_gzip_pickle", base64.b64encode(gzip.compress(pickle.dumps(instance)))),
))
python gives us a ton of flexibility; we could use a dictionary here, but ordering is not historically guaranteed with dicts, so a tuple makes the header bytes and headers predictable, such that any naive user could identify what data is in this packet.
so, is pickle-as-a-serializer an implementation detail of celery/flower that we don't control?
If we choose to use pickle in Celery, then it operates the same way that any usage of Python's `pickle` library operates, which is kinda like `java.io.Serializable`. So we don't have control over the implementation, but we do have control over the usage.
cause if it's an implementation detail, that would suggest to me that we should only be passing stdlib types like string to celery; serializing our sw data first to a string-safe format like json/xml/toml/yaml/sql/whatever
Our friend @phillipuniverse probably agrees with you here, and it's not a bad argument. The usage of `pickle` is always controversial, and in this case there are certainly downsides when it comes to tight coupling. That said, this is a library, so we should likely support `pickle` along with other methods. If some Flower user really loves `pickle`, they should be able to use it, even if they might end up seeing some ugly byte strings on their webpage.
""" | ||
Extension of `BackendDependentTestCase` that sets a default value for `self.app.conf.database_url` based on the | ||
`TEST_DATABASE_CELERY_RESULT_BACKEND_CONNECTION_STRING` environment variable. If no such environment variable | ||
exists, the setup will assume a localhost connection to Postgres. | ||
""" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
As long as we're ok with adding a bunch of additional containers to docker-compose.yml, we should be able to run these test successfully. However, I'm not convinced that the upstream Flower project will want to have all those dependencies. Just something to think about for future work.
The more sustainable approach is to use docker-py, here's an example or what I mean.
If you go this route it might be better to look into docker on whales, the docker-py library is kind of a shit show, it languished for literal years. Like a full 12mo with a very high security vulnerability active before they fixed it.
ship
I don't see anything worth holding this back over; we all know I'm unhappy with our chosen direction here, as covered in the preceding discussions/caveats.
Good work, Max
Attempt to deserialize the provided `value` using the available serialization decoders. Celery stores task
`args` and `kwargs` in binary columns, but the proper decoding mechanism for those binary columns is not
immediately obvious. These fields get serialized based on whatever the value of `Celery.conf.result_serializer`
is at the time the task result is saved. However, it's possible that the value of that config setting changed
across different Celery processes, and therefore we may be dealing with a database that has co-mingled records
from different serializers. Unfortunately, there is no column in the database schema that records which
serializer was used for each task.
Sounds like somewhere there is a missing version/serial-number of which binary format was stored. Is there any chance we could add that bincompat monotonic number and fail out when our current bincompat number is larger than the recalled bincompat number?
Postgres does something I think is smart: they only support the current and previous bincompat numbers, so as long as you run the service once on every release version, it will upgrade the stored data up one version. LTO uses the same pattern.
Did this case get discovered in testing?
Aside: again, red flags all over this. Again the semi-predicate problem here, which means we don't actually know what went wrong at runtime, which is another red flag.
I see your point about the semipredicate problem; we would want some way to know that things are going wrong. Perhaps we could append some sort of `(Failed to decode)` prefix to the return value? That allows the user to know that an error took place, but they also have some visibility into the byte string in case it's useful.
Re: serial number. We're basically at the mercy of `pickle`, `json`, or whatever data encoding gets used. We don't control how Celery translates its args/kwargs/results to Postgres binary objects upon save, so it's not easy for us to introduce versioning or any other type of behavior to the database table.
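The "try every known decoder" loop under discussion can be sketched as follows. Stdlib `json` and `pickle` stand in for Celery/kombu's registered serializers, and the names are hypothetical; note that `pickle.loads` on untrusted data is unsafe and appears here for illustration only:

```python
import json
import pickle
from typing import Any, Callable, Sequence

# Decoders to attempt, in order; pickle goes last since it is the most
# permissive. Real code would iterate Celery's configured serializers.
DECODERS: Sequence[Callable[[bytes], Any]] = (json.loads, pickle.loads)


def decode_value(value: bytes) -> Any:
    """Try each known decoder; fall back to the raw bytes if all fail."""
    for decoder in DECODERS:
        try:
            return decoder(value)
        except Exception:
            # This decoder didn't match the record's serializer; try the next.
            continue
    return value
```

Since the database has no column recording the serializer per row, this trial-and-error loop is about the best available without schema changes.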
In 516eca4, changed our args/kwargs deserialization so that it will now show
ty, that works 👍
Change Note(s):
- Support for the `DatabaseBackend` (aka SQLAlchemy) when querying for "Results." A previous PR added support for the `RedisBackend` results, however we must support `DatabaseBackend` in order to meet Shipwell use cases.
- `SQLAlchemy` and `psycopg2-binary`