[#167064659] Optimistic lock on update user metadata #577
base: master
Changes from all commits:
3c73277
6844715
08acc50
8326e46
cc102ae
6df6a4f
7679f2d
@@ -1,60 +1,122 @@
 import * as redis from "redis";

-import { Either, isLeft, isRight, left, right } from "fp-ts/lib/Either";
+import { Either, isLeft, left, right, toError } from "fp-ts/lib/Either";
 import { ReadableReporter } from "italia-ts-commons/lib/reporters";
 import { UserMetadata } from "../../generated/backend/UserMetadata";
 import { User } from "../types/user";
 import { log } from "../utils/logger";
 import { IUserMetadataStorage } from "./IUserMetadataStorage";
 import RedisStorageUtils from "./redisStorageUtils";

+import { Sema } from "async-sema";
+import {
+  fromEither,
+  fromPredicate,
+  taskify,
+  tryCatch
+} from "fp-ts/lib/TaskEither";
+
 const userMetadataPrefix = "USERMETA-";
 export const metadataNotFoundError = new Error("User Metadata not found");
 export const invalidVersionNumberError = new Error("Invalid version number");
+export const concurrentWriteRejectionError = new Error(
+  "Concurrent write operation"
+);

 /**
  * Service that manages user metadata stored into Redis database.
  */
 export default class RedisUserMetadataStorage extends RedisStorageUtils
   implements IUserMetadataStorage {
+  private activeClients: Set<string> = new Set();
+  private mutex: Sema = new Sema(1);
   constructor(private readonly redisClient: redis.RedisClient) {
     super();
   }

   /**
    * {@inheritDoc}
    *
-   * This method doesn't support atomic operations on concurrency scenario.
-   * Story https://www.pivotaltracker.com/story/show/167064659
+   * This method uses Optimistic Lock to prevent race condition
+   * during write operations of user metadata
+   * @see https://github.com/NodeRedis/node_redis#optimistic-locks
    */
   public async set(
     user: User,
     payload: UserMetadata
   ): Promise<Either<Error, boolean>> {
-    const getUserMetadataResult = await this.loadUserMetadataByFiscalCode(
-      user.fiscal_code
-    );
-    if (
-      isRight(getUserMetadataResult) &&
-      getUserMetadataResult.value.version !== payload.version - 1
-    ) {
-      return left(invalidVersionNumberError);
-    }
-    if (
-      isLeft(getUserMetadataResult) &&
-      getUserMetadataResult.value !== metadataNotFoundError
-    ) {
-      return left(getUserMetadataResult.value);
-    }
+    // In order to work properly, optimistic lock needs to be initialized on different
+    // redis client instances @see https://github.com/NodeRedis/node_redis/issues/1320#issuecomment-373200541
+    await this.mutex.acquire();
[inline review comment] I suggest to extract this logic into a kind of client pool class

[inline review comment] a couple of suggestions on

[reply] can you please elaborate a little bit more? Do you mean to create (at startup) a pool of N redis clients to reuse, and change the mutex(1) to mutex(N)? Do you think we can merge this PR as-is and implement the pooling in another PR? (after the beta release date)
+    const hasActiveClient = this.activeClients.has(user.fiscal_code);
+    // tslint:disable-next-line: no-let
+    let duplicatedOrOriginalRedisClient = this.redisClient;
+    if (hasActiveClient === false) {
+      this.activeClients.add(user.fiscal_code);
+    } else {
+      // A duplicated redis client must be created only if the main client is already
+      // in use for another optimistic lock update on the same key to prevent performance drop
+      duplicatedOrOriginalRedisClient = this.redisClient.duplicate();
+    }
-    return await new Promise<Either<Error, boolean>>(resolve => {
-      // Set key to hold the string value. If key already holds a value, it is overwritten, regardless of its type.
-      // @see https://redis.io/commands/set
-      this.redisClient.set(
-        `${userMetadataPrefix}${user.fiscal_code}`,
-        JSON.stringify(payload),
-        (err, response) => resolve(this.singleStringReply(err, response))
-      );
-    });
+    this.mutex.release();
+    const errorOrIsUpdateSuccessful = await taskify(
+      (key: string, callback: (err: Error | null, value: true) => void) => {
+        duplicatedOrOriginalRedisClient.watch(key, err => callback(err, true));
+      }
+    )(`${userMetadataPrefix}${user.fiscal_code}`)
+      .chain(() =>
+        tryCatch(
+          () => this.loadUserMetadataByFiscalCode(user.fiscal_code),
+          toError
+        )
+      )
+      .chain(_ => {
+        if (isLeft(_) && _.value === metadataNotFoundError) {
+          return fromEither(
+            right({
+              metadata: "",
+              version: 0
+            })
+          );
+        }
+        return fromEither(_);
+      })
+      .chain(
+        fromPredicate(
+          _ => _.version === payload.version - 1,
+          _ => invalidVersionNumberError
+        )
+      )
+      .chain(() =>
+        taskify(
+          (
+            key: string,
+            data: string,
+            callback: (
+              err: Error | null,
+              value?: Either<Error, boolean>
+            ) => void
+          ) => {
+            duplicatedOrOriginalRedisClient
+              .multi()
+              .set(key, data)
+              .exec((err, results) => {
+                if (err) {
+                  return callback(err);
+                }
+                if (results === null) {
+                  return callback(concurrentWriteRejectionError);
+                }
+                callback(null, this.singleStringReply(err, results[0]));
+              });
+          }
+        )(`${userMetadataPrefix}${user.fiscal_code}`, JSON.stringify(payload))
+      )
+      .chain(fromEither)
+      .run();
+    hasActiveClient
+      ? duplicatedOrOriginalRedisClient.end(true)
+      : this.activeClients.delete(user.fiscal_code);
+    return errorOrIsUpdateSuccessful;
   }

   /**

[inline review comment on the cleanup above] this should call a kind of
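The version gate in `set()` above accepts an update only when `payload.version` is exactly one more than the stored version, with a missing record treated as version 0 (the `metadataNotFoundError` branch). A minimal sketch of that rule in isolation; `checkVersion` and `StoredMetadata` are illustrative names, not part of the PR:

```typescript
// Illustrative sketch of the version gate used by set(): an update is
// accepted only when its version is exactly stored version + 1, and a
// missing record counts as version 0 (mirroring the diff above).
const invalidVersionNumberError = new Error("Invalid version number");

interface StoredMetadata {
  readonly metadata: string;
  readonly version: number;
}

// Returns null when the update may proceed, or the rejection error.
function checkVersion(
  current: StoredMetadata | undefined,
  payloadVersion: number
): Error | null {
  const currentVersion = current === undefined ? 0 : current.version;
  return payloadVersion === currentVersion + 1
    ? null
    : invalidVersionNumberError;
}

console.log(checkVersion(undefined, 1) === null); // true: first write is version 1
console.log(checkVersion({ metadata: "{}", version: 3 }, 3) !== null); // true: stale version rejected
```

Note this gate alone is not safe under concurrency: two writers can both read version 3 and both pass the check with version 4, which is exactly the race the WATCH/MULTI machinery in the diff is there to close.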
[comment] where did you find this info?

[reply] I've tested locally with the following test and the output was unexpected:

So I've read the example (the second code snippet) here, with the keyword "another client" in the description, and with the modification (`const duplicatedRedisClient = this.redisClient.duplicate();`) the result was as expected: [ left(Error: Concurrent write operation), right(true), left(Error: Concurrent write operation) ]
[comment] kudos, that's a good test
[comment] the test is good, but as far as I understand (and the test seems to confirm it) the problem of concurrent updates happens only when you have multiple redis clients trying to update the same resource (i.e. different instances of the backend nodejs process / pod). So while it's right to use multiple clients inside the test, it is not right to duplicate the client inside the backend code (you just have to call watch before multi). This holds even if it is the same client that tries to update the same resource (watch + multi "locks" the resource anyway; you don't have to duplicate the client).

[reply] @gunzip I think that the watch + multi scenario doesn't work on the same redis client instance, so if we don't duplicate the client when we start an Optimistic Lock on a key, and a race condition happens on the same process/pod (it can occur), then the OL doesn't work.
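The behaviour being debated can be modelled without Redis. The `MiniStore` below is a hypothetical in-memory stand-in (not node_redis) where each writer keeps its own watch snapshot, mimicking per-connection WATCH state and EXEC returning null when the watched key changed:

```typescript
// Hypothetical in-memory model of Redis WATCH + MULTI/EXEC semantics.
// Illustrative only: real node_redis keeps WATCH state per connection,
// which is what the duplicate() discussion above is about.
class MiniStore {
  private readonly data = new Map<string, string>();
  private readonly revisions = new Map<string, number>();

  // watch() captures the key's current revision; the caller holds it,
  // playing the role of one connection's WATCH state.
  public watch(key: string): number {
    return this.revisions.get(key) ?? 0;
  }

  // execSet() applies the write only if nobody touched the key since
  // watch(); otherwise it reports a concurrent write rejection, like
  // EXEC returning null.
  public execSet(key: string, value: string, watched: number): boolean {
    if ((this.revisions.get(key) ?? 0) !== watched) {
      return false;
    }
    this.data.set(key, value);
    this.revisions.set(key, watched + 1);
    return true;
  }

  public get(key: string): string | undefined {
    return this.data.get(key);
  }
}

// Two writers race on the same key: both watch, then both try to commit.
const store = new MiniStore();
const w1 = store.watch("USERMETA-XYZ");
const w2 = store.watch("USERMETA-XYZ");
const first = store.execSet("USERMETA-XYZ", '{"version":1}', w1);  // true
const second = store.execSet("USERMETA-XYZ", '{"version":1}', w2); // false
```

The point of contention maps onto who owns the snapshot: if two concurrent `set()` calls share one connection, they also share one WATCH state, and the model of "one snapshot per writer" no longer holds.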
[comment] Let's try to recap: `set()` fails in case the value (in redis) that matches the key `${userMetadataPrefix}${user.fiscal_code}` is changed after the call to `watch()` and before the call to `multi()` [1]. This means that another call to `set()` starts while the first call is still executing. This is the test code:

Sorry if I'm nitpicking here, but I wonder: how can you tell from this code whether the three calls to `set()` are interleaved in such a way that [1] is satisfied? I mean, the test may return [ right(true), right(true), right(true) ] not because it cannot catch the race condition, but because the race condition does not happen at all (or am I missing something?)
[reply] Ok, I understand the point. To be sure of that I'll add a timeout before the `multi` phase, so we will be certain that all three calls happen simultaneously.
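A Redis-free sketch of the interleaving the reviewers want to force: every writer takes its watch snapshot before any of them commits, so at most one commit can succeed. `watchSnapshot`, `tryCommit`, and the revision counter are illustrative, not the PR's test code:

```typescript
// Sketch of the forced interleaving: all writers snapshot first, then
// all of them try to commit. A correct optimistic lock must then yield
// exactly one success.
let revision = 0; // revision counter standing in for the watched key

function watchSnapshot(): number {
  return revision;
}

function tryCommit(watched: number): boolean {
  if (revision !== watched) {
    return false; // concurrent write rejection
  }
  revision += 1;
  return true;
}

// Phase 1: three "simultaneous" writers all watch before anyone commits.
const snapshots = [watchSnapshot(), watchSnapshot(), watchSnapshot()];
// Phase 2: everyone commits; only the first writer can win.
const results = snapshots.map(tryCommit);
console.log(results); // [ true, false, false ]
```

If a run instead reports all three succeeding, as in the `[ right(true), right(true), right(true) ]` output above, the watch phases were not actually overlapping, which is the concern the delay is meant to address.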
[reply] Here is the updated code:

As before, the result was [ right(true), right(true), right(true) ] 😄
[comment] -DELETED-

Ok, let's proceed and merge this PR (I'm still not convinced that there could be issues with a single client, but I'm doing some tests myself : )
[comment] I've found a comment that confirms what @BurnedMarshal says: redis/node-redis#1320 (comment)

We need one client for every `set()` call indeed, which is sub-optimal since it adds some overhead (as stated in the reply).
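The client-pool idea raised earlier in the review would amortize that overhead: create N duplicated clients once at startup instead of one per `set()` call. A generic, hypothetical sketch (not in the PR; `Pool`, `factory`, and `size` are invented names), demonstrated with plain stub objects so it stays self-contained:

```typescript
// Hypothetical fixed-size pool: clients are created once at startup and
// reused, instead of calling duplicate() on every set().
class Pool<T> {
  private readonly idle: T[] = [];

  constructor(factory: () => T, size: number) {
    for (let i = 0; i < size; i++) {
      this.idle.push(factory());
    }
  }

  // acquire() returns an idle client, or undefined when all are in use.
  // A real pool would block instead, e.g. paired with a Sema(N) from
  // async-sema in place of the PR's Sema(1).
  public acquire(): T | undefined {
    return this.idle.pop();
  }

  public release(client: T): void {
    this.idle.push(client);
  }
}

// With node_redis this might be: new Pool(() => mainClient.duplicate(), N).
// Demonstrated here with stub clients:
let created = 0;
const pool = new Pool(() => ({ id: ++created }), 2);
const a = pool.acquire();
const b = pool.acquire();
const c = pool.acquire(); // undefined: pool of 2 is exhausted
```

This is only a sketch of the design direction discussed in the thread; a production pool would also need to handle broken connections and waiting callers.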