@clickup/ent-framework / Exports
- Batcher
- Client
- ClientError
- Cluster
- Island
- Loader
- LocalCache
- QueryBase
- QueryPing
- Runner
- Schema
- Shard
- ShardError
- Timeline
- TimelineManager
- Configuration
- IDsCache
- Inverse
- QueryCache
- ShardLocator
- Triggers
- VC
- VCCaches
- VCFlavor
- VCWithStacks
- VCWithQueryCache
- VCTrace
- Validation
- EntAccessError
- EntNotFoundError
- EntNotInsertableError
- EntNotReadableError
- EntNotUpdatableError
- EntUniqueKeyError
- EntValidationError
- CanDeleteOutgoingEdge
- CanReadOutgoingEdge
- CanUpdateOutgoingEdge
- FieldIs
- IncomingEdgeFromVCExists
- Or
- OutgoingEdgePointsToVC
- FuncToPredicate
- IDsCacheReadable
- IDsCacheUpdatable
- IDsCacheDeletable
- IDsCacheCanReadIncomingEdge
- True
- VCHasFlavor
- AllowIf
- DenyIf
- Require
- Rule
- PgClient
- PgClientPool
- PgError
- PgQueryCount
- PgQueryDelete
- PgQueryDeleteWhere
- PgQueryExists
- PgQueryIDGen
- PgQueryInsert
- PgQueryLoad
- PgQueryLoadBy
- PgQuerySelect
- PgQuerySelectBy
- PgQueryUpdate
- PgQueryUpsert
- PgRunner
- PgSchema
- ToolPing
- ToolScoreboard
- ClientOptions
- ClientConnectionIssue
- ClientPingInput
- ClusterOptions
- IslandOptions
- Handler
- LocalCacheOptions
- Loggers
- ClientQueryLoggerProps
- SwallowedErrorLoggerProps
- LocateIslandErrorLoggerProps
- Query
- QueryAnnotation
- SchemaClass
- EntValidationErrorInfo
- ConfigInstance
- ConfigClass
- HelpersInstance
- HelpersClass
- PrimitiveInstance
- Predicate
- RuleResult
- PgClientOptions
- PgClientConn
- PgClientPoolOptions
- ToolPingOptions
- ToolScoreboardOptions
Ƭ ClientRole: "master" | "replica" | "unknown"
Role of the Client as reported after the last successful query. If we know for sure that the Client is a master or a replica, the role will be "master" or "replica" respectively. If the Client has not run any queries yet (i.e. we don't know the role for sure), the role is "unknown".
Ƭ ClientErrorPostAction: "rediscover-cluster" | "rediscover-island" | "choose-another-client" | "fail"
The suggested action to take when facing a ClientError.
Ƭ ClientErrorKind: "data-on-server-is-unchanged" | "unknown-server-state"
Sometimes we need to know for sure whether there is a chance that the query failed, but the write was still applied in the database.
src/abstract/ClientError.ts:35
Ƭ WhyClient: Exclude<TimelineCaughtUpReason, false> | "replica-bc-stale-replica-freshness" | "master-bc-is-write" | "master-bc-master-freshness" | "master-bc-no-replicas" | "master-bc-replica-not-caught-up"
A reason why the master or a replica was chosen to send the query to. The most noticeable ones are:
- "replica-bc-master-state-unknown": 99% of cases (since writes are rare)
- "master-bc-replica-not-caught-up": happens immediately after each write, until the write is propagated to replica
- "replica-bc-caught-up": must happen eventually (in 0.1-2s) after each write
- "replica-bc-pos-expired": signals that the replication lag is huge, we should carefully monitor this case and make sure it never happens
src/abstract/QueryAnnotation.ts:13
Ƭ TimelineCaughtUpReason: false | "replica-bc-master-state-unknown" | "replica-bc-caught-up" | "replica-bc-pos-expired"
The reason why this replica timeline was decided to be "good enough".
Ƭ AnyClass: (...args: never[]) => unknown

▸ (...args): unknown

Name | Type |
---|---|
...args | never[] |
Ƭ ShardAffinity<TField, TF>: typeof GLOBAL_SHARD | TField extends typeof ID ? readonly TF[] : readonly [TF, ...TF[]]
Defines Ent Shard collocation to some Ent's field when this Ent is inserted.
- The Shard can always be Shard 0 ("global Shard"), be inferred based on the value of another Ent field during insertion ("colocation"), or, in case colocation inference didn't succeed, be chosen pseudo-randomly at insertion time ("random Shard").
- E.g. a random Shard can also be chosen when an empty array is passed to Shard affinity (like "always fallback"), or when a field's value points to the global Shard.
- Passing ID to ShardAffinity is prohibited by TS.

Name | Type |
---|---|
TField | extends string |
TF | Exclude<TField, typeof ID> |
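As a sketch of the value shapes this type admits (GLOBAL_SHARD here is a local stand-in for the framework's exported constant, and the field names are hypothetical):

```typescript
// Stand-in for the framework's exported GLOBAL_SHARD constant.
const GLOBAL_SHARD = "global_shard" as const;

// Always store this Ent in the global Shard (0):
const globalAffinity: typeof GLOBAL_SHARD = GLOBAL_SHARD;

// Colocate with the Shard inferred from company_id (hypothetical field);
// at least one field is required when ID itself is not the affinity field:
const colocatedAffinity: readonly ["company_id", ...string[]] = ["company_id"];
```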
Ƭ TriggerInsertInput<TTable>: Flatten<InsertInput<TTable> & RowWithID>
Table -> trigger's before- and after-insert input. Below, we use InsertInput and not Row, because before (and even after) some INSERT, we may still not know some values of the row (they can be filled by the DB, e.g. in an autoInsert clause). InsertInput is almost a subset of Row, but it has stricter symbol keys: e.g. if some symbol key is non-optional in INSERT (aka doesn't have autoInsert), it will always be required in InsertInput too.

Name | Type |
---|---|
TTable | extends Table |
Ƭ TriggerUpdateInput<TTable>: Flatten<UpdateInput<TTable>>
Table -> trigger's before-update input.

Name | Type |
---|---|
TTable | extends Table |
Ƭ TriggerUpdateNewRow<TTable>: Flatten<Readonly<Row<TTable> & { [K in keyof TTable & symbol]?: Value<TTable[K]> }>>
Table -> trigger's before- and after-update NEW row. Ephemeral (symbol) fields may or may not be passed depending on what the user passes to the update method.

Name | Type |
---|---|
TTable | extends Table |
Ƭ TriggerUpdateOrDeleteOldRow<TTable>: Flatten<Readonly<Row<TTable> & Record<keyof TTable & symbol, never>>>
Table -> trigger's before- and after-update (or delete) OLD row. Ephemeral (symbol) fields are marked as always present, but typed as "never", so they will be available for dereferencing in newOrOldRow of before/after mutation triggers without guard-checking the op value.

Name | Type |
---|---|
TTable | extends Table |
Ƭ InsertTrigger<TTable>: (vc: VC, args: { input: TriggerInsertInput<TTable> }) => Promise<unknown>
Triggers can be used to simulate "transactional best-effort behavior" in a non-transactional combination of some services. Imagine we have a relational database and a queue service; each time we change something in the database, we want to schedule the ID to the queue. The queue service is faulty: if a queueing operation fails, we don't want the data to be stored in the DB afterwards. Queries are faulty too, but it's okay for us to have something added to the queue even if the corresponding query failed after it (a queue worker will just do a no-op, since it rechecks the source of truth in the relational DB anyway). The queue service is like a write-ahead log for the DB which always has no fewer records than the DB. In this case, we have the following set of triggers:
- beforeInsert: schedules ID to the queue (ID is known, see below why)
- beforeUpdate: schedules ID to the queue
- afterDelete: optionally schedule ID removal to the queue (notice "after")
Notice that the ID is always known in all cases, even in beforeInsert triggers, because we split an INSERT operation into gen_id + insert parts, and the triggers are executed in between.
Triggers are invoked sequentially. Any exception thrown in a before-trigger is propagated to the caller, and the DB operation is skipped.
Triggers for beforeInsert and beforeUpdate can change their input parameter; the change will be applied to the database.
Naming convention for trigger arguments:
- input: whatever is passed to the operation. Notice that due to us having autoInsert/autoUpdate fields, the set of fields can be incomplete here!
- oldRow: the entire row in the DB as it was before the operation. All fields will be present there.
- newRow: a row in the DB as it will look after the operation. Notice that it can be imprecise, because we don't always reload the updated row from the database! What we do is just a field-by-field application of input properties to oldRow.
Name | Type |
---|---|
TTable | extends Table |

▸ (vc, args): Promise<unknown>

Name | Type |
---|---|
vc | VC |
args | Object |
args.input | TriggerInsertInput<TTable> |
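To make the queue example above concrete, here is a minimal sketch of an InsertTrigger-shaped function (VC and the input type are simplified local stand-ins, and enqueue is a hypothetical queue call):

```typescript
// Simplified stand-ins for the framework's VC and TriggerInsertInput types.
type VC = { principal: string };
type TopicInsertInput = { input: { id: string; topic: string } };

const queued: string[] = []; // stand-in for a real queue service

async function enqueue(id: string): Promise<void> {
  queued.push(id);
}

// beforeInsert trigger: the row ID is already known here, because the
// INSERT is split into gen_id + insert, and triggers run in between.
const beforeInsert = async (_vc: VC, args: TopicInsertInput): Promise<unknown> => {
  await enqueue(args.input.id);
  return null;
};
```

If enqueue() throws, the exception propagates to the caller and the DB insert is skipped, matching the write-ahead-log ordering described above.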
Ƭ BeforeUpdateTrigger<TTable>: (vc: VC, args: { newRow: TriggerUpdateNewRow<TTable>; oldRow: TriggerUpdateOrDeleteOldRow<TTable>; input: TriggerUpdateInput<TTable> }) => Promise<unknown>

Name | Type |
---|---|
TTable | extends Table |

▸ (vc, args): Promise<unknown>

Name | Type |
---|---|
vc | VC |
args | Object |
args.newRow | TriggerUpdateNewRow<TTable> |
args.oldRow | TriggerUpdateOrDeleteOldRow<TTable> |
args.input | TriggerUpdateInput<TTable> |
Ƭ AfterUpdateTrigger<TTable>: (vc: VC, args: { newRow: TriggerUpdateNewRow<TTable>; oldRow: TriggerUpdateOrDeleteOldRow<TTable> }) => Promise<unknown>

Name | Type |
---|---|
TTable | extends Table |

▸ (vc, args): Promise<unknown>

Name | Type |
---|---|
vc | VC |
args | Object |
args.newRow | TriggerUpdateNewRow<TTable> |
args.oldRow | TriggerUpdateOrDeleteOldRow<TTable> |
Ƭ DeleteTrigger<TTable>: (vc: VC, args: { oldRow: TriggerUpdateOrDeleteOldRow<TTable> }) => Promise<unknown>

Name | Type |
---|---|
TTable | extends Table |

▸ (vc, args): Promise<unknown>

Name | Type |
---|---|
vc | VC |
args | Object |
args.oldRow | TriggerUpdateOrDeleteOldRow<TTable> |
Ƭ BeforeMutationTrigger<TTable>: (vc: VC, args: { op: "INSERT"; newOrOldRow: Readonly<TriggerInsertInput<TTable>>; input: TriggerInsertInput<TTable> } | { op: "UPDATE"; newOrOldRow: TriggerUpdateNewRow<TTable>; input: TriggerUpdateInput<TTable> } | { op: "DELETE"; newOrOldRow: TriggerUpdateOrDeleteOldRow<TTable>; input: Writeable<TriggerUpdateOrDeleteOldRow<TTable>> }) => Promise<unknown>

Name | Type |
---|---|
TTable | extends Table |

▸ (vc, args): Promise<unknown>

Name | Type |
---|---|
vc | VC |
args | { op: "INSERT"; newOrOldRow: Readonly<TriggerInsertInput<TTable>>; input: TriggerInsertInput<TTable> } \| { op: "UPDATE"; newOrOldRow: TriggerUpdateNewRow<TTable>; input: TriggerUpdateInput<TTable> } \| { op: "DELETE"; newOrOldRow: TriggerUpdateOrDeleteOldRow<TTable>; input: Writeable<TriggerUpdateOrDeleteOldRow<TTable>> } |
Ƭ AfterMutationTrigger<TTable>: (vc: VC, args: { op: "INSERT"; newOrOldRow: Readonly<TriggerInsertInput<TTable>> } | { op: "UPDATE"; newOrOldRow: TriggerUpdateNewRow<TTable> } | { op: "DELETE"; newOrOldRow: TriggerUpdateOrDeleteOldRow<TTable> }) => Promise<unknown>

Name | Type |
---|---|
TTable | extends Table |

▸ (vc, args): Promise<unknown>

Name | Type |
---|---|
vc | VC |
args | { op: "INSERT"; newOrOldRow: Readonly<TriggerInsertInput<TTable>> } \| { op: "UPDATE"; newOrOldRow: TriggerUpdateNewRow<TTable> } \| { op: "DELETE"; newOrOldRow: TriggerUpdateOrDeleteOldRow<TTable> } |
Ƭ DepsBuilder<TTable>: (vc: VC, row: Flatten<Readonly<Row<TTable>>>) => unknown[] | Promise<unknown[]>

Name | Type |
---|---|
TTable | extends Table |

▸ (vc, row): unknown[] | Promise<unknown[]>

Name | Type |
---|---|
vc | VC |
row | Flatten<Readonly<Row<TTable>>> |
Ƭ LoadRule<TInput>: AllowIf<TInput> | DenyIf<TInput>

Name | Type |
---|---|
TInput | extends object |
Ƭ WriteRules<TInput>: [] | [Require<TInput>, ...Require<TInput>[]] | [LoadRule<TInput>, Require<TInput>, ...Require<TInput>[]] | [LoadRule<TInput>, LoadRule<TInput>, Require<TInput>, ...Require<TInput>[]] | [LoadRule<TInput>, LoadRule<TInput>, LoadRule<TInput>, Require<TInput>, ...Require<TInput>[]] | [LoadRule<TInput>, LoadRule<TInput>, LoadRule<TInput>, LoadRule<TInput>, Require<TInput>, ...Require<TInput>[]]
For safety, we enforce all Require rules to be at the end of the insert/update/delete privacy list, and require at least one of them to be present. In TypeScript, it's not possible to create a [...L[], R, ...R[]] type (double-variadic) when both L[] and R[] are open-ended (i.e. tuples with unknown length), so we have to brute-force.

Name | Type |
---|---|
TInput | extends object |
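The constraint can be visualized with toy rule classes (local stand-ins for the framework's AllowIf/Require):

```typescript
// Toy stand-ins; the real Rule, AllowIf, and Require come from the framework.
class RuleSketch { constructor(readonly name: string) {} }
class AllowIfSketch extends RuleSketch {}
class RequireSketch extends RuleSketch {}

// Shaped like a valid WriteRules list: optional load-style rules first,
// then at least one Require at the very end.
const updateRules: RuleSketch[] = [
  new AllowIfSketch("vcIsAdmin"),
  new RequireSketch("outgoingEdgePointsToVC"),
];

const lastIsRequire = updateRules[updateRules.length - 1] instanceof RequireSketch;
```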
Ƭ ValidationRules<TTable>: Object

Name | Type |
---|---|
TTable | extends Table |

Name | Type |
---|---|
tenantPrincipalField? | InsertFieldsRequired<TTable> & string |
inferPrincipal | (vc: VC, row: Row<TTable>) => Promise<VC> |
load | Validation<TTable>["load"] |
insert | Validation<TTable>["insert"] |
update? | Validation<TTable>["update"] |
delete? | Validation<TTable>["delete"] |
validate? | Predicate<InsertInput<TTable>> & EntValidationErrorInfo[] |
Ƭ PrimitiveClass<TTable, TUniqueKey, TClient>: OmitNew<ConfigClass<TTable, TUniqueKey, TClient>> & () => PrimitiveInstance<TTable>

Name | Type |
---|---|
TTable | extends Table |
TUniqueKey | extends UniqueKey<TTable> |
TClient | extends Client |
src/ent/mixins/PrimitiveMixin.ts:67
Ƭ EntClass<TTable>: Object
A very shallow interface of an Ent class (as a collection of static methods). Used in some places where we need the bare minimum from the Ent.

Name | Type |
---|---|
TTable | extends Table = DesperateAny |

• new EntClass(): Ent<TTable>

Name | Type |
---|---|
SCHEMA | Schema<TTable> |
VALIDATION | Validation<TTable> |
SHARD_LOCATOR | ShardLocator<Client, TTable, string> |
name | string |
loadX | (vc: VC, id: string) => Promise<Ent<TTable>> |
loadNullable | (vc: VC, id: string) => Promise<null \| Ent<TTable>> |
loadIfReadableNullable | (vc: VC, id: string) => Promise<null \| Ent<TTable>> |
count | (vc: VC, where: CountInput<TTable>) => Promise<number> |
exists | (vc: VC, where: ExistsInput<TTable>) => Promise<boolean> |
select | (vc: VC, where: Where<TTable>, limit: number, order?: Order<TTable>) => Promise<Ent<TTable>[]> |
selectChunked | (vc: VC, where: Where<TTable>, chunkSize: number, limit: number, custom?: {}) => AsyncIterableIterator<Ent<TTable>[]> |
loadByX | (vc: VC, keys: { [K in string]: Value<TTable[K]> }) => Promise<Ent<TTable>> |
loadByNullable | (vc: VC, input: { [K in string]: Value<TTable[K]> }) => Promise<null \| Ent<TTable>> |
insert | (vc: VC, input: InsertInput<TTable>) => Promise<string> |
upsert | (vc: VC, input: InsertInput<TTable>) => Promise<string> |
Ƭ Ent<TTable>: Object
A very shallow interface of one Ent.

Name | Type |
---|---|
TTable | extends Table = {} |

Name | Type |
---|---|
id | string |
vc | VC |
deleteOriginal | () => Promise<boolean> |
updateOriginal | (input: UpdateOriginalInput<TTable>) => Promise<boolean> |
Ƭ UpdateOriginalInput<TTable>: { [K in UpdateField<TTable>]?: Value<TTable[K]> } & { $literal?: Literal; $cas?: "skip-if-someone-else-changed-updating-ent-props" | ReadonlyArray<UpdateField<TTable>> | UpdateInput<TTable>["$cas"] }
The input of the updateOriginal() method. It supports some additional syntax sugar for the $cas property, so to work around TS's weakness with Omit<> & type inference, we redefine this type from scratch.

Name | Type |
---|---|
TTable | extends Table |
Ƭ PgClientPoolConn: PgClientConn & { processID?: number | null; closeAt?: number }
Our extension to the Pool connection which adds a couple of props to the connection in the on("connect") handler (persistent for the same connection object, i.e. across queries in the same connection).
Ƭ SelectInputCustom: { ctes?: Literal[]; joins?: Literal[]; from?: Literal; hints?: Record<string, string> } | undefined
This mostly exists to enable hacks in PostgreSQL queries. It is not even exposed by Ent framework, but can be used by PG-dependent code.
Ƭ Literal: (string | number | boolean | Date | null | (string | number | boolean | Date | null)[])[]
Literal operation with placeholders. We don't use a tuple type here (like [string, ...T[]]), because it would force us to use "as const" everywhere, which we don't want to do.
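For instance, both of these are valid Literal values under the type above (column names are hypothetical):

```typescript
// The element union from the Literal type, reproduced locally for the sketch.
type LiteralValue = string | number | boolean | Date | null;
type Literal = (LiteralValue | LiteralValue[])[];

// A format string with scalar placeholders:
const simple: Literal = ["age > ? AND active = ?", 18, true];

// An array placeholder (e.g. for IN(...) expansion):
const withArray: Literal = ["id IN(?)", ["1", "2", "3"]];
```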
Ƭ RowWithID: Object
{ id: string }

Name | Type |
---|---|
id | string |
Ƭ SpecType: typeof Boolean | typeof Date | typeof ID | typeof Number | typeof String | { dbValueToJs: (dbValue: DesperateAny) => unknown; stringify: (jsValue: DesperateAny) => string; parse: (str: string) => unknown }
Spec (metadata) of some field.
Ƭ Spec: Object
{ type: ..., ... } - one attribute spec.

Name | Type |
---|---|
type | SpecType |
allowNull? | true |
autoInsert? | string |
autoUpdate? | string |
Ƭ Table: Object
{ id: Spec, name: Spec, ... } - table columns.

▪ [K: string | symbol]: Spec
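A Table-shaped object might look like this (a hedged sketch: the field names and autoInsert values are hypothetical, and real schemas go through the framework's Spec machinery):

```typescript
// A toy Table-shaped object: each key maps to a Spec-like { type, ... }.
const usersTable = {
  id: { type: String, autoInsert: "nextval('users_id_seq')" },
  email: { type: String },
  is_admin: { type: Boolean, allowNull: true as const },
  created_at: { type: Date, autoInsert: "now()" },
};
```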
Ƭ Field<TTable>: keyof TTable & string
A database table's field (no symbols). With regard to some table structure, there are 3 options:
- Field: only DB-stored attributes, no ephemeral symbols
- keyof TTable: both real and ephemeral attributes
- keyof TTable & symbol: only "ephemeral" attributes available to triggers
By doing & string, we ensure that we select only regular (non-symbol) fields.

Name | Type |
---|---|
TTable | extends Table |
Ƭ FieldAliased<TTable>: Field<TTable> | { field: Field<TTable>; alias: string }
Same as Field, but may optionally hold information about an "alias value source" for a field name (e.g. { field: "abc", alias: "$cas.abc" }).

Name | Type |
---|---|
TTable | extends Table |
Ƭ FieldOfPotentialUniqueKey<TTable>: { [K in Field<TTable>]: TTable[K] extends Object ? K : never }[Field<TTable>]
(Table) -> "field1" | "field2" | ... where the union contains only fields which can potentially be used as a part of a unique key.

Name | Type |
---|---|
TTable | extends Table |
Ƭ FieldOfIDType<TTable>: { [K in Field<TTable>]: K extends string ? TTable[K] extends Object ? K : never : never }[Field<TTable>]
Table -> "user_id" | "some_id" | ...

Name | Type |
---|---|
TTable | extends Table |
Ƭ FieldOfIDTypeRequired<TTable>: InsertFieldsRequired<TTable> & FieldOfIDType<TTable>
Table -> "user_id" | "some_id" | ...

Name | Type |
---|---|
TTable | extends Table |
Ƭ ValueRequired<TSpec>: TSpec["type"] extends typeof Number ? number : TSpec["type"] extends typeof String ? string : TSpec["type"] extends typeof Boolean ? boolean : TSpec["type"] extends typeof ID ? string : TSpec["type"] extends typeof Date ? Date : TSpec["type"] extends { dbValueToJs: (dbValue: never) => infer TJSValue } ? TSpec["type"] extends { stringify: (jsValue: TJSValue) => string; parse: (str: string) => TJSValue } ? TJSValue : never : never
SpecType -> Value deduction (always deduces a non-nullable type).

Name | Type |
---|---|
TSpec | extends Spec |
Ƭ Value<TSpec>: TSpec extends { allowNull: true } ? ValueRequired<TSpec> | null : ValueRequired<TSpec>
Spec -> nullable Value or non-nullable Value.

Name | Type |
---|---|
TSpec | extends Spec |
Ƭ Row<TTable>: RowWithID & { [K in Field<TTable>]: Value<TTable[K]> }
Table -> Row deduction (no symbols).

Name | Type |
---|---|
TTable | extends Table |
Ƭ InsertFieldsRequired<TTable>: { [K in keyof TTable]: TTable[K] extends Object ? never : TTable[K] extends Object ? never : K }[keyof TTable]
Insert: Table -> "field1" | "field2" | ... deduction (required fields).

Name | Type |
---|---|
TTable | extends Table |
Ƭ InsertFieldsOptional<TTable>: { [K in keyof TTable]: TTable[K] extends Object ? K : TTable[K] extends Object ? K : never }[keyof TTable]
Insert: Table -> "created_at" | "field2" | ... deduction (optional fields).

Name | Type |
---|---|
TTable | extends Table |
Ƭ InsertInput<TTable>: { [K in InsertFieldsRequired<TTable>]: Value<TTable[K]> } & { [K in InsertFieldsOptional<TTable>]?: Value<TTable[K]> }
Insert: Table -> { field: string, updated_at?: Date, created_at?: Date, ... }. Excludes the id Spec entirely and makes autoInsert/autoUpdate Specs optional.

Name | Type |
---|---|
TTable | extends Table |
Ƭ UpdateField<TTable>: Exclude<keyof TTable, keyof RowWithID>
Update: Table -> "field1" | "created_at" | "updated_at" | ... deduction.

Name | Type |
---|---|
TTable | extends Table |
Ƭ UpdateInput<TTable>: { [K in UpdateField<TTable>]?: Value<TTable[K]> } & { $literal?: Literal; $cas?: { [K in UpdateField<TTable>]?: Value<TTable[K]> } }
Update: Table -> { field?: string, created_at?: Date, updated_at?: Date }.
- Excludes the id Spec entirely and makes all fields optional.
- If $literal is passed, it will be appended to the list of updating fields (engine specific).
- If $cas is passed, only the rows whose fields match the exact values in $cas will be updated; the non-matching rows will be skipped.

Name | Type |
---|---|
TTable | extends Table |
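An UpdateInput-shaped object for a hypothetical table could look like this (a plain-object sketch, not bound to a real schema):

```typescript
// Hypothetical update: rename a user, but only in rows where nobody else
// changed the name in between ($cas), plus an engine-specific $literal.
const updateInput = {
  name: "Alice Jones",
  $cas: { name: "Alice Smith" }, // skip rows whose current name differs
  $literal: ["edit_count = edit_count + ?", 1] as const,
};
```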
Ƭ UniqueKey<TTable>: [] | [FieldOfPotentialUniqueKey<TTable>, ...FieldOfPotentialUniqueKey<TTable>[]]
Table -> ["field1", "field2", ...], a list of fields allowed to compose a unique key on the table; the fields must be allowed in insert/upsert.

Name | Type |
---|---|
TTable | extends Table |
Ƭ LoadByInput<TTable, TUniqueKey>: TUniqueKey extends [] ? never : { [K in TUniqueKey[number]]: Value<TTable[K]> }
(Table, UniqueKey) -> { field1: number, field2: number, field3: number }. The loadBy operation is allowed for exact unique key attributes only.

Name | Type |
---|---|
TTable | extends Table |
TUniqueKey | extends UniqueKey<TTable> |
Ƭ SelectByInput<TTable, TUniqueKey>: LoadByInput<TTable, TuplePrefixes<TUniqueKey>>
(Table, UniqueKey) -> { field1: number [, field2: number [, ...] ] }. The selectBy operation is allowed for unique key PREFIX attributes only.

Name | Type |
---|---|
TTable | extends Table |
TUniqueKey | extends UniqueKey<TTable> |
Ƭ Where<TTable>: { $and?: ReadonlyArray<Where<TTable>>; $or?: ReadonlyArray<Where<TTable>>; $not?: Where<TTable>; $literal?: Literal; $shardOfID?: string } & { id?: TTable extends { id: unknown } ? unknown : string | string[] } & { [K in Field<TTable>]?: Value<TTable[K]> | ReadonlyArray<Value<TTable[K]>> | Object | Object | Object | Object | Object | Object | Object }
Table -> { f: 10, [$or]: [{ f2: "a" }, { f3: "b" }], $literal: ["x=?", 1] }

Name | Type |
---|---|
TTable | extends Table |
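A Where-shaped filter for a hypothetical posts table (a plain-object sketch, not bound to a real schema):

```typescript
// Matches posts by author "42" which are either published or pinned,
// excluding deleted rows, plus a raw $literal condition.
const where = {
  author_id: "42",
  $or: [{ status: "published" }, { pinned: true }],
  $not: { status: "deleted" },
  $literal: ["views > ?", 100] as const,
};
```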
Ƭ Order<TTable>: ReadonlyArray<{ [K in Field<TTable>]?: string } & { $literal?: Literal }>
Table -> [["f1", "ASC"], ["f2", "DESC"]] or [[{[$literal]: ["a=?", 10]}, "ASC"], ["b", "DESC"]]

Name | Type |
---|---|
TTable | extends Table |
Ƭ SelectInput<TTable>: Object
Table -> { where: ..., order?: ..., ... }

Name | Type |
---|---|
TTable | extends Table |

Name | Type |
---|---|
where | Where<TTable> |
order? | Order<TTable> |
custom? | {} |
limit | number |
Ƭ CountInput<TTable>: Where<TTable>
Table -> { f: 10, [$or]: [{ f2: "a" }, { f3: "b" }], $literal: ["x=?", 1] }

Name | Type |
---|---|
TTable | extends Table |
Ƭ ExistsInput<TTable>: Where<TTable>
Table -> { f: 10, [$or]: [{ f2: "a" }, { f3: "b" }], $literal: ["x=?", 1] }

Name | Type |
---|---|
TTable | extends Table |
Ƭ DeleteWhereInput<TTable>: { id: string[] } & Omit<Where<TTable>, typeof ID>
Table -> { id: ["1", "2", "3"], ... }

Name | Type |
---|---|
TTable | extends Table |
• Const MASTER: typeof MASTER
Master freshness: reads always go to the master.
• Const STALE_REPLICA: typeof STALE_REPLICA
Stale replica freshness: reads always go to a replica, even if it's stale.
• Const GLOBAL_SHARD: "global_shard"
The table is located in the global Shard (0).
• Const GUEST_ID: "guest"
Guest VC: has minimum permissions. Typically, if the user is not logged in, this VC is used.
• Const OMNI_ID: "omni"
Temporary "omniscient" VC. Any Ent can be loaded with it, but this VC is replaced with a lower-pri VC as soon as possible. E.g. when some Ent is loaded with an omni VC, its ent.vc is assigned to either this Ent's "owner" VC (accessible via a VC pointing field) or, if not detected, to the guest VC.
• Const ID: "id"
The primary key field's name is currently hardcoded for simplicity. It's a convention to name it "id".
▸ BaseEnt<TTable, TUniqueKey, TClient>(cluster, schema): HelpersClass<TTable, TUniqueKey, TClient>
This is a helper function to create new Ent classes. Run it once per Ent+Cluster pair on app boot. See examples in tests/TestObjects.ts and EntTest.ts.
Since all Ent objects are immutable (following modern practices),
- Ent is not a DataMapper pattern;
- Ent is not an ActiveRecord;
- Lastly, Ent is not quite a DAO pattern either.
We assume that Ents are very simple (we don't need triggers or multi-Ent-touching mutations), because we have a GraphQL layer on top of it anyway.
Finally, a naming decision has been made: we translate database field names directly to Ent field names, with no camelCase. This has proven its simplicity benefits in the past: the fewer translation layers we have, the easier it is to debug and develop.

Name | Type |
---|---|
TTable | extends Table |
TUniqueKey | extends UniqueKey<TTable> |
TClient | extends Client |

Name | Type |
---|---|
cluster | Cluster<TClient, any> |
schema | Schema<TTable, TUniqueKey> |
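The call shape can be illustrated with a toy stand-in for BaseEnt (the real function takes a Cluster and a Schema and returns a class to extend; everything below is hypothetical):

```typescript
// Toy mixin imitating BaseEnt's shape: it closes over cluster+schema and
// returns a base class that the concrete Ent extends.
function BaseEntSketch(cluster: string, schema: { name: string }) {
  return class {
    static readonly CLUSTER = cluster;
    static readonly SCHEMA = schema;
  };
}

// Run once per Ent+Cluster on app boot, then extend:
class EntTopic extends BaseEntSketch("main-cluster", { name: "topics" }) {}
```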
▸ buildUpdateNewRow<TTable>(oldRow, input): TriggerUpdateNewRow<TTable>
Simulates an update for a row, as if it's applied to the Ent.

Name | Type |
---|---|
TTable | extends Table |

Name | Type |
---|---|
oldRow | Row<TTable> |
input | UpdateInput<TTable> |
▸ CacheMixin<TTable, TUniqueKey, TClient>(Base): PrimitiveClass<TTable, TUniqueKey, TClient>
Modifies the passed class, adding a VC-stored cache layer to it.

Name | Type |
---|---|
TTable | extends Table |
TUniqueKey | extends UniqueKey<TTable> |
TClient | extends Client |

Name | Type |
---|---|
Base | PrimitiveClass<TTable, TUniqueKey, TClient> |
src/ent/mixins/CacheMixin.ts:20
▸ ConfigMixin<TTable, TUniqueKey, TClient>(Base, cluster, schema): ConfigClass<TTable, TUniqueKey, TClient>
Modifies the passed class adding support for Ent configuration (such as: Cluster, table schema, privacy rules, triggers etc.).

Name | Type |
---|---|
TTable | extends Table |
TUniqueKey | extends UniqueKey<TTable> |
TClient | extends Client |

Name | Type |
---|---|
Base | () => {} |
cluster | Cluster<TClient, any> |
schema | Schema<TTable, TUniqueKey> |
src/ent/mixins/ConfigMixin.ts:86
▸ HelpersMixin<TTable, TUniqueKey, TClient>(Base): HelpersClass<TTable, TUniqueKey, TClient>
Modifies the passed class, adding convenience methods (like loadX(), which throws when an Ent can't be loaded, instead of returning null as the primitive operations do).

Name | Type |
---|---|
TTable | extends Table |
TUniqueKey | extends UniqueKey<TTable> |
TClient | extends Client |

Name | Type |
---|---|
Base | PrimitiveClass<TTable, TUniqueKey, TClient> |
src/ent/mixins/HelpersMixin.ts:138
▸ PrimitiveMixin<TTable, TUniqueKey, TClient>(Base): PrimitiveClass<TTable, TUniqueKey, TClient>
Modifies the passed class adding support for the minimal number of basic Ent operations. Internally, uses Schema abstractions to run them.

Name | Type |
---|---|
TTable | extends Table |
TUniqueKey | extends UniqueKey<TTable> |
TClient | extends Client |

Name | Type |
---|---|
Base | ConfigClass<TTable, TUniqueKey, TClient> |
src/ent/mixins/PrimitiveMixin.ts:196
▸ evaluate<TInput>(vc, input, rules, fashion): Promise<{ allow: boolean; results: RuleResult[]; cause: string }>
This is the heart of permissions checking, a machine which evaluates the rules chain from top to bottom (one rule after another) and makes the decision based on the following logic:
- ALLOW immediately allows the chain, the rest of the rules are not checked. It's an eager allowance.
- DENY immediately denies the chain, the rest of the rules are not checked. It's an eager denial.
- TOLERATE delegates the decision to the next rules; if it's the last decision in the chain, then allows the chain. I.e. it's like an allowance, but only if everyone else is tolerant.
- SKIP also delegates the decision to the next rules, but if it's the last rule in the chain (i.e. nothing to skip to anymore), denies the chain. I.e. it's "I don't vote here, please ask others".
- An empty chain is always denied.
Having TOLERATE decision may sound superfluous, but unfortunately it's not. The TOLERATE enables usage of the same machinery for both read-like checks (where we typically want ANY of the rules to be okay with the row) and for write-like checks (where we typically want ALL rules to be okay with the row). Having the same logic for everything simplifies the code.
If the fashion argument is "parallel", all the rules are run at once in concurrent promises before the machine starts. This doesn't affect the final result, it just speeds up processing if we know there is a high chance that most of the rules will return TOLERATE and we'll need to evaluate all of them anyway (e.g. most of the rules are Require, as in write operations). In contrast, for read operations, there is a high chance for the first rule (which is often AllowIf) to succeed, so we evaluate the rules sequentially, not in parallel (to minimize the number of DB queries).
Example of a chain (the order of rules always matters!):
- new Require(new OutgoingEdgePointsToVC("user_id"))
- new Require(new CanReadOutgoingEdge("post_id", EntPost))
Example of a chain:
- new AllowIf(new OutgoingEdgePointsToVC("user_id"))
- new AllowIf(new CanReadOutgoingEdge("post_id", EntPost))
Example of a chain:
- new DenyIf(new UserIsPendingApproval())
- new AllowIf(new OutgoingEdgePointsToVC("user_id"))
Name | Type |
---|---|
TInput | extends object |

Name | Type |
---|---|
vc | VC |
input | TInput |
rules | Rule<TInput>[] |
fashion | "parallel" \| "sequential" |
▸ isBigintStr(str): boolean
It's hard to support the PG bigint type in JS, so people use strings instead. This function checks that a string can be passed to PG as a bigint.

Name | Type |
---|---|
str | string |
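One plausible implementation sketch (the real check lives in the framework and may differ, e.g. in how it treats negative numbers):

```typescript
// A PG bigint is a signed 64-bit integer; as a string, that's at most 19
// digits and no larger than 9223372036854775807 for non-negative values.
function isBigintStrSketch(str: string): boolean {
  if (!/^\d{1,19}$/.test(str)) return false;
  return BigInt(str) <= 9223372036854775807n;
}
```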
▸ default(): AsyncGenerator<unknown>
src/pg/benchmarks/batched-inserts.benchmark.ts:29
▸ buildShape(sql): string
Extracts a "shape" from some commonly built SQL queries. This function may be used from the outside for logging/debugging, so it's here, not in tests.

Name | Type |
---|---|
sql | string |
src/pg/helpers/buildShape.ts:52
▸ escapeIdent(ident): string
Optionally encloses a PG identifier (like a table name) in "".

Name | Type |
---|---|
ident | string |
src/pg/helpers/escapeIdent.ts:4
▸ escapeLiteral(literal): string
Builds a part of an SQL query using ?-placeholders to prevent SQL injection. Everywhere we want to accept a piece of SQL, we should instead accept a Literal tuple.
The function converts a Literal tuple [fmt, ...args] into a string, escaping the args and interpolating them into the format SQL, where "?" is a placeholder for the replaced value.

Name | Type |
---|---|
literal | Literal |
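A toy sketch of the ?-placeholder interpolation (real escaping is engine-specific and handled by the framework; this only covers strings and numbers):

```typescript
// Interpolates args into ?-placeholders, quoting strings and doubling
// single quotes inside them (basic PG-style string escaping).
function escapeLiteralSketch(literal: [string, ...Array<string | number>]): string {
  const [fmt, ...args] = literal;
  let i = 0;
  return fmt.replace(/\?/g, () => {
    const arg = args[i++];
    return typeof arg === "number" ? String(arg) : `'${arg.replace(/'/g, "''")}'`;
  });
}
```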