feat: n historical states #6008
Conversation
ensureDir,
};

const CHECKPOINT_STATES_FOLDER = "./unfinalized_checkpoint_states";
why write in a random directory instead of writing into the db?
At the beginning I started with writing checkpoint states to the fs to make it easier to debug, and I was also worried about db size increase because 2 states were persisted per epoch. Now I've switched to persisting 1 state per epoch, kept both persistence options (`db` and `fs`), and made it configurable through a cli option. Persisting to db does not seem to increase its size a lot, so I will make the `db` option the default.
ready for review, would like to confirm the approach before I separate it into smaller PRs to make it easier to review
@@ -35,8 +33,8 @@ import {from} from "multiformats/dist/types/src/bases/base.js";
 */
export class PersistentCheckpointStateCache implements CheckpointStateCache {
  private readonly cache: MapTracker<string, CachedBeaconStateAllForks | StateFile>;
While typing the Map value as `CachedBeaconStateAllForks | string` works, you could be more explicit by defining a wrapper object. The number of items in this map is low (< 1000), so the wrapper does not add any performance penalty but helps readability and maintenance.
Something like:

private readonly cache: MapTracker<string, StateCacheItem>;

type StateCacheItem =
  | {type: StateCacheItemType.filePath, filepath: string}
  | {type: StateCacheItemType.cachedBeaconState, value: CachedBeaconStateAllForks};
Then in the consumer code you can be explicit about what to do in each case
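A minimal sketch of what that consumer code could look like. The enum values, the stub `CachedBeaconStateAllForks`, and the `describe` helper are all illustrative, not lodestar's actual code:

```typescript
// Illustrative discriminated union per the suggestion above.
enum StateCacheItemType {
  filePath = "file_path",
  cachedBeaconState = "cached_beacon_state",
}

type CachedBeaconStateAllForks = {slot: number}; // stand-in for the real type

type StateCacheItem =
  | {type: StateCacheItemType.filePath; filepath: string}
  | {type: StateCacheItemType.cachedBeaconState; value: CachedBeaconStateAllForks};

// consumer code can now exhaustively switch on the discriminant,
// and the compiler flags any unhandled variant
function describe(item: StateCacheItem): string {
  switch (item.type) {
    case StateCacheItemType.filePath:
      return `on disk at ${item.filepath}`;
    case StateCacheItemType.cachedBeaconState:
      return `in memory at slot ${item.value.slot}`;
  }
}
```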
this.maxEpochsInMemory = opts.maxEpochsInMemory;
// Specify different persistentApis for testing
this.persistentApis = persistentApis ?? FILE_APIS;
this.shufflingCache = shufflingCache;
this.getHeadState = getHeadState;
this.inMemoryKeyOrder = new LinkedList<string>();
this.inMemoryEpochs = new Set();
void ensureDir(CHECKPOINT_STATES_FOLDER);
This function can throw an error, don't `void` it
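A minimal sketch of the pattern being asked for (not the PR's code; the `initCacheDir` wrapper is illustrative): await the directory creation so an EACCES/EROFS error surfaces to the caller instead of being discarded by `void`:

```typescript
import fs from "node:fs/promises";

// Illustrative: propagate ensureDir failures instead of `void ensureDir(...)`
async function ensureDir(dir: string): Promise<void> {
  await fs.mkdir(dir, {recursive: true});
}

async function initCacheDir(dir: string): Promise<boolean> {
  try {
    await ensureDir(dir); // awaited: a permissions error reaches this catch
    return true;
  } catch (e) {
    // caller decides: rethrow, log, or fall back to db persistence
    return false;
  }
}
```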
* Get a state from cache, it will reload from disk.
* This is expensive api, should only be called in some important flows:
Suggested change:
- * Get a state from cache, it will reload from disk.
- * This is expensive api, should only be called in some important flows:
+ * Get a state from cache, it may reload from disk.
+ * This is an expensive api, should only be called in some important flows:
this.logger.debug("Error reloading state from disk", {persistentKey}, e as Error);
return null;
}
return null;
unreachable code
 */
-export async function getStateForAttestationVerification(
+export async function getShufflingForAttestationVerification(
What happens if this function is called twice at the same time for a checkpoint that requires regen? What happens if we ever raise the concurrency of the regen queue to more than 1?
I think there should be an added layer of caching in the shuffling cache tracking active regen requests as pending promises. When multiple callers want the same pending shuffling, return the promise to all, to guarantee that there will never be a repeated regen for attestation verification.
Also, this code is extremely sensitive and can blow up Lodestar easily. Placing this code in the /validation folder as a random function hides how critical it is. This code should live closer to regen, have the locks explained in the paragraph above baked into it, and comments explaining their cost.
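A minimal sketch of the "pending promise" layer described above. This is not lodestar's actual `ShufflingCache`; the class and method names are illustrative. Concurrent callers for the same key share one in-flight computation, so regen cannot be triggered twice for the same shuffling:

```typescript
// Illustrative promise-deduplication cache: the second concurrent caller
// for a key gets the same in-flight promise instead of recomputing.
class PendingComputeCache<K, V> {
  private readonly pending = new Map<K, Promise<V>>();

  getOrCompute(key: K, compute: () => Promise<V>): Promise<V> {
    const existing = this.pending.get(key);
    if (existing !== undefined) return existing; // reuse in-flight promise
    const promise = compute().finally(() => this.pending.delete(key));
    this.pending.set(key, promise);
    return promise;
  }
}
```

With regen queue concurrency > 1, both callers still resolve from a single regen run, because the map entry is only cleared once the computation settles.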
let shufflingDependentRoot: RootHex;
if (blockEpoch === attEpoch) {
  // current shuffling, this is equivalent to `headState.currentShuffling`
  // given blockEpoch = attEpoch = n
  // epoch:   (n-2)   (n-1)     n     (n+1)
  //        |-------|-------|-------|-------|
  // attHeadBlock ------------------------^
  // shufflingDependentRoot ------^
  shufflingDependentRoot = chain.forkChoice.getDependentRoot(attHeadBlock, EpochDifference.previous);
} else if (blockEpoch === attEpoch - 1) {
  // next shuffling, this is equivalent to `headState.nextShuffling`
  // given blockEpoch = n-1, attEpoch = n
  // epoch:   (n-2)   (n-1)     n     (n+1)
  //        |-------|-------|-------|-------|
  // attHeadBlock -------------------^
  // shufflingDependentRoot ------^
  shufflingDependentRoot = chain.forkChoice.getDependentRoot(attHeadBlock, EpochDifference.current);
} else if (blockEpoch < attEpoch - 1) {
  // this never happens with default chain option of maxSkipSlots = 32, however we still need to handle it
  // check the verifyHeadBlockAndTargetRoot() function above
  // given blockEpoch = n-2, attEpoch = n
  // epoch:   (n-2)   (n-1)     n     (n+1)
  //        |-------|-------|-------|-------|
  // attHeadBlock -----------^
  // shufflingDependentRoot -----^
  shufflingDependentRoot = attHeadBlock.blockRoot;
  // use lodestar_gossip_attestation_head_slot_to_attestation_slot metric to track this case
} else {
  // blockEpoch > attEpoch
  // should not happen, handled in verifyAttestationTargetRoot
  throw Error(`attestation epoch ${attEpoch} is before head block epoch ${blockEpoch}`);
}
Move all of this to a separate function, like getShufflingDependentRoot. When you have an if/else assignment pattern, it's always a good indication to move the code into a function and use the return flow control
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
addressed in #6030 👍
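A hedged sketch of what that extracted function could look like. `RootHex`, `EpochDifference`, and the fork choice interface below are simplified stand-ins for lodestar's real types, not the actual #6030 code:

```typescript
// Illustrative: the if/else assignment pattern rewritten with return flow control.
type RootHex = string;
type Epoch = number;
enum EpochDifference {
  current = 0,
  previous = 1,
}
interface ProtoBlockLike {
  blockRoot: RootHex;
}
interface ForkChoiceLike {
  getDependentRoot(block: ProtoBlockLike, diff: EpochDifference): RootHex;
}

function getShufflingDependentRoot(
  forkChoice: ForkChoiceLike,
  blockEpoch: Epoch,
  attEpoch: Epoch,
  attHeadBlock: ProtoBlockLike
): RootHex {
  if (blockEpoch === attEpoch) {
    // current shuffling, equivalent to `headState.currentShuffling`
    return forkChoice.getDependentRoot(attHeadBlock, EpochDifference.previous);
  }
  if (blockEpoch === attEpoch - 1) {
    // next shuffling, equivalent to `headState.nextShuffling`
    return forkChoice.getDependentRoot(attHeadBlock, EpochDifference.current);
  }
  if (blockEpoch < attEpoch - 1) {
    // attHeadBlock is old enough to be the dependent root itself
    return attHeadBlock.blockRoot;
  }
  // blockEpoch > attEpoch: should not happen, handled in verifyAttestationTargetRoot
  throw Error(`attestation epoch ${attEpoch} is before head block epoch ${blockEpoch}`);
}
```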
 */
export function getAttestationDataSigningRoot(config: BeaconConfig, data: phase0.AttestationData): Uint8Array {
  const slot = computeStartSlotAtEpoch(data.target.epoch);
  // previously, we call `domain = config.getDomain(state.slot, DOMAIN_BEACON_ATTESTER, slot)`
I'm not convinced this change is correct
@@ -59,6 +59,7 @@ export enum Bucket {
  // 54 was for bestPartialLightClientUpdate, allocate a fresh one
  // lightClient_bestLightClientUpdate = 55, // SyncPeriod -> LightClientUpdate // DEPRECATED on v1.5.0
  lightClient_bestLightClientUpdate = 56, // SyncPeriod -> [Slot, LightClientUpdate]
  allForks_checkpointState = 57, // Root -> allForks.BeaconState
use the lowest available bucket
@tuyennhv continued the review on a plane without internet, dumping inline comments taken on the code, please excuse the different format:

lion@lion-tp:~/code/chainsafe/lodestar$ git diff
diff --git a/packages/beacon-node/src/chain/prepareNextSlot.ts b/packages/beacon-node/src/chain/prepareNextSlot.ts
index 42003052da..34e91bbc1f 100644
--- a/packages/beacon-node/src/chain/prepareNextSlot.ts
+++ b/packages/beacon-node/src/chain/prepareNextSlot.ts
@@ -84,6 +84,9 @@ export class PrepareNextSlotScheduler {
clockSlot,
});
// It's important to still do this to get through Holesky unfinality time of low resouce nodes
+ // This sentence needs much more context, just saying it's important is not enough. Link to an issue or
+ // PR with the full context and summarize in 1 sentence here. Explain why this function appears two
+ // times in the same function.
await this.prunePerSlot(clockSlot);
return;
}
@@ -176,6 +179,9 @@ export class PrepareNextSlotScheduler {
}
// do this after all as it's the lowest priority task
+ // LION: Calling this function does not really have anything to do with "prepareNextSlot". The scope creep of this function
+ // is bad, basically being a dump of every action that must be performed routinely off-slot. This class as a whole is kind of
+ // useless: it holds no state, and just subscribes to an event to run this function.
await this.prunePerSlot(clockSlot);
} catch (e) {
if (!isErrorAborted(e) && !isQueueErrorAborted(e)) {
@@ -191,10 +197,12 @@ export class PrepareNextSlotScheduler {
* However, it's not likely `inMemoryEpochs` is configured as 0, and this scenario rarely happen
* since we only use `inMemoryEpochs = 0` for testing, if it happens it's a good thing because it helps us test the reload flow
*/
+ // This function only prunes checkpointStates, call it pruneCheckpointStateCache, rename to a generic "pruneX" latter when multiuse
private prunePerSlot = async (clockSlot: Slot): Promise<void> => {
// a contabo vpss can have 10-12 holesky epoch transitions per epoch when syncing, stronger node may have more
// it's better to prune at the last 1/3 of every slot in order not to cache a lot of checkpoint states
// at synced time, it's likely we only prune at the 1st slot of epoch, all other prunes are no-op
+ // LION: Where do you enforce that this function runs on the 1/3 of the slot?
const pruneCount = await this.chain.regen.pruneCheckpointStateCache();
this.logger.verbose("Pruned checkpoint state cache", {clockSlot, pruneCount});
};
diff --git a/packages/beacon-node/src/chain/stateCache/persistentCheckpointsCache.ts b/packages/beacon-node/src/chain/stateCache/persistentCheckpointsCache.ts
index 7ddf57eb2b..5f9b43799a 100644
--- a/packages/beacon-node/src/chain/stateCache/persistentCheckpointsCache.ts
+++ b/packages/beacon-node/src/chain/stateCache/persistentCheckpointsCache.ts
@@ -143,10 +143,13 @@ export class PersistentCheckpointStateCache implements CheckpointStateCache {
this.logger.verbose("Reload: read state failed", {persistentKey});
return null;
}
+ // Add white space to this function, space code out with comments where necessary
this.logger.verbose("Reload: read state successfully", {persistentKey});
this.metrics?.stateRemoveCount.inc({reason: RemovePersistedStateReason.reload});
+ // Why is the clock maybe nullish? Why does it use the current slot? If some value is nullish, what's the point of observing zero?
this.metrics?.stateReloadSecFromSlot.observe(this.clock?.secFromSlot(this.clock?.currentSlot ?? 0) ?? 0);
const closestState = findClosestCheckpointState(cp, this.cache) ?? this.getHeadState?.();
+ // Is this a possible state? If getHeadState always return a value, not
if (closestState == null) {
throw new Error("No closest state found for cp " + toCheckpointKey(cp));
}
@@ -171,6 +174,9 @@ export class PersistentCheckpointStateCache implements CheckpointStateCache {
// don't prune from memory here, call it at the last 1/3 of slot 0 of an epoch
return newCachedState;
} catch (e) {
+ // This is not true, and this broad try catch for so many different steps is very bad.
+ // There should probably not be a try catch here at all. If it must be, do something meaningful,
+ // and wrap individual steps. Not all code from `loadCachedBeaconState` until end of function.
this.logger.debug("Error reloading state from disk", {persistentKey}, e as Error);
return null;
}
@@ -279,6 +285,7 @@ export class PersistentCheckpointStateCache implements CheckpointStateCache {
// sort epochs in descending order, only consider epochs lte `epoch`
const epochs = Array.from(this.epochIndex.keys())
.sort((a, b) => b - a)
+ // filter first
.filter((e) => e <= maxEpoch);
for (const epoch of epochs) {
if (this.epochIndex.get(epoch)?.has(rootHex)) {
@@ -288,6 +295,7 @@ export class PersistentCheckpointStateCache implements CheckpointStateCache {
return state;
}
} catch (e) {
+ // Why would this fail? Blind try catches are not okay
this.logger.debug("Error get or reload state", {epoch, rootHex}, e as Error);
}
}
@@ -316,6 +324,7 @@ export class PersistentCheckpointStateCache implements CheckpointStateCache {
pruneFinalized(finalizedEpoch: Epoch): void {
for (const epoch of this.epochIndex.keys()) {
if (epoch < finalizedEpoch) {
+ // This can overwhelm the underlying persistent API by requesting many delete operations at once
this.deleteAllEpochItems(epoch).catch((e) =>
this.logger.debug("Error delete all epoch items", {epoch, finalizedEpoch}, e as Error)
);
@@ -385,6 +394,7 @@ export class PersistentCheckpointStateCache implements CheckpointStateCache {
}
if (firstEpoch === undefined) {
// should not happen
+ // LION: why not? explain
throw new Error("No epoch in memory");
}
// first loop to check if the 1st slot of epoch is a skipped slot or not
@@ -408,6 +418,7 @@ export class PersistentCheckpointStateCache implements CheckpointStateCache {
// if not found firstSlotBlockRoot, first slot of state is skipped, we should persist the Previous Root Checkpoint State, where the root
// is the last block slot root of pervious epoch. In this case Previous Root Checkpoint State would become the justified/finalized state.
for (const rootHex of this.epochIndex.get(firstEpoch) ?? []) {
+ // rewrite this with direct assignments to toPersist and toDelete, no let
let toPersist = false;
let toDelete = false;
if (firstSlotBlockRoot === undefined) {
@@ -487,6 +498,9 @@ export function findClosestCheckpointState(
cp: CheckpointHex,
cache: Map<string, CachedBeaconStateAllForks | PersistentKey>
): CachedBeaconStateAllForks | null {
+ // LION: This function must handle checkpoints on different branches
+ // Just picking the state with the closest epoch does not work in really bad cases
+ // Must add unit tests for this
let smallestEpochDiff = Infinity;
let closestState: CachedBeaconStateAllForks | null = null;
for (const [key, value] of cache.entries()) {
diff --git a/packages/state-transition/src/util/loadState.ts b/packages/state-transition/src/util/loadState.ts
index 18eac53e6f..5d6a096585 100644
--- a/packages/state-transition/src/util/loadState.ts
+++ b/packages/state-transition/src/util/loadState.ts
@@ -32,6 +32,7 @@ export function loadState(
const fieldRanges = stateType.getFieldRanges(dataView, 0, stateBytes.length);
const allFields = Object.keys(stateType.fields);
const validatorsFieldIndex = allFields.indexOf("validators");
+ // This call for a mainnet state is not free, why is it necessary to initialize all vectors to zero?
const migratedState = stateType.defaultViewDU();
// validators is rarely changed
const validatorsRange = fieldRanges[validatorsFieldIndex];
@@ -42,7 +43,9 @@ export function loadState(
);
// inactivityScores
// this takes ~500 to hashTreeRoot while this field is rarely changed
+ // LION: 500 what? use units
const fork = getForkFromStateBytes(config, stateBytes);
+ // rename to seedStateFork
const seedFork = config.getForkSeq(seedState.slot);
let loadedInactivityScores = false;
posting some comments so far, will continue looking over things
@@ -68,6 +68,7 @@ export type StateCacheItem = {
  /** Unix timestamp (ms) of the last read */
  lastRead: number;
  checkpointState: boolean;
  persistentKey?: string;
Can you add a jsdoc comment?
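One possible shape for that jsdoc. The wording of the comment is my guess at the intended semantics, written to match the style of the existing `lastRead` comment, and would need to be confirmed against the implementation:

```typescript
// Illustrative jsdoc for the new field (semantics assumed, not confirmed)
export type StateCacheItem = {
  /** Unix timestamp (ms) of the last read */
  lastRead: number;
  checkpointState: boolean;
  /** Key of the state persisted to disk/db; set only once the state has been evicted from memory */
  persistentKey?: string;
};
```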
@@ -0,0 +1,41 @@
import fs from "node:fs";
you can get promise versions of fs functions by using "node:fs/promises"
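A small sketch of the suggested import. `node:fs/promises` exposes promise-returning versions of the fs API directly, so no callback wrapping or `promisify` is needed. The helper names and file layout here are illustrative:

```typescript
import fs from "node:fs/promises";

// Illustrative helpers: read/write state bytes with the promise-based fs API
async function writeStateBytes(filepath: string, bytes: Uint8Array): Promise<void> {
  await fs.writeFile(filepath, bytes);
}

async function readStateBytes(filepath: string): Promise<Uint8Array> {
  return new Uint8Array(await fs.readFile(filepath));
}
```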
The file API has to go IMO. Managing files as part of the beacon node package, going around the DB, is a dangerous anti-pattern which: can break with permission issues, is not measured, and is not proven to be more efficient than level db. It's definitely a bit of scope creep to introduce this in a sensitive change like this PR. Best to:
- implement the core functionality of this PR re-using existing db mechanisms
- prove that using level DB for temporal states is a problem
- prove that persisting states as files is more efficient in disk usage
if (finalizedStateOrBytes instanceof Uint8Array) {
  const slot = getStateSlotFromBytes(finalizedStateOrBytes);
  await this.db.stateArchive.putBinary(slot, finalizedStateOrBytes);
  this.logger.verbose("Archived finalized state bytes", {finalizedEpoch: finalized.epoch, slot, root: rootHex});
Log context is different from the one below
Suggested change:
- this.logger.verbose("Archived finalized state bytes", {finalizedEpoch: finalized.epoch, slot, root: rootHex});
+ this.logger.verbose("Archived finalized state bytes", {epoch: finalized.epoch, slot, root: rootHex});
} else {
  // state
  await this.db.stateArchive.put(finalizedStateOrBytes.slot, finalizedStateOrBytes);
  this.logger.verbose("Archived finalized state", {epoch: finalized.epoch, root: rootHex});
Could consider logging slot here (same as above)
return toHexString(root);
}

async remove(persistentKey: PersistentKey): Promise<boolean> {
Why does this return a boolean instead of just void? It can never return false
it's to conform to the `FilePersistentApis`; will check if it's really necessary to return boolean
this.logger.verbose("Reload: read state successfully", {persistentKey});
this.metrics?.stateRemoveCount.inc({reason: RemovePersistedStateReason.reload});
this.metrics?.stateReloadSecFromSlot.observe(this.clock?.secFromSlot(this.clock?.currentSlot ?? 0) ?? 0);
const closestState = findClosestCheckpointState(cp, this.cache) ?? this.getHeadState?.();
TODO: consider a better strategy than picking closest state based on epoch diff
for (const i of modifiedValidators) {
  migratedState.validators.set(
    i,
    ssz.phase0.Validator.deserializeToViewDU(
try to reuse the unchanged properties of the old validator
At least re-use pubkey and withdrawal_credentials which take the most memory of all
const migratedState = stateType.defaultViewDU();
// validators is rarely changed
const validatorsRange = fieldRanges[validatorsFieldIndex];
const modifiedValidators = loadValidators(
add benchmark results; how did we come up with this strategy?
config: ChainForkConfig,
seedState: BeaconStateAllForks,
stateBytes: Uint8Array
): MigrateStateOutput {
Consider moving the details of type manipulation to the package types/utils/container.ts:

const migratedState = deserializeContainerIgnoreFields(stateBytes, ["validators", "inactivityScores"]);
// validators is rarely changed
const validatorsBytes = sliceContainerField(stateType, "validators", stateBytes);
const modifiedValidators = loadValidators(
  migratedState,
  seedState,
  validatorsBytes,
);
 * ║ ║ ^ ^ ║
 * ╚════════════════════════════════════╝═══════════════╝
 */
export class PersistentCheckpointStateCache implements CheckpointStateCache {
write unit tests for the weird case where we do multiple epoch transitions, no blocks in the middle
}
}

// if found firstSlotBlockRoot it means it's Current Root Checkpoint State and we should only persist that checkpoint as it's the state
TODO: even if there is a Current Root Checkpoint State, it could be from an orphaned block, so maybe we still have to store both checkpoints, but we could improve by:
- store Previous Root Checkpoint State with state bytes
- store Current Root Checkpoint State with the root of Previous Root Checkpoint State and the block at slot 0 of the epoch, and do a replay when we need it
} else {
  this.epochIndex.set(epoch, new Set([key]));
}
this.keyOrder.unshift(key);
insert at 2nd position, 1st position for head
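A tiny sketch of that suggestion on a plain array (lodestar's `LinkedList` would need an equivalent operation; the helper name is illustrative):

```typescript
// Illustrative: insert at the 2nd position so index 0 stays reserved for the head key
function insertAfterHead<T>(keyOrder: T[], key: T): void {
  if (keyOrder.length === 0) {
    keyOrder.push(key);
  } else {
    keyOrder.splice(1, 0, key); // head stays first, new key goes second
  }
}
```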
const {epoch} = fromCheckpointKey(key);
if (isPersistentKey(stateOrPersistentKey)) {
  persistCount++;
  memoryEpochs.add(epoch);
Suggested change:
- memoryEpochs.add(epoch);
+ persistentEpochs.add(epoch);
this.logger.verbose("Processed shuffling for next epoch", {parentEpoch, blockEpoch, slot: block.message.slot});

// This is the real check point state per spec because the root is in current epoch
// it's important to add this to cache, when chain is finalized we'll query this state later
const checkpointState = postState;
const cp = getCheckpointFromState(checkpointState);
put this inside the `if (block.message.slot % SLOTS_PER_EPOCH === 0) {` condition below, otherwise we get the "Block error slot=452161 error=Checkpoint state slot must be first in an epoch" error, as monitored on the test branch
// this is usually added when we prepare for next slot or validate gossip block
// then when we process the 1st block of epoch, we don't have to do state transition again
// This adds Previous Root Checkpoint State to the checkpoint state cache
// This may becomes the "official" checkpoint state if the 1st block of epoch is skipped
const checkpointState = postState;
const cp = getCheckpointFromState(checkpointState);
checkpointStateCache.add(cp, checkpointState);
Suggested change:
- checkpointStateCache.add(cp, checkpointState);
+ if (shouldReload) {
+   checkpointStateCache.add(cp, checkpointState);
+ }
replaced by #6250
Motivation
- Reduce gc calls (specifically in the state transition function)

Description
- New `loadState` api to load state bytes into an existing TreeBacked state
- Checkpoint state cache
  - keep in memory up to `maxEpochsInMemory` (default = 2) epochs. If more than that, persist the states to disk. Right now we persist 2 checkpoint states (spec + non-spec) to disk per epoch
  - persist to `fs` through `FilePersistentApis` or `db` through `DbPersistentApis` (default `db`), cleanup at startup
  - a cache value is either a `CachedBeaconState` or a persistent key
  - `MemoryCheckpointStateCache` and the `CheckpointStateCache` interface
- State cache
  - `StateContextCache` and the `BlockStateCache` interface
- Regen
- Feature flag: `nHistoricalStates` flag to use the new `PersistentCheckpointStateCache` and `LRUBlockStateCache`; otherwise keep using `MemoryCheckpointStateCache` and `StateContextCache`
- Shuffling cache

Test result on Holesky
- This branch was able to survive under non-finality time of holesky with default heap size config
- Used heap is reduced based on `maxEpochsInMemory` (for checkpoint cache) and `maxStates` (for state cache)
- Some down sides with this approach

part of #5968

TODOs
- `subscribeAllSubnets` flag
- `2 epochs - 32 maxStates` seems to be the best default config
- `loadState` to bypass this check