measures for evaluating event detection #865

Open

dkrako opened this issue Oct 23, 2024 · 0 comments
Labels
enhancement New feature or request

Comments

@dkrako
Contributor

dkrako commented Oct 23, 2024

Description of the problem

When running event detection algorithms, we want quality measures for the resulting event data.
Algorithm and parameter selection potentially depends on the eye tracker, the recording setup, ambient conditions, and overall data quality. Event detection that works well on one dataset is thus not guaranteed to transfer to other datasets.

Usually there is no annotated ground-truth event data available, so we are limited to heuristic measures that expose potential issues in detected events.

Description of a solution

I propose the following heuristic measures of event detection quality that are agnostic to stimulus data (a sketch of the main-sequence fit follows the list):

  • peak / average velocity [dva/s] of fixations
  • peak / average dispersion [dva or px] of fixations
  • average durations [ms] of fixations and saccades
  • R² of a linear fit to the saccade main sequence
  • proportion of unclassified samples
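
For the main-sequence measure, here is a minimal sketch of the R² computation. It assumes per-saccade amplitudes [dva] and peak velocities [dva/s] as NumPy arrays and fits the conventional log-log linear relation; the function name and interface are illustrative, not an existing API:

```python
import numpy as np

def main_sequence_r2(amplitude: np.ndarray, peak_velocity: np.ndarray) -> float:
    """R² of a linear fit to the saccadic main sequence in log-log space.

    Well-detected saccades follow an approximately linear relation between
    log amplitude and log peak velocity; a low R² hints at misclassified events.
    """
    x = np.log(amplitude)
    y = np.log(peak_velocity)
    # np.polyfit returns coefficients from highest to lowest degree.
    slope, intercept = np.polyfit(x, y, deg=1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```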

It should be possible to aggregate these measures at several levels (see the group-by sketch after this list):

  • trial
  • file
  • session
  • subject
  • dataset
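
Since pymovements builds on polars, aggregation could boil down to one group_by call per level, where higher levels simply drop grouping columns. The frame layout and column names below are assumptions for illustration:

```python
import polars as pl

# Hypothetical event frame; column names are assumptions for illustration.
events = pl.DataFrame({
    "subject_id": [1, 1, 2, 2],
    "trial_id": [1, 2, 1, 2],
    "duration": [180.0, 210.0, 90.0, 400.0],
    "peak_velocity": [25.0, 30.0, 15.0, 80.0],
})

# Trial level: group by subject and trial.
per_trial = events.group_by(["subject_id", "trial_id"]).agg(
    pl.col("duration").mean().alias("mean_duration"),
    pl.col("peak_velocity").max().alias("max_peak_velocity"),
)

# Subject level: drop the trial column from the grouping.
per_subject = events.group_by("subject_id").agg(
    pl.col("duration").mean().alias("mean_duration"),
)

# Dataset level: no grouping at all.
per_dataset = events.select(pl.col("duration").mean().alias("mean_duration"))
```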

If stimulus data such as AOIs (areas of interest) is taken into account, the following measures become possible (a sketch follows the list):

  • proportion of fixations in the background (not on any explicit AOI)
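
A minimal sketch of the background proportion, assuming fixation centroids as (x, y) pairs and rectangular AOIs (real AOIs may be arbitrary polygons):

```python
import numpy as np

def background_fixation_proportion(centroids, aois):
    """Proportion of fixation centroids that fall outside every AOI.

    centroids: (n, 2) array of fixation centroid positions.
    aois: list of (x_min, y_min, x_max, y_max) rectangles; rectangular
        AOIs are an assumption for this sketch.
    """
    centroids = np.asarray(centroids)
    in_any = np.zeros(len(centroids), dtype=bool)
    for x_min, y_min, x_max, y_max in aois:
        in_any |= (
            (centroids[:, 0] >= x_min) & (centroids[:, 0] <= x_max)
            & (centroids[:, 1] >= y_min) & (centroids[:, 1] <= y_max)
        )
    return 1.0 - in_any.mean()
```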

Furthermore, experiments can include validation sequences before trials, in which participants are instructed to fixate a sequence of points. If raw data is available for these validation sequences, the locations and timings of the presented points can be taken as ground truth (taking the participant's reaction delay into account); a sketch follows the list:

  • use validation points as ground truth
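
A minimal sketch of such a comparison, assuming raw gaze samples and a list of presented points with onset/offset times; the fixed reaction delay is a rough stand-in for a proper saccade latency model:

```python
import numpy as np

def validation_offsets(gaze_xy, timestamps, points, reaction_delay=0.2):
    """Gaze offset from each presented validation point.

    gaze_xy: (n, 2) array of raw gaze positions [dva or px].
    timestamps: (n,) array of sample times [s].
    points: iterable of (t_on, t_off, x, y) per presented point.
    reaction_delay: seconds skipped after point onset to account for
        saccade latency (an assumed constant, not a calibrated value).
    """
    gaze_xy = np.asarray(gaze_xy)
    timestamps = np.asarray(timestamps)
    offsets = []
    for t_on, t_off, x, y in points:
        mask = (timestamps >= t_on + reaction_delay) & (timestamps < t_off)
        if not mask.any():
            offsets.append(np.nan)
            continue
        centroid = gaze_xy[mask].mean(axis=0)
        offsets.append(np.hypot(centroid[0] - x, centroid[1] - y))
    return np.array(offsets)
```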

We could also run outlier detection on the event properties and check whether a large proportion of events are outliers, as in the IQR-based sketch below.
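
A minimal sketch using Tukey fences; the choice of IQR-based detection and the factor of 1.5 are assumptions, any robust outlier detector would do:

```python
import numpy as np

def iqr_outlier_rate(values, factor=1.5):
    """Fraction of values outside the Tukey fences (Q1 - f*IQR, Q3 + f*IQR).

    A high outlier rate on, e.g., fixation durations suggests unstable
    event detection parameters.
    """
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - factor * iqr, q3 + factor * iqr
    return float(np.mean((values < lower) | (values > upper)))
```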

Minimum acceptance criteria

Sample Code

  • velocity, dispersion, and duration are basic event measures that can already be computed simply by:
dataset.compute_event_properties(["peak_velocity", "dispersion"])

This functionality is currently only implemented for Dataset and is missing from GazeDataFrame.
See issue #871 for implementing GazeDataFrame.compute_event_properties().
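
Once #871 lands, the same call could be mirrored on a single GazeDataFrame; the snippet below is only a sketch of the proposed interface, not the current API:

```python
# Hypothetical usage after #871: operate on a single GazeDataFrame
# taken from a loaded Dataset. The signature mirrors
# Dataset.compute_event_properties() and is an assumption.
gaze = dataset.gaze[0]
gaze.compute_event_properties(["peak_velocity", "dispersion"])
```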

dkrako added the enhancement label on Oct 23, 2024