Description of the problem
When running event detection algorithms, we want some quality measure of the resulting event data.
Algorithm and parameter selection potentially depends on the eye tracker, recording setup, ambient conditions, and overall data quality. Event detection that works well on one dataset is thus not guaranteed to transfer to other datasets.
Usually there is no annotated ground-truth event data available, so we are limited to heuristic measures that expose potential issues in detected events.
Description of a solution
I propose the following heuristic measures for event detection quality that are agnostic to stimulus data:
- peak / average velocity [dva/s] of fixations
- peak / average dispersion [dva or px] of fixations
- average durations [ms] of fixations and saccades
- R² of the saccade main sequence fit (see the sketch after this list)
- proportion of unclassified samples
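As a minimal sketch of the main-sequence measure: fit a line to log amplitude vs. log peak velocity across detected saccades and report the R² of that fit. The function below assumes per-saccade amplitudes [dva] and peak velocities [dva/s] are already available as positive arrays; all names are illustrative, not part of the library.

```python
import numpy as np

def main_sequence_r2(amplitudes: np.ndarray, peak_velocities: np.ndarray) -> float:
    """R² of a linear fit to the saccade main sequence in log-log space.

    The main sequence relates saccade amplitude to peak velocity;
    a poor fit hints at misclassified saccades.
    Both inputs must be strictly positive (log is applied).
    """
    x = np.log10(np.asarray(amplitudes, dtype=float))
    y = np.log10(np.asarray(peak_velocities, dtype=float))
    slope, intercept = np.polyfit(x, y, deg=1)
    y_pred = slope * x + intercept
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```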
It should be possible to aggregate these measures on several levels (a group-by sketch follows this list):
- trial
- file
- session
- subject
- dataset
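For instance, if the per-event properties were collected into one long-format table with the relevant grouping columns, aggregation could look like the following polars sketch. The column names and table layout here are assumptions for illustration, not the library's actual schema:

```python
import polars as pl

# Hypothetical long-format table of event properties, one row per event,
# with one grouping column per aggregation level.
events = pl.DataFrame({
    'subject': [1, 1, 2, 2],
    'trial': [1, 2, 1, 2],
    'duration': [180.0, 210.0, 905.0, 195.0],
    'peak_velocity': [30.0, 28.0, 350.0, 25.0],
})

# Aggregate quality measures per subject; swap 'subject' for 'trial',
# 'session', etc. to change the aggregation level.
per_subject = events.group_by('subject').agg(
    pl.col('duration').mean().alias('mean_duration'),
    pl.col('peak_velocity').max().alias('max_peak_velocity'),
)
```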
If stimulus data like AOIs is taken into account, the following measures are possible:
- proportion of fixations on the background (i.e. not on any explicit AOI)
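A minimal sketch of the background-proportion measure, assuming rectangular AOIs and fixation centroids in the same coordinate system (all names are hypothetical):

```python
import numpy as np

def background_proportion(x, y, aois) -> float:
    """Proportion of fixations whose centroid falls on no AOI.

    `aois` is a list of (x_min, y_min, x_max, y_max) rectangles; fixation
    centroids are given as arrays `x`, `y` in the same coordinate system.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    on_aoi = np.zeros_like(x, dtype=bool)
    for x_min, y_min, x_max, y_max in aois:
        on_aoi |= (x >= x_min) & (x <= x_max) & (y >= y_min) & (y <= y_max)
    return float(1.0 - on_aoi.mean())
```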
Further, experiments can include a validation procedure before trials, where participants are instructed to fixate a sequence of points. If raw data is available for these validation sequences, the locations and timings of the presented points can be taken as ground truth (accounting for the participant's reaction delay):
- use validation points as ground truth (see the sketch below)
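As an illustration, a per-point accuracy measure could compare the fixation centroid recorded during each validation point against the point's known location, after excluding an assumed saccade-latency window following point onset. Everything here (function names, the ~200 ms latency figure) is an assumption, not part of the library:

```python
import numpy as np

# Hypothetical: one matched fixation centroid per presented validation point.
# Samples within an assumed ~200 ms saccade latency after point onset should
# be excluded before the centroid is computed.
def validation_offsets(fix_x, fix_y, point_x, point_y) -> np.ndarray:
    """Euclidean offset [dva] between each validation fixation and its target."""
    return np.hypot(
        np.asarray(fix_x, dtype=float) - np.asarray(point_x, dtype=float),
        np.asarray(fix_y, dtype=float) - np.asarray(point_y, dtype=float),
    )

# Example: mean offset across a 5-point validation sequence.
offsets = validation_offsets(
    fix_x=[0.1, 9.8, -10.2, 0.0, 10.1],
    fix_y=[0.0, 5.1, 5.0, -4.9, -5.2],
    point_x=[0, 10, -10, 0, 10],
    point_y=[0, 5, 5, -5, -5],
)
mean_accuracy = offsets.mean()
```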
We could also run outlier detection on the event properties and check whether there are many outliers.
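One simple variant, sketched here under the assumption that a plain Tukey-fence rule is acceptable: flag events whose property values fall outside [Q1 - k·IQR, Q3 + k·IQR] and report the outlier rate.

```python
import numpy as np

def iqr_outlier_rate(values, k: float = 1.5) -> float:
    """Fraction of values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    outliers = (values < q1 - k * iqr) | (values > q3 + k * iqr)
    return float(outliers.mean())
```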
Minimum acceptance criteria
Sample Code
Velocity, dispersion, and duration are basic event measures that can be computed simply by:
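A sketch of the intended call on the `Dataset` level; the dataset name, preprocessing steps, detection algorithm, and property identifiers below are illustrative, so check the pymovements documentation for the exact API:

```python
import pymovements as pm

# Illustrative pipeline: load gaze data, derive velocities, detect events,
# then compute per-event properties on the Dataset level.
dataset = pm.Dataset('ToyDataset', path='data/ToyDataset')
dataset.load()
dataset.pix2deg()   # pixel coordinates -> degrees of visual angle
dataset.pos2vel()   # positions -> velocities

# Event detection; the algorithm and property names are assumptions here.
dataset.detect_events(pm.events.ivt)
dataset.compute_event_properties(('dispersion', 'peak_velocity'))
```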
This functionality is only implemented for `Dataset` but missing from `GazeDataFrame`. See issue #871 for implementing `GazeDataFrame.compute_event_properties()`.