Eye tracking sounds precise from the outside: a system predicts where a person is looking, and we turn that signal into attention data. In practice, the story is messier. Accuracy can change from one participant to another, from one room to another, and sometimes from one calibration attempt to the next.

This is especially true for webcam-based eye tracking. Traditional commercial eye trackers are built for measurement, but browser-based tools such as WebGazer are designed around accessibility. That accessibility is powerful: no special hardware, lower cost, easier deployment, and the possibility of running studies inside a normal web application. But it also means we need to be honest about signal quality.

Several factors can shift the result: lighting, camera angle, screen size, head movement, glasses, calibration quality, browser behavior, and whether the user is sitting consistently throughout the task. A gaze point that looks acceptable in one setup may be noisy in another. If we ignore that, the downstream analysis can become fragile.

That is why I built the Web-Based Eye Tracker Accuracy Finder. The goal was not to create another black-box eye-tracking demo. The goal was to create a measurement layer: before using gaze data in a research or AI pipeline, the system should estimate whether the current gaze signal is accurate enough.

The idea is simple. First, the user completes a browser calibration flow. Then the system shows a sequence of known target cells on the screen. While the user follows these targets, the application records gaze coordinates and timestamps from WebGazer. The backend maps each gaze point to a cell in a 7x7 screen grid and checks whether the gaze falls into the expected target cell during each five-second window.
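As a minimal sketch of that mapping step (not the project's actual code; the function names and sample format are assumptions for illustration), the grid lookup and the per-window hit check might look like this:

```python
# Map a gaze sample to a cell in a 7x7 screen grid and check it against
# the expected target cell during a time window.

GRID_SIZE = 7  # 7x7 grid, as described above


def gaze_to_cell(x, y, screen_w, screen_h, grid=GRID_SIZE):
    """Map pixel coordinates to a (row, col) cell in a grid x grid layout."""
    # Clamp so samples exactly on the right/bottom edge fall in the last cell.
    col = min(int(x / screen_w * grid), grid - 1)
    row = min(int(y / screen_h * grid), grid - 1)
    return row, col


def hits_in_window(samples, target_cell, t_start, t_end, screen_w, screen_h):
    """Count samples inside [t_start, t_end) that land in the target cell.

    samples: list of dicts like {"t": seconds, "x": px, "y": px}
    (a hypothetical shape for the recorded WebGazer stream).
    """
    window = [s for s in samples if t_start <= s["t"] < t_end]
    hits = sum(
        gaze_to_cell(s["x"], s["y"], screen_w, screen_h) == target_cell
        for s in window
    )
    return hits, len(window)
```

The edge clamping matters in practice: a gaze estimate at the very bottom of a 1080-pixel screen would otherwise index a nonexistent eighth row.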

Instead of only saving raw x-y coordinates, the system produces an interpretable accuracy score. It can answer questions like: during this time window, how much of the gaze data landed in the target cell? Did the signal drift into neighboring cells? Was the distribution concentrated or scattered?
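Those three questions can be answered with a small summary per target window. The sketch below is a hypothetical version of such a summary (the field names and the choice of "distinct cells visited" as the scatter measure are my assumptions, not necessarily what the project computes):

```python
# Summarize gaze cells from one target window: share on target, share
# drifted into the 8 neighboring cells, and a simple scatter measure.
from collections import Counter


def window_accuracy(cells, target):
    """cells: list of (row, col) gaze cells; target: expected (row, col)."""
    counts = Counter(cells)
    total = sum(counts.values())
    tr, tc = target
    neighbors = {(tr + dr, tc + dc)
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)}
    on_target = counts[target] / total if total else 0.0
    near = sum(counts[c] for c in neighbors) / total if total else 0.0
    return {
        "on_target": on_target,   # fraction landing in the expected cell
        "near_target": near,      # fraction that drifted to adjacent cells
        "n_cells": len(counts),   # scatter: distinct cells that appeared
    }
```

A high `near_target` share with a low `on_target` share suggests systematic drift (an offset calibration), while a large `n_cells` count suggests noise rather than offset; the two failure modes call for different fixes.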

I also added fixation-processing utilities, including I-VT- and I-DT-style methods (identification by velocity threshold and by dispersion threshold). This matters because raw gaze streams include fast transitions, noise, and unstable samples. Fixation filtering helps reduce the effect of saccade-like movement and makes the result closer to the stable visual attention we actually care about.
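To make the velocity-threshold idea concrete, here is a compact I-VT-style filter. It is an illustrative sketch, not the project's implementation, and it assumes uniformly sampled gaze points (a fixed `dt` between samples), which real WebGazer streams only approximate:

```python
# I-VT-style fixation detection: consecutive samples whose point-to-point
# velocity stays below a threshold are grouped into one fixation; faster,
# saccade-like transitions split the stream.
import math


def ivt_fixations(points, dt, velocity_threshold):
    """points: list of (x, y) in pixels; dt: seconds between samples.
    Returns fixations as (centroid_x, centroid_y, n_samples) tuples."""
    def flush(group, out):
        if len(group) > 1:  # a single sample is too unstable to keep
            cx = sum(p[0] for p in group) / len(group)
            cy = sum(p[1] for p in group) / len(group)
            out.append((cx, cy, len(group)))

    fixations, current = [], []
    for i, p in enumerate(points):
        if i == 0:
            current.append(p)
            continue
        px, py = points[i - 1]
        v = math.hypot(p[0] - px, p[1] - py) / dt  # pixels per second
        if v < velocity_threshold:
            current.append(p)
        else:
            flush(current, fixations)  # saccade: close the current fixation
            current = [p]
    flush(current, fixations)
    return fixations
```

Note the threshold here is in pixels per second; classic I-VT thresholds are quoted in degrees of visual angle per second, so a real deployment would need screen geometry and viewing distance to convert between the two.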

The output includes pie-chart diagnostics for target windows. These are not just decorative figures; they make the failure mode visible. If the expected cell should dominate but the distribution is scattered across many cells, that tells us the calibration or environment may not be reliable enough for downstream use.

This kind of validation is important for any web-based eye-tracking project, but it becomes even more important in sensitive applications. If gaze features will later be used in cognitive modeling, health-related screening support, or machine learning, the pipeline should not blindly trust the input signal. Data quality is part of the model.

For me, this project is a small but important engineering lesson: measurement comes before interpretation. Before asking what gaze data means, we should first ask whether the eye tracker is measuring well enough to support that question.