Autism Spectrum Disorder assessment is not a single-button problem. Traditional evaluation usually requires detailed interviews, behavioral observation, communication assessment, clinical experience, and time. That is exactly why engineering can be useful here, but only if it is humble about its role.

EASIEST was built around that boundary. The project does not try to replace clinicians. It asks whether a web-based system can collect quantitative gaze evidence and turn it into a supportive signal for ASD screening. In other words, the system is a clinical decision-support tool, not a standalone medical authority.

The motivation came from two practical limitations. First, traditional assessments can be time-consuming and resource-intensive. Second, eye-tracking has real value for studying attention and visual behavior, but commercial eye trackers can be expensive and hard to deploy broadly. A webcam-based approach changes the accessibility story.

The distinguishing idea of EASIEST is the browser-based gaze pipeline. There are many AI projects that attempt autism classification, but this project was framed as a first-of-its-kind web-based eye-tracking system for ASD screening support. A patient completes visual tasks in a browser, the camera-based tracker records gaze behavior, and the system converts that raw movement into features for machine learning.

The project connects several layers of computer engineering. On the frontend, users register, doctors manage patients, and patients complete test screens. On the tracking side, WebGazer-style webcam eye tracking captures gaze points during tasks. On the data side, the system stores patient and test information with PostgreSQL through a Flask/SQLAlchemy backend. On the ML side, gaze data is processed into features and passed into prediction logic.
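As a rough illustration of how the tracking and ML layers connect, here is a minimal sketch of turning detected fixations into a flat feature dictionary for a classifier. The function name, the fixation tuple format, and the specific features are assumptions for illustration; the report's actual feature set may differ.

```python
# Hypothetical feature-generation step. Fixations are assumed to arrive
# as (centroid_x, centroid_y, n_points) tuples from the fixation filter;
# the chosen features are illustrative, not EASIEST's actual ones.
from statistics import mean

def gaze_features(fixations, total_points):
    """Summarize fixation groups into a flat feature dict for an ML model."""
    sizes = [f[2] for f in fixations] or [0]
    fixated = sum(sizes)
    return {
        "fixation_count": len(fixations),
        "mean_fixation_size": mean(sizes),         # samples per fixation
        "fixation_ratio": fixated / total_points,  # fixation vs. saccade samples
        "mean_x": mean(f[0] for f in fixations) if fixations else 0.0,
        "mean_y": mean(f[1] for f in fixations) if fixations else 0.0,
    }
```

A vector like this is what the prediction logic would consume, independent of how the raw gaze points were captured in the browser.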

A key technical part is fixation processing. Raw gaze points are noisy: people blink, move, and produce saccades between fixations. The report compares fixation-detection strategies such as velocity-threshold (I-VT), hidden Markov model (I-HMM), dispersion-threshold (I-DT), minimum-spanning-tree (I-MST), and area-of-interest (I-AOI) approaches. EASIEST uses an I-DT-style fixation filter to identify stable fixation groups and remove saccade points before feature generation.
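A minimal I-DT-style sketch of that filtering step, assuming gaze samples arrive as (x, y) pixel pairs; the dispersion threshold, minimum window size, and output format here are illustrative, not the project's actual parameters:

```python
# Minimal dispersion-threshold (I-DT) fixation filter.
# Hypothetical sketch: thresholds and point format are assumptions.

def dispersion(points):
    """Sum of x-range and y-range of a window of (x, y) gaze points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(points, max_dispersion=50.0, min_points=4):
    """Group consecutive gaze points into fixations.

    Samples that never fit inside a low-dispersion window are treated
    as saccade points and dropped. Returns (centroid_x, centroid_y,
    n_points) per fixation.
    """
    fixations = []
    i, n = 0, len(points)
    while i < n:
        j = i + min_points
        if j > n or dispersion(points[i:j]) > max_dispersion:
            i += 1  # window too spread out: saccade sample, skip it
            continue
        # Grow the window while dispersion stays under the threshold.
        while j < n and dispersion(points[i:j + 1]) <= max_dispersion:
            j += 1
        window = points[i:j]
        cx = sum(p[0] for p in window) / len(window)
        cy = sum(p[1] for p in window) / len(window)
        fixations.append((cx, cy, len(window)))
        i = j
    return fixations
```

Run on a stream with two tight clusters separated by a jump, this keeps the two clusters as fixations and discards the in-between saccade samples.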

That preprocessing step matters because the model is only as useful as the signal it receives. In the report's comparison figures, I-DT filtering changes the distribution of detected cells by removing saccade-like points and producing cleaner gaze summaries. That is a very engineering-heavy lesson: AI performance is not only about the classifier, it is also about how the input signal is cleaned, structured, and represented.

The other important part is that EASIEST was a full software system, not just a notebook. The report includes use-case diagrams, class diagrams, sequence diagrams, activity diagrams, data-flow diagrams, database design, UI screens, implementation notes, and testing plans. That kind of documentation matters because health-related software has workflows, actors, failure modes, and privacy constraints.

Testing was also treated as part of the project. Selenium was used to simulate user interaction across the website, while Pytest was used for unit and integration tests. The fixation generation logic was tested with predetermined gaze pairs to check whether the algorithm correctly identifies fixation groups. This is the part I like most as an engineer: the ML idea is embedded in a system that can be checked.
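The "predetermined gaze pairs" idea can be sketched as pytest-style tests: feed the fixation logic inputs whose grouping is known in advance and assert the outcome. The helper and thresholds below are hypothetical stand-ins, not EASIEST's actual code.

```python
# Hypothetical pytest-style tests for fixation grouping.
# is_fixation is a simplified stand-in for the real fixation logic.

def dispersion(points):
    """Sum of x-range and y-range of a window of (x, y) gaze points."""
    xs, ys = zip(*points)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def is_fixation(points, max_dispersion=50.0):
    """A window of gaze points counts as one fixation if it stays
    within the dispersion threshold."""
    return dispersion(points) <= max_dispersion

def test_tight_cluster_is_fixation():
    # Predetermined points that should group into a single fixation.
    cluster = [(100, 100), (102, 101), (99, 103), (101, 100)]
    assert is_fixation(cluster)

def test_saccade_jump_is_not_fixation():
    # A large jump between samples should not be grouped as a fixation.
    jump = [(100, 100), (400, 300)]
    assert not is_fixation(jump)
```

Because the inputs are fixed, these tests pin down the algorithm's behavior deterministically, which is exactly what makes the ML-adjacent code checkable.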

The user testing section is especially important. Feedback from a healthcare professional was realistic: eye tracking alone is not enough for ASD diagnosis, and it should not replace traditional clinical evaluation. But it can provide useful quantitative evidence, especially as a screening or supplementary tool. That is the right framing for this kind of system.

So the value of EASIEST is not that it magically diagnoses autism from a webcam. The value is that it shows how a difficult health-tech idea can be engineered end to end: accessible data collection, gaze processing, feature generation, ML support, doctor/patient workflows, database design, privacy constraints, and testing.

I am especially grateful to Assist. Prof. Dr. Sukru Eraslan and Prof. Dr. Yeliz Yesilada for supervising the project, sharing datasets and domain knowledge, and supporting the research direction. Their guidance made it possible to turn the idea from a rough health-tech concept into a serious computer engineering project.