Developing adaptive eye-tracking tools for children with cerebral visual impairment has specific challenges. Jacob Farrow describes his progress so far.
Earlier this year, I presented a poster at ACCU 2025 [ACCU] titled ‘Tracking Success: Enhancing Visual Tracking Skills in Children with Cerebral Visual Impairment (CVI) through Interactive Digital Tools’. The project explores whether gaze-tracking technology can be meaningfully adapted for children with CVI – an often-overlooked neurological condition that affects the brain’s ability to process visual information.
I was thrilled to see people stop by and engage with the poster. Some had experience with assistive tech, others wanted to know how eye-tracking can be made more inclusive. We discussed head pose, side-eyeing, glare from glasses, and real-time feedback loops. It was an encouraging reminder that sometimes niche research can strike a chord with a wide audience.
Problem statement
CVI is now the leading cause of visual impairment in children in the UK. Unlike traditional eye problems, it affects how the brain interprets visual input – even if the eyes themselves are healthy. CVI manifests in many (often contradictory) ways. Children with CVI may use peripheral vision instead of central vision, avoid eye contact, or struggle to recognise moving or static objects. This makes it difficult for traditional educational tools – and standard eye-tracking systems – to interpret what these children are seeing or focusing on.
So, full of the hubris of an engineering student, I made this my final-year project and set out to change that. I wanted to build an eye-tracking system that could cope with diverse gaze behaviours and provide real-time feedback to help practitioners understand how children with CVI engage with visual stimuli.
The research had both academic and real-world legs. Academically, it formed the core of my Software Engineering degree project at The University of Bradford [Bradford]. Professionally, I developed it as Lead Software Engineer at SpaceKraft Ltd [SpaceKraft] – a company that researches and develops sensory solutions for children with disabilities. I saw a chance to make a practical tool that could be deployed in classrooms and therapy spaces, not just written about in reports.
What I built
The core of the system is an interactive game that asks users to follow moving objects across a screen. A camera tracks the user’s eye movements, estimating gaze position in real time. The game then adjusts the size and speed of the object based on the user’s accuracy, providing continuous performance feedback.
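To make that concrete, here is a minimal sketch of the kind of accuracy-driven adjustment involved. The names, thresholds, and scaling factors are illustrative assumptions rather than the project’s actual values.

```cpp
#include <algorithm>

// Sketch of accuracy-driven difficulty adjustment. The thresholds, limits,
// and scaling factors here are illustrative assumptions, not the values
// the project actually uses.
struct TargetParams {
    float radiusPx = 120.0f;       // on-screen size of the moving object
    float speedPxPerSec = 80.0f;   // how fast it travels
};

// 'accuracy' is the fraction of recent frames in which the estimated gaze
// point fell within the target's bounds (0.0 - 1.0).
TargetParams adapt(TargetParams p, float accuracy) {
    if (accuracy > 0.75f) {
        // Child is tracking well: shrink and speed up the target.
        p.radiusPx      = std::max(40.0f,  p.radiusPx * 0.9f);
        p.speedPxPerSec = std::min(300.0f, p.speedPxPerSec * 1.1f);
    } else if (accuracy < 0.35f) {
        // Child is struggling: make the target larger and slower.
        p.radiusPx      = std::min(200.0f, p.radiusPx * 1.1f);
        p.speedPxPerSec = std::max(40.0f,  p.speedPxPerSec * 0.9f);
    }
    return p;
}
```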
But building a working prototype meant wrestling with a long list of edge cases. Many standard eye-tracking libraries assume a clear, front-facing gaze. Children with CVI often present anything but. They may ‘side-eye’, tilt their heads, look ‘through’ objects, or glance briefly before disengaging.
Here’s where the system had to adapt:
- Face & Eye Detection: I used dlib [dlib] for facial landmark detection, reinforced by CLAHE (Contrast Limited Adaptive Histogram Equalisation) [Wikipedia] to enhance image clarity under varied lighting.
- Glare Reduction: Glasses introduced major glare issues, especially with sensory room lighting. I applied inpainting and thresholding to mask bright regions, along with techniques inspired by polarisation filtering.
- Calibration & Gaze Mapping: I stored pupil and eye corner data, along with head pose matrices, during calibration. This was mapped to screen coordinates using a combination of linear regression and data-driven mapping.
- Feedback & Logging: Engagement data (accuracy and session metrics) was logged securely for practitioner review – while respecting strict privacy standards.
The whole system runs on a streamlined Linux build on a 32″ touchscreen with a USB camera, booting directly into the app for plug-and-play simplicity. It’s developed in C++ with OpenCV, OpenGL, and Dear ImGui, and built with CMake and Ninja.
Figure 1: The main menu UI rendered using Dear ImGui.
What I learned
Accuracy isn’t everything. Most eye-tracking systems measure fixations, saccades, and dwell time to infer engagement. But children with CVI don’t necessarily exhibit those behaviours in expected ways. Instead of relying purely on those metrics, my system prioritises responsiveness. If the child interacts – however briefly or obliquely – that counts as meaningful engagement.
Practitioner input is vital. This wasn’t a solo coding exercise. I collaborated closely with educators and specialists, who gave continual feedback during development. They helped me understand not just how the system works, but how it might actually be used in a therapeutic setting.
Adaptive design beats one-size-fits-all. Customisation was key. Children needed different contrast levels, movement speeds, and calibration sensitivities. This led to a settings system that could be tuned per user (sketched below) – an area I’d like to expand further.
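As a rough illustration of what ‘tuned per user’ means in practice, a per-user record might look something like this; the field names and defaults are hypothetical, not the project’s actual schema.

```cpp
// Hypothetical shape of a per-user settings record (illustrative only).
struct UserSettings {
    float contrastBoost        = 1.0f;   // extra contrast applied to stimuli
    float targetSpeedScale     = 1.0f;   // multiplier on object movement speed
    float calibrationTolerance = 1.0f;   // how forgiving gaze matching is
    bool  highContrastTheme    = false;  // simplified, high-contrast UI
};
```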
Figure 2: Gaze point being rendered to screen while tracking the rocket [Art].
Feedback from ACCU
People at ACCU had great questions – some of which caught me off guard in the best way. One attendee asked whether the system could learn from individual users over time. Another wondered about integrating it into VR environments. A few developers had worked on gaze estimation themselves and were curious about how I approached noisy data, partial occlusion, and hardware constraints.
It was validating to hear how many people saw the potential of this kind of tech beyond high-end labs or gaming setups. One even said, “I’ve never seen eye-tracking used for kids before – especially not like this.”
Next steps
The current prototype has laid the groundwork, but there’s a long way to go. Planned improvements include:
- Dynamic calibration that adjusts on-the-fly during gameplay, reducing setup time and improving accuracy without user effort.
- Multiple game modes, including shifting gaze tasks and noisy backgrounds to test visual attention more thoroughly.
- Gaze heatmap visualisation, offering real-time and session-based insight for practitioners to understand focus zones and avoidances (a sketch of one possible approach follows this list).
- Deeper analytics, including attention duration, latency, and object tracking success over time.
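As a rough idea of how the heatmap visualisation might work, the sketch below accumulates gaze samples into an intensity map, blurs it, and applies a colour map with OpenCV. Since this feature is still planned, everything here is an assumption about one possible implementation rather than the tool’s actual code.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Illustrative session-heatmap sketch: count gaze samples per pixel, spread
// them with a Gaussian blur, then map intensity to colour for display.
cv::Mat buildHeatmap(const std::vector<cv::Point2f>& gazePoints,
                     cv::Size screen) {
    cv::Mat acc = cv::Mat::zeros(screen, CV_32F);
    for (const auto& p : gazePoints) {
        if (p.x >= 0 && p.y >= 0 && p.x < screen.width && p.y < screen.height)
            acc.at<float>(static_cast<int>(p.y), static_cast<int>(p.x)) += 1.0f;
    }
    cv::GaussianBlur(acc, acc, cv::Size(0, 0), 25.0);  // spread samples out

    // Normalise to 0-255 and map to a colour scale for display.
    cv::Mat norm, colour;
    cv::normalize(acc, norm, 0, 255, cv::NORM_MINMAX);
    norm.convertTo(norm, CV_8U);
    cv::applyColorMap(norm, colour, cv::COLORMAP_JET);
    return colour;  // overlay onto a screenshot of the session for review
}
```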
Figure 3: Session-based heatmaps could offer practitioners valuable insights.
The project will be entering a new phase of weekly testing with a cohort of children with CVI at a partner school. The feedback will guide further iteration and help define the long-term viability of the tool in classroom environments.
Final thoughts
Software isn’t just about solving problems – it’s about solving the right problems. This project gave me the opportunity to design something that may help children who are often underserved by mainstream tech. It challenged me technically, but also reminded me why I got into this field in the first place.
The real test will be whether children engage with it, learn from it, and enjoy using it. If they do – even just one of them – then this project has already been a success.
Acknowledgements
I wish to express my profound gratitude to John Kopelciw and Chris Morton of SpaceKraft for their invaluable guidance, encouragement, and professional insights throughout the course of this project. Their dedication to creating innovative and impactful solutions has been both inspiring and pivotal to the development of this work.
I am equally indebted to Dr. Rachel Pilling of the University of Bradford, whose expertise and thoughtful advice have been instrumental in ensuring the relevance and effectiveness of this project in addressing the needs of children with Cerebral Visual Impairment (CVI). Her support has been critical in shaping the academic and practical contributions of this research.
Finally, I extend my heartfelt thanks to Dr. Ci Lei of the University of Bradford, who always encouraged me to look at things from a different angle. His perspective has profoundly influenced the innovative aspects of this project and has inspired me to think more critically and creatively.
The completion of this project would not have been possible without their collective expertise and generosity in sharing their time and knowledge, for which I am deeply thankful.
Glossary
| Term | Definition |
|------|------------|
| CLAHE | Contrast Limited Adaptive Histogram Equalisation – used to enhance image contrast in low-light or uneven lighting conditions. |
| CVI | Cerebral Visual Impairment – a condition where the brain struggles to process visual information. |
| Dear ImGui | An immediate-mode GUI library used for rendering fast, dynamic user interfaces in graphical applications. |
| dlib | An open-source machine learning and computer vision library used for facial landmark detection. |
| Fixation | When the eyes are stationary and focused on a single visual point. |
| Gaze heatmap | A visual representation of where the user looked most frequently or for the longest duration. |
| Inpainting | An image-processing method that fills in missing or obscured parts of an image. |
| OpenCV | Open Source Computer Vision Library – a widely used library for real-time computer vision. |
| OpenGL | Open Graphics Library – a graphics API used to render interactive elements on the screen in real time. |
| Saccades | Rapid, ballistic eye movements between fixation points. |
References
[ACCU] ACCU 2025 Conference: https://accuconference.org/.
[Art] Artist of the Portal Illustrations: kasej.portalillustrations@gmail.com.
[Bradford] The University of Bradford: https://www.bradford.ac.uk/external/.
[dlib] dlib C++ Library: https://dlib.net/.
[SpaceKraft] SpaceKraft Ltd: http://www.spacekraft.co.uk.
[Wikipedia] CLAHE: https://en.wikipedia.org/wiki/Adaptive_histogram_equalization.
Jacob Farrow is the Lead Software Engineer at SpaceKraft Ltd and a final-year Software Engineering student at the University of Bradford. He leads the development of sensory solutions used in special education around the globe, specialising in computer vision and real-time interaction. An Engineering Leaders Scholar with the Royal Academy of Engineering (RAEng), he contributes to inclusive design frameworks and mentors young engineers.