Physical documents (such as books or magazines) and digital information (such as contextualised references, descriptions and multimedia content) are usually read or viewed separately. This project aims to link physical and digital objects so that both sources of information can be accessed jointly. We propose fusing data from a low-cost eye-tracker and the Google Glass camera so that the Google Glass display can present information specific to the part of the physical document being observed.
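One plausible component of such a fusion, sketched below under our own assumptions (the project page does not specify the method), is mapping the estimated gaze point from camera coordinates into document coordinates. Assuming the four page corners have been detected in the Glass camera image, a planar homography can carry the gaze point onto the page; the function names and the corner/gaze values are hypothetical.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (four or more
    point pairs) with the standard DLT formulation: each correspondence
    contributes two linear constraints on the 9 entries of H, and the
    solution is the right singular vector of the smallest singular value."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the scale ambiguity

def map_gaze_to_document(H, gaze_xy):
    """Project a gaze point (camera pixels) into document coordinates
    (e.g. millimetres on the page) and dehomogenise."""
    p = H @ np.array([gaze_xy[0], gaze_xy[1], 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical example: page corners seen by the camera vs. an A4 sheet (mm).
camera_corners = [(0, 0), (640, 0), (640, 480), (0, 480)]
page_corners = [(0, 0), (210, 0), (210, 297), (0, 297)]
H = homography_from_points(camera_corners, page_corners)
u, v = map_gaze_to_document(H, (320, 240))  # gaze at the image centre
```

With the document location of the gaze fixation in hand, a word-level lookup (e.g. against an OCR layout of the page) could then select which translation to show on the display.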
We propose to implement this approach as an augmented-reading experience: the information shown on the display changes on the fly depending on the word being read. In particular, we aim at a qualitative and quantitative validation of the research outcome for the case of efficient word translation in an English language-learning setting. Basic competence in English is one of the main requirements in a competitive job market, and people of all ages demand efficient methods for acquiring the skills that could help them find a job in a context of high unemployment, especially in countries such as Spain. The validation of this research will be performed in a Living Lab with actual English learners, within an ongoing funded project for a Smart Library in Barcelona, Spain.
This project was awarded a 2014 Google Research Award.
- “A Feature-Based Gaze Estimation Algorithm for Natural Light Scenarios”. In Proc. of IbPRIA 2015. Santiago de Compostela, Spain.
- Arcadia Llanza. “Eye tracking configuration based on a webcam: Implementation of a novel algorithm for gaze estimation”. Final degree project. February 2015.
- . At the IbPRIA 2015 demo contest. Santiago de Compostela, Spain.