Valid analysis of human visual attention often requires moving from computer-based desktop set-ups to more natural real-world settings. However, the resulting loss of experimental control has to be counterbalanced by increasing the number of participants and/or items. Combined with the effort required to manually annotate the gaze-cursor videos recorded with mobile eye trackers, this renders many studies infeasible.
We tackle this issue by minimizing the need for manual annotation of mobile gaze data. Our approach combines geometric modelling with inexpensive 3D marker tracking to align virtual proxies with real-world objects. This allows us to classify fixations on objects of interest automatically while supporting a freely moving participant.
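The core classification step can be pictured as a ray test: given the scene camera's pose (recovered from the fiducial markers) and the current gaze direction, intersect the gaze ray with the virtual proxies and report the nearest hit. The Python sketch below illustrates this idea using sphere proxies; the function names, the sphere representation, and the pose format are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def classify_fixation(gaze_dir_cam, cam_pose, proxies):
    """Return the label of the nearest proxy sphere hit by the gaze ray, or None.

    gaze_dir_cam : 3-vector, gaze direction in scene-camera coordinates
                   (e.g. back-projected from the 2D gaze point).
    cam_pose     : 4x4 camera-to-world matrix, estimated from fiducial
                   markers visible in the scene-camera image (assumed given).
    proxies      : list of (label, center_world, radius) sphere proxies
                   standing in for the real objects of interest.
    """
    # Transform the gaze ray from camera coordinates into world coordinates.
    origin = cam_pose[:3, 3]
    direction = cam_pose[:3, :3] @ gaze_dir_cam
    direction = direction / np.linalg.norm(direction)

    best_label, best_t = None, np.inf
    for label, center, radius in proxies:
        # Standard ray-sphere intersection:
        # solve |origin + t*direction - center|^2 = radius^2 for t.
        oc = origin - center
        b = np.dot(oc, direction)
        c = np.dot(oc, oc) - radius ** 2
        disc = b * b - c
        if disc < 0:
            continue  # the ray misses this sphere
        t = -b - np.sqrt(disc)
        if 0 < t < best_t:
            best_label, best_t = label, t  # keep the nearest hit
    return best_label

# Hypothetical usage: a proxy sphere per object, identity camera pose.
proxies = [("mug", np.array([0.0, 0.0, 1.0]), 0.1)]
print(classify_fixation(np.array([0.0, 0.0, 1.0]), np.eye(4), proxies))
```

Real object geometry need not be spherical; the same nearest-hit logic applies to any proxy shape for which a ray intersection test is available.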
This video is supplementary to our ETRA 2014 paper and presents the EyeSee3D method. Please see the original paper at the ACM Digital Library: [ Link ]