Commit

Update index.html
ehhall authored Dec 23, 2024
1 parent 4128276 commit 3c0ddc9
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion index.html
@@ -104,7 +104,7 @@ <h2> preprints </h2>
PsyArXiv, 2024. <a style="text-decoration: none;" href="https://github.com/ehhall/objects-in-focus"> &nbsp; Github </a> <br>
A paper documenting our process to segment 2.8k objects across 100 real-world scenes! We share our thoughts on the "best way" to segment objects, along with analyses showing that image size and perspective have a big impact on the distribution of fixations. Full tutorial coming soon! </P>

-<P class="blocktext2"> <a href="https://osf.io/preprints/psyarxiv/72np4"> <img src="images/classifyimage.jpg" alt="" class="left"> </a><B> <a style="text-decoration: none;" href="https://osf.io/preprints/psyarxiv/72np4">Eye gaze during route learning in a virtual task</a></B> <br>
+<P class="blocktext2"> <a href="https://osf.io/preprints/psyarxiv/72np4"> <img src="images/grandTheft.jpg" alt="" class="left"> </a><B> <a style="text-decoration: none;" href="https://osf.io/preprints/psyarxiv/72np4">Eye gaze during route learning in a virtual task</a></B> <br>
<I> Martha Forloines*, Elizabeth H. Hall*, John M. Henderson, Joy J. Geng</I> *co-first author<br>
PsyArXiv, 2024. <br>
Participants studied an avatar navigating a route in Grand Theft Auto V while we tracked their eye movements. Using these, we trained a classifier to identify whether they learned the route in natural or scrambled temporal order. Those under natural conditions fixated more on the avatar and path ahead, while scrambled viewers focused more on scene landmarks like buildings and signs. </P>
