Update index.html
ehhall authored Dec 23, 2024
1 parent e373992 commit 19333e5
Showing 1 changed file with 7 additions and 4 deletions.
11 changes: 7 additions & 4 deletions index.html
@@ -105,7 +105,8 @@ <h2> preprints </h2>
A paper documenting our process to segment 2.8k objects across 100 real-world scenes! We share our thoughts on the "best way" to segment objects, and analyses showing that image size and perspective have a big impact on the distribution of fixations. Full tutorial coming soon! </P>

<P class="blocktext2"> <a href="https://osf.io/preprints/psyarxiv/72np4"> <img src="images/grandTheft.png" alt="" class="left"> </a><B> <a style="text-decoration: none;" href="https://osf.io/preprints/psyarxiv/72np4">Eye gaze during route learning in a virtual task</B> </a> <br>
<I> Martha Forloines*, Elizabeth H. Hall*, John M. Henderson, Joy J. Geng</I> &nbsp; *co-first<br>
<I> Martha Forloines*, Elizabeth H. Hall*, John M. Henderson, Joy J. Geng</I> <br>
*co-first authors <br>
PsyArXiv, 2024. <br>
Participants studied an avatar navigating a route in Grand Theft Auto V while we tracked their eye movements. Using these, we trained a classifier to identify whether they learned the route in natural or scrambled temporal order. Those under natural conditions fixated more on the avatar and path ahead, while scrambled viewers focused more on scene landmarks like buildings and signs. </P>

@@ -118,7 +119,8 @@ <h2> publications </h2>
We found that attending to small objects in scenes led to significantly more boundary contraction in memory, even when other image properties were kept constant. This supports the idea that the extension/contraction in memory may reflect a bias towards an optimal viewing distance!</P>

<P class="blocktext2"> <a href="https://link.springer.com/article/10.3758/s13423-023-02286-2"> <img src="images/candace.jpg" alt="candace" class="left"> </a><B> <a style="text-decoration: none;" href="https://link.springer.com/article/10.3758/s13423-023-02286-2">Objects are selected for attention based upon meaning during passive scene viewing</B> </a> <br>
<I> Candace Peacock*, Elizabeth H. Hall*, John M. Henderson</I> &nbsp; *co-first <br>
<I> Candace Peacock*, Elizabeth H. Hall*, John M. Henderson</I> <br>
*co-first authors <br>
Psychonomic Bulletin & Review, 2023. <a style="text-decoration: none;" href="https://osf.io/preprints/psyarxiv/fqtvx"> &nbsp; Preprint </a> <a style="text-decoration: none;" href="https://osf.io/egry6/"> &nbsp; Stimuli </a> <br>
We looked at whether fixations were more likely to land on high-meaning objects in scenes. We found that fixations are more likely to be directed to high meaning objects than low meaning objects regardless of object salience.</P>

@@ -129,9 +131,10 @@ <h2> publications </h2>

<P class="blocktext2"> <a href="https://link.springer.com/article/10.1007/s00426-022-01694-8"> <img src="images/zoe.JPG" alt="zoe" class="left"> </a><B> <a style="text-decoration: none;" href="https://link.springer.com/article/10.1007/s00426-022-01694-8">Working memory control predicts fixation duration in scene-viewing
</B> </a> <br>
<I> Zoe Loh*, Elizabeth H. Hall, Deborah A. Cronin, John M. Henderson</I> &nbsp; *undergrad supervised by me <br>
<I> Zoe Loh*, Elizabeth H. Hall, Deborah A. Cronin, John M. Henderson</I> <br>
*undergrad supervised by me <br>
Psychological Research, 2022. <br>
We fit scene-viewing fixation data to an Ex-Gaussian distribution to look at individual differences in distribution means, deviation, and skew. We found that the worse a participant's working memory control was, the more likely they were to have some very long fixations when encoding scene detail into memory.</P>
We fit scene-viewing fixation data to an Ex-Gaussian distribution to look at individual differences in memory. We found that the worse a participant's memory control was, the more likely they were to have some very long fixations when encoding scene detail into memory.</P>

<P class="blocktext2"> <a href="https://www.tandfonline.com/doi/full/10.1080/09658211.2021.2010761#.YcDAavk9X3E.twitter"> <img src="images/multicat.jpg" alt="multicat" class="left"> </a><B> <a style="text-decoration: none;" href="https://www.tandfonline.com/doi/full/10.1080/09658211.2021.2010761#.YcDAavk9X3E.twitter">Highly similar and competing visual scenes lead to diminished object but not spatial detail in memory drawings </B> </a> <br>
<I> Elizabeth H. Hall, Wilma A. Bainbridge, Chris I. Baker </I> <br>
