Zoë Ashwood

PhD student in Computer Science at Princeton University

Email: zashwood at cs dot princeton dot edu

I am a PhD student in the Computer Science Department at Princeton University, advised by Jonathan Pillow. During my time at Princeton, my projects have focused on using machine learning to characterize the behavior of black-box systems, such as animals making decisions. I am interested in exploring the connection between animal and machine learning, and I am excited about curriculum learning and automatic curriculum generation.

Keywords: Machine Learning, Reinforcement Learning, Computational Neuroscience and Cognitive Science

Experience

I completed my undergraduate degree (MPhys) in Mathematics and Theoretical Physics at the University of St Andrews in Scotland. Prior to coming to Princeton, I studied as a Robert T. Jones Scholar at Emory University and then worked for two years as a Research Fellow for Professor Daniel Ho at Stanford University. At Stanford, we assessed policy efficacy by carefully designing randomized controlled trials and applying appropriate statistical methods to measure each policy's effect.

Selected Projects

Inferring learning rules from animal decision-making

In this project, I worked with Nick Roy, Ji Hyun Bak, the International Brain Laboratory and Jonathan Pillow to develop a model that characterizes the behavior of mice and rats learning to perform perceptual decision-making tasks. Our model tracked trial-to-trial changes in the animals’ choice policies and separated those changes into components explainable by a reinforcement learning rule and components that remained unexplained. While the standard REINFORCE learning rule explained only 30% of animals’ trial-to-trial policy updates, REINFORCE with baseline explained 92% of the updates used by mice learning the “IBL” task. Understanding the rules that underpin animal learning not only gives neuroscientists insight into their animals, but also provides the machine learning community with concrete examples of biological learning algorithms. The paper associated with this project was accepted to NeurIPS 2020.
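
For readers curious what a learning rule of this kind looks like in code, here is a minimal sketch: a Bernoulli-logistic choice policy whose weights are updated trial-by-trial with REINFORCE with baseline. The function names, parameter choices (alpha, baseline_decay, rewards_fn), and the running-average baseline are illustrative assumptions, not the model or code from the paper.

```python
import numpy as np

# Minimal sketch (not the paper's model): a logistic choice policy whose
# weights are updated trial-by-trial with REINFORCE plus a reward baseline.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def simulate_learner(stimuli, rewards_fn, alpha=0.1, baseline_decay=0.95, seed=0):
    """Simulate a learner updating its policy with REINFORCE-with-baseline.

    stimuli    : (T, d) array of per-trial regressors (e.g. contrast, bias term)
    rewards_fn : maps (trial index, choice) -> reward in {0, 1}; hypothetical
    """
    rng = np.random.default_rng(seed)
    T, d = stimuli.shape
    w = np.zeros(d)            # policy weights
    baseline = 0.0             # running-average reward baseline
    weights_over_time = np.zeros((T, d))

    for t in range(T):
        x = stimuli[t]
        p_right = sigmoid(x @ w)            # P(choose "right") under the policy
        choice = rng.random() < p_right     # sample a choice from the policy
        r = rewards_fn(t, choice)           # observe reward for that choice

        # Gradient of log pi(choice | x, w) for a Bernoulli-logistic policy
        grad_logp = (float(choice) - p_right) * x

        # REINFORCE with baseline: scale the gradient by (reward - baseline)
        w = w + alpha * (r - baseline) * grad_logp
        baseline = baseline_decay * baseline + (1 - baseline_decay) * r
        weights_over_time[t] = w

    return weights_over_time
```

Subtracting a baseline leaves the expected gradient unchanged but reduces its variance, which is the intuition for why a baseline-augmented rule can track an animal's policy updates more closely than plain REINFORCE.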


Mice alternate between discrete strategies during perceptual decision-making

For this project, I worked with Nick Roy, Iris Stone, the International Brain Laboratory, Anne Churchland, Alex Pouget and Jonathan Pillow to implement an Input-Output HMM (IO-HMM) and apply it to mouse decision-making data. We found that discrete latent states underpin mouse choice behavior: with the IO-HMM, we achieved a 5% increase in predictive accuracy of animals’ choices (baseline accuracy was already ∼70%). We presented this work at Cosyne 2020.
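
As a rough illustration of the class of model involved, the sketch below evaluates a choice sequence under an input-output HMM in which each discrete state has its own logistic choice policy, using a scaled forward recursion. The parameter shapes and names are illustrative assumptions, not the project's code.

```python
import numpy as np

# Minimal sketch of an input-output HMM (GLM-HMM) likelihood computation,
# assuming K discrete states, each with its own logistic choice policy.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def iohmm_forward(choices, inputs, trans_mat, state_weights, init_probs):
    """Forward algorithm for an IO-HMM with Bernoulli-logistic emissions.

    choices       : (T,) array of 0/1 choices
    inputs        : (T, d) array of per-trial regressors
    trans_mat     : (K, K) state transition matrix
    state_weights : (K, d) per-state logistic regression weights
    init_probs    : (K,) initial state distribution

    Returns the total log-likelihood and the filtered posterior over states.
    """
    T, K = len(choices), len(init_probs)

    # Per-trial, per-state choice likelihoods P(choice_t | state k, input_t)
    p_right = sigmoid(inputs @ state_weights.T)                     # (T, K)
    lik = np.where(choices[:, None] == 1, p_right, 1.0 - p_right)   # (T, K)

    alpha = np.zeros((T, K))
    log_lik = 0.0
    for t in range(T):
        if t == 0:
            a = init_probs * lik[0]
        else:
            a = (alpha[t - 1] @ trans_mat) * lik[t]
        norm = a.sum()
        alpha[t] = a / norm          # filtered posterior over states at trial t
        log_lik += np.log(norm)      # accumulate the per-trial normalizer
    return log_lik, alpha
```

In a model of this form, the per-state weights capture distinct strategies (e.g. stimulus-driven versus biased or disengaged), and the posterior over states indicates which strategy the animal is most likely using on each trial.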

Teaching Experience

Leadership, Service and Outreach

Awards