NSF Research Traineeship (NRT)

Past practicum projects:

2019-20 Traineeship

Students participated in a one-semester course consisting of three one-month-long modules, entitled "Methods In Data-Enabled Research Into Human Behavior And Its Cognitive And Neural Mechanisms." This year consisted of the following three course modules:

  1. Computational Neuroscience: The Neural Basis of Probabilistic Computations, taught by Professor Ralf Haefner. This module focused on the question of how inference might be implemented in the brain. By simulating a simple biophysical model of an individual neuron, students showed how neurons can support the linear-nonlinear operations that are at the core of deep neural networks in machine learning (programming assignment 1; a rough illustrative sketch of this kind of model appears after this list). Next, students learned how early visual processing in the brain can be understood as performing inference in a probabilistic model that has learned the statistics of the visual inputs. Performing both learning and inference in such a model with an MCMC-sampling-based algorithm gives rise to a neural network (in the biological sense) with neural response properties similar to those found neurophysiologically (implemented in programming assignment 2).
  2. Robot Intelligence, taught by Professor Thomas Howard. The objective of this module was for students to learn and apply algorithms and models for robot perception, planning, mapping, localization, and human-robot interaction that form the basis of robot intelligence, with a focus on data-driven models.
  3. Language Processing, taught by Professor Lenhart Schubert. This module was an introduction to natural language processing, with emphasis on computational techniques for deriving the structure and meaning of natural language. Topics included English phrase structure, dependency structure, parsing with traditional algorithms and with neural nets, representing meaning and knowledge, and a brief introduction to inference and dialogue.
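
A minimal sketch of the kind of linear-nonlinear neuron model explored in the first module (an illustrative toy, not the actual course assignment; the filter shape and all parameters are assumptions):

```python
# Toy linear-nonlinear-Poisson (LNP) neuron, loosely in the spirit of the
# module's first programming assignment. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Stimulus: 1,000 time bins of white noise over 20 input "pixels".
stimulus = rng.normal(size=(1000, 20))

# Linear stage: a fixed Gaussian-shaped receptive field (the neuron's filter).
receptive_field = np.exp(-0.5 * ((np.arange(20) - 10) / 3.0) ** 2)

drive = stimulus @ receptive_field      # linear filtering of the stimulus
rate = np.log1p(np.exp(drive - 2.0))    # static nonlinearity (softplus), spikes per bin
spikes = rng.poisson(rate)              # Poisson spike generation

print("mean spike count per bin:", spikes.mean())
```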

The following semester, the students participated in "Practicum In Data-Enabled Research into Human Behavior and Its Cognitive & Neural Mechanisms". In this interdisciplinary project course, trainees worked in mixed teams of computer science, data science, and brain and cognitive science students to develop an artifact that addresses a research question and/or infrastructure need. The teams also learned principles of design by participating in the stages of brainstorming, specification, initial design, prototyping, refinement, and evaluation. The artifacts created in this course could include online showcases, demonstrations, tutorials, blogs, scientific papers, and software components to support further research. This year, the course was led by Zhen Bai, a faculty member of the Department of Computer Science.

  • EMG-controlled computer interface for stroke rehabilitation (Project advisor: Dr. Ania Busza, Medical Center)
    • The aim of this study is to improve Runner’s potential for rehabilitating stroke patients and to implement performance metrics that capture motor control behavior and online learning in stroke survivors.
  • Semi-Neg-Raising Inferences (Project advisor: Dr. Aaron White, Linguistics)
    • The goal of this project is to test (a) whether veridicality and neg-raising together can better explain the distribution of verbs in MegaAcceptability, and (b) whether semi-neg-raising properties have any special influence on this.
  • Exploring simulation-based inference methods for model fitting (Project advisor: Dr. Ralf Haefner, BCS)
    • This project aims to explore cutting-edge simulation-based inference methods for inferring posteriors over model parameters given data (a toy sketch of the general idea appears after this list).
  • Neural Networks vs Neural Sampling: Posterior Analysis (Project advisor: Dr. Ralf Haefner, BCS)
    • The aim of the study is to compare the quality of posterior estimation by neural networks with that of traditional statistical posterior estimation techniques.
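
As a toy illustration of the idea behind simulation-based inference (not the methods actually used in these projects, which were more sophisticated), a rejection-ABC sketch with an assumed Gaussian simulator:

```python
# Toy rejection-ABC sketch of simulation-based inference (illustrative only;
# the simulator, prior, and tolerance here are all assumptions).
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, n=50):
    """Simulate data from a Gaussian with unknown mean `theta`."""
    return rng.normal(loc=theta, scale=1.0, size=n)

observed = simulator(theta=0.7)  # pretend this is the real data

# Draw parameters from the prior, simulate, and keep those whose summary
# statistic (the mean) lands close to the observed summary.
prior_draws = rng.uniform(-3.0, 3.0, size=20000)
accepted = [t for t in prior_draws
            if abs(simulator(t).mean() - observed.mean()) < 0.05]

print(f"approximate posterior mean: {np.mean(accepted):.2f} "
      f"({len(accepted)} accepted samples)")
```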

Students in this cohort attended conferences, such as Neural Information Processing Systems (NeurIPS), to enhance their learning. Travel to conferences was supported by the NRT grant.

Members of this cohort were also required to take a series of five workshops focused on scientific communication, led by Dr. Whitney Gegg-Harrison from the Writing, Speaking, and Argument Program at the University of Rochester. The emphasis of these workshops was on both written and spoken communication skills. Workshop topics included:

  • Spoken presentation skills: Students critiqued example slides and practiced their speaking skills, including exercises that involved modifying their speech rate while describing their work at varying levels of detail to another student.
  • Rhetorical genre analysis as a tool for understanding the structure of scientific articles: Students analyzed and critiqued examples of published research articles from computer science, neuroscience, and psycholinguistics, including articles that the students were currently reading as part of their own research, and focused on identifying the rhetorical “moves” being made in each section of the paper, and how those moves were realized differently in different types of papers.
  • Incorporating storytelling strategies into scientific communication: Students were asked to describe their own research using traditional storytelling frameworks, and then discuss how those frameworks would translate into grant proposals and research articles for both broad and narrow audiences. There will be two more workshops in this series.
  • Producing writing that “flows”: Students are shown how they can use insights from research in psycholinguistics to create better writing for their readers.
  • “Translating” research into everyday language for non-expert audiences

Bios of the 2019-2020 Class Members

Himanshu Ahuja

Himanshu is currently a first-year PhD candidate in the Brain and Cognitive Sciences Department, advised by Prof. Ralf Haefner and Prof. Manuel Gomez-Ramirez. His research interests are in haptic perception and computational neuroscience.

Project contributions:  In addition to representing Linux systems in the data exploration and analysis, I raised a number of methodological issues with the experimental design and statistical analysis of fMRI.

What did I learn from the course? Not only did this course act as a bridge between topics in computer science and neuroscience, but it also supplemented that with seminars on academic writing and scientific communication.

How might this course affect my career?  The course as a whole was a great interdisciplinary experience that introduced me to methods and tools that I can employ in my current research in neuroscience to increase its value in practical applications.

What was surprising or the greatest challenge?  The greatest challenge was to work through the extremely rich and rare interdisciplinary curriculum provided by this course, which helped me grow a lot as an independent learner. There was always ample support from the instructors which made the whole process smooth and enjoyable.

Mia Anthony

Mia is currently a first-year PhD candidate in the Brain and Cognitive Sciences Department, advised by Duje Tadin and Vankee Lin. Her research interests include using fMRI and machine learning approaches to predict cognitive trajectories based on functional connectivity brain profiles of cognitive aging and Alzheimer’s disease.

Current Research Project: Applying machine learning approaches to functional brain connectivity to develop a quantitative proxy for cognitive reserve, a cognitive aging theory.

What did I learn from the course? How different fields conceptualize and model similar concepts and challenges. The course provided insight into the ways in which computer, data, and cognitive sciences approach brain-machine relationships and how to communicate with others beyond area-specific terminology.

How might this course affect my career?  I think the exposure to computer/data science will make future collaborations easier and more successful.

William Gantt

Will is a first-year PhD student in the Computer Science Department, advised by Dan Gildea. His research interests are in natural language processing, and more specifically in machine translation and natural language understanding. Before Rochester, Will earned his B.A. in computer science from Bowdoin College in Maine, after which he worked for two years as a software engineer at Okta in San Francisco.

What did I learn from the course? The course provided me with a general overview of major topics in robotics and also taught me about Bayesian reasoning and computational neuron models.

How might this course affect my career?  This early into my grad school career, it’s hard to know how the course may impact my work after Rochester, but I think it’s valuable to have basic knowledge of current research in fields adjacent to my own.

What was surprising or the greatest challenge?  I was surprised to learn about the diversity of disciplines robotics seems to draw on apart from computer science—like physics, optics, neuroscience, and linguistics.

Kurtis Haut

Kurtis is a first-year PhD candidate in the Department of Computer Science advised by Dr. Ehsan Hoque. His research interests include brain-computer/brain-machine interfaces, HCI in the affective computing space, computational neuroscience, and AI (machine learning and machine vision). Kurtis received his BA in computer science as well as his BA in business from the University of Rochester in May 2018.

What did I learn from the course? The course, consisting of three modules, taught me the fundamentals of natural language processing, robot intelligence, and computational neuroscience.

How might this course affect my career?  Keeping in mind that it is practically impossible to predict exactly how this course will affect my future career, broadly speaking, I believe that the unique experiences resulting from taking this course will be helpful when having conversations with other scientists throughout my career. For example, say that in the future I am at a conference and find myself conversing with a scientist whose life's work is modeling the human visual system at a neuronal level. I could say something like, "I am certainly no expert on the subject; however, I did get the chance to learn some of the basics from this computational neuroscience module. Really interesting stuff!"

What was surprising or the greatest challenge? Because the course consisted of three modules, the greatest challenge was picking up a topic you had never seen before; by the time you were truly understanding the material, the module was over and it was time to start the next module and face the same challenge!

Ruoyang Hu

Ruoyang is a first-year PhD candidate studying experimental psychology, advised by Professor Robert Jacobs. She is interested in machine learning, visual working memory, imagistic reasoning, and computational modeling. She graduated with a BS in computer science and a BA in cognitive science from Rutgers, the State University of New Jersey.

Project Contributions: Currently studying clustering effect with semantic coherence in visual working memory.

What did I learn from the course? Different perspectives on brain and cognitive science, including natural language understanding, computational modeling, and robotics.

How might this course affect my career?  I was previously interested only in vision; after learning more about other areas through the NRT courses and talking with other students in the program, I am more open to other fields such as language and robotics.

What was surprising or the greatest challenge? The NRT course rotated through three different topics, so we had less time than in a regular semester to absorb each one. The fast pace of learning opened my mind, but it also required a great amount of work and active learning from, and sharing with, other students in the program.

Ben Kane

Ben is a first-year computer science PhD student advised by Professor Lenhart Schubert. He is interested in developing conversational agents which are capable of holding realistic and meaningful dialogues with people, as well as collaborative planning, with an example application being conversational practice systems. He received his bachelor’s degrees in computer science and economics from the University of Rochester in 2019.

What did I learn from the course? In the first module, I learned statistical and non-statistical methods for phrase-structure parsing and dependency parsing of language, and converting between the two types of parses. In the second module, I learned various data-driven models essential to robotics, such as perception, motion planning, localization, and mapping. In the third module, I learned about Bayesian inference and how this can be used to model neurons in the primary visual cortex.

How might this course affect my career?  Learning about modern research in robotics helped make me more aware of some of the parallel goals with projects that I have been working on and got me thinking about how to design dialogue systems and commonsense reasoners with potential robotic applications in mind.

What was surprising or the greatest challenge? There was a lot of overlap between some of the concepts in the modules and concepts I'm familiar with from CS (particularly with Bayesian models), but the terminology/presentation was different. It took some effort to recognize these instances and digest the information into my mental map, but it helped with developing a more holistic understanding of these models.

Shizhao Liu

Shizhao is a first-year student in the Department of Brain and Cognitive Sciences advised by Professor Adam Snyder. She received her bachelor's degrees in biomedical engineering and psychology from Tsinghua University in Beijing. Her research currently focuses on using latent variable approaches to understand how primate cortex processes visual information.

What did I learn from the course? I learned about basic principles of natural language processing and computational neuroscience.

How might this course affect my career?  I might use computational models in the future to study cognitive neuroscience questions. The computational neuroscience module gave me an introduction to this area.

What was surprising or the greatest challenge? Natural language processing, especially the linguistics rules and principles.

Purvanshi Mehta

Purvanshi is a Master's candidate at the Goergen Institute for Data Science. Her primary research interests are multimodal deep learning and natural language understanding.

What did I learn from the course? I learned about the research that can be done at the intersection of cognitive neuroscience and deep learning. It was very insightful to learn about statistical models of human cognition. I also attended NeurIPS 2019 as part of the program.

How might this course affect my career?  This course helped me lay the basic foundations of computational neuroscience. It helped me understand the high-level similarities and differences between neuroscience and deep learning models.

What was surprising or the greatest challenge? The greatest challenge was to study with the interdisciplinary groups and get different perspectives on problems.

Zoe Stearns

Zoe is a first-year graduate student in the Brain and Cognitive Sciences PhD program working with Dr. Martina Poletti of the Active Perception Lab. She received a BA in mathematics from the University of Oklahoma, primarily focusing on applied mathematics and philosophy of mind. Zoe is interested in promoting an enactive framework for studying perception. By employing visual psychophysics and eye-tracking tools, she investigates the efficient control of oculomotor behavior and visual attention. 

What did I learn from the course? In the natural language processing module, I learned about the technical difficulties and challenges that machine translation faces in academia and industry. The robotics module taught me interesting motion planning and localization algorithms. In the computational neuroscience module, we focused on probabilistic inference, neural sampling algorithms, and models of early visual processing.

How might this course affect my career?  This course gave me great insight into other fields, besides neuroscience and cognitive science, that are also interested in the workings of the human brain and behavior. In particular, I look forward to applying the ideas I gained from the Robot Intelligence module to better investigate coordinated behavior in humans.

What was surprising or the greatest challenge? The most challenging experience for me was finding ways to collaborate and work with the other cohort members. Creating, developing, and finishing projects within a team is a huge component of academic and industrial work. I look forward to improving my communication and team-building skills during the second half of the training program.

Linghao Xu

Linghao is a first-year PhD candidate in the Department of Brain and Cognitive Sciences doing rotations in the labs of Professors Ralf Haefner, Greg DeAngelis, and Manuel Gomez-Ramirez. His background is in mathematics and statistics. Generally, he is interested in how the brain makes perceptual inferences (e.g., motion perception) and how the underlying neural coding and sampling mechanisms carry out these computations.

What did I learn from the course? I have a better understanding of how the engineering way of asking and solving questions differs from the scientific way. I also got exposure to natural language processing (NLP) and machine learning (ML), which was very interesting and helpful.

How might this course affect my career?  This course offered an amazing opportunity for me to get exposure to the interdisciplinary areas between computer science, neuroscience, and robotics. The brain and AI are, to some extent, very similar in the computational problems they face. This course inspired me to think more about how I can benefit from computer scientists' engineering approach to solving computational problems.

What was surprising or the greatest challenge? Natural language processing was a little challenging in the beginning, and it took time to understand the linguistics.