NSF Research Traineeship (NRT) 2015-16

Students participated in a one-semester course, "Methods in Data-Enabled Research into Human Behavior and Its Cognitive and Neural Mechanisms," which ran for the first time in Fall 2015. The course consisted of three month-long modules designed to expose students to a mixture of methods and approaches in data science and cognitive science. This year's modules were:

  1. Interpreting fMRI, led by Professor Rajeev Raizada. This module focused on applying data science approaches to neuroimaging (functional MRI) data from humans.
  2. Computational Approaches to Natural Language Processing, led by Professor Dan Gildea.
  3. Applications of "Sensing in the Wild", led by Professor Ehsan Hoque. This module focused on applications of computer science techniques to problems in designing human-computer interfaces, crowd computing, and wearable computing.

The following semester, students participated in "Practicum in Data-Enabled Research into Human Behavior and Its Cognitive and Neural Mechanisms," in which trainees worked in mixed teams of CS and BCS PhD students to create an artifact that brings together work in cognitive science and computer science. In this year's course, taught by Professor Henry Kautz, the students created a from-scratch implementation of a deep learning system for the game of Go, modeled after Google's AlphaGo system. In addition to learning a great deal about neural network programming, the students learned methods for team-based software development.

2015-16 Practicum Projects

A replication of AlphaGo

In this course, Practicum in Data-Enabled Research into Human Behavior and Its Cognitive and Neural Mechanisms, we started off with a discussion of a variety of projects and artifacts the students were interested in. One publication [1] caught our eye. Many had thought that computer mastery of the game of Go was decades away, so the sudden success of Google DeepMind's AlphaGo system came as a shock. The paper used deep reinforcement learning, an approach with roots in brain and cognitive science.

[1] Silver, D., Huang, A., Maddison, C. J., et al. Mastering the game of Go with deep neural networks and tree search. Nature 529(7587): 484-489, 2016.

Project Website

Bios of the 2015-16 Class Members

Chris Bates

Chris started his PhD work in the University of Rochester's Brain and Cognitive Sciences department in 2015, under Professor Robert Jacobs. He is broadly interested in computational modeling of cognition and perception, including physical reasoning and multisensory perception. He received his B.S. in mechanical engineering from Purdue University.

Project contributions

His main contributions to the AlphaGo replication project were the reinforcement-learning policy network, the value network, and the game-play logic.
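To give a concrete picture of what such a policy network involves, here is a minimal illustrative sketch, assuming PyTorch purely for convenience (the framework the class actually used is not stated here): a stack of convolutional layers over 19x19 board feature planes ending in a softmax over the 361 board points, a much smaller cousin of the 13-layer network in the Nature paper.

```python
import torch.nn as nn

class PolicyNet(nn.Module):
    """Toy convolutional policy network: board feature planes in, move probabilities out."""
    def __init__(self, in_planes=48, filters=192, n_layers=5):
        super().__init__()
        layers = [nn.Conv2d(in_planes, filters, kernel_size=5, padding=2), nn.ReLU()]
        for _ in range(n_layers - 1):
            layers += [nn.Conv2d(filters, filters, kernel_size=3, padding=1), nn.ReLU()]
        # Collapse to one plane, then a softmax over all 19 x 19 = 361 board points.
        layers += [nn.Conv2d(filters, 1, kernel_size=1), nn.Flatten(), nn.Softmax(dim=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, planes):        # planes: (batch, in_planes, 19, 19)
        return self.net(planes)       # (batch, 361) probabilities over board points
```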

What did I learn from the course? "This course project has exposed me to coding best-practices and use of Github as a collaborative coding tool. It has also exposed me to programming with neural networks on GPUs and use of computing clusters. I have been surprised at how doable the project is and the eagerness of community collaborators on Github."

Update 2021: Postdoctoral fellow in the Department of Psychology, Harvard University, since September 2020.

 

Iris Yuping Ren

Iris Ren is a PhD student in the Audio Information Retrieval lab in the Electrical and Computer Engineering department. She received a B.S. in Statistics and a B.M.S. in Culture Industry Management from Shandong University in 2013, and dual master's degrees in Complex Systems Science from the University of Warwick and École Polytechnique in 2015. She plays the violin and piano, sings in choirs, and is learning ukulele and percussion. Her website can be found here.

Project contributions

She worked on parts of the game logic, benchmarking, HDF5 file concatenation, and game utilities.

What did I learn from the course? The framework of the AlphaGo system. Brainstorming and teamwork. GitHub version control. Techniques in Python.

How might this course affect my career? "I am now more interested in a career relevant to machine learning." 

What was surprising or the greatest challenge? Writing quality code, and optimization.

Richard Lange

Richard is a second-year PhD student pursuing a dual degree in Computer Science and Brain and Cognitive Science. His interests span neural coding, machine learning, cognitive science, and more.

What did I learn from the course? "In this course, I took on my first significant project leadership role. In addition to getting valuable hands-on experience with modern neural network techniques, I learned how to better explain technical concepts, and how to delegate work effectively."

How might this course affect my career? "There is a lot that brain scientists can learn from neural networks, and there is no substitute for hands-on experience. I hope to take what I learned from this project as a springboard into using them in my research."

What was surprising or the greatest challenge? "This course challenged us to break our habits formed working on software alone, taking time to blueprint and write a good open-source project."

Louis Marti

Louis graduated from the University of Maryland, College Park with a double degree in Computer Science and Psychology. He worked as a professional software developer for six years before deciding to pursue a PhD in Cognitive Science. He is now studying meta-cognition and certainty with the goal of learning how humans come to believe they know what they believe they know.

Project contributions

Louis worked on benchmarking and optimization, along with the web server.

Tyler Trine

Tyler is an undergraduate data science student. His plans for next year are to join the workforce. Longer term, he is passionate about furthering the field of artificial intelligence and using it to improve people's lives.

Project contributions

He helped interpret and implement parts of AlphaGo's training pipeline, based on DeepMind's original paper in Nature.
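As a rough illustration of the first stage of that pipeline, supervised learning from human games, here is a hypothetical sketch (again assuming PyTorch, with an invented data loader, not the class's actual code): a policy network like the one sketched above is trained to predict the move a human expert actually played in each recorded position.

```python
import torch
import torch.nn.functional as F

def supervised_stage(policy_net, expert_loader, lr=3e-3):
    """Train the policy network to imitate expert moves from recorded human games."""
    opt = torch.optim.SGD(policy_net.parameters(), lr=lr)
    for planes, expert_move in expert_loader:   # planes: (batch, C, 19, 19); expert_move: (batch,) point indices
        probs = policy_net(planes)              # (batch, 361) move probabilities
        loss = F.nll_loss(torch.log(probs + 1e-8), expert_move)  # cross-entropy against the expert's move
        opt.zero_grad()
        loss.backward()
        opt.step()
```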

What did I learn from the course? "I've learned more from this class than any other I've taken in college! In hindsight, we took on a very ambitious project. But throughout the process of learning how AlphaGo works, I became familiar with state-of-the-art deep learning techniques. If we hadn't reached for such a lofty goal, I may not have learned as much."

How might this course affect my career? "Very positively! This class has developed my understanding of machine learning immensely. The techniques we used are general; they have a broad range of applications. Their use is so widespread, in fact, that I feel I would have an edge in both industry and academia."

What was surprising or the greatest challenge? "The greatest challenge was managing complexity. AlphaGo has so many moving parts, each with a distinct purpose. Furthermore, these parts combine to form an idiosyncratic and, at times, unintuitive whole. Thankfully Richard, who studied computer science in his undergraduate, showed us how to build the project scalably from the ground up!"

Yue Wang

Yue is a CS master's student at the University of Rochester (entered 2015).

Email: ywang214@ur.rochester.edu

Project contributions

She replicated AlphaGo's Monte Carlo tree search.
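For illustration, the core of such a search can be sketched in a few lines (hypothetical names, not the project's actual code): each tree node stores a prior probability from the policy network, a visit count, and an accumulated value, and selection follows the PUCT rule from the paper, balancing the current value estimate against the prior and the visit counts.

```python
import math

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a), supplied by the policy network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)
        self.children = {}        # move -> Node

    def q(self):                  # Q(s, a), the mean evaluation of this move
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=5.0):
    """Choose the child maximizing Q(s,a) + c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a))."""
    total_visits = sum(child.visits for child in node.children.values())
    def score(item):
        _, child = item
        return child.q() + c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visits)
    return max(node.children.items(), key=score)

def backup(path, value):
    """Propagate a leaf evaluation up the visited path, flipping sign at each ply."""
    for node in reversed(path):
        node.visits += 1
        node.value_sum += value
        value = -value
```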

What did I learn from the course? Many things, including game-tree search, convolutional neural networks, reinforcement learning, supervised training, data structures, and algorithms.

How might this project affect my career? 

"The biggest takeaway of this course is that it inspired me to learn lots of new things, such as reinforcement learning, convnets, ResNet, GoogleNet, Deep Dream, recurrent neural networks, LSTM, SGD, momentum, Adagrad, and so on. I find them very interesting, and enjoy studying them. I will keep learning about CNN even after the course ends. I find it most helpful on a personal interest level."

What was surprising or the greatest challenge?

"Replicating AlphaGo is a great project to work on in terms of combining the knowledge I learned from my machine learning course and other related courses, and having an idea of how they can be used in the real world. It is pretty impressive that AlphaGo brings convnets to go-playing, by training all those layers of this large network with human players' data and self-play data where the network plays against itself. The project is very challenging, but it is worth working on, regardless of whether we are able to finish the entire project within a few months or not."