The ArticuLab is looking for undergraduate and graduate (masters or PhD) interns for summer 2017. This is a great research experience for those who want to go on to graduate school, or who may want to work in Human-Computer Interaction, Learning Sciences, Language Technologies, Psychology, Linguistics, or Information Systems (as well as other branches of Computer Science). You will work as a member of a team, and be a part of research that brings together rigorous social science with cutting-edge technology development. Some jobs target interns with a technical background (statistics, machine learning, AI, natural language processing) and other jobs require no technical background, although psychology, linguistics, or education research experience is a plus.

How to Apply

If you are interested in working with us, please contact the lab via the contact page, selecting the subject “Job at the ArticuLab”. In the “Message” body, please indicate the projects or jobs that interest you most from the list below, or from our projects page. We will reply and request your latest CV/resume, listing all relevant coursework and research experience.


2016 ArticuLab: we had 26 summer interns, some of whom are shown here, as well as our lab director, graduate students, postdocs, and staff.

Currently Available Positions

InMind/SARA Project 3D Character Animation Programmer (Full-Time)

We seek a Research Programmer to join a major research effort to develop integrated intelligent software assistants operating on mobile devices. Supported by a $10M gift from Yahoo!, the InMind project is now in its fourth year and is developing a working prototype of future mobile software agents. We are about to field it to an increasingly large opt-in user community, enabling us to research and evaluate new paradigms for software agents of the future. This project serves as an opportunity to build on significant ongoing research at CMU in artificial intelligence, machine learning, human-computer interaction, computer perception, natural language processing, and other fields.

The successful candidate for this research programmer position will be part of the team that is developing a human-looking animated virtual assistant front-end to the Yahoo InMind system for the Android platform. He/she will work specifically on Android user interface development to integrate a 3D rendered character and information graphics with the back-end dialogue system. The successful candidate will work under the direct supervision of Prof. Justine Cassell to ensure that the animated character is aligned with the vision of the project. The detailed requirements are as follows.

  • Duties and Responsibilities:
    • To integrate a virtual agent character on Unity3D/Android
    • To synchronize the character’s movements with the text-to-speech engine
    • To design and implement the user interface
  • Qualifications
    • Minimum Education and Competences:
      • Bachelor’s degree in computer science or a related field
      • 3+ years of relevant work experience (or a Master’s degree in a related field)
    • Development skills:
      • Programming languages: C# and Java
      • Unity3D game engine and IDE
      • 3D human character development on Unity3D
      • Producing clear documentation
  • Desirable Competences:
    • Experience with graphic design
    • Experience with 3D character modeling and animation
    • Experience with Agile development process
    • Experience with mobile development (iOS, Android)
    • Back-end/integration technologies (databases, networking)
  • Contract terms: one-year contract, renewable based on availability of funding and performance.
  • Department URL:
  • Job Function: Research Programming
  • Primary Location: United States-Pennsylvania-Pittsburgh
  • Time Type: Full Time
  • Minimum Education Level: Bachelor’s Degree or equivalent
  • Salary: Negotiable

If you’re interested, please apply via the CMU official job application form. For further questions about this position, please contact Oscar J. Romero and Yoichi Matsuyama.

Summer 2017 Internship

RAPT Project

Developing Intelligent Computer Tutors that Build Relationships with Students

Intelligent conversational agents are all around us—from Siri to Alexa, and many more. At the ArticuLab, we study human behavior through the lens of computational linguistics and machine learning, and build virtual agents that can learn how to respond to people in increasingly natural, social ways. What if you could learn from those agents, and what if they could learn over time how to teach you better? In the RAPT (Rapport-Aligned Peer Tutor) project, we study how the interpersonal closeness, or rapport, between people improves their learning, and we use computational tools to detect the verbal and nonverbal behaviors that contribute to that rapport. For more details about the project, see the project description.

This summer, we’re looking for students with backgrounds in computer science, natural language processing, machine learning, psychology, and/or HCI to help us build and improve the design of a rapport-building virtual agent, as part of an intelligent tutoring system. If you’re interested in applying for a research internship at the ArticuLab working on the RAPT project this summer, please send an email to the project lead, Michael Madaio, and copy the lab manager, Lauren Smith. Please include with your message your resume and a cover letter expressing your interest in the position.

SCIPR Project

Developing an Intelligent Virtual Child and Interactive Tabletop that Raise Curiosity in Small Group Science Learning

Curiosity inspires you to study your favorite subjects, and stay up late nights reading about topics you’re passionate about. Sadly, curiosity is becoming less common in elementary and middle schools in our increasingly test-oriented society. Our lab is developing an intelligent virtual child and interactive tabletop that will evoke curiosity, exploration, and self-efficacy through collaboration in a playful learning environment. For more details about the project, see the project description on our website.

We are looking for several interns in computer science and related majors to take part in developing the front-end and back-end modules of (1) an intelligent virtual child using multimodal technologies, and (2) an interactive tabletop using augmented and tangible technologies. In particular, we are looking for students with the following skills:

  • Hands-on experience in system development, machine learning and natural language processing
  • Strong programming skills with Java (required), Python (required), and Unity (preferred)
  • Hands-on experience in game, augmented reality, and/or tangible user interface development (preferred)

Interested students should send a resume/CV to Dr. Zhen Bai and Lauren Smith in an email with the subject heading “SCIPR Internship”.

SARA/InMind Project

Developing Core Components for Social AI Framework

We are inventing a personal assistant of the future: SARA, a Socially-Aware Robot Assistant that can build a relationship with a user, and then employ that relationship to better achieve the user’s goals. SARA is an embodied intelligent personal assistant that analyzes the user’s visual (head and face movement), vocal (acoustic features), and verbal (conversational strategies) behaviors to estimate, in real time, the level of interpersonal closeness that the user feels for the system. It then uses its own appropriate nonverbal (body movements synthesized on an intelligent agent), vocal (acoustic features), and verbal (language) behaviors to maintain or increase that closeness, so that the user is willing to disclose information that will allow the system to better serve the user’s goals. SARA was presented at the World Economic Forum 2017. For more details about the project, see the project description.
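To give applicants a feel for the kind of work involved, the perceive–estimate–respond loop described above can be sketched very roughly as follows. This is a simplified illustration only: every class, function, weight, and strategy name here is hypothetical and does not come from the actual SARA codebase.

```python
# Hypothetical sketch of a social reasoning loop like SARA's:
# multimodal observations -> rapport estimate -> conversational strategy.
# All names and numbers are illustrative, not from the real system.

from dataclasses import dataclass


@dataclass
class MultimodalObservation:
    smile_intensity: float   # visual channel (face movement), in [0, 1]
    vocal_energy: float      # vocal channel (acoustic features), in [0, 1]
    self_disclosure: bool    # verbal channel: did the user share personal info?


def estimate_rapport(obs: MultimodalObservation) -> float:
    """Toy rapport estimator: a weighted combination of channels, clamped to [0, 1]."""
    score = 0.4 * obs.smile_intensity + 0.3 * obs.vocal_energy
    if obs.self_disclosure:
        score += 0.3
    return max(0.0, min(1.0, score))


def choose_strategy(rapport: float) -> str:
    """Pick a conversational strategy intended to maintain or increase rapport."""
    if rapport < 0.3:
        return "adhere_to_social_norms"      # polite, task-focused talk
    elif rapport < 0.7:
        return "praise"                      # positive feedback to build closeness
    else:
        return "reciprocal_self_disclosure"  # share back, deepening the relationship


obs = MultimodalObservation(smile_intensity=0.8, vocal_energy=0.6, self_disclosure=True)
rapport = estimate_rapport(obs)
print(rapport, choose_strategy(rapport))
```

In the real system, the hand-picked weights above would be replaced by learned models over the visual, vocal, and verbal channels, but the overall shape of the loop is the same: sense, estimate closeness, respond.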

We are looking for students to help us build a conversation-based personal assistant agent together. For this position, some combination of the following technical skills and experience is preferred.

  • Natural language processing (e.g. natural language understanding, natural language generation)
  • Deep neural networks (e.g. sequence modeling, multimodal machine learning)
  • Computer graphics / robotics (e.g. character animation synthesis)
  • Software engineering and development (e.g. large-scale software architecture, Unity development, Android development)
  • Cognitive science (e.g. social brain, cognitive architecture)
  • Data visualization (e.g. visualizing large-scale data for discovery and understanding)

If you’re interested, please send your resume and a cover letter referring to the SARA project to our lab manager, Lauren Smith, and the project lead, Dr. Yoichi Matsuyama.