Category: Multimodal

When to generate hedges in peer-tutoring interactions

Abulimiti, A., Clavel, C. & Cassell, J. (2023). When to generate hedges in peer-tutoring interactions. In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 572–583, Prague, Czechia. Association for Computational Linguistics.

  • Computational Models of Behavior
  • Conversation
  • Education
  • Machine Learning
  • Multimodal
  • Natural Language Generation (NLG)
  • Peer-Tutoring
  • Peers
  • Virtual Peers

How About Kind of Generating Hedges using End-to-End Neural Models?

Abulimiti, A., Clavel, C. & Cassell, J. (2023). How About Kind of Generating Hedges using End-to-End Neural Models? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 877–892, Toronto, Canada. Association for Computational Linguistics.

  • Computational Models of Behavior
  • Conversation
  • Education
  • Machine Learning
  • Multimodal
  • Natural Language Generation (NLG)
  • Peer-Tutoring
  • Peers
  • Virtual Peers

A novel multimodal approach for studying the dynamics of curiosity in small group learning

Sinha, T., Bai, Z., & Cassell, J. (2021). A novel multimodal approach for studying the dynamics of curiosity in small group learning. In International Conference on Spoken Dialog System Technology. https://doi.org/10.35542/osf.io/rfxwg

  • Computational Models of Behavior
  • Education
  • Multimodal
  • Peers

Curious Minds Wonder Alike: Studying Multimodal Behavioral Dynamics to Design Social Scaffolding of Curiosity

Sinha, T., Bai, Z., & Cassell, J. (2017, September). “Curious Minds Wonder Alike: Studying Multimodal Behavioral Dynamics to Design Social Scaffolding of Curiosity”. In Proceedings of the 12th European Conference on Technology Enhanced Learning (pp. 270–285). Springer International Publishing.

  • Computational Models of Behavior
  • Conversation
  • Education
  • Multimodal
  • Peers
  • Verbal and Nonverbal

A New Theoretical Framework for Curiosity for Learning in Social Contexts

Sinha, T., Bai, Z., & Cassell, J. (2017, September). “A New Theoretical Framework for Curiosity for Learning in Social Contexts”. In Proceedings of the 12th European Conference on Technology Enhanced Learning (pp. 254–269). Springer International Publishing. [*Best Paper Nominee (Top 4.2% of all paper submissions)*]

  • Computational Models of Behavior
  • Conversation
  • Education
  • Multimodal
  • Peers
  • Verbal and Nonverbal

Using Temporal Association Rule Mining to Predict Dyadic Rapport in Peer Tutoring

Madaio, M., Ogan, A., & Cassell, J. (2017). Using Temporal Association Rule Mining to Predict Dyadic Rapport in Peer Tutoring. In Proceedings of the 10th International Conference on Educational Data Mining, 2017.

  • Education
  • Embodied Conversational Agents
  • Machine Learning
  • Multimodal
  • Peer-Tutoring
  • Peers
  • Rapport
  • Social Skills
  • Verbal and Nonverbal

The Impact of Peer Tutors’ Use of Indirect Feedback and Instructions

Madaio, M. A., Cassell, J., & Ogan, A. (2017, June). The Impact of Peer Tutors’ Use of Indirect Feedback and Instructions. In Proceedings of the Twelfth International Conference on Computer-Supported Collaborative Learning, 2017. [*Best Student Paper Nominee*]

  • Conversation
  • Direction-Giving
  • Education
  • Multimodal
  • Peer-Tutoring
  • Peers
  • Rapport
  • Social Skills
  • Verbal and Nonverbal

Socially-Aware Animated Intelligent Personal Assistant Agent

Matsuyama, Y., Bhardwaj, A., Zhao, R., Romero, O., Akoju, S., & Cassell, J. (2016, September). Socially-Aware Animated Intelligent Personal Assistant Agent. 17th Annual SIGDIAL Meeting on Discourse and Dialogue.

  • Architectures
  • Computational Models of Behavior
  • Embodied Conversational Agents
  • Implementation
  • Machine Learning
  • Multimodal
  • Natural Language Generation (NLG)
  • Rapport
  • Verbal and Nonverbal

Socially-Aware Virtual Agents: Automatically Assessing Dyadic Rapport from Temporal Patterns of Behavior

Zhao, R., Sinha, T., Black, A., & Cassell, J. (2016, September). “Socially-Aware Virtual Agents: Automatically Assessing Dyadic Rapport from Temporal Patterns of Behavior”, 16th International Conference on Intelligent Virtual Agents (IVA) [*Best Student Paper*]

  • Computational Models of Behavior
  • Conversation
  • Education
  • Machine Learning
  • Multimodal
  • Peer-Tutoring
  • Rapport
  • Verbal and Nonverbal

Automatic Recognition of Conversational Strategies in the Service of a Socially-Aware Dialog System

Zhao, R., Sinha, T., Black, A., & Cassell, J. (2016, September). “Automatic Recognition of Conversational Strategies in the Service of a Socially-Aware Dialog System”, 17th Annual SIGDIAL Meeting on Discourse and Dialogue

  • Computational Models of Behavior
  • Conversation
  • Education
  • Machine Learning
  • Multimodal
  • Peer-Tutoring
  • Rapport
  • Verbal and Nonverbal

Exploring Socio-Cognitive Effects of Conversational Strategy Congruence in Peer Tutoring

Sinha, T., Zhao, R., & Cassell, J. (2015, November). Exploring Socio-Cognitive Effects of Conversational Strategy Congruence in Peer Tutoring. In Proceedings of 2015 Workshop on Modeling Interpersonal Synchrony, 17th ACM International Conference on Multimodal Interaction (ICMI). ACM.

  • Computational Models of Behavior
  • Conversation
  • Education
  • Multimodal
  • Peer-Tutoring
  • Rapport

We Click, We Align, We Learn: Impact of Influence and Convergence Processes on Student Learning and Rapport Building

Sinha, T., & Cassell, J. (2015, November). We Click, We Align, We Learn: Impact of Influence and Convergence Processes on Student Learning and Rapport Building. In Proceedings of 2015 Workshop on Modeling Interpersonal Synchrony, 17th ACM International Conference on Multimodal Interaction (ICMI). ACM.

  • Computational Models of Behavior
  • Conversation
  • Education
  • Multimodal
  • Peer-Tutoring
  • Rapport

Multimodal Prediction of Psychological Disorder: Learning Verbal and Nonverbal Commonality in Adjacency Pairs

Yu, Z., Scherer, S., DeVault, D., Gratch, J., Stratou, G., Morency, L.-P., & Cassell, J. (2013). “Multimodal Prediction of Psychological Disorder: Learning Verbal and Nonverbal Commonality in Adjacency Pairs”. In Proceedings of the 17th Workshop on the Semantics and Pragmatics of Dialogue, December 2013, Amsterdam, The Netherlands.

  • Computational Models of Behavior
  • Multimodal

Automatic Prediction of Friendship via Multi-model Dyadic Features

Yu, Z., Gerritsen, D., Ogan, A., Black, A., & Cassell, J. (2013). “Automatic Prediction of Friendship via Multi-model Dyadic Features”. In Proceedings of the 14th Annual SIGDIAL Meeting on Discourse and Dialogue, August 22–24, 2013, Metz, France.

  • Computational Models of Behavior
  • Multimodal
  • Rapport

Investigating the Influence of Virtual Peers as Dialect Models on Students’ Prosodic Inventory

Finkelstein, S., Scherer, S., Ogan, A., Morency, L.-P., & Cassell, J. (2012). “Investigating the Influence of Virtual Peers as Dialect Models on Students’ Prosodic Inventory”. In Proceedings of WOCCI (Workshop on Child-Computer Interfaces) at INTERSPEECH 2012, September 14–15, 2012, Portland, OR.

  • Culture
  • Dialect
  • Embodied Conversational Agents
  • Multimodal
  • Virtual Peers

The Role of Embodiment and Perspective in Direction-Giving Systems

Hasegawa, D., Cassell, J., Araki, K. (2010) “The Role of Embodiment and Perspective in Direction-Giving Systems” in Proceedings of AAAI Fall Workshop on Dialog with Robots. Nov 11-13, Arlington, VA

  • Direction-Giving
  • Embodied Conversational Agents
  • Multimodal

Knowledge Representation for Generating Locating Gestures in Route Directions

Striegnitz, K., Tepper, P., Lovett, A. & Cassell, J. (2008) “Knowledge Representation for Generating Locating Gestures in Route Directions” In K.R. Coventry, T. Tenbrink & J. Bateman (Eds.), Spatial Language and Dialogue (Explorations in Language and Space). Oxford: Oxford University Press.

  • Computational Models of Behavior
  • Direction-Giving
  • Multimodal

Reactive Redundancy and Listener Comprehension in Direction-Giving

Baker, R., Gill, A. & Cassell, J. (2008). “Reactive Redundancy and Listener Comprehension in Direction-Giving” Proceedings of SIGDIAL, June 19-20, Columbus, Ohio.

  • Computational Models of Behavior
  • Direction-Giving
  • Multimodal

Coordination in Conversation and Rapport

Cassell, J., Gill, A. & Tepper, P. (2007). Coordination in Conversation and Rapport. Proceedings of the Workshop on Embodied Natural Language, Association for Computational Linguistics, June 24–29, Prague, CZ.

  • Computational Models of Behavior
  • Multimodal
  • Rapport

Body Language: Lessons from the Near-Human

Cassell, Justine (2007). “Body Language: Lessons from the Near-Human”. In J. Riskin (ed.), Genesis Redux: Essays in the History and Philosophy of Artificial Intelligence. Chicago: University of Chicago Press, pp. 346–374.

  • Computational Models of Behavior
  • Embodied Conversational Agents
  • Multimodal

Trading Spaces: How Humans and Humanoids use Speech and Gesture to Give Directions

Cassell, Justine, Kopp, Stefan, Tepper, Paul, Ferriman, Kim & Striegnitz, K. (2007). “Trading Spaces: How Humans and Humanoids use Speech and Gesture to Give Directions.” In T. Nishida (ed.), Conversational Informatics. New York: John Wiley & Sons, pp. 133–160.

  • Computational Models of Behavior
  • Embodied Conversational Agents
  • Multimodal
  • Natural Language Generation (NLG)

Is it Self-Administration if the Computer Gives you Encouraging Looks?

Cassell, Justine & Miller, Peter (2007) “Is it Self-Administration if the Computer Gives you Encouraging Looks?” In F.G. Conrad & M.F. Schober (Eds.), Envisioning the Survey Interview of the Future. New York: John Wiley & Sons, pp. 161-178.

  • Embodied Conversational Agents
  • Multimodal
  • Survey Interviewing

Towards Integrated Microplanning of Language and Iconic Gesture for Multimodal Output

Kopp, Stefan, Tepper, Paul and Cassell, Justine (2004). “Towards Integrated Microplanning of Language and Iconic Gesture for Multimodal Output.” Proceedings of the International Conference on Multimodal Interfaces (ICMI) 2004, Oct. 14–15, Penn State University, State College, PA.

  • Embodied Conversational Agents
  • Multimodal
  • Natural Language Generation (NLG)

Knowledge Representation for Generating Locating Gestures in Route Directions

Striegnitz, Kristina, Tepper, Paul, Lovett, Andrew, & Cassell, Justine (2005). “Knowledge Representation for Generating Locating Gestures in Route Directions.” In Proceedings of the Workshop on Spatial Language and Dialogue (5th Workshop on Language and Space), October 23–25, 2005, Delmenhorst, Germany.

  • Computational Models of Behavior
  • Direction-Giving
  • Multimodal
  • Natural Language Generation (NLG)

Content in Context: Generating Language and Iconic Gesture without a Gestionary

Tepper, P., Kopp, S., Cassell, J. (2004). “Content in Context: Generating Language and Iconic Gesture without a Gestionary.” Proceedings of the Workshop on Balanced Perception and Action in ECAs at AAMAS ’04.

  • Embodied Conversational Agents
  • Gesture
  • Multimodal
  • Natural Language Generation (NLG)

Negotiated Collusion: Modeling Social Language and its Relationship Effects in Intelligent Agents

Cassell, J., Bickmore, T. (2003). “Negotiated Collusion: Modeling Social Language and its Relationship Effects in Intelligent Agents.” User Modeling and User-Adapted Interaction, 13(1–2), 89–132.

  • Architectures
  • Computational Models of Behavior
  • Direction-Giving
  • Embodied Conversational Agents
  • Evaluation
  • Multimodal
  • Trust

Towards a Model of Face-to-Face Grounding

Nakano, Y., Reinstein, G., Stocky, T., Cassell, J. (2003). “Towards a Model of Face-to-Face Grounding.” Proceedings of the Annual Meeting of the Association for Computational Linguistics, July 7–12, Sapporo, Japan.

  • Conversation
  • Embodied Conversational Agents
  • Multimodal