157 - Diyi Yang: Socially Aware Large Language Models

Stanford Psychology Podcast

2025-10-03

42 minutes

Episode Description

In this episode, Su chats with Diyi Yang, an assistant professor in the Computer Science Department at Stanford University, affiliated with the Stanford NLP Group, the Stanford Human-Computer Interaction Group, the Stanford AI Lab, and Stanford Human-Centered Artificial Intelligence. She also leads the Social and Language Technologies Lab, which studies socially aware natural language processing. Her research aims to better understand human communication in social context and to build socially aware language technologies, drawing on methods from NLP, deep learning, and machine learning as well as theories from the social sciences and linguistics, to support human-human and human-computer interaction. In today's episode, we discuss her interdisciplinary approach to research, along with her recent paper "Social Skill Training with Large Language Models," which introduces a new framework for making social skill training more available, accessible, and inviting.

Diyi's paper: https://arxiv.org/abs/2404.04204
Diyi's lab website: https://cs.stanford.edu/~diyiy/group.html
Diyi's personal website: https://cs.stanford.edu/~diyiy/index.html
Su's Twitter: @sudkrc
Podcast Twitter: @StanfordPsyPod
Podcast Bluesky: @stanfordpsypod.bsky.social
Podcast Substack: https://stanfordpsypod.substack.com/

Let us know what you thought of this episode, or of the podcast! :) stanfordpsychpodcast@gmail.com

This episode was recorded on February 5, 2025.

Episode Transcript

No transcript is available for this episode yet.