Young-Ho Kim, PhD - HCI Researcher & Builder
Case Study of CLOVA CareCall: Benefits and Challenges of Deploying Large-Language-Model-driven Chatbots for Public Health Intervention
Members
Eunkyung Jo
Daniel A. Epstein
Hyunhoon Jung
Young-Ho Kim
Keywords
Large Language Model
Conversational AI
Public Health
Case Study
Interviews
Qualitative Analysis

Recent large language models (LLMs) have advanced the quality of open-ended conversations with chatbots. Although LLM-driven chatbots have the potential to support public health interventions by monitoring populations at scale through empathetic interactions, their use in real-world settings is underexplored. We thus examine the case of CareCall, an open-domain chatbot that aims to support socially isolated individuals via check-up phone calls and monitoring by teleoperators. Through focus group observations and interviews with 34 people from three stakeholder groups, including users, teleoperators, and developers, we found that CareCall offered a holistic understanding of each individual while offloading the public health workload, and that it helped mitigate loneliness and emotional burdens. However, our findings highlight that traits of LLM-driven chatbots led to challenges in supporting public and personal health needs. In the paper, we also discuss considerations for designing and deploying LLM-driven chatbots for public health intervention, including tensions among stakeholders around system expectations.

10-min Presentation Video

Acknowledgments

  • Eunkyung conducted this work as a research intern at NAVER AI Lab (mentored by Young-Ho Kim).

Publication

Best Paper Award
Understanding the Benefits and Challenges of Deploying Conversational AI Leveraging Large Language Models for Public Health Intervention
Eunkyung Jo,
Daniel A. Epstein,
Hyunhoon Jung,
and Young-Ho Kim
ACM CHI 2023 (Full Paper)
AVscript: an Accessible Video Editing Tool Leveraging Audio-Visual Scripts
Members
Mina Huh
Saelyne Yang
Yi-Hao Peng
Xiang 'Anthony' Chen
Young-Ho Kim
Amy Pavel
Keywords
Video editing
blind people
a11y
web

Although sighted and blind and low vision (BLV) creators alike use videos to communicate with broad audiences, video editing remains inaccessible to BLV creators. To mitigate the barriers they encounter, we designed and developed AVscript, an accessible text-based video editor. AVscript enables users to edit their video using a script that embeds the video's visual content, visual errors (e.g., dark or blurred footage), and speech. Users can also efficiently navigate between scenes and visual errors, or locate objects in the frame or spoken words of interest. In the paper, we report on a formative study that identified the needs of BLV creators and a series of user studies in which BLV creators used AVscript.
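
To illustrate the idea of an audio-visual script, here is a minimal sketch of what one script segment might contain and how it could be surfaced to a screen reader. The field names and rendering are illustrative assumptions, not AVscript's actual data model:

```typescript
// Hypothetical shape of one audio-visual script segment (field names are
// illustrative assumptions, not AVscript's actual implementation).
interface ScriptSegment {
  startSec: number;                       // segment start time in the video
  endSec: number;                         // segment end time
  transcript: string;                     // speech spoken during this segment
  visualDescription: string;              // description of the visual content
  visualErrors: ("dark" | "blurred")[];   // detected quality issues, if any
  objects: string[];                      // objects detected in the frame, for search
}

// A text-based editor could render each segment as one line of text, flagging
// visual errors so a BLV creator can jump straight to problem footage.
function renderLine(seg: ScriptSegment): string {
  const flags = seg.visualErrors.length ? ` [${seg.visualErrors.join(", ")}]` : "";
  return `${seg.startSec}s-${seg.endSec}s${flags}: ${seg.transcript} (visual: ${seg.visualDescription})`;
}
```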

System Pipeline

30-Sec Teaser Video

Acknowledgments

  • Mina conducted part of this work as a research intern at NAVER AI Lab (mentored by Young-Ho Kim) and the University of California, Los Angeles (mentored by Anthony Chen).

Publication

AVscript: Accessible Video Editing with Audio-Visual Scripts
Mina Huh,
Saelyne Yang,
Yi-Hao Peng,
Xiang 'Anthony' Chen,
Young-Ho Kim,
and Amy Pavel
ACM CHI 2023 (Full Paper)
MyMove: Collecting In-Situ Activity Labels on a Smartwatch + Speech with Older Adults
Members
Young-Ho Kim
Diana Chou
Bongshin Lee
Margaret Danilovich
Amanda Lazar
David E. Conroy
Hernisa Kacorri
Eun Kyoung Choe
Keywords
Activity tracking
activity labeling
speech
multimodal interaction
smartwatch
a11y

Current activity recognition technologies are less accurate when used by older adults (e.g., counting steps at slower gait speeds) and rarely support recognizing the types of activities they engage in and care about (e.g., gardening, vacuuming). To build activity trackers for older adults, it is crucial to collect training data with them. To this end, we built MyMove, a speech-based smartwatch app that facilitates in-situ activity labeling with low capture burden. With MyMove, we explored the feasibility of and challenges in collecting activity labels from older adults by leveraging speech.

Demo Video

Funding

Publication

MyMove: Facilitating Older Adults to Collect In-Situ Activity Labels on a Smartwatch with Speech
Young-Ho Kim,
Diana Chou,
Bongshin Lee,
Margaret Danilovich,
Amanda Lazar,
David E. Conroy,
Hernisa Kacorri,
and Eun Kyoung Choe
ACM CHI 2022 (Full Paper)
Data@Hand: Multimodal Data Exploration of Personal Data
Members
Young-Ho Kim
Bongshin Lee
Arjun Srinivasan
Eun Kyoung Choe
Keywords
Personal data visualization
visual data exploration
speech
multimodal interaction
smartphone

Data@Hand is a cross-platform smartphone app that facilitates visual exploration of personal data by leveraging both speech and touch interactions. To overcome smartphones' limitations, such as the small screen size and the lack of precise pointing input, Data@Hand exploits the synergy of speech and touch: speech-based interaction takes up little screen space, and natural language flexibly covers the different ways of specifying dates and date ranges (e.g., "October 7th", "Last Sunday", "This month"). Currently, Data@Hand supports Fitbit data (e.g., step count, heart rate, sleep, and weight) for navigation and temporal comparison tasks.
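
As a rough illustration of how such spoken phrases might map to date ranges, here is a minimal TypeScript sketch; the phrase set and parsing logic are illustrative assumptions, not Data@Hand's actual implementation, which covers many more expressions:

```typescript
// Minimal sketch of mapping spoken date phrases to date ranges (illustrative only).
type DateRange = { start: Date; end: Date };

const MONTHS = ["january", "february", "march", "april", "may", "june",
  "july", "august", "september", "october", "november", "december"];

function parseDatePhrase(phrase: string, today: Date = new Date()): DateRange | null {
  const p = phrase.trim().toLowerCase();

  if (p === "this month") {
    // From the first to the last day of the current month.
    const start = new Date(today.getFullYear(), today.getMonth(), 1);
    const end = new Date(today.getFullYear(), today.getMonth() + 1, 0);
    return { start, end };
  }

  if (p === "last sunday") {
    // Walk back to the most recent Sunday before today.
    const d = new Date(today);
    d.setDate(d.getDate() - (d.getDay() || 7));
    return { start: d, end: d };
  }

  // Absolute dates such as "October 7th" (assumed to fall in the current year).
  const m = p.match(/^([a-z]+)\s+(\d{1,2})(?:st|nd|rd|th)?$/);
  if (m && MONTHS.includes(m[1])) {
    const d = new Date(today.getFullYear(), MONTHS.indexOf(m[1]), parseInt(m[2], 10));
    return { start: d, end: d };
  }

  return null; // unrecognized phrase
}
```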

Demo Video

Funding:

  • National Science Foundation award #1753452 (CAREER: Advancing Personal Informatics through Semi-Automated and Collaborative Tracking, PI: Dr. Eun Kyoung Choe).
  • Young-Ho Kim was in part supported by the Basic Science Research Program through the National Research Foundation of Korea, funded by the Ministry of Education (NRF2019R1A6A3A12031352).

Publication

Honorable Mention Award
Data@Hand: Fostering Visual Exploration of Personal Data on Smartphones Leveraging Speech and Touch Interaction
Young-Ho Kim,
Bongshin Lee,
Arjun Srinivasan,
and Eun Kyoung Choe
ACM CHI 2021 (Full Paper)
FoodScrap: Capturing Rich Food Contexts with Speech
Members
Yuhan Luo
Young-Ho Kim
Bongshin Lee
Naeemul Hassan
Eun Kyoung Choe
Keywords
Food journaling
Speech input
Smartphone
OmniTrack

The factors influencing people's food decisions, such as one's mood and eating environment, are important information for fostering self-reflection and developing a personalized healthy diet. However, such factors are difficult to collect consistently due to the heavy data capture burden. In this work, we examine how speech input supports capturing everyday food practice through a week-long data collection study.

Using OmniTrack for Research, we deployed FoodScrap, a speech-based food journaling app that allows people to capture food components, preparation methods, and food decisions. Using speech input, participants detailed their meal ingredients and elaborated on their food decisions by describing the eating moments, explaining their eating strategies, and assessing their food practice. Participants recognized that speech input facilitated self-reflection, but expressed concerns around re-recording, mental load, social constraints, and privacy.

Funding

Publication

FoodScrap: Promoting Rich Data Capture and Reflective Food Journaling Through Speech Input
Yuhan Luo,
Young-Ho Kim,
Bongshin Lee,
Naeemul Hassan,
and Eun Kyoung Choe
ACM DIS 2021 (Full Paper)
OmniTrack for Research: A Research Platform for Streamlining Mobile-based In-Situ Data Collection
Members
Young-Ho Kim
Bongshin Lee
Jinwook Seo
Eun Kyoung Choe
Keywords
In-situ data collection
research toolkit
mobile
web
OmniTrack

OmniTrack for Research (O4R) is a research platform for mobile-based in-situ data collection, which streamlines the implementation and deployment of a mobile data collection tool. O4R enables researchers to rapidly translate their study design into a study app, deploy the app remotely, and monitor the data collection, all without requiring any coding.

In-situ data collection studies (e.g., diary studies, experience sampling) are commonly used in HCI and UbiComp research to capture people's behaviors, contexts, and self-report measures. To run such studies, researchers either rely on commercial platforms or build custom tools, which can be inflexible, costly, or time-consuming. O4R bridges this gap between available tools and researchers' needs.
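
As a rough sketch of what "study design as configuration" might look like, the following hypothetical TypeScript definition outlines a simplified study with one group and one tracker; all names and fields are illustrative assumptions rather than O4R's actual schema:

```typescript
// Hypothetical, simplified study configuration in the spirit of a no-code
// in-situ data collection platform; names are illustrative assumptions.
interface StudyConfig {
  studyName: string;
  invitationCode: string;                  // participants join by entering a code
  groups: {
    name: string;
    trackers: {
      title: string;
      fields: { name: string; type: "number" | "text" | "choice" | "time" }[];
      reminder?: { timesOfDay: string[] }; // prompts participants to log an entry
    }[];
  }[];
}

const diaryStudy: StudyConfig = {
  studyName: "Productivity Diary",
  invitationCode: "DIARY-EXAMPLE",
  groups: [{
    name: "default",
    trackers: [{
      title: "Productive activity",
      fields: [
        { name: "Activity", type: "text" },
        { name: "Duration (min)", type: "number" },
        { name: "Why was it productive?", type: "text" },
      ],
      reminder: { timesOfDay: ["12:00", "18:00", "21:00"] },
    }],
  }],
};
```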

Research Papers that Used OmniTrack for Research

OmniTrack for Research has been used by us and other researchers to conduct studies published at peer-reviewed venues. Below is the list of publications that used OmniTrack for Research:

  1. Yuhan Luo, Bongshin Lee, Young-Ho Kim, and Eun Kyoung Choe
    NoteWordy: Investigating Touch and Speech Input on Smartphones for Personal Data Capture
    PACM HCI (ISS) 2022 Link
  2. Ge Gao, Jian Zheng, Eun Kyoung Choe, and Naomi Yamashita
    Taking a Language Detour: How International Migrants Speaking a Minority Language Seek COVID-Related Information in Their Host Countries
    PACM HCI (CSCW) 2022 Link
  3. Ryan D. Orth, Juyoen Hur, Anyela M. Jacome, Christina L. G. Savage, Shannon E. Grogans, Young-Ho Kim, Eun Kyoung Choe, Alexander J. Shackman, and Jack J. Blanchard
    Understanding the Consequences of Moment-by-Moment Fluctuations in Mood and Social Experience for Paranoid Ideation in Psychotic Disorders
    Schizophrenia Bulletin Open (Oct 2022) Link
  4. Yuhan Luo, Young-Ho Kim, Bongshin Lee, Naeemul Hassan, and Eun Kyoung Choe
    FoodScrap: Promoting Rich Data Capture and Reflective Food Journaling Through Speech Input
    ACM DIS 2021 Link
  5. Eunkyung Jo, Austin L. Toombs, Colin M. Gray, and Hwajung Hong
    Understanding Parenting Stress through Co-designed Self-Trackers
    ACM CHI 2020 Link
  6. Young-Ho Kim, Eun Kyoung Choe, Bongshin Lee, and Jinwook Seo
    Understanding Personal Productivity: How Knowledge Workers Define, Evaluate, and Reflect on Their Productivity
    ACM CHI 2019 Link
  7. Sung-In Kim, Eunkyung Jo, Myeonghan Ryu, Inha Cha, Young-Ho Kim, Heejeong Yoo, and Hwajung Hong
    Toward Becoming a Better Self: Understanding Self-Tracking Experiences of Adolescents with Autism Spectrum Disorder Using Custom Trackers
    EAI PervasiveHealth 2019 Link

Funding

OmniTrack: A Flexible Self-Tracking App for Semi-Automated Tracking
Members
Young-Ho Kim
Jae Ho Jeon
Bongshin Lee
Eun Kyoung Choe
Jinwook Seo
Keywords
Flexible self-tracking
semi-automated tracking
mobile
OmniTrack

OmniTrack is a mobile self-tracking app that enables self-trackers to construct their own trackers and customize tracking items to meet their individual needs. OmniTrack was designed based on the semi-automated tracking concept: people can build a tracker by combining automated and manual tracking methods to balance capture burden and tracking feasibility. Under this notion, OmniTrack allows people to combine input fields to define a tracker's input schema and to attach external sensing services, such as Fitbit, to feed sensor data into individual fields. People can also use Triggers to let the system initiate data entry in a fully automated way.
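
A minimal sketch of this semi-automated tracker concept might look as follows: a tracker mixes manually entered fields with fields fed by a sensing service, plus a time-based trigger. The type and field names are illustrative assumptions, not OmniTrack's actual data model:

```typescript
// Illustrative sketch of a semi-automated tracker: some fields are filled
// manually, others are fed by an external sensing service such as Fitbit,
// and a trigger can create entries automatically.
type FieldSource =
  | { kind: "manual" }
  | { kind: "service"; service: "Fitbit"; measure: "stepCount" | "heartRate" | "sleep" | "weight" };

interface TrackerField {
  name: string;
  type: "number" | "text" | "rating" | "time";
  source: FieldSource;
}

interface Tracker {
  title: string;
  fields: TrackerField[];
  triggers: { kind: "time"; at: string }[]; // e.g., create an entry automatically at 08:00
}

const sleepDiary: Tracker = {
  title: "Sleep diary",
  fields: [
    { name: "Hours slept", type: "number", source: { kind: "service", service: "Fitbit", measure: "sleep" } },
    { name: "Sleep quality (1-5)", type: "rating", source: { kind: "manual" } },
    { name: "Notes", type: "text", source: { kind: "manual" } },
  ],
  triggers: [{ kind: "time", at: "08:00" }],
};
```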

Demo Video

Funding

  • National Science Foundation under award number CHS-1652715.
  • National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2016R1A2B2007153).

Publication

OmniTrack: A Flexible Self-Tracking Approach Leveraging Semi-Automated Tracking
Young-Ho Kim,
Jae Ho Jeon,
Bongshin Lee,
Eun Kyoung Choe,
and Jinwook Seo
PACM IMWUT (UbiComp 2017) (Journal Article)
Diary Study to Investigate Knowledge Workers' Holistic Nature of Productivity
Members
Young-Ho Kim
Eun Kyoung Choe
Bongshin Lee
Jinwook Seo
Keywords
Productivity
Diary study
Qualitative Analysis
OmniTrack

Existing productivity tracking tools are usually not designed to capture the diverse and nebulous nature of individuals' activities: for example, screen time trackers such as RescueTime do not support capturing work activities that do not involve digital devices. As the distinction between work and life has become fuzzy, we need a more holistic understanding of how knowledge workers conceptualize their productivity in both work and non-work contexts. Such knowledge would inform the design of productivity tracking technologies.

We conducted a mobile diary study using OmniTrack for Research, in which participants captured their productive activities and their rationales for considering them productive. From the study, we identified six themes that participants consider when evaluating their productivity. Participants reported a wide range of productive activities beyond typical desk-bound work, from having a personal conversation with their dad to getting a haircut. We learned that the ways people assess productivity are more diverse and complex than we expected, and that the concept of productivity is highly individualized, calling for personalization and customization approaches in productivity tracking.

Funding

Publication

Understanding Personal Productivity: How Knowledge Workers Define, Evaluate, and Reflect on Their Productivity
Young-Ho Kim,
Eun Kyoung Choe,
Bongshin Lee,
and Jinwook Seo
ACM CHI 2019 (Full Paper)
TimeAware: Leveraging Framing Effects to Enhance Personal Productivity
Members
Young-Ho Kim
Jae Ho Jeon
Eun Kyoung Choe
Bongshin Lee
KwonHyun Kim
Jinwook Seo
Keywords
Productivity
Personal data visualization
desktop
ambient display
framing effects

Screen time tracking is now prevalent, but we have little knowledge of how to design effective feedback on screen time information. To help people enhance their personal productivity through effective feedback, we designed and developed TimeAware, a self-monitoring system for capturing and reflecting on personal computer usage behaviors. TimeAware employs an ambient widget to promote self-awareness and lower the feedback access burden, and a web-based information dashboard to visualize people's detailed computer usage. To examine the effect of framing on individuals' productivity, we compared two versions of TimeAware, each with a different framing setting: one emphasizing productive activities and the other emphasizing distracting activities. We found a significant effect of framing on participants' productivity: only participants in the negative framing condition improved their productivity. The ambient widget seemed to help sustain engagement with data and enhance self-awareness.
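
As a toy illustration of the two framing conditions (not TimeAware's actual code), the same usage log can be summarized either as time spent productively or as time spent on distractions:

```typescript
// Toy illustration of positive vs. negative framing over the same usage data.
interface AppUsage { app: string; minutes: number; productive: boolean }

function frameSummary(log: AppUsage[], framing: "positive" | "negative"): string {
  const total = log.reduce((sum, u) => sum + u.minutes, 0);
  const productive = log.filter(u => u.productive).reduce((sum, u) => sum + u.minutes, 0);
  if (framing === "positive") {
    return `You spent ${Math.round((productive / total) * 100)}% of your time on productive work.`;
  }
  return `You spent ${Math.round(((total - productive) / total) * 100)}% of your time on distracting activities.`;
}

// Example: 90 min in an IDE + 30 min on social media
// -> "75% productive" (positive framing) vs. "25% distracting" (negative framing).
console.log(frameSummary(
  [{ app: "IDE", minutes: 90, productive: true }, { app: "Social", minutes: 30, productive: false }],
  "negative"
));
```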

Funding

  • National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF2014R1A2A2A03006998).

Publication

TimeAware: Leveraging Framing Effects to Enhance Personal Productivity
Young-Ho Kim,
Jae Ho Jeon,
Eun Kyoung Choe,
Bongshin Lee,
KwonHyun Kim,
and Jinwook Seo
ACM CHI 2016 (Full Paper)