Human-Computer Interaction Research | CHI 2026 – Research Impact & Leadership
About
Experts 📊
Papers & Sessions 📊
Awards 🏆
Research Highlights
Faculty Voices
Faculty and Partners
ALL RESEARCH
Human-computer interaction research
at Georgia Tech focuses on experiences that empower people and transform how they live, work, and connect through technology.
Learn about new Georgia Tech research at CHI 2026
Explore Now
Georgia Tech @ CHI 2026
Georgia Tech is a leading contributor to CHI 2026, the Association for Computing Machinery’s
CHI Conference on Human Factors in Computing Systems.
CHI (pronounced kai) takes place April 13-17, 2026, in Barcelona.
GEORGIA TECH HIGHLIGHTS:
Nine award papers: 2 Best Papers, 7 Honorable Mentions
52 Papers across 41 Sessions
The College of Computing has experts from all five of its schools contributing, and it leads Tech’s contributions in the program.
Tech has 110+ experts with research.* Twenty-six experts have two or more contributions in the technical program.
EXPERTS grouped by number of contributions
(scroll to the next section to discover the people)
* White hexagons are GT experts with a dual affiliation (e.g. GT and Google)
Meet Our Experts
Discover the full range of new Georgia Tech research in human-computer interaction at CHI 2026.
Through the interactive data story, meet the faculty, students, and partners advancing the field, and explore details of their work.
Explore Now
Explore Papers by Session
Discover the Georgia Tech papers and teams by session at CHI 2026.
Clicking on a session changes the view to display the papers and people in the session.
Explore Now
Research Highlights
Transformer Explainer Shows How AI is More Math than Human
Georgia Tech researchers are making AI easier to understand through their work on Transformer Explainer. The free, online tool shows non-experts how ChatGPT, Claude, and other large language models (LLMs) process language.
Transformer Explainer is easy to use and runs on any web browser. It quickly went viral after its debut, reaching 150,000 users in its first three months. More than 563,000 people worldwide have used the tool so far.
Researchers Look to Bolster Technology Support for Menopause
Women in need of supportive maternal and menstrual healthcare in patriarchal societies have increasingly found outlets for disclosure in online communities.
That support, however, begins to disappear in these restrictive cultures once women reach menopause, according to new research from Georgia Tech.
New Study Shows Explainability is a Must for Older Adults to Trust AI
Voice-activated, conversational artificial intelligence (AI) agents must provide clear explanations for their suggestions, or older adults aren’t likely to trust them.
That’s one of the main findings from a study by AI Caring on what older adults expect from explainable AI (XAI).
This New Tool Makes AI’s Role in Student Writing Visible
DraftMarks, a new open‑source tool developed by Georgia Tech and Stanford researchers, makes the writing process itself visible.
Instead of trying to assess how much of a finished document was written by AI, DraftMarks shows where a student iterated with AI prompts, what is fully AI, and how a piece evolved — illuminating the often-invisible collaboration between human writers and AI.
Award Papers
Best Paper authors Tanmaie Kailash and Cindy Lin conducted an empirical study of data centers in Singapore. They discovered that the country’s approach to data center development diverges from the commonplace notion that data centers must rely on vast amounts of land, energy, and water. Public and private actors in Singapore strategically portray data centers as a critical resource that enables the country’s digital leadership and sustainable infrastructure, while driving its growth with a highly skilled workforce.
Best Paper author Andrea Parker, as a visiting faculty researcher at Google Research, worked with members of marginalized communities, studying how AI in health can help or hinder care for minority populations.
First authors of GT-led award papers:
Best Paper author Tanmaie Kailash with (clockwise) Honorable Mention paper authors Lingqing Wang, Alexandra Teixeira Riggs, Richmond Wong, and Grace Barkhuff.
GT-Led Research
listed by percent of GT contribution, then team size
🏅 Honorable Mention
Futuring Social Assemblages: How Enmeshing AIs into Social Life Challenges the Individual and the Interpersonal
Lingqing Wang, Yingting Gao, Chidimma Anyi, Ashok Goel
🏆 Best Paper
Localized Imaginaries, Global Assets: Sociotechnical Imaginaries and the Assetization of Data Centers in Singapore
Tanmaie Kailash, Cindy Lin
🏅 Honorable Mention
Reconfiguring through Ruptures: Material Reconfigurations and Un/Making as Tangible Tactics for Queering AI-Generated Histories
Alexandra Teixeira Riggs, Noura Howell
🏅 Honorable Mention
Situated Imaginaries: Designing AI Futures with Computer Science Teaching Assistants
Grace Barkhuff, Ian Pruitt, Vyshnavi Namani, William Johnson, Anu Bourgeois, Ellen Zegura, Rodrigo Borela, Ben Shapiro
🏅 Honorable Mention
Reflections Towards an Ecology of Internet Connectivity: Three Speculative Scenarios Involving Foot Pedals
Richmond Wong, Nick Merrill, Robert Soden
Partner-Led Research
listed by percent of GT contribution, then team size
🏅 Honorable Mention
Noondawind: Co-Designed Dashboard for Indigenous Data Access and Environmental Policy Implementation
Julia McKenna, Gabriela Buraglia, Jahanvi Kolakaluri, Rachel Baker-Ramos, Samantha Carter, Joe Graveen, Jonathan Gilbert, James Rasmussen, Brandon Byrne, Darren Vogt, Josiah Hester, Kim Marion Suiseeya, Alex Cabral
🏆 Best Paper
Promise or Peril? Exploring Black Adults’ Perspectives on the Use of Artificial Intelligence in Health Contexts
Andrea Parker, Laura Vardoulakis, Christina Harrington
🏅 Honorable Mention
Can LLM-Simulated Practice and Feedback Upskill Human Counselors? A Randomized Study with 90+ Novice Counselors
Ryan Louie, Ifdita Hasan Orney, Juan Pablo Pacheco, Raj Shah, Emma Brunskill, Diyi Yang
🏅 Honorable Mention
From Future of Work to Future of Workers: Addressing Asymptomatic AI Harms to Foster Dignified Human-AI Interaction
Upol Ehsan, Samir Passi, Koustuv Saha, Todd McNutt, Mark Riedl, Sara Alcorn
FACULTY VOICES
What’s Next for the Future of HCI
Each faculty expert is listed with their CHI 2026 research teams, one per row.
Teams are grouped by GT experts and partner experts, then sorted by number of GT experts.
My hope is that the future of HCI continues the best of its critical and reflexive practices: not to naively embrace every new technology but instead to ask substantial, and often difficult, questions about the possibilities and consequences of computational systems to society.
Carl DiSalvo
Professor,
Interactive Computing
Moving beyond productivity and efficiency, I envision human-centered technology that supports our humanity by empowering people to engage more deeply with their values, strengths, and lived experiences, while embracing diversity rather than replacing it.
Jennifer Kim
Asst. Professor,
Interactive Computing
The field of HCI continues to grow rapidly, expanding into new, specialized venues that prioritize a human-centric approach to computing. As the world contends with ever more knowns and unknowns surrounding the promises and perils of emerging technologies, HCI as a field affords the scrutiny and reflexivity that are so urgently needed.
Neha Kumar
Professor,
Interactive Computing
I envision a future of CHI that centers human agency in intelligence, embraces unconventional materials as computational possibilities, and engages broader communities as co-designers of computing systems, fostering more inclusive, imaginative, and transformative ways of relating to technology.
HyunJoo Oh
Asst. Professor,
Interactive Computing & Design
In studying the role that computing technologies should play in human life, HCI will increasingly study issues related to social values and ethics, consider systems of social power, explore new forms of communication and expression, and pursue meaningful alternatives to the status quo.
Richmond Wong
Asst. Professor,
Literature, Media, and Communication
Faculty Experts
Explore faculty by unit.
College of Computing researchers are the largest cohort of experts in the program.
Computational Science and Engineering
Duen Horng “Polo” Chau
Computer Science
Amanda Meng
Ellen Zegura
Computing Instruction
Rodrigo Borela
Cybersecurity and Privacy
Taesoo Kim
Michael Specter
Interactive Computing
Rosa Arriaga
Shaowen Bardzell
Cindy Xiong Bearfield
Munmun De Choudhury
Betsy DiSalvo
Carl DiSalvo
Alex Endert
Ashok Goel
Sehoon Ha
Josiah Hester
Sylvia Janicki
Naveena Karusala
Jennifer Kim
Neha Kumar
Cindy Lin
Christopher MacLellan
HyunJoo Oh
Andrea Parker
Mark Riedl
Alan Ritter
Jessica Roberts
Agata Rozga
Wei Xu
Yalong Yang
Literature, Media, and Communication
Noura Howell
Richmond Wong
Partner Organizations
More than 80 organizations have experts working with GT paper authors at CHI 2026.
Explore Orgs
1854 Treaty Authority ○ Aarhus University ○ Carnegie Mellon University ○ City University of New York ○ Columbia University ○ Cornell University ○ Decatur Makers ○ Duke University ○ Emory University ○ Ewha Womans University ○ Fudan University ○ Future University Hakodate ○
Georgia Tech
○ Georgia State University ○ Google ○ Great Lakes Indian Fish and Wildlife Commission ○ Gwangju Institute of Science and Technology ○ Harvard University ○ Hokkaido Information University ○ Hong Kong University of Science and Technology ○ IBM ○ Indraprastha Institute of Information Technology Delhi ○ Institut Polytechnique de Paris ○ Johns Hopkins University ○ JPMorgan Chase ○ KAIST ○ Kilimanjaro Blind Trust Africa ○ KTH Royal Institute of Technology ○ Lac du Flambeau Band of Lake Superior Chippewa Indians ○ Lahore University of Management Sciences ○ Louisiana State University ○ LPA ○ Massachusetts Institute of Technology ○ Mercari ○ Microsoft ○ Monash University ○ Nara Institute of Science and Technology ○ New York University ○ North Carolina State University ○ Northeastern University ○ Northwestern University ○ Ochanomizu University ○ Princeton University ○ Rutgers University ○ Samsung ○ Short Stature Society of Kenya ○ Simon Fraser University ○ Singapore Management University ○ Spelman College ○ Stanford University ○ Stockholm University ○ Sungkyunkwan University ○ Tsinghua University ○ Ulu Lāhui Foundation ○ Université de Toulouse ○ Université Paris-Saclay ○ University of Adelaide ○ University of Biological and Allied Sciences ○ University of California, Berkeley ○ University of California, Irvine ○ University of California, Los Angeles ○ University of California, Santa Barbara ○ University of Colorado Boulder ○ University of Illinois Chicago ○ University of Illinois Urbana-Champaign ○ University of Konstanz ○ University of Minnesota ○ University of Notre Dame ○ University of Queensland ○ University of San Francisco ○ University of South Australia ○ University of Southern Denmark ○ University of Stuttgart ○ University of Tennessee ○ University of Tokyo ○ University of Toronto ○ University of Tsukuba ○ University of Virginia ○ University of Washington ○ University of Waterloo ○ University of Wisconsin–Madison ○ Urban Institute ○ West Atlanta Watershed Alliance ○ 
Yonsei University ○
Featured Research
School of Computational Science and Engineering
Transformer Explainer Shows How AI is More Math than Human
Transformer Explainer Shows How AI is More Math than Human
By Bryant Wine
Transformer Explainer authors (l to r) Grace Kim, Alec Helbling, Aeree Cho, and Seongmin Lee, with Professor Polo Chau. Not pictured: Alex Karpekov, Ben Hoover, and Zijie (Jay) Wang.
While people use search engines, chatbots, and generative artificial intelligence tools every day, most don’t know how they work. This sets unrealistic expectations for AI and leads to misuse. It also slows progress toward building new AI applications.
Georgia Tech researchers are making AI easier to understand through their work on Transformer Explainer. The free, online tool shows non-experts how ChatGPT, Claude, and other large language models (LLMs) process language.
Transformer Explainer is easy to use and runs on any web browser. It quickly went viral after its debut, reaching 150,000 users in its first three months. More than 563,000 people worldwide have used the tool so far.
Global interest in Transformer Explainer continues as the team prepares to present the tool at the 2026 Conference on Human Factors in Computing Systems (
CHI 2026
). CHI, the world’s most prestigious conference on human-computer interaction, will take place in Barcelona, April 13-17.
“There are moments when LLMs can seem almost like a person with their own will and personality, and that misperception has real consequences. For example, there have been cases where teenagers have made poor decisions based on conversations with LLMs,” said Ph.D. student
Aeree Cho.
“Understanding that an LLM is fundamentally a model that predicts the probability distribution of the next token helps users avoid taking its outputs as absolute. What you put in shapes what comes out, and that understanding helps people engage with AI more carefully and critically.”
A transformer is a neural network architecture that transforms an input sequence of data into an output. Text, audio, and images are forms of processed data, which is why transformers are common in generative AI models. They do this by learning context and tracking mathematical relationships between sequence components.
Transformer Explainer demystifies how transformers work. The platform uses visualization and interaction to show, step by step, how text flows through a model and produces predictions.
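As Cho notes, an LLM ultimately outputs a probability distribution over possible next tokens. A minimal sketch of that final step, a softmax over the model's raw scores, using illustrative token scores rather than real model outputs:

```python
import math

def softmax(logits):
    # Convert raw scores (logits) into a probability distribution.
    # Subtracting the max is a standard numerical-stability trick.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits a model might assign to candidate next tokens
# after a prompt like "The cat sat on the" (values are made up).
logits = {"mat": 4.0, "floor": 2.5, "moon": 0.5}
probs = softmax(logits)
# The model does not "choose" creatively; the next token is sampled
# from this distribution, so likelier tokens simply win more often.
```

The token names and score values here are invented for illustration; a real model scores tens of thousands of vocabulary entries at every step.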
There are moments when LLMs can seem almost like a person with their own will and personality, and that misperception has real consequences. What you put in shapes what comes out, and that understanding helps people engage with AI more carefully and critically.
Aeree Cho
Ph.D. student, Machine Learning
Georgia Tech
Using this approach, Transformer Explainer impacts the AI landscape in four main ways:
It counters hype and misconceptions surrounding AI by showing how transformers work.
It improves AI literacy among users by removing technical barriers and lowering the barrier to entry for learning about AI.
It expands AI education by helping instructors teach AI mechanisms without extensive setup or computing resources.
It influences future development of AI tools and educational techniques by providing a blueprint for interpretable AI systems.
“When I first learned about transformers, I felt overwhelmed. A transformer model has many parts, each with its own complex math. Existing resources typically present all this information at once, making it difficult to see how everything fits together,” said Grace Kim, a dual B.S./M.S. computer science student.
“By leveraging interactive visualization, we use levels of abstraction to first show the big picture of the entire model. Then users click into individual parts to reveal the underlying details and math. This way, Transformer Explainer makes learning far less intimidating.”
Many users don’t know what transformers are or how they work. The Georgia Tech team found that people often misunderstand AI. Some label AI with human-like characteristics, such as creativity. Others even describe it as working like magic.
Furthermore, barriers make it hard for students interested in transformers to start learning. Tutorials tend to be too technical and overwhelm beginners with math and code. While visualization tools exist, these often target more advanced AI experts.
Transformer Explainer overcomes these obstacles through its interactive, user-focused platform. It runs a familiar GPT model directly in any web browser, requiring no installation or special hardware.
Users can enter their own text and watch the model predict the next word in real time. Sankey-style diagrams show how information moves through embeddings, attention heads, and transformer blocks.
The platform also lets users switch between high-level concepts and detailed math. By adjusting temperature settings, users can see how randomness affects predictions. This reveals how probabilities drive AI outputs, rather than creativity.
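The temperature control described above works by scaling the model's raw scores before they become probabilities. A small sketch of the idea, with made-up scores:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by temperature before the softmax:
    # low temperature sharpens the distribution (near-deterministic),
    # high temperature flattens it (more varied, random-seeming output).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.5, 0.5]   # hypothetical next-token scores
cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=2.0)
# cold concentrates almost all probability on the top token;
# hot spreads probability mass more evenly across all tokens.
```

This mirrors what users see in the tool: lowering temperature makes the same top token appear almost every time, while raising it lets unlikely tokens through, all of it arithmetic over probabilities rather than creativity.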
“Millions of people around the world interact with transformer-driven AI. We believe that it is crucial to bridge the gap between day-to-day user experience and the models’ technical reality, ensuring these tools are not misinterpreted as human-like or seen as sentient,” said Ph.D. student
Alex Karpekov.
“Explaining the architecture helps users recognize that language generated by models is a product of computation, leading to a more grounded engagement with the technology.”
Cho, Karpekov, and Kim led the development of Transformer Explainer. Ph.D. students Alec Helbling, Seongmin Lee, and Ben Hoover, and alumnus Zijie (Jay) Wang assisted on the project.
Professor Polo Chau supervised the group and their work. His lab focuses on data science, human-centered AI, and visualization for social good.
Acceptance at CHI 2026 stems from the team winning the best poster award at the 2024 IEEE Visualization Conference. This recognition from one of the top venues in visualization research highlights Transformer Explainer’s effectiveness in teaching how transformers work.
“Transformer Explainer has reached over half a million learners worldwide,” said Chau, a faculty member in the School of Computational Science and Engineering.
“I’m thrilled to see it extend Georgia Tech’s mission of expanding access to higher education, now to anyone with a web browser.”
School of Interactive Computing
Researchers Look to Bolster Technology Support for Menopause
Researchers Look to Bolster Technology Support for Menopause
Women in need of supportive maternal and menstrual healthcare in patriarchal societies have increasingly found outlets for disclosure in online communities.
That support, however, begins to disappear in these restrictive cultures once women reach menopause, according to new research from Georgia Tech.
By Nathan Deen
Umme Ammara
Naveena Karusala, an assistant professor in Georgia Tech’s School of Interactive Computing, and master’s student Umme Ammara are working toward improving existing technologies and designing new ones for a demographic they believe has been neglected.
Karusala and Ammara co-authored a paper based on a study they conducted with women in urban Pakistan experiencing menopause.
“Women’s health is understudied in general, but menopause is more neglected than other women’s health issues,” Karusala said. “Our choice to focus on menopause is motivated by expanding how we holistically think about women’s well-being across their lifespan.”
Karusala and Ammara will present their paper in April at the 2026 ACM Conference on Human Factors in Computing Systems (CHI) in Barcelona.
Masking Symptoms
Menopause is diagnosed after 12 consecutive months without a period, vaginal bleeding, or spotting. The transition to menopause, called perimenopause, usually happens over two to eight years.
Hormone changes may cause symptoms such as irregular periods, vaginal dryness, hot flashes, night sweats, trouble sleeping, mood swings, and brain fog.
These symptoms can be debilitating in some cases and affect daily life. However, Ammara said women are pressured to remain silent, maintain appearances, and regulate their emotions to meet social expectations.
“Understanding menopause is important because a woman would be experiencing all these symptoms, and people will not understand those as actual symptoms,” Ammara said. “There’s been resistance to the idea of the medicalization of menopause. People don’t view it as an illness, but as a life transition and something that happens naturally.”
Women’s health is understudied in general, but menopause is more neglected than other women’s health issues. Our choice to focus on menopause is motivated by expanding how we holistically think about women’s well-being across their lifespan.
Naveena Karusala
Asst. Professor, School of Interactive Computing
Georgia Tech
Feeling Isolated
The women interviewed by Karusala and Ammara either stayed at home full-time or were part of the workforce.
The researchers discovered that, for women who stay at home and do not work, trusted family members might be their only outlets for disclosure.
“Women at home have the flexibility to take breaks or work at their own pace, so a lot of their experience is shaped by the emotional barriers they face,” Ammara said.
“That could come from their husbands and family members. Some are supportive and some are not. They might weaponize it and use that term against them, or they might dismiss what they’re going through.”
Ammara said it might be easier for women in the workforce to confide in their coworkers, but explaining to an employer that they need sick leave for menopause symptoms can be intimidating.
Even in online communities that have enabled women to anonymously share their health experiences, menopause is seldom discussed.
Raising Awareness
Karusala and Ammara argue in their paper that a public health approach could be the most effective way to spark conversation about menopause in a patriarchal culture in which technology use varies.
They said the challenge in implementing technologies geared toward menopause support is that the condition isn’t well understood in public. Improving maternal health, for example, is easier to promote within these societies because of the general understanding that motherhood is important.
“There must be an existing infrastructure to build on,” Karusala said. “For example, menstrual and maternal health are taught in schools and regularly discussed in primary care. Cultural and social meaning and importance are placed on motherhood.
“A lot of that doesn’t exist for menopause. Primary care doctors are unprepared to talk about menopause compared to other health issues.”
Design Solutions
Ammara said that the most effective way for technologies to make an impact on women going through menopause is to directly address systemic power structures around women’s health within Pakistani culture.
It can start with the husbands.
“Framing the issue for husbands to understand menopause should be at the forefront of designing technology solutions,” she said.
“In Islamic contexts, we suggest using faith-based framings. This has been proposed for maternal health in prior works that draw on Islamic principles to engage expectant fathers in providing care and support. Framing it around religious responsibility to involve men in the journey can also be done for menopause.”
School of Interactive Computing
New Study Shows Explainability is a Must for Older Adults to Trust AI
New Study Shows Explainability is a Must for Older Adults to Trust AI
Voice-activated, conversational artificial intelligence (AI) agents must provide clear explanations for their suggestions, or older adults aren’t likely to trust them.
That’s one of the main findings from a study by AI Caring on what older adults expect from explainable AI (XAI).
By Nathan Deen
Niharika Mathur
AI Caring is one of three AI institutes led by Georgia Tech and funded by the National Science Foundation (NSF). The institute supports AI research that benefits older adults and their caregivers.
Niharika Mathur
, a Ph.D. candidate in the School of Interactive Computing, was the lead author of a paper based on the study. The paper will be presented in April at the 2026 ACM Conference on Human Factors in Computing Systems (CHI) in Barcelona.
Mathur worked with the Cognitive Empowerment Program at Emory University to interview 23 older adults who live alone and use voice-activated AI assistants like Amazon’s Alexa and Google Home.
Many of them told her they feel excluded from the design of these products.
“The assumption is that all people want interactions the same way and across all kinds of situations, but that isn’t true,” Mathur said. “How older people use AI and what they want from it are different from what younger people prefer.”
One example she gave is that young people tend to be informal when talking with AI. Older people, on the other hand, talk to the agent like they would a person.
“If older adults are talking to their family members about Alexa, they usually refer to Alexa as ‘she’ instead of ‘it,’” Mathur said. “They tend to humanize these systems a lot more than young people.”
Good Explanations
The study evaluated AI explanations that drew information from four sources of data:
User history (past conversations with the agent)
Environmental data (indoor temperature or the weather forecast)
Activity data (how much time a user spends in different areas of the home)
Internal reasoning (mathematical probabilities and likely outcomes)
Mathur said older users trust the agent more when it bases its explanations on data from the first three sources. However, internal reasoning creates skepticism.
Internal reasoning means the AI doesn’t have enough data from the other sources to give an explanation. It provides a percentage to reflect its confidence based on what it knows.
“The overwhelming response was negative toward confidence scores,” Mathur said. “If the AI says it’s 92% confident, older adults want to know what that’s based on.”
This is another example that Mathur said points to generational preferences.
“There’s a lot of explainable AI research that shows younger people like to see numbers in explanations, and they also tend to rely too much on explanations that contain numerical confidence. Older adults are the opposite. It makes them trust it less.”
There’s a lot of explainable AI research that shows younger people like to see numbers in explanations, and they also tend to rely too much on explanations that contain numerical confidence. Older adults are the opposite. It makes them trust it less.
Niharika Mathur
Ph.D. candidate, School of Interactive Computing
Georgia Tech
Knowing the Context
She discovered that in urgent situations, older users prefer the AI to be straightforward, while in casual settings, they desire more conversation.
“How people interact with technological systems is grounded in what the stakes of the situation are,” she said. “If it had anything to do with their immediate sense of safety, they did not want conversational elaboration. They want the AI to be very direct and factual.”
Not Just Checking Boxes
Mathur said AI agents that interact with older adults are ideally constructed with a dual purpose. They should provide companionship and autonomy for the users while alleviating the burden of caretaking that is often placed on their family members.
Some studies have shown that engineers have tended to favor caretakers in the design of these tools. They prioritize daily tasks and routines, leaving some older adults to feel like they are merely a box to be checked.
“They’re not being thought of as consumers,” Mathur said. “A lot of products are being made for them but not with them.”
She also said psychological well-being is one of the most important outcomes these tools should produce.
Showing older adults that they are listened to can significantly help in gaining their trust. Some interviewees told Mathur they want agents who are deliberate about understanding their preferences and don’t dismiss their questions.
Meeting these needs reduces the likelihood that older adults will resist the technology or come into conflict with family members.
“It highlights just how important well-designed explanations are,” she said. “We must go beyond a transparency checklist.”
RESEARCH
Papers (52)
1. Track: Affective Agents & Reflective Data
Informal Embodied Auditing: Exploring Facial Emotion AI (FEAI) through Community Workshops
Xingyu Li, Alexandra Teixeira Riggs, Zhiming Dai, Crystal Farmer, Kalia Morrison, Noura Howell
2. Track: AI & Data Visualization
Transformer Explainer: Learning LLM Transformers with Interactive Visual Explanation and Experimentation
Aeree Cho, Grace Kim, Alexander Karpekov, Seongmin Lee, Alec Helbling, Benjamin Hoover, Zijie Wang, Minsuk Kahng, Duen Horng (Polo) Chau
3. Track: AI Collaboration in Practice
Can LLM-Simulated Practice and Feedback Upskill Human Counselors? A Randomized Study with 90+ Novice Counselors
Ryan Louie, Ifdita Hasan Orney, Juan Pablo Pacheco, Raj Shah, Emma Brunskill, Diyi Yang
4. Track: AI for Task Augmentation
LL.me: Supporting Identity Work through Human-AI Alignment
Kaely Hall, Max Ohsawa, Vedant Das Swain, Jennifer Kim
5. Track: AI Governance and Accountability
“It just requires so much more creativity”: Barriers and Workarounds to Gathering Information for AI Contestation
Sohini Upadhyay, Dasha Pruss, Alicia DeVrio, Krzysztof Gajos, Naveena Karusala
6. Track: Algorithmic Power, Justice and Repression
Tooling Justice: Articulating Equity Work Through Design Toolkits
Adrian Petterson, Carolyn Ly, Trevor Cross, Richmond Wong, Priyank Chandra
7. Track: BIPOC Sovereignty and Care
Noondawind: Co-Designed Dashboard for Indigenous Data Access and Environmental Policy Implementation
Julia McKenna, Gabriela Buraglia, Jahanvi Kolakaluri, Rachel Baker-Ramos, Samantha Carter, Joe Graveen, Jonathan Gilbert, James Rasmussen, Brandon Byrne, Darren Vogt, Josiah Hester, Kim Marion Suiseeya, Alex Cabral
8. Track: BIPOC Sovereignty and Care
Whose Knowledge Counts? Co-Designing Community-Centered AI Auditing Tools with Educators in Hawaiʻi
Dora Zhao, Hannah Cha, Michael Ryan, Angelina Wang, Rachel Baker-Ramos, Evyn-Bree Helekahi-Kaiwi, Rebecca Diego, Josiah Hester, Diyi Yang
9. Track: Bodies, Care & More Than Human Places
Toxic Speculations: A Crip Posthuman Fabulation of Living in a Permanently Polluted World
Sylvia Janicki, Heidi Biggs, Noura Howell
10. Track: Co-Design and Collaboration
Engaging Communities Meaningfully in Defining Disability Representation for AI Image Generation
Anja Thieme, Rita Faia Marques, Martin Grayson, Sidhika Balachandar, Cameron Cassidy, Madiha Zahrah Choksi, Camilla Longden, Reeda Shimaz Huda, Nicholas Kalovwe, Christina Mallon, Courtney Mansperger, Daniela Massiceti, Bhaskar Mitra, Ruth Mueni Nzioka, Ioana Tanase, Yuzhe You, Cecily Morrison
11. Track: Community Governance and Moderation
Governing Together: Toward Infrastructure for Community-Run Social Media
Sohyeon Hwang, Sophie Rollins, Thatiany Andrade Nunes, Yuhan Liu, Richmond Wong, Aaron Shaw, Andrés Monroy-Hernández
12. Track: Context-specific Studies and Perspectives
Civic Data at the Seams
Ashley Boone, Na’Taki Osborne Jelks, Quanda Spencer, Destinee Whitaker, Carl DiSalvo, Christopher Le Dantec
13. Track: Context-specific Studies and Perspectives
Localized Imaginaries, Global Assets: Sociotechnical Imaginaries and the Assetization of Data Centers in Singapore
Tanmaie Kailash, Cindy Lin
14. Track: Context-specific Studies and Perspectives
Reflections Towards an Ecology of Internet Connectivity: Three Speculative Scenarios Involving Foot Pedals
Richmond Wong, Nick Merrill, Robert Soden
15. Track: Critical Reflections on AI
AI as We Describe It: How Large Language Models and Their Applications in Health are Represented Across Channels of Public Discourse
Jiawei Zhou, Lei Zhang, Mei Li, Benjamin Horne, Munmun De Choudhury
16. Track: Designing with Older Adults
Sometimes You Need Facts, and Sometimes a Hug: Understanding Older Adults’ Preferences for Explanations in LLM-Based Conversational AI Systems
Niharika Mathur, Tamara Zubatiy, Agata Rozga, Jodi Forlizzi, Elizabeth Mynatt
17. Track: Designing XR Interaction
To Slide or Not To Slide: Exploring Techniques for Comparing Immersive Videos
Xizi Wang, Yue Lyu, Yalong Yang, Jian Zhao
18. Track: Ecological HCI and Urbanism
ContAQT: Designing an Interactive Data Display to Make Multi-Pollutant Air Quality Data Accessible
Yixuan Li, Jordan Hill, Zeyu Hua, Seik Oh, Yuhan Wang, Alex Endert, Jessica Roberts
19. Track: Ecological HCI and Urbanism
Everyday Design with Surrounds: Rehearsing Alternatives Amid Urban Sociotechnical Changes
Alex Jiahong Lu, Yuchen Chen, Cindy Lin
20. Track: Ecological HCI and Urbanism
Whose Data Builds the City? Critical Data Practices for Socio-Environmentally Just Urbanization
Vishal Sharma, Anjali Karol Mohan, Neha Kumar
21. Track: Education
“I just have faith in my wallet to not mismanage my crypto”: Investigating Changes in Users’ Security Perceptions Post-FTX Collapse
Mingyi Liu, Nivedita Singh, Jun Ho Huh, Hyoungshick Kim, Taesoo Kim
22. Track: Educational Support
Beyond Claiming Sovereign AI: Motivations, Challenges, and Contradictions in Developing and Deploying Local Foundation Models in South Korea
Inha Cha, Richmond Wong
23. Track: Emergency and Serious Illness Care
Human-centered Perspectives on a Clinical Decision Support System for Intensive Outpatient Veteran PTSD Care
Cynthia Baseman, Myeonghan Ryu, Nathaniel Swinger, Kefan Xu, Andrew Sherrill, Rosa Arriaga
24. Track: Envisioning the Future
From Future of Work to Future of Workers: Addressing Asymptomatic AI Harms to Foster Dignified Human-AI Interaction
Upol Ehsan, Samir Passi, Koustuv Saha, Todd McNutt, Mark Riedl, Sara Alcorn
25. Track: Envisioning the Future
Whose Time Counts? Temporal Arrangements in Sociotechnical Infrastructures
Catherine Wieczorek, Anh-Ton Tran, Cindy Lin, Laura Forlano, Carl DiSalvo, Shaowen Bardzell
26. Track: Expression and Affective Wellbeing
Why stressed, Mom?: Exploring Family Reflection on Social and Emotional Sensor Data through Family Informatics
Hyesoo Park, Sueun Jang, Hyunsoo Lee, Jennifer Kim, Uichin Lee
27. Track: Generative AI in Design and Practice
The Promises and Perils of using LLMs for Effective Public Services
Erina Seh-Young Moon, Matthew Tamura, Angelina Zhai, Nuzaira Habib, Behnaz Shirazi, Altaf Kassam, Devansh Saxena, Shion Guha
28. Track: Generative AI in Design and Practice
The Values of Value in AI Adoption: Rethinking Efficiency in UX Designers’ Workplaces
Inha Cha, Catherine Wieczorek, Richmond Wong
29. Track: Health Equity and Underserved Populations
Designing with Medical Mistrust: Perspectives from Black Older Adults in Publicly Subsidized Housing
Cynthia Baseman, Reeda Shimaz Huda, Rosa Arriaga
30. Track: Health Equity and Underserved Populations
Promise or Peril? Exploring Black Adults’ Perspectives on the Use of Artificial Intelligence in Health Contexts
Andrea Parker*, Laura Vardoulakis, Christina Harrington
31. Track: Heritage, Memory, & Speculative Narratives
Reconfiguring through Ruptures: Material Reconfigurations and Un/Making as Tangible Tactics for Queering AI-Generated Histories
Alexandra Teixeira Riggs, Noura Howell
32. Track: Human Behavior with AI Systems
Behavioral Indicators of Overreliance During Interaction with Conversational Language Models
Chang Liu, Qinyi Zhou, Xinjie Shen, Xingyu Liu, Tongshuang Wu, Xiang ‘Anthony’ Chen
33. Track: Inferring Human State
DraftMarks: Enhancing Transparency in Human-AI Co-Writing Through Interactive Skeuomorphic Process Traces
Momin Siddiqui, Nikki Nasseri, Adam Coscia, Roy Pea, Hariharan Subramonyam
34. Track: Interactive Design Systems for Fabrication
MotionSmith: A Sketch-Based Design System for Automata Making
DoangJoo “Alan” Synn, Zhifan Guo, Sehoon Ha, HyunJoo Oh
35. Track: Learning, Training, and Self-Dev with AI
Exploring Teacher-Chatbot Interaction and Affect in Block-Based Programming
Bahare Riahi, Ally Limke, Xiaoyi Tian, Viktoriia Storozhevykh, Sayali Patukale, Tahreem Yasir, Khushbu Singh, Jennifer Chiu, Nicholas Lytle, Tiffany Barnes, Veronica Catete
36. Track: Learning, Training, and Self-Dev with AI
Situated Imaginaries: Designing AI Futures with Computer Science Teaching Assistants
Grace Barkhuff, Ian Pruitt, Vyshnavi Namani, William Johnson, Anu Bourgeois, Ellen Zegura, Rodrigo Borela, Ben Shapiro
37. Track: Methodological Foundations
Challenges in Synchronous & Remote Collaboration Around Visualization
Matthew Brehmer, Maxime Cordeil, Christophe Hurter, Takayuki Itoh, Wolfgang Büschel, Mahmood Jasim, Arnaud Prouzeau, David Saffo, Lyn Bartram, Sheelagh Carpendale, Chen Zhu-Tian, Andrew Cunningham, Tim Dwyer, Samuel Huron, Masahiko Itoh, Alark Joshi, Kiyoshi Kiyokawa, Hideaki Kuzuoka, Bongshin Lee, Gabriela Molina León, Harald Reiterer, Bektur Ryskeldiev, Jonathan Schwabish, Brian Smith, Yasuyuki Sumi, Ryo Suzuki, Anthony Tang, Yalong Yang, Jian Zhao
38. Track: Negotiating Health, Identity, and Belief
More than Decision Support: Exploring Patients’ Longitudinal Usage of Large Language Models in Real-World Healthcare Settings
Yancheng Cao, Yishu Ji, Yue Fu, Sahiti Dharmavaram, Meghan Turchioe, Natalie Benda, Lena Mamykina, Yuling Sun, Xuhai “Orson” Xu
39. Track: Online Cultures and Creator Economies
“I Am My Own Bot”: Everyday Resistance in Online Fashion Resale
Sara Milkes Espinosa, Carl DiSalvo
40. Track: Privacy Risks and Perceptions
Supporting Informed Self-Disclosure: Design Recommendations for Presenting AI-Estimates of Privacy Risks to Users
Isadora Krsek, Meryl Ye, Wei Xu, Alan Ritter, Laura Dabbish, Sauvik Das
41. Track: Privacy, Health and Gender
(Re)mediators of Epistemic Injustice: Generative AI and Hermeneutic Resource Provision in Intimate Partner Violence
Jasmine Foriest, Leah Ajmani, Munmun De Choudhury
42. Track: Qualitative Method Reflection and Tools
Does a Picture Paint a Thousand Words? Using Visual and Textual Channels to Understand Attitudes and Beliefs
Shiyao Li, Roshini Deva, Arpit Narechania, Alireza Karduni, Cindy Xiong Bearfield, Emily Wall
43. Track: Reflecting on Haptics
From Daily Song to Daily Self: Supporting Emotional Growth of Deaf and Hard-of-Hearing Individuals through Generative AI Songwriting
Youjin Choi, JinYoung Yoo, JaeYoung Moon, Yoonjae Kim, Eun Young Lee, Jennifer Kim, Jin-Hyuk Hong
44. Track: Romance and Relationships in the Age of AI
Futuring Social Assemblages: How Enmeshing AIs into Social Life Challenges the Individual and the Interpersonal
Lingqing Wang, Yingting Gao, Chidimma Anyi, Ashok Goel
45. Track: Sexual and Reproductive Health Tech
Designing Around Stigma: Human-Centered LLMs for Menstrual Health
Amna Shahnawaz, Ayesha Shafique, Ding Wang, Maryam Mustafa
46. Track: Sexual and Reproductive Health Tech
Ecological Systems Theory for Studying and Designing Menstrual Technologies
Anupriya Tuli, Madeline Balaam, Pushpendra Singh, Neha Kumar, Airi Lampinen
47. Track: Sound, Music, and Dance Accessibility
Designing a Generative AI-Assisted Music Psychotherapy Tool for Deaf and Hard-of-Hearing Individuals
Youjin Choi, JaeYoung Moon, JinYoung Yoo, Jennifer Kim, Jin-Hyuk Hong
48. Track: Textiles & Sound
3D Printing Soap: Exploring New Biodegradable Materials and Creative Possibilities
Jing Xie, Yingting Gao, Jin Yu, Tingyu Cheng, HyunJoo Oh
49. Track: Trust and Transparency in Everyday Life
Active and Passive Decisions: How Ethical Choices Are Made (and Missed) in NLP Research
Kayla Uleah, Betsy DiSalvo, Amanda Meng
50. Track: Video Presentations
Uncovering Relationships Between Android Developers, User Privacy, and Developer Willingness to Reduce Fingerprinting Risks
Alex Berke, Güliz Seray Tuncay, Michael Specter, Mihai Christodorescu
51. Track: Video Presentations
When Should Users Check? Modeling Confirmation Frequency in Multi-Step Agentic AI Tasks
Jieyu Zhou, Aryan Roy, Sneh Gupta, Daniel Weitekamp, Christopher MacLellan
52. Track: Women’s Health and Safety
Unpacking Space, Place, and Labor in Experiences of Menopause
Umme Ammara, Fozia Umber Qureshi, Naveena Karusala
Posters
1. Track: Posters
“Who wants to be nagged by AI?“: Investigating the Effects of Agreeableness on Older Adults’ Perception of LLM-Based Voice Assistants’ Explanations
Niharika Mathur, Hasibur Rahman, Smit Desai
2. Track: Posters
Bimanual Mid-Air Multi-Object Manipulation of Graph Data with Gesture Phrasing
Pantea Habibi, Debaleena Chattopadhyay
3. Track: Posters
Break the Window: Exploring Spatial Decomposition of Webpages in XR
Chenyang Zhang, Tianjian Wei, Haoyang Yang, Mar Gonzalez-Franco, Yalong Yang, Eric Gonzalez
4. Track: Posters
Cortex-Canvas: An Interactive Web Interface for Executing and Evaluating Models of Category-Selective Regions in Human Visual Cortex
Ruolin Wang, Yuxuan Li, Mayukh Deb, Kushal Dudipala, Kruthik Ravikanti, Sanjana Chillarege, Arya Bhanushali, Ranjani Koushik, Aashraya Katiyar, N Apurva Ratan Murty
5. Track: Posters
Input–Envelope–Output: Auditable Generative Music Rewards in Sensory-Sensitive Context
Cong Ye, Songlin Shang, Xiaoxu Ma, Xiangbo Zhang
6. Track: Posters
Reassurance Robots: OCD in the Age of Generative AI
Grace Barkhuff
Demos
1. Track: Demos
Interactive 3D-Printed Soap: Exploring Biodegradable and Ephemeral Material Interactions
Jing Xie, Yingting Gao, Jin Yu, Tingyu Cheng, HyunJoo Oh
2. Track: Demos
InterYard: Investigating E-Plants as Mediators for Attuning Human to Biorhythms in Future Cities
Sihan Wang, Chenchen Chu, Yuqing Wu, Yifan Wang, Yunyang Di, Quan Li
3. Track: Demos
Magical Touch: Transforming Raw Capacitive Streams into Expressive Hand-Touchscreen Interaction
Yuanlei Guo, Xizi Gong, Yizhong Zhang, Xiaoyu Zhang
Meet-Ups
1. Track: Meet-Ups
How Could AI Supply Chain Research Shape HCI Inquiries And Vice-Versa?
Inha Cha, David Gray Widder, Blair Attard-Frost, Jatinder Singh, Agathe Balayn
2. Track: Meet-Ups
RAI@CHI: Responsible and Human Centred AI Across Borders
Neelima Sailaja, Joel Fischer, Rishub Jain, Neha Kumar, Simone Stumpf, Min Kyung Lee, Heloisa Candello, Raquel Iniesta
3. Track: Meet-Ups
What Comes After Research? Exploring Alternative Research Outcomes in HCI
MinYoung Yoo, Sophia Ppali, Catherine Wieczorek, Hayoun Noh, Seung Hyeon Han, Anna Carter, Alexandra Teixeira Riggs, Nava Haghighi, Yvon Ruitenburg, William Odom
Workshops
1. Track: Workshops
CHIdeology: Disentangling the fragmented politics, values and imaginaries of Human-Computer Interaction through ideologies
Felix Epp, Matti Nelimarkka, Jesse Haapoja, Pedro Ferreira, Os Keyes, Shaowen Bardzell
2. Track: Workshops
Crip HCI: Cyborg Perspectives on Disability Justice
Christoph Becker, Laura Forlano, Beatrice Vincenzi, Franzisca Maas, Alesandra Baca-Vázquez, Casey Fiesler, Rua Williams
3. Track: Workshops
Cultivating Pedagogies for Post-Growth HCI
Vishal Sharma, Hongjin Lin, Jasmine Lu, Han Qiao, Asra Sakeen Wani, Christina Bremer, Philip Engelbutzeder, Christoph Becker, Neha Kumar, Rikke Hagensby Jensen, Anupriya Tuli
4. Track: Workshops
Designing with forest stories to explore what it might mean for forest-related technologies to “get it right”
Ferran Altarriba Bertran, Heidi Biggs, Oğuz ‘Oz’ Buruk, Angella Mackey, William Odom, Oscar Tomico
5. Track: Workshops
From Papers to the Real World: Making Fabrication Research Matter
Hyunyoung Kim, Daniel Ashbrook, Andrea Bianchi, Jack Forman, DPV Joseph Jayakody, Sara Nabil, HyunJoo Oh, Thomas Pietrzak, Thijs Roumen, Valkyrie Savage, Lining Yao, Clement Zheng
6. Track: Workshops
Human-Centered Explainable AI (HCXAI): Re-examining XAI in the Era of Agentic AI
Upol Ehsan, Amal Alabdulkarim, Kenneth Holstein, Min Kyung Lee, Andreas Riener, Justin Weisz
7. Track: Workshops
The Quality of Speculation: Common Ground for Speculative Design in Human-Computer Interaction?
Ronda Ringfort-Felner, Judith Dörrenbächer, Chris Elsden, James Auger, James Pierce, Richmond Wong, Marc Hassenzahl
8. Track: Workshops
Third Workshop on Human-Centered Evaluation and Auditing of Language Models: AI Agents-in-the-Loop
Willem van der Maden, Wesley Deng, Yu Lu Liu, Han Jiang, Valerie Chen, Haotian Li, Juho Kim, Q. Vera Liao, Wei Xu, Motahhare Eslami, Ziang Xiao
See you in Barcelona!
Development:
College of Computing
Project and Web Lead/Data Graphics:
Joshua Preston
Web Support:
Joni Isbell
Featured Research News:
Nathan Deen, Bryant Wine
Featured Photography:
Terence Rushin
Data:
Licensed under:
CC BY-NC-SA 4.0