


Sunnie S. Y. Kim
Person information
- affiliation: Princeton University, NJ, USA
- affiliation: Toyota Technological Institute, Chicago, IL, USA
- 2025
[c15]Indu Panigrahi, Sunnie S. Y. Kim, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, Ruth Fong, Parastoo Abtahi:
Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations. CHI Extended Abstracts 2025: 350:1-350:9
[c14]Sunnie S. Y. Kim, Jennifer Wortman Vaughan, Q. Vera Liao, Tania Lombrozo, Olga Russakovsky:
Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies. CHI 2025: 420:1-420:19
[c13]Allison Chen, Sunnie S. Y. Kim, Amaya Dharmasiri, Olga Russakovsky, Judith E. Fan:
Portraying Large Language Models as Machines, Tools, or Companions Affects What Mental Capacities Humans Attribute to Them. CHI Extended Abstracts 2025: 440:1-440:14
[c12]Allison Chen, Sunnie S. Y. Kim, Amaya Dharmasiri, Olga Russakovsky, Judith E. Fan:
Portraying Large Language Models as Machines, Tools, or Companions Affects What Mental Capacities People Attribute to Them. CogSci 2025
[i20]Sunnie S. Y. Kim, Jennifer Wortman Vaughan, Q. Vera Liao, Tania Lombrozo, Olga Russakovsky:
Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies. CoRR abs/2502.08554 (2025)
[i19]Indu Panigrahi, Sunnie S. Y. Kim, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, Ruth Fong, Parastoo Abtahi:
Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations. CoRR abs/2504.10745 (2025)
[i18]Wesley Hanwen Deng, Sunnie S. Y. Kim, Akshita Jha, Ken Holstein, Motahhare Eslami, Lauren Wilcox, Leon A Gatys:
PersonaTeaming: Exploring How Introducing Personas Can Improve Automated AI Red-Teaming. CoRR abs/2509.03728 (2025)
[i17]Lujain Ibrahim, Katherine M. Collins, Sunnie S. Y. Kim, Anka Reuel, Max Lamparth, Kevin Feng, Lama Ahmad, Prajna Soni, Alia El Kattan, Merlin Stein, Siddharth Swaroop, Ilia Sucholutsky, Andrew Strait, Q. Vera Liao, Umang Bhatt:
Measuring and mitigating overreliance is necessary for building human-compatible AI. CoRR abs/2509.08010 (2025)
[i16]Dalia Ali, Muneeb Ahmed, Hailan Wang, Arfa Khan, Naira Paola Arnez Jordan, Sunnie S. Y. Kim, Meet Dilip Muchhala, Anne Kathrin Merkle, Orestis Papakyriakopoulos:
AI Adoption Across Mission-Driven Organizations. CoRR abs/2510.03868 (2025)
[i15]Allison Chen, Sunnie S. Y. Kim, Angel Franyutti, Amaya Dharmasiri, Kushin Mukherjee, Olga Russakovsky, Judith E. Fan:
Presenting Large Language Models as Companions Affects What Mental Capacities People Attribute to Them. CoRR abs/2510.18039 (2025)
- 2024
[c11]Sunnie S. Y. Kim:
Establishing Appropriate Trust in AI through Transparency and Explainability. CHI Extended Abstracts 2024: 433:1-433:6
[c10]Upol Ehsan, Elizabeth Anne Watkins, Philipp Wintersberger, Carina Manger, Sunnie S. Y. Kim, Niels van Berkel, Andreas Riener, Mark O. Riedl:
Human-Centered Explainable AI (HCXAI): Reloading Explainability in the Era of Large Language Models (LLMs). CHI Extended Abstracts 2024: 477:1-477:6
[c9]Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, Jennifer Wortman Vaughan:
"I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. FAccT 2024: 822-835
[c8]Giang Nguyen, Mohammad Reza Taesiri, Sunnie S. Y. Kim, Anh Totti Nguyen:
Allowing Humans to Interactively Guide Machines Where to Look Does Not Always Improve Human-AI Team's Classification Accuracy. XAI4CV 2024: 8169-8175
[i14]Giang Nguyen, Mohammad Reza Taesiri, Sunnie S. Y. Kim, Anh Nguyen:
Allowing humans to interactively guide machines where to look does not always improve human-AI team's classification accuracy. CoRR abs/2404.05238 (2024)
[i13]Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, Jennifer Wortman Vaughan:
"I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. CoRR abs/2405.00623 (2024)
- 2023
[c7]Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández:
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction. CHI 2023: 250:1-250:17
[c6]Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky:
Overlooked Factors in Concept-Based Explanations: Dataset Choice, Concept Learnability, and Human Capability. CVPR 2023: 10932-10941
[c5]Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández:
Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application. FAccT 2023: 77-88
[i12]Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky:
UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs. CoRR abs/2303.15632 (2023)
[i11]Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández:
Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application. CoRR abs/2305.08598 (2023)
[i10]Doris Antensteiner, Marah Halawa, Asra Aslam, Ivaxi Sheth, Sachini Herath, Ziqi Huang, Sunnie S. Y. Kim, Aparna Akula, Xin Wang:
WiCV@CVPR2023: The Eleventh Women In Computer Vision Workshop at the Annual CVPR Conference. CoRR abs/2309.12768 (2023)
- 2022
[c4]Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky:
HIVE: Evaluating the Human Interpretability of Visual Explanations. ECCV (12) 2022: 280-298
[i9]Vikram V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth Fong, Olga Russakovsky:
ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features. CoRR abs/2206.07690 (2022)
[i8]Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky:
Overlooked factors in concept-based explanations: Dataset choice, concept salience, and human capability. CoRR abs/2207.09615 (2022)
[i7]Sunnie S. Y. Kim, Elizabeth Anne Watkins, Olga Russakovsky, Ruth Fong, Andrés Monroy-Hernández:
"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction. CoRR abs/2210.03735 (2022)
- 2021
[c3]Pedro Savarese, Sunnie S. Y. Kim, Michael Maire, Greg Shakhnarovich, David McAllester:
Information-Theoretic Segmentation by Inpainting Error Maximization. CVPR 2021: 4029-4039
[c2]Vikram V. Ramaswamy, Sunnie S. Y. Kim, Olga Russakovsky:
Fair Attribute Classification Through Latent Space De-Biasing. CVPR 2021: 9301-9310
[i6]Sunnie S. Y. Kim, Sharon Zhang, Nicole Meister, Olga Russakovsky:
[Re] Don't Judge an Object by Its Context: Learning to Overcome Contextual Bias. CoRR abs/2104.13582 (2021)
[i5]Vivien Nguyen, Sunnie S. Y. Kim:
Cleaning and Structuring the Label Space of the iMet Collection 2020. CoRR abs/2106.00815 (2021)
[i4]Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky:
HIVE: Evaluating the Human Interpretability of Visual Explanations. CoRR abs/2112.03184 (2021)
- 2020
[c1]Sunnie S. Y. Kim, Nicholas I. Kolkin, Jason Salavon, Gregory Shakhnarovich:
Deformable Style Transfer. ECCV (26) 2020: 246-261
[i3]Sunnie S. Y. Kim, Nicholas I. Kolkin, Jason Salavon, Gregory Shakhnarovich:
Deformable Style Transfer. CoRR abs/2003.11038 (2020)
[i2]Vikram V. Ramaswamy, Sunnie S. Y. Kim, Olga Russakovsky:
Fair Attribute Classification through Latent Space De-biasing. CoRR abs/2012.01469 (2020)
[i1]Pedro Savarese, Sunnie S. Y. Kim, Michael Maire, Greg Shakhnarovich, David McAllester:
Information-Theoretic Segmentation by Inpainting Error Maximization. CoRR abs/2012.07287 (2020)

last updated on 2026-02-03 23:45 CET by the dblp team
all metadata released as open data under CC0 1.0 license









