
AI seems to be well on its way to becoming pervasive. You hear rumbles of AI being used, somewhere behind the scenes, at your doctor’s office. You suspect it may have played a role in hiring decisions during your last job search. Sometimes – maybe even often – you use it yourself.
And yet, while AI now influences high-stakes decisions such as what kinds of medical care people receive, who gets hired and what news people see, these decisions are not always made equitably. Research has shown that algorithmic bias often harms marginalized groups. Facial recognition systems often misclassify transgender and nonbinary people, AI used in law enforcement can lead to the unwarranted arrest of Black people at disproportionately high rates, and algorithmic diagnostic systems can prevent disabled people from accessing necessary health care.
These inequalities raise a question: Do gender and racial minorities and disabled people have more negative attitudes toward AI than the general U.S. population?
I’m a social computing scholar who studies how marginalized people and communities use social technologies. In a new study, my colleagues Samuel Reiji Mayworm, Alexis Shore Ingber, Nazanin Andalibi and I surveyed over 700 people in the U.S., including a nationally representative sample and an intentional oversample of trans, nonbinary, disabled and racial minority individuals. We asked participants about their general attitudes toward AI: whether they believed it would improve their lives or work, whether they viewed it positively, and whether they expected to use it themselves in the future.
The results reveal a striking divide. Transgender, nonbinary and disabled participants reported, on average, significantly more negative attitudes toward AI than their cisgender and nondisabled counterparts. These results indicate that when gender minorities and disabled people are required to use AI systems, such as in workplace or health care settings, they may be doing so while harboring serious concerns or hesitations. These findings challenge the prevailing tech industry narrative that AI systems are inevitable and will benefit everyone.
Public perception plays a powerful role in shaping how AI is developed, adopted and regulated. The vision of AI as a social good falls apart if it mostly benefits those who already hold power. When people are required to use AI while simultaneously disliking or distrusting it, it can limit participation, erode trust and compound inequities.
Gender, disability and AI attitudes
Nonbinary people in our study had the most negative AI attitudes. Transgender people overall, including trans men and trans women, also expressed significantly more negative attitudes toward AI. Among cisgender people – those whose gender identity matches the sex they were assigned at birth – women reported more negative attitudes than men, a trend that echoes previous research. Our study adds an important dimension by examining nonbinary and trans people's attitudes as well.
Disabled participants also had significantly more negative views of AI than nondisabled participants, particularly those who are neurodivergent or have mental health conditions.
These findings are consistent with a growing body of research showing how AI systems often misclassify, perpetuate discrimination toward or otherwise harm trans and disabled people. In particular, identities that defy categorization clash with AI systems that are inherently designed to reduce complexity into rigid categories. In doing so, AI systems simplify identities and can replicate and reinforce bias and discrimination – and people notice.
A more complex picture for race
In contrast to our findings about gender and disability, we found that people of color, and Black participants in particular, held more positive views toward AI than white participants. This is a surprising and complex finding, considering that prior research has extensively documented racial bias in AI systems, from discriminatory hiring algorithms to disproportionate surveillance.
Our results do not suggest that AI is working well for Black communities. Rather, they may reflect a pragmatic or hopeful openness to technology’s potential, even in the face of harm. Future research might qualitatively examine Black individuals’ ambivalent balance of critique and optimism around AI.

Policy and technology implications
If marginalized people don’t trust AI – and for good reason – what can policymakers and technology developers do?
First, provide an option for meaningful consent. This would give everyone the opportunity to decide whether and how AI is used in their lives. Meaningful consent would require employers, health care providers and other institutions to disclose when and how they are using AI and provide people with real opportunities to opt out without penalty.
Next, provide data transparency and privacy protections. These protections would help people understand where the data that informs AI systems comes from, what happens to their data after an AI system collects it, and how their data will be protected. Data privacy is especially critical for marginalized people who have already experienced algorithmic surveillance and data misuse.
Further, when building AI systems, developers can take extra steps to test and assess impacts on marginalized groups. This may involve participatory approaches that include affected communities in AI system design. If a community says no to AI, developers should be willing to listen.
Finally, I believe it’s important to recognize what negative AI attitudes among marginalized groups tell us. When the people at highest risk of algorithmic harm, such as trans people and disabled people, are also those most wary of AI, that’s an indication for AI designers, developers and policymakers to reassess their efforts. I believe that a future built on AI should account for the people the technology puts at risk.
Oliver L. Haimson receives funding from the National Science Foundation.