When it comes to most new technologies, early adopters tend to be the people who know and understand the tools the best.
With artificial intelligence, the opposite seems to be true.
This counterintuitive finding comes from new research, which suggests that the people most drawn to AI tend to be those who understand the technology the least. AI seems mysterious and even magical to these people, the researchers found, leading to a sense of awe regarding AI's ability to complete tasks. This is particularly true when the task is traditionally associated with human attributes, such as writing a poem or creating a new fusion recipe.
"When you don't really get what's going on under the hood, AI creating these things seems amazing, and that's when it can feel magical," says Stephanie Tully, an associate professor of marketing at the University of Southern California Marshall School of Business and one of the study's authors. "And that feeling can actually increase people's willingness to use it."
That finding challenges the assumption that more technical knowledge will naturally lead to wider adoption of AI. "In other domains, like wine, the people who know the most about it are wine lovers," says Tully. "With AI, it's the opposite."
Across seven studies, the researchers assessed people's AI literacy using different methods, including a 25-item questionnaire they created and a 17-question test created using two AI systems.
In one experiment, the researchers recruited 234 undergraduates, assessed their AI literacy and then asked them to consider four writing assignments, ranging from an essay on how the assassination of Archduke Franz Ferdinand led to World War I to a poem about falling in love in Venice. The participants were then asked whether, and to what extent, they would use a free version of an AI system to help them complete the assignments.
The students who scored lower on AI literacy were more likely to use AI to complete the assigned tasks than the students with higher AI literacy, the study found. It wasn't about believing that AI is smarter or more useful than their own knowledge, says Chiara Longoni, an associate professor of marketing at Bocconi University in Milan, Italy, and one of the authors. It was about how inconceivable it is that AI can complete humanlike tasks, she says. The researchers found that this link persisted even after taking into account that people with lower AI literacy tend to have more concerns about the ethics of AI and its potential negative impact on humanity.
Across several other studies, including those that examined differences in AI receptivity in 27 countries, lower AI literacy scores were consistently associated with a greater willingness to adopt the technology. People with higher AI literacy, meanwhile, recognized that AI is an algorithm, not magic, the researchers say.
"Understanding that AI is just pattern-matching can strip away the emotional experience," says Gil Appel, an assistant professor of marketing at the George Washington University School of Business and another co-author.
While the researchers' findings suggest that encouraging a sense of awe around AI could be useful for companies deploying AI systems to the general public, the goal shouldn't be to leave consumers in the dark about how AI works, the researchers say.
"With the increase in AI around us, consumers should have a basic level of literacy to be able to understand when AI might have important limitations," says Tully.
Perhaps the best approach is to try to educate consumers about AI in a way that doesn't completely eliminate their sense of awe or curiosity, the researchers say. Tully calls this "calibrated literacy": equipping users with enough understanding to make safe, informed decisions without dampening their delight.
"Too little knowledge and people might misuse the tool," Longoni adds. "Too much, and they might be reluctant to try it at all."
Heidi Mitchell is a writer in London and New York. She can be reached at reports@wsj.com.