by Soyeong Cho
Artificial intelligence (AI) is often discussed in journalism as a question of whether the technology lives up to its promises. But Nadja Schaetz, a postdoctoral researcher at the University of Hamburg and a faculty associate at the Public Tech Media Lab, points to an underexplored question: How do AI expectations already shape journalism long before those promises can be evaluated?
In this interview, Schaetz reflects on the ethnographic work she conducted with her colleague, Anna Schjøtt Hansen, “AI hype and its function: An ethnographic study of the local news AI initiative of the Associated Press.” The study examines AI hype not as exaggerated talk to be confirmed or debunked, but as a force that organizes decision-making, mobilizes resources, and guides newsroom action under conditions of uncertainty.
By focusing on what expectations do rather than whether they are being met, Schaetz offers a way to understand AI’s influence on journalism even when outcomes remain unclear.
Our conversation was edited for length and clarity.
Q. Your research examines how AI expectations operate inside news organizations rather than evaluating the technology itself. What motivated this focus?
From a very broad perspective, the concept of AI hype can be useful for understanding what we’re experiencing across many different domains. But often that concept comes with a strong emphasis on debunking, identifying false claims, or drawing clear boundaries around what AI is or isn’t and what it can and cannot do. That wasn’t what I was most interested in.
Instead, we wanted to understand how organizations navigate promises and assumptions around AI and, potentially, reach a point where there is over-investment or where certain expectations don’t come to fruition. Doing so required looking much more closely at intra-organizational dynamics that shape decisions and projects, rather than judging whether claims about AI were right or wrong.
Q. Your study is based on close collaboration with the Associated Press (AP). Why was it important to look inside AP specifically?
We were approached by Aimee Rinehart (Senior Product Manager for AI Strategy at AP) and Ernest Kung (Senior AI Product Manager at AP), whom we had met through another research project and who opened the door to collaboration.
I want to emphasize how grateful I am for that openness. Especially at that time, many newsrooms were dealing with enormous uncertainty around AI and were in the process of establishing new norms and guidelines to navigate the technological change in responsible ways. Not everyone was willing to open up internal processes to researchers or to admit this uncertainty.
Also, AP is an important case because of its global role. Much research has productively examined AI adoption in specific national contexts, but the impacts we’re seeing are not limited by borders. Working with a global organization allowed us to observe how expectations and practices ripple across different contexts.
Q. Compared to large newsrooms, local newsrooms often face survival threats. How do these conditions shape how expectations around AI are formed and dealt with?
Local newsrooms’ expectations and strategies around AI are highly context-specific. They often operate in small teams under acute resource and staffing constraints. The skills and availability of just a few individuals can therefore have an outsized influence, which means each local newsroom’s situation can vary widely.
Some newsrooms are quite nimble and may have one or two highly skilled technical staff members who can meaningfully integrate AI in ways that serve their audiences. Others may not have that capacity and face very different constraints.
Infrastructure maintenance is also a major issue. It’s not just about building tools or projects, but about having the people who can maintain them over time. Because local newsrooms can be extremely small, the skills of just a few individuals can shape how AI is approached.
Q. How does AI hype play out in local newsrooms’ day-to-day decision-making?
One of the challenges that stood out was how difficult it is to simply “experiment” with AI. Especially when the goal is to involve small newsrooms with very limited resources, expectations around AI become a necessary part of mobilizing participation in such experiments. You have to project what could be possible and what might be helpful to justify committing staff time and effort.
For small newsrooms, taking people away from other tasks to experiment with AI is a significant investment. There are also legal dimensions. Many of these newsrooms don’t have large legal teams that could absorb the risk if something goes wrong. Expectations can help to get projects started, but they also create pressures once the work is underway.
Q. When AI hype did not unfold as expected, how did journalists deal with the gap between promises and outcomes?
One clear example was the issue of the “human in the loop.” Many promises around AI emphasize automation, efficiency, and giving time back to journalists. At the same time, journalism requires human oversight to ensure accuracy.
In practice, AI often introduces new forms of work, instead of simply freeing up time. Newsrooms realized that humans needed to be involved at multiple stages of the automated processes. This created tension between the promise of automation and journalistic norms.
Managing those expectations became part of the work of managers. They reminded teams that these were experiments, with no guaranteed outcome. At the same time, developers continued to suggest that these frictions could be solved over time — essentially reinvesting in these promises. This tension is especially pronounced in journalism, where small errors matter, and human oversight remains essential.
Q. Finally, what ways of thinking about AI hype in journalism do you think deserve more attention?
My biggest concern about the current discourse around AI is how we compartmentalize AI-related problems. We tend to separate issues like AI’s impact on mis- and disinformation, copyright and labor issues, economic impacts on news organizations, and environmental costs, rather than acknowledging how they are interconnected.
Within journalism, we are starting to see more critical coverage, for example, of data centers and their effects on local communities and the environment, but at the same time, we see speculative narratives about future technological expansion that avoid confronting present costs. I think we need to reflect more critically on how we all participate in these investments and in this type of compartmentalization. That applies to journalism, academia, and public discourse alike.