Artificial Intelligence · Published on March 09, 2026 · 6 min read

AI and the "nameless monster": why new technologies awaken deeply human fears

Research in psychology, sociology and technology studies — alongside reports from institutions such as Pew Research Center and the OECD — suggests that public discomfort around AI has less to do with dystopian fiction and more to do with perceived loss of control, fear of professional obsolescence and the social adaptation required by disruptive technologies.

Conceptual illustration about artificial intelligence as a pillar of transformation and tension in the contemporary world.
  • Artificial Intelligence
  • Future of Work
  • Technology
    Artificial Intelligence is often presented as a technical revolution, but the social phenomenon surrounding it is just as important. Some people respond to AI with curiosity; others with caution, skepticism or a sense of threat.

    Research suggests that this kind of response is not unusual when a technology spreads faster than society's ability to understand it. Public-opinion reports from Pew Research Center show that a significant share of the population associates AI with job losses and with diminished human control over important decisions. In innovation studies, this pattern is often linked to technological anxiety and to a perceived loss of control when people face new, complex and still opaque systems.

    This pattern is frequently discussed in studies of technological adoption. Everett Rogers's theory of innovation diffusion, presented in Diffusion of Innovations, helps explain why new technologies often pass through phases of uncertainty, resistance and gradual adoption. Electricity, the internet and different waves of automation were experienced, in distinct contexts, within that broader logic of social adaptation.

    In the case of AI, academic studies, international reports and interdisciplinary analyses often connect public concern to a recurring set of factors:

    • Technological anxiety — close to what the literature on technostress has described since Craig Brod's Technostress — in the face of systems that feel opaque or difficult to understand
    • Resistance to innovation when routines, professional identities and social habits appear to be under threat
    • A perceived loss of control over decisions, processes and criteria once considered exclusively human
    • Fear of professional obsolescence in markets that reward speed, adaptation and productivity
    • Pressure to keep up with disruptive technologies without enough time for critical assimilation

    That is why AI does not trigger only a technical debate. It also affects the way people understand competence, value and recognition. When automated systems begin to write, summarize, detect patterns or support decisions, part of the discomfort comes from the feeling that skills once seen as distinctly human are being relativized.

    In the sociology of work and in technology studies, this type of reaction often appears during periods of social adaptation to disruptive technologies. The fear is not just about the machine itself, but about the rearrangement of roles it produces: which tasks remain human, which will be transformed and which forms of knowledge become more valuable.

    In professional life, AI intensifies an old question under new conditions: if part of execution can be automated, where does human value now reside? Widely cited works on the digital economy, such as The Second Machine Age by Erik Brynjolfsson and Andrew McAfee, and Only Humans Need Apply by Thomas H. Davenport and Julia Kirby, suggest that value tends to shift away from repetition and toward contextual judgment, ethical responsibility, problem-oriented creativity and social coordination.

    How should we redefine the value of human work in a context where systems also write, analyze, classify and support decisions?

    Science communication helps dissolve the idea that the problem is an inexplicable "monster". More often, what we are seeing is a familiar social process: new technologies redistribute power, change criteria of value and force individuals and institutions to revise their expectations.

    That does not mean every concern is exaggerated. Research on automation, digital platforms and algorithmic governance shows that real risks do exist — from concentration of power to the erosion of job quality in certain roles, as well as bias and irresponsible deployment of automated systems. Widely cited work on automation, such as The Future of Employment by Carl Benedikt Frey and Michael Osborne, helps explain why fear of machine replacement became such a prominent part of public debate.

    At the same time, history also suggests that rejecting a technology in generic terms tends to impoverish the debate. The central issue is usually less about "accepting or rejecting" and more about how to regulate, understand and critically incorporate it.

    In the case of AI, that involves technological literacy, qualified public debate, institutional responsibility and updated approaches to work. Reports from the World Economic Forum, the OECD and research centers such as the AI Now Institute reinforce that the major risks are not only technical, but also social, institutional and distributive. It also means recognizing that adaptation is not submission: it is the ability to respond to transformation without giving up autonomy or critical judgment.

    Part of the fear attributed to AI can be interpreted as a reflection of a recurring human difficulty: dealing with situations in which control, professional identity and social relevance seem unstable. Naming that process more precisely is already an important step toward facing it with less panic and more clarity.

    Instead of treating AI only as a threat or a salvation, a more rigorous reading suggests seeing it as both a technical and social phenomenon: something that expands capacities, creates tensions and requires collective maturity if its effects are to be guided by public responsibility.

    Perhaps the more useful question is not whether AI will change the market, but:

    how will people, companies and institutions adapt to it without giving up autonomy, ethics and human meaning in work?

    Sources and supporting references

    Some of the works, studies and institutions cited as conceptual support for the article's arguments:

    • Everett Rogers, Diffusion of Innovations
    • Craig Brod, Technostress
    • Erik Brynjolfsson and Andrew McAfee, The Second Machine Age
    • Thomas H. Davenport and Julia Kirby, Only Humans Need Apply
    • Carl Benedikt Frey and Michael Osborne, The Future of Employment
    • Public-opinion and policy reports from Pew Research Center, the OECD, the World Economic Forum and the AI Now Institute

    Keep the conversation going

    Want to take this conversation into a real project?

    If this reflection touches something relevant to your current moment, I can help turn context, vision and needs into a well-shaped project.

    More writing from the blog expands the conversation between technology, process, market and real experience.

    More articles soon

    This is the first piece in the collection. The next ones will follow to build a consistent editorial trail.
