May 28, 2025 | 11 min read
Generative AI will fundamentally reshape work across all industries over the coming years1. This presents myriad opportunities, but it also raises a troubling scenario for companies: a workforce that becomes less mentally agile as the workplace becomes more technologically complex.
Unlike previous innovation cycles, generative AI can augment and replace cognitive work that was previously thought to be untouchable by automation. Although there is likely to be significant workforce displacement, there will also be opportunities in new roles if workers can adapt their skills to an ever-changing landscape of tools and technologies2,3.
For business leaders, investments in technology are only worthwhile if the workforce has the skills to use that technology, and actually adopts it, in ways that support competitive advantage. Key to workforce adaptation is the interplay of adoption, acceptance, and human behavior with technology.
A broad base of research over the last 50 years has investigated how people interact with technology, elucidating the determinants of an individual’s intention to use and re-use a particular innovation. Alongside individual behavior, research has also proposed how innovations diffuse throughout a group, sub-culture, or organization4. Self-determination5 and self-regulation6 of technology behaviors, as well as cognitive effects, also contribute to how people adopt and accept technologies for their own purposes and those of their employers.
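As a concrete sketch of the diffusion idea, Rogers segments adopters into innovators, early adopters, early majority, late majority, and laggards, defined by how many standard deviations an individual’s adoption time falls from the population mean4. The snippet below is a minimal illustration of that segmentation; only the category bands come from Rogers, while the toy data and function names are ours.

```python
import statistics

# Rogers' classic adopter categories, defined by how far an individual's
# adoption time falls from the mean, in standard deviations. The bands are
# Rogers'; the sample data below is invented for illustration.

def adopter_category(adoption_time: float, mean: float, sd: float) -> str:
    """Classify one employee by the z-score of their adoption time."""
    z = (adoption_time - mean) / sd
    if z < -2:
        return "innovator"        # earliest ~2.5%
    if z < -1:
        return "early adopter"    # next ~13.5%
    if z < 0:
        return "early majority"   # next ~34%
    if z < 1:
        return "late majority"    # next ~34%
    return "laggard"              # final ~16%

# Hypothetical example: months until each employee first used a new AI tool.
months_to_adopt = [1, 2, 4, 5, 6, 6, 7, 8, 9, 12]
mean = statistics.mean(months_to_adopt)
sd = statistics.stdev(months_to_adopt)

for m in months_to_adopt:
    print(m, adopter_category(m, mean, sd))
```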
To get the most value from generative AI, budget holders and implementers within organizations must understand these various drivers of human behavior and take action to maximize transformation through purposeful deployment, readiness, and talent pipelines. While maximizing short-term value, it is important to recognize that the longer-term consequences of replacing human cognitive work with AI might have severe implications for society, including the risk of a workforce that becomes less capable while the tools it uses become more complex.
Technology Acceptance and Diffusion of Innovation
Almost five decades of information systems research into how humans interact with technology is underpinned by validated psychology models based on the Theories of Reasoned Action and Planned Behavior7. Where a particular innovation is deemed functionally useful and endorsed by the social context, individuals will form the intention to use that technology8, or to continue using it9, rather than to reject it10.
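To make the shape of these models concrete, the Unified Theory of Acceptance and Use of Technology (UTAUT)8 models behavioral intention as a weighted combination of constructs such as performance expectancy, effort expectancy, and social influence. The sketch below is a simplified, hypothetical scoring illustration of that idea, not the validated instrument: the construct names follow Venkatesh et al., but the weights and data are invented.

```python
# A simplified, hypothetical illustration of UTAUT-style scoring: behavioral
# intention as a weighted sum of survey constructs (1-7 Likert averages).
# Construct names follow Venkatesh et al. (2003); the weights below are
# invented for illustration, not the validated model coefficients.

CONSTRUCT_WEIGHTS = {
    "performance_expectancy": 0.5,  # "this tool helps me do my job better"
    "effort_expectancy": 0.3,       # "this tool is easy to use"
    "social_influence": 0.2,        # "people who matter to me endorse it"
}

def behavioral_intention(scores: dict[str, float]) -> float:
    """Weighted combination of construct scores -> intention to use (1-7)."""
    return sum(CONSTRUCT_WEIGHTS[c] * scores[c] for c in CONSTRUCT_WEIGHTS)

# Hypothetical survey averages for one employee and a new AI assistant.
employee = {
    "performance_expectancy": 6.0,
    "effort_expectancy": 4.5,
    "social_influence": 3.0,
}

print(f"Intention to use: {behavioral_intention(employee):.2f} / 7")
# In UTAUT, intention (moderated by age, gender, experience, and
# voluntariness) then predicts actual use behavior.
```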
The determinants of acceptance and adoption can be complex, encompassing many of the talking points of the AI revolution, such as trust, privacy, ethics, transparency, fairness, and bias, all of which can have a negative impact on human behavior if sufficiently absent or poorly controlled11,12.
The main driver of acceptance, regardless of the technology, is performance against a pre-defined purpose: the tool must be useful to its user. In the latest iteration of the AI revolution, skills and use cases have yet to catch up with the speed of change in the technology landscape. While the capabilities of AI are astounding, most companies are still at the stage of intending to use AI, considering use cases, or even wondering where to begin. As IT departments take the inexorable steps towards implementing a single AI solution in their technical environments, the workforce is left to figure out what the use cases should be. Since the level of skill in any workforce will be normally distributed, there is an inevitable consequence: most use cases are rudimentary, such as responding to emails or capturing meeting minutes, amounting to a basic reduction of cognitive burden en masse.
Risks of AI Implementation
While there are many potential benefits to implementing technologies that reduce the cognitive burden on the workforce and facilitate the output of knowledge work, there are also risks. The use of AI may lead to a reduction in performance through increased procrastination or distraction13,14, and to a potential feeling of over-qualification (a person equipped with AI tools might have a false sense of their own abilities), with indirect negative effects on job satisfaction and motivation15. One often overlooked implication of AI-driven automation is its effect on human cognition, and the potential for cognitive atrophy arising from excessive reliance on AI16.
Our brains are remarkably plastic: they rewire based on what we practice and use regularly17. The adage “use it or lose it” very much applies: when we consistently offload mental tasks to machines, we risk dulling our own cognitive skills18. For example, people who rely heavily on GPS have poorer spatial memory and reduced activity in the hippocampus (the brain’s navigation center) compared to those who navigate on their own19.
Beyond recent studies of AI, we have more established evidence of the effects of mass information on social media: a reduced ability to discriminate between fact and fiction, shorter attention spans, and an over-reliance on rating systems in lieu of verifying information for oneself20.
There is a risk to the fabric of industrialized society: If employees stop engaging in complex problem-solving because AI always handles it, their brains may adapt by allocating less capacity to those skills over time.
As AI becomes more powerful and easily handles routine cognitive work, we must also ask: what is the need for the cognitive worker?
AI will soon be able to fulfill the tasks of many roles. The current zeitgeist of the learning industry is to insist that people will still be needed for problem-solving, critical thinking, and decision-making. However, we are rapidly approaching a perfect storm: AI has become capable in precisely these areas, and the normal distribution of human capability means that only top performers may be skilled enough to warrant gainful employment. There is an imperative on businesses and on society to maintain a focus on human upskilling and reskilling, so that we do not lose the cognitive essence of mankind as we surge forth in the AI revolution.
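To make the distributional argument concrete, consider a toy calculation (the N(100, 15) skill scale and the AI benchmark of 120 are invented assumptions, not measurements) showing how a rising capability bar shrinks the employable share:

```python
from statistics import NormalDist

# Toy illustration of the "normal distribution of capability" argument.
# Assume (hypothetically) that a cognitive skill is distributed N(100, 15)
# across a workforce, and that AI tools now match a skill score of 120.
# Only workers above that bar still add value beyond the tool itself.

skill = NormalDist(mu=100, sigma=15)
ai_bar = 120  # invented benchmark for illustration

share_above = 1 - skill.cdf(ai_bar)
print(f"Share of workforce outperforming the AI bar: {share_above:.1%}")
# ~9.1%: as the bar rises with each model generation, the employable
# fraction shrinks, which is the "perfect storm" described above.
```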
So, what should we do with AI?
The near future will be dominated by a phase of rapid adaptability for the human workforce: whether in implementing the technology of the AI revolution (“the machine”), architecting the workforce of the future (“the human”), or laying the foundations for this hybrid workforce (data, or “the fuel” for the revolution).
As businesses configure operations around a hybrid workforce, in which humans work synergistically with AI and data systems against a shared vision actioned through sophisticated hardware and software, they will need skills in all of these areas as part of a talent pipeline.
In some companies, entirely new job categories are emerging (think “AI supervisors” or “prompt engineers”) where employees spend their time steering AI systems and vetting their outputs. Existing roles in ethics, governance, and data validation will need to be part of that talent pipeline (as these roles change to encompass AI), as will roles reserved for human-to-human interaction (think “coach” or “mentor”).
Maintaining a human in the loop is often essential not just for quality control, but also for ethical and safety reasons, ensuring AI decisions align with societal values and readiness. Human oversight skills and roles will need to develop alongside the AI revolution. A good example is the technological capability for autonomous vehicles versus the societal and regulatory readiness to allow vehicles on the road without human drivers: although it is technically possible, we are not yet ready to allow it.
Despite the risks, AI can support the development of mental models and act as an accelerator for learning and a catalyst for rapid upskilling, aiding human cognition, maintaining skills plasticity, and changing how we work, provided it is used to augment human capabilities and not merely to replace them21.
Actions for Leaders
The challenge ahead is twofold: technological (integrating AI into operations) and organizational (reshaping the workforce and culture).
First, leadership must develop a clear AI vision that includes a complementary talent roadmap with purposeful transformation as a force multiplier for competitive advantage. Too many companies jump into AI projects without aligning their workforce strategy, leading to employee confusion or resistance. It’s vital for executives to articulate a vision of how AI will be used and why, using technology acceptance and adoption research as a guide for managing change.
With a vision in place, companies should invest in AI readiness across their workforce and technical estate. This means ensuring that employees at all levels have at least a baseline understanding of AI capabilities and limitations, and that technology systems are interoperable and robust. To support success, companies should develop an AI talent pipeline: identify a cadre of specialists (“AI builders” and “AI masters”) who can build, implement, and maintain AI tools internally.
In conclusion, designing for an AI future involves a holistic approach: technology and talent. Companies that align these will not only deploy AI faster but will do so with a workforce that’s skilled, adaptive, and trusted to keep the human advantage in play.
- Dwivedi, Y. K. et al. “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int J Inf Manage 71, (2023).
- McKinsey Global Institute. Generative AI and the Future of Work in America. (2023).
- WEF. Future of Jobs Report 2025. World Economic Forum (2025).
- Rogers, E. M. Diffusion of Innovations. (The Free Press, New York, 1995).
- Ryan, R. M. & Deci, E. L. Self-determination theory and the facilitation of intrinsic motivation. American Psychologist 55, 68–78 (2000).
- Bandura, A. Social cognitive theory of self-regulation. Organ Behav Hum Decis Process 50, 248–287 (1991).
- Ajzen, I. From intentions to actions: A theory of planned behavior. in Action control: From cognition to behavior (eds. Kuhl, J. & Beckman, J.) 11–39 (Springer, Heidelberg, Germany, 1985). doi:10.1007/978-3-642-69746-3_2.
- Venkatesh, V., Morris, M. G., Davis, G. B. & Davis, F. D. User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly 27, 425–478 (2003).
- Bhattacherjee, A. Understanding Information Systems Continuance: An Expectation-Confirmation Model. MIS Quarterly 25, 351–370 (2001).
- Venkatesh, V., Thong, J. Y. L. & Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Quarterly 36, 157–178 (2012).
- Dwivedi, Y., Rana, N., Chen, H. & Williams, M. A Meta-analysis of the Unified Theory of Acceptance and Use of Technology (UTAUT). in IFIP International Working Conference on Governance and Sustainability in Information Systems: Managing the Transfer and Diffusion of IT 155–170 (Springer Berlin Heidelberg, 2011). doi:10.1007/978-3-642-24148-2_10.
- Williams, M. D., Rana, N. P. & Dwivedi, Y. K. The Unified Theory of Acceptance and Use of Technology (UTAUT): A Literature Review. Journal of Enterprise Information Management 28 (2015).
- Swargiary, K. The Impact of ChatGPT on Student Learning Outcomes: A Comparative Study of Cognitive Engagement, Procrastination, and Academic Performance. (2024) doi:10.2139/ssrn.4914743.
- Sukri, A., Rizka, M. A., Purwanti, E., Ramdiah, S. & Lukitasari, M. Students’ Perceptions of ChatGPT in Higher Education: A Study of Academic Enhancement, Procrastination, and Ethical Concerns. European Journal of Educational Research 11, 859–872 (2022).
- Zhao, H., Ye, L., Guo, M. & Deng, Y. Reflection or Dependence: How AI Awareness Affects Employees’ In-Role and Extra-Role Performance? Behavioral Sciences 15, (2025).
- Dergaa, I. et al. From tools to threats: a reflection on the impact of artificial-intelligence chatbots on cognitive health. Front Psychol 15, (2024).
- Fuchs, E. & Flügge, G. Adult neuroplasticity: More than 40 years of research. Neural Plast 2014, (2014).
- Sadegh-Zadeh, S.-A. Neural reshaping: the plasticity of human brain and artificial intelligence in the learning process. Am J Neurodegener Dis 13, 34–48 (2024).
- Dahmani, L. & Bohbot, V. D. Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Sci Rep 10, (2020).
- Moravec, P. L., Dennis, A. R. & Minas, R. K. How Asking Users to Rate Stories Affects Belief in Fake News on Social Media. Information Systems Research 33, (2022).
- Hoffman, R. R., Mueller, S. T., Klein, G. & Litman, J. Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance. Front Comput Sci (2023) doi:10.3389/fcomp.2023.1096257.