Research

NEW: Long Range Planning Call for Papers: “Navigating the Paradoxes of Artificial Intelligence: Implications for Strategy Theory and Practice”

Call for papers of
LONG RANGE PLANNING
https://www.sciencedirect.com/journal/long-range-planning

 

Navigating the Paradoxes of Artificial Intelligence:
Implications for Strategy Theory and Practice

Deadline for Submissions: November 30, 2024 

Web page: https://www.sciencedirect.com/journal/long-range-planning/about/call-for-papers#navigating-the-paradoxes-of-artificial-intelligence-implications-for-strategy-theory-and-practice

The special issue accepts research on the strategic options, conceptual lenses, and methods helpful for effectively handling the complexities of the continuous interaction between human intelligence and AI.


Guest Editors:

Olga Bruyaka, West Virginia University, USA, olga.bruyaka@mail.wvu.edu
Giovanni Battista Dagnino, University of Rome LUMSA, Italy, g.dagnino@lumsa.it
Mouhoub Hani, Paris 8 University, France, mouhoub.hani@univ-paris8.fr
Nidthida Lin, Macquarie Business School, Macquarie University, Australia, nidthida.lin@mq.edu.au
Anna Minà, University of Rome LUMSA, Italy, a.mina1@lumsa.it
Pasquale Massimo Picone, University of Palermo, Italy, pasqualemassimo.picone@unipa.it


Special issue information:

Approximately four decades ago, in an article published in Long Range Planning, Holloway (1983, p. 89) prophesied the possibility of a supercomputer that “may share or usurp functions of a corporate chief executive—functions that up to now have been thought unsharable”. That time has likely arrived and, as a result, AI is rapidly and profoundly challenging not only people’s lives, but also traditional management theories (Glikson & Woolley, 2020; Lawton & Vassolo, 2022; Krakowski et al., 2023; Kemp, 2023; Jarrahi et al., 2023), because it is changing various aspects of management, such as decision-making processes (Shrestha et al., 2019), organizational practices (Kolbjørnsrud et al., 2016; Volberda et al., 2021), open innovation strategies (Broekhuizen et al., 2023), as well as the dynamics of competition (Davenport et al., 2020), cooperation and coopetition among firms (Iansiti & Lakhani, 2020; Gregory et al., 2021). For instance, AI has become ubiquitous in the current business landscape as organizations increasingly deploy artifacts designed to simultaneously collaborate and compete with humans. This condition inevitably requires a mix of human and nonhuman agency (Murray et al., 2021), thereby leading to the development of conjoined routines that perform AI-driven capabilities (Kemp, 2023). Additionally, Kanarik et al. (2023) show that combining highly expert human designers with algorithms in a human-first, computer-last strategy can reduce the cost-to-target by half compared with a process performed only by human designers. In line with this argument, Garbuio and Lin (2021) theorize that AI can enhance the capacity of businesses and entrepreneurs to generate new ideas, not only in terms of the quality and quantity of new ideas but also in the speed at which new ideas are generated.

Much of scholars’ attention on AI has hitherto focused on the relationship between AI and human intelligence (Wilson & Daugherty, 2018; Anthony et al., 2023). Initially, strategic management research took a contingent perspective, distinguishing tasks suited for machines from those suited for humans (Raisch & Krakowski, 2021). However, despite AI exhibiting certain significant limitations, it is also assuming an increasingly important cognitive and operational role. This previously unexpected evolution blurs the traditional human-machine divide, fostering cooperation or even symbiosis between humans and machines (Inga et al., 2023). A complementary view, the normative perspective, emphasizes integrating AI into business practice rather than purely automating human tasks (Davenport & Kirby, 2016; Mejia, 2023; Sjödin et al., 2021). Both perspectives (the contingent and the normative) draw attention to the merits of automation (i.e., machines substituting for humans in performing their tasks) and the risks of augmentation (i.e., machines enriching human tasks), but concurrently overlook the synergies they can achieve together. With very few exceptions (e.g., Berente et al., 2021; Raisch & Krakowski, 2021), extant studies fail to see the paradoxical tensions inherent in the interactions between machines and humans (Raisch & Krakowski, 2021; Kokshagina & Schneider, 2023; Kumar et al., 2023).

Taking a third, complementary perspective, much of the attention on AI has thus far centered on the benefits and risks of AI from the customer’s point of view (Du & Xie, 2021). On the one hand, customers reap the benefits of AI-enabled products; on the other hand, concerns about the darker aspects of AI, such as those related to privacy, security, and ethics (Papagiannidis et al., 2023), loom as potential downsides. Customers thus also grapple with this paradox, underscoring the need for future studies to thoroughly assess the risks and advantages of creating value by means of AI.

Previous studies have frequently focused on the structural advantages and missed opportunities for firms, ecosystems, and customers in the context of AI and human interaction. As such, they have fallen short of developing novel perspectives on AI and human interaction that could serve as an alternative paradigm for value creation. We believe that examining such paradoxical tensions will enable a deeper exploration of strategic reality (Lewis & Kelemen, 2002) and prompt a comprehensive re-evaluation of what is commonly accepted (Lado et al., 2006). In fact, paradox theory suggests that tensions cannot be resolved. By depicting competing strains as tensions that are not only opposing, but also simultaneously existing, interdependent, and persistent, paradox theory argues that actors need to accept, engage, and navigate tensions rather than resolve them (Pina e Cunha & Putnam, 2019; Carmine & Smith, 2021). Foundational research on paradox in firms and organizations started over four decades ago, drawing insights from a variety of disciplines, including Eastern philosophy (Taoism, Confucianism, Legalism), Western philosophies (Hegel, Heraclitus), psychodynamics (Jung, Adler, Frankl), psychology (Schneider, Watzlawick), sociology (Taylor, Bateson), and negotiations and conflict resolution (Follett). Underlying paradox theory is an ontology of dualism (two opposing elements that form an integrated unity) and dynamism (ongoing change). More recent work emphasizes paradoxes as nested across levels and as knotted and interwoven across various tensions, while also taking into consideration the power dynamics, uncertainty, plurality, and scarcity in the systems within which paradoxes usually emerge (Carmine & Smith, 2021). Although non-paradoxical theories may initially seem more elegant and precise, they often fail to account for the intricacies of the managerial domain (Dagnino & Minà, 2021). Since strategic management theory and practice are inherently paradoxical, we expect scholars to incorporate paradoxes and paradox theories into their inquiries, especially when dealing with AI.

Drawing on paradoxical theories, this special issue aims to enrich our understanding of AI by examining the significant trade-offs that have surfaced in this emerging domain, such as human-robot competition for jobs, the implications of reduced human decision-making, the risks and opportunities of AI for customers, the influence of AI on CEO/TMT advice-seeking and its application, the role of AI in creativity and innovation/new idea generation, and so on. Other relevant trade-offs in AI regard “human-machine competition” and “human-machine cooperation” (Hoc, 2000), “human-machine coopetition”, and automation vs. augmentation (Raisch & Krakowski, 2021; Kumar et al., 2023). Such studies are expected to be particularly helpful in promoting a new class of robots called “cobots”, whose primary mission is collaborating rather than competing with humans (Drolshagen et al., 2021; Sowa et al., 2021; Fügener et al., 2022; Gip et al., 2022), as well as in promoting humanizing strategy (Nonaka & Takeuchi, 2021). In sum, prior research on human-machine interaction, multi-agent robotics, and human-centered artificial intelligence is frequently limited in scope and application due to the unique challenges in combining humans and machines into teams (Natarajan et al., 2023).

Furthermore, through the application of paradox theory, we encourage research that extends beyond the significant dimensions thus far overlooked in AI literature, including aspects such as the emotional adaptation of robots to humans (Huang et al., 2019; Drolshagen et al., 2021), societal and environmental aspects (Marjanovic et al., 2021) and ethical considerations regarding employee and consumer manipulation (Murtarelli et al., 2021). In a related vein, recent studies (e.g., Han et al., 2023) provide support for the idea that the “consumer mindset” (centered on competition vs. collaboration) has an impact on consumers’ attitudes toward anthropomorphic AI robots during service delivery. More precisely, the authors confirm that competitive mindset-driven consumers respond less favorably to anthropomorphic (vs. non-anthropomorphic) AI robots, whereas collaborative mindset-driven consumers respond more favorably to anthropomorphic (vs. non-anthropomorphic) AI robots.

The special issue we propose encourages prospective contributors to delve into the strategic options, conceptual lenses, and methods that are likely to prove helpful in effectively handling the complexities within the continuous interactions between human intelligence and artificial intelligence. We welcome conceptual contributions as well as qualitative and/or quantitative papers that provide fresh and tangible insights into this fast-growing domain. We also advocate for research that can significantly contribute to the teaching and dissemination of digital and AI strategies. We further welcome submissions that share insights on the utilization of AI in academic research within the field of strategy, emphasizing both the benefits and drawbacks associated with its application. Finally, leveraging paradox theory, scholars can effectively realign existing frameworks and models, strategy tools, and instructional materials with the development of AI (Cepa & Schildt, 2023).

The following list offers some potential topics for consideration in the special issue:

AI Adoption and Application

  • How do firms develop strategies to integrate with AI? And/or how are business strategies influenced/impacted by the application of AI?
  • How do cultural, demographic, geographic and industry factors influence AI adoption and approaches to managing the AI-human relationship within organizations?
  • What factors determine the dynamic interactions among individuals, organizations, and robots?
  • How can firms and stakeholders capture value from AI?

Relationships with AI

  • What are the conceptual contours, practices and theories explaining human-robot competition for jobs? What are the implications of reduced human decision-making?
  • How can we define and manage human-machine competition, human-machine cooperation, and human-machine coopetition?
  • What are the underlying mechanisms explaining the loop between automation and augmentation?

Impact of the AI Paradox on Firm and Managerial Capabilities

  • What is the impact of AI on firms’ and managers’ soft skills (e.g., resilience, leadership style, communicating with impact, becoming a global citizen)?
  • How are strategic leadership attitudes evolving in the age of AI?
  • How does AI affect firms’ absorptive capacity, and what are the implications for sustaining competitive advantage?
  • How does the application of AI and its paradox influence CEO and TMT advice-seeking behavior?
  • How does the application of AI assist or hinder managers and entrepreneurs in the process of new idea and new business opportunity generation?

Teaching AI and Doing Research with AI

  • What roles can business schools play in facilitating effective mitigation of AI-human tensions?
  • How can scholars adapt existing frameworks, models, strategy tools, and instructional materials effectively to align with the advancements in AI, and what are the key considerations in this process?

Manuscript submission information:

The call for papers will accept submissions that significantly advance strategy theory through various kinds of empirical and conceptual enquiry and that have the potential to make impactful contributions to policy and practice. Multiple submissions from the same authors (even if not the first author of the manuscript) will not be allowed. Any questions pertaining to this special issue should be directed to the Guest Editorial team as a whole, using the contact details reported above.

Long Range Planning’s submission system will be open for submissions to our Special Issue from June 1, 2024. When submitting your manuscript to Editorial Manager, please select the article type “VSI: Navigating the Paradoxes of AI”. Please submit your manuscript before November 30, 2024.

All submissions deemed suitable to be sent for peer review will be reviewed by at least two independent reviewers. Once your manuscript is accepted, it will go into production, and will be simultaneously published in the current regular issue and pulled into the online Special Issue. Articles from this Special Issue will appear in different regular issues of the journal, though they will be clearly marked and branded as Special Issue articles.

Please ensure you read the Guide for Authors before writing your manuscript. The Guide for Authors and the link to submit your manuscript are available on the Journal’s homepage at https://www.sciencedirect.com/journal/long-range-planning.

In the first half of 2024, we aim to arrange three specialized virtual workshops (organized according to the three key geographical regions from which the proposed guest editors come, so as to overcome time zone limits and duly attract authors from the various constituencies) to advertise the special issue as well as to develop the short papers presented there. At these events, we plan to invite a range of experts in AI and strategy from academia, regulatory boards, and business practice to give inspirational speeches, discuss creative ideas about theory and practice, and provide thought-provoking reflections. These activities are extremely important because they offer an essential opportunity to share information with potential submitters of papers and to assist them along their paper development process.

Then, after the first round of reviews, approximately in June-July 2025, we plan to assemble a highly targeted in-person workshop, held in either Paris or on the US East Coast, expressly aimed at helping authors develop the papers submitted to the Special Issue that have survived this first round of reviews (of course, with absolutely no guarantee of final acceptance).

References:

Anthony, C., Bechky, B. A., & Fayard, A. L. (2023). “Collaborating” with AI: Taking a system view to explore the future of work. Organization Science, 34(5), 1651-1996.

Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing artificial intelligence. MIS Quarterly, 45(3), 1433-1450.

Broekhuizen, T., Dekker, H., de Faria, P., Firk, S., Nguyen, D. K., & Sofka, W. (2023). AI for managing open innovation: Opportunities, challenges, and a research agenda. Journal of Business Research, 167, 114196.

Business Insider (2023). A video game company made a bot the CEO, and its stock climbed.

Carmine, S., & Smith, W. (2021). Organizational Paradox. Oxford Bibliographies. doi: http://dx.doi.org/10.1093/obo/9780199846740-0201, available online 19 October 2023.

Cepa, K., & Schildt, H. (2023). What to teach when we teach digital strategy? An exploration of the nascent field. Long Range Planning, 56(2), 102271.

Dagnino, G. B., & Minà, A. (2021). Unraveling the philosophical foundations of co-opetition strategy. Management and Organization Review, 17(3), 490-523.

Davenport, T. H., & Kirby, J. (2016). Only Humans Need Apply: Winners and Losers in the Age of Smart Machines. New York: Harper Business.

Davenport, T., Guha, A., Grewal, D., & Bressgott, T. (2020). How artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science, 48, 24-42.

Drolshagen, S., Pfingsthorn, M., Gliesche, P., & Hein, A. (2021). Acceptance of industrial collaborative robots by people with disabilities in sheltered workshops. Frontiers in Robotics and AI, 7, 541741.

Du, S., & Xie, C. (2021). Paradoxes of artificial intelligence in consumer markets: Ethical challenges and opportunities. Journal of Business Research, 129, 961-974.

Fügener, A., Grahl, J., Gupta, A., & Ketter, W. (2022). Cognitive challenges in human–artificial intelligence collaboration: Investigating the path toward productive delegation. Information Systems Research, 33(2), 678-696.

Garbuio, M., & Lin, N. (2021). Innovative idea generation in problem finding: Abductive reasoning, cognitive impediments, and the promise of artificial intelligence. Journal of Product Innovation Management, 38(6), 701-725.

Gip, H. Q., Guchait, P., & Wang, C. Y. (2022). Competition or collaboration for human–robot relationship: A critical reflection on future cobotics in hospitality. International Journal of Contemporary Hospitality Management, 35(2), 2202-2215.

Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627-660.

Gregory, R. W., Henfridsson, O., Kaganer, E., & Kyriakou, H. (2021). The role of artificial intelligence and data network effects for creating user value. Academy of Management Review, 46(3), 534-551.

Haefner, N., Parida, V., Gassmann, O., & Wincent, J. (2023). Implementing and scaling artificial intelligence: A review, framework, and research agenda. Technological Forecasting and Social Change, 197, 122878.

Han, B., Deng, X., & Fan, H. (2023). Partners or opponents? How mindset shapes consumers’ attitude toward anthropomorphic artificial intelligence service robots. Journal of Service Research, 10946705231169674.

Hoc, J. M. (2000). From human-machine interaction to human-machine cooperation. Ergonomics, 43(7), 833-843.

Holloway, C. (1983). Strategic management and artificial intelligence. Long Range Planning, 16(5), 89-93.

Huang, M. H., Rust, R., & Maksimovic, V. (2019). The feeling economy: Managing in the next generation of artificial intelligence (AI). California Management Review, 61(4), 43-65.

Inga, J., Ruess, M., Robens, J. H., Nelius, T., Rothfuß, S., Kille, S., … & Kiesel, A. (2023). Human-machine symbiosis: A multivariate perspective for physically coupled human-machine systems. International Journal of Human-Computer Studies, 170, 102926.

Jarrahi, M. H., Askay, D., Eshraghi, A., & Smith, P. (2023). Artificial intelligence and knowledge management: A partnership between human and AI. Business Horizons, 66(1), 87-99.

Kanarik, K. J., Osowiecki, W. T., Lu, Y., Talukder, D., Roschewsky, N., Park, S. N., Kamon, M., Fried, D., & Gottscho, R. A. (2023). Human-machine collaboration for improving semiconductor process development. Nature, 616(7958), 707-711.

Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15-25.

Kemp, A. (2023). Competitive advantages through artificial intelligence: Toward a theory of situated AI. Academy of Management Review, in press.

Kokshagina, O., & Schneider, S. (2023). The digital workplace: Navigating in a jungle of paradoxical tensions. California Management Review, 65(2), 129-155.

Kolbjørnsrud, V., Amico, R., & Thomas, R. J. (2016). How artificial intelligence will redefine management. Harvard Business Review, 2(1), 3-10.

Krakowski, S., Luger, J., & Raisch, S. (2023). Artificial intelligence and the changing sources of competitive advantage. Strategic Management Journal, 44(6), 1425-1452.

Kumar, A., Krishnamoorthy, B., & Bhattacharyya, S. S. (2023). Machine learning and artificial intelligence-induced technostress in organizations: A study on automation-augmentation paradox with socio-technical systems as coping mechanisms. International Journal of Organizational Analysis.

Lado, A. A., Boyd, N. G., Wright, P., & Kroll, M. (2006). Paradox and theorizing within the resource-based view. Academy of Management Review, 31(1), 115-131.

Lawton, T. C., & Vassolo, R. S. (2022). Dynamics in strategic management research: An agenda for LRP. Long Range Planning, 55(5), 102246.

Lewis, M. W., & Kelemen, M. L. (2002). Multiparadigm inquiry: Exploring organizational pluralism and paradox. Human Relations, 55(2), 251-275.

Mejia, S. (2023). The normative and cultural dimension of work: Technological unemployment as a cultural threat to a meaningful life. Journal of Business Ethics, 1-18.

Murtarelli, G., Gregory, A., & Romenti, S. (2021). A conversation-based perspective for shaping ethical human-machine interactions: The particular challenge of chatbots. Journal of Business Research, 129, 927-935.

Natarajan, M., Seraj, E., Altundas, B., Paleja, R., Ye, S., Chen, L., Jensen, R., Chang, K. C., & Gombolay, M. (2023). Human-robot teaming: Grand challenges. Current Robotics Reports, 1-20.

Nonaka, I., & Takeuchi, H. (2021). Humanizing strategy. Long Range Planning, 54(4), 102070.

Papagiannidis, E., Mikalef, P., Conboy, K., & Van de Wetering, R. (2023). Uncovering the dark side of AI-based decision-making: A case study in a B2B context. Industrial Marketing Management, 115, 253-265.

Pina e Cunha, M., & Putnam, L. L. (2019). Paradox theory and the paradox of success. Strategic Organization, 17(1), 95-106.

Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192-210.

Shrestha, Y. R., Ben-Menahem, S. M., & Von Krogh, G. (2019). Organizational decision-making structures in the age of artificial intelligence. California Management Review, 61(4), 66-83.

Sjödin, D., Parida, V., Palmié, M., & Wincent, J. (2021). How AI capabilities enable business model innovation: Scaling AI through co-evolutionary processes and feedback loops. Journal of Business Research, 134, 574-587.

Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in knowledge work: Human–AI collaboration in managerial professions. Journal of Business Research, 125, 135-142.

Tesla (2023). AI & Robotics. https://www.tesla.com/AI, available online 19 October 2023.

Volberda, H. W., Khanagha, S., Baden-Fuller, C., Mihalache, O. R., & Birkinshaw, J. (2021). Strategizing in a digital world: Overcoming cognitive barriers, reconfiguring routines and introducing new organizational forms. Long Range Planning, 54(5), 102110.

Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114-123.

World Economic Forum (2022). ChatGPT: the bot that can engage in intelligent conversation.

Keywords:

paradoxes, paradox, artificial intelligence, AI, strategy theory and practice