POLICY BRIEF

Eur. J. Cult. Manag. Policy, 11 November 2024
This article is part of the Special Issue Artificial Intelligence: Cultural Policy, Management, Education, and Research

Cultural work, wellbeing, and AI

Sophie Frost1* and Lauren Vargas2*

  • 1Business School for the Creative Industries, University for the Creative Arts, Epsom, United Kingdom
  • 2Institute for Digital Culture, University of Leicester, Leicester, United Kingdom

Introduction

In museums, heritage, and non-profit cultural organisations, thought leadership on the ethical implications of AI is gathering speed. Notable initiatives include the Network of European Museum Organisations (NEMO)’s efforts to address the uptake of AI in museums (Network of European Museum Organisations, 2024) and the UK Arts and Humanities Research Council’s BRAID programme, dedicated to integrating Arts and Humanities research into the “Responsible AI” ecosystem (BRAID, 2024). This is a fast-evolving area, with new analysis and calls to action appearing frequently. As yet, however, little attention has been given to how AI is affecting the emotional, lived experiences of cultural workers as organisations seek to operationalise it. This paper highlights the need to consider these impacts, as well as those of the general-purpose technologies converging with AI, such as biotechnology and a connected ecosystem of devices, on cultural workers and their practices. The convergence of these technologies signals what futurist Amy Webb calls a technology “supercycle” with far-reaching implications (Aiello, 2024).

This paper does not present new empirical evidence but rather signals the need for more research on the relationship between wellbeing, AI, and work in cultural organisations. Based in the UK and the Netherlands, the authors are researcher-practitioners who explore the impact of digital technology on cultural workforces. We seek to critically anticipate the implications of AI for those working on the ground in cultural organisations. Our ideas and examples draw from the UK, Europe, and the United States, but we hope they speak to the experiences of cultural workers across the globe.

In what follows, a strategic foresight method known as the “Futures Triangle” (Inayatullah, 2023) is deployed to consider the plausible futures of AI-driven cultural work that might emerge between three pushing and pulling corners – the past, the present, and the future – each of which is shaping the adoption of AI in our cultural organisations. The framework surfaces the trends, drivers, and signals that cultural organisations need to heed to ensure an integration of AI that empowers workers to engage effectively while addressing the ethical dilemmas involved. It recognises the history that has shaped current decisions, helps determine what to carry forward or leave behind, and confronts present challenges. We must consider the impact of our choices and how they will resonate into the future, ensuring they reflect the people and contexts of multiple possible futures.

Weight of the past

In Futures Thinking, trends refer to long-term patterns that can be carried forward into the future. Two trends have particularly shaped cultural work, wellbeing, and AI: digital transformation and decolonisation.

Trend 1: digital transformation

The recent history of digital transformation in museums, heritage, and cultural organisations is convoluted. Many trace the momentum to a 2017 review of English museums which argued for “dynamic data for dynamic collections” (Department of Culture, Media, and Sport, 2017: 64). The following year, the UK’s Department for Culture, Media, and Sport (DCMS) launched Culture is Digital, a report calling for “practitioners and organisations across the cultural and tech sectors” (Department of Culture, Media, and Sport, 2018: 17) to develop “digital thinking” (ibid: 9) and stating that organisations felt “held back” by a lack of infrastructure, resources, digital skills, and leadership training, resulting in “a fragmented approach” to technology (ibid: 5). Despite digital developments in many cultural organisations, the story remained patchy: funding tended to prioritise “shiny” short-term projects over long-term infrastructure, digital literacy remained in deficit, cataloguing backlogs persisted, and non-interoperable systems and formats predominated. This often led to reluctance, fear, and apathy towards instilling digital change, up to and since the COVID-19 pandemic.

As a respondent in a survey on AI in heritage organisations stated: even if AI has the potential “to revolutionise the heritage sector,” there remain “real problems that are affecting us, like skills shortages, funding shortages and volunteer shortages” (Oates, 2023). They continued: “the obsession with ‘digital’ is unhelpful, especially when digital is so poorly defined that the adjective is arbitrarily used as a meaningless noun” (ibid). However, as John Stack, Director of Digital Innovation and Technology at The National Gallery, notes: “Unlike previous recent waves of technology (crypto currency, web3, NFTs, etc.) the AI revolution feels different – there’s a sense in which it seems likely to change many things, but we don’t fully understand the potential and implications” [Stack J. “AI Article”. Message to: Frost, S. 2024 Jan 9 (cited 2024 Sept 5)].

Trend 2: decolonisation

Often founded on colonial principles, cultural organisations carry legacies that have both hindered and guided their embrace of new technologies. Decolonisation initiatives have burgeoned in recent years, with bodies such as UNESCO, the International Council of Museums (ICOM), the American Alliance of Museums, and the Museums Association supporting the need to “recognise the integral role of empire in museums - from their creation to the present day” (Museums Association, 2024). While efforts toward enhanced traceability and due diligence in the acquisition of cultural objects have grown, there has been less vigilance regarding the integration of emerging technologies, which invite new, equally problematic forms of coded gaze into legacy institutions. Despite conversations amongst museum and heritage practitioners about the dangers of commercial technologies (Pratty, 2019), there have been few practical attempts to source long-term ethical alternatives. As Oonagh Murphy, editor of this Issue, points out, there is a need to “engage in a broader conversation about the power and impact” of digital tools and products (Murphy, 2024: 73). NEMO has suggested that museums have the “potential” to be “partners in the development of ethical practices related to emerging technologies” (2024), but there is more work to be done to link decolonisation efforts with the integration of AI.

Push of the present

Drivers are broad, long-term forces that are likely to have a significant impact on the future. Drawing on research exploring AI in the global workplace, we identify the following drivers as shaping the signals that may emerge.

Driver 1: job displacement

Research suggests that AI will predominantly result in augmentation rather than automation (International Labour Organization, 2024: 6). Clerical work will be most exposed to automation by generative AI, with 24% of tasks “highly exposed” (ibid). This is a gendered issue: because women tend to be overrepresented in clerical work, women’s jobs may be “twice as likely to be exposed” as men’s (Muldoon et al., 2024: 53). Museum and heritage environments have long been recognised as “pink collar” workplaces (GEMM, 2019) in which women undertake most administrative roles. In the USA, 58.6% of the workforce across archival, curatorial, and museum technician roles is female (DataUSA, 2022), while a UK report on the arts, culture, and heritage workforce found that only 34% of women occupy managerial or director positions, with the majority in junior roles (Policy and Evidence Centre for the Cultural and Creative Industries, 2024). Roles in areas such as collections management could have large components augmented by AI: Large Language Models excel at “formal, standardised tasks with clear objectives and large amounts of text data” (Muldoon et al., 2024: 53), and in cultural organisations this form of digital data entry is typically done by women on relatively low pay (Frost, 2022). Generative AI is also valuable in content creation, meaning that marketing and social media roles may be affected.

Driver 2: demand for AI literacy

Discussions of AI workplace integration emphasise reskilling workforces to support its effective use. In cultural organisations, stakeholders are recognising AI literacy as an opportunity to reshape digital work and introduce new efficiencies, while emphasising the value of employee agency to experiment digitally with AI methods. Industry specialist Jocelyn Burnham, who offers AI workshops for the cultural and creative sectors, sees AI as a tool for experimenting with what new technologies might offer, rather than a cause for anxiety about how they might disrupt the status quo. AI prompts us to ask more critically engaged questions and to understand our own needs and the needs of audiences in different ways. Large Language Models (LLMs) are helping workforces augment their digital skills, enabling those who have previously found new technologies less accessible to use them, while helping staff learn in different ways [Burnham J. “AI and Cultural Organisations”. (Personal interview, 2024 Jan 17) Brighton; 2024 (unpublished)].

There is a flipside. The International Labour Organization notes that some parts of the world are at risk of an “AI divide” whereby “high income nations disproportionately benefit from AI advancements” (2024: 5). This could also be the case in the cultural field, where AI training is inconsistent. As Angie Judge, CEO of Dexibit, notes: “AI will quickly create a world of the haves and have nots. I hope the museum sector will find itself on the right side of that equation” (Styx, 2024). For Stack, “best practice is still emerging and much of the work is ‘bottom up’ with museum practitioners exploring the potential, rather than top down with managers and leaders directing this work”. He continues: “museums are starting to recognise the need for a policy document that is set for internal review every 6 months or so, while other internal museum policies often go years before review” [Stack J. “AI Article”. Message to: Frost, S. 2024 Jan 9 (cited 2024 Sept 5)]. This need for agility in the creation of AI policy has also been recognised by the UK government: its Generative AI Framework for HMG describes itself as “necessarily incomplete and dynamic” (Gov.uk, 2024).

Driver 3: misinformation and bias

The most consequential driver for cultural organisations may be the bias embedded within, and the potential misuse of, LLM training data. Many have documented how AI datasets are shaped by the rules and algorithms of those who trained them, leading to race and gender discrimination across multiple online platforms (Leavy et al., 2020; Buolamwini, 2023). Much of this software has been invested in by legacy tech companies, and it is widely recognised that technological development remains under the control of those “with strong imperatives to continue expanding their operations and increasing their profits” (Muldoon et al., 2024: 161). New government guidelines for the public sector emphasise the need, when working with AI, “to establish and communicate how you will address ethical concerns from the start” (Gov.uk, 2024), but how can smaller, less powerful organisations reclaim power when limited by the options available to them? Helpfully, the European Commission’s recent foresight report focuses on the future of Big Tech in Europe and its implications for research and innovation (R&I). Using a scenario-based approach, it offers recommendations to guide the EU’s R&I policy. Significantly, all four imagined scenarios depict “varieties of high tech capitalism”, with only one emphasising the prosperity of civil society over Big Tech (European Commission, 2024: 8). Cultural organisations can use such resources to understand the current landscape, respond strategically to present developments, and anticipate alternative futures of information control.

A further troubling aspect is the increased risk of copyright infringement for cultural organisations that earn income from their picture libraries through licensing for reproduction and commercial research. This issue gained attention in the UK through high-profile publications such as the House of Commons Culture, Media and Sport Committee report on Content Remuneration, which calls on government “to ensure that creators have proper mechanisms to enforce their content and receive fair compensation when their works are used by AI systems” (House of Commons Culture, Media and Sport Committee, 2024). If this driver is not managed proactively, what might its effects on income be in years to come?

Driver 4: the emotional toll of digital work

The exponential growth of AI-related employment has surfaced questions about its hidden human costs (Muldoon et al., 2024; Gray and Suri, 2019; Catanzariti et al., 2021). Attention is being paid to the pivotal role human labour plays in enabling AI technology, particularly generative AI, through the paid piecework of collecting, processing, and labelling the datasets that models such as ChatGPT need for training. Our research has observed the emotional toll of digitally driven labour in museums, heritage, and cultural organisations (Frost, 2021; 2022; Vargas, 2020). Daily tasks such as establishing suitable naming conventions, cleaning up and troubleshooting data, and dealing with acquisition backlogs require persistence, care, clarity, and impartiality. Those who promote digital change experience personal, psychological costs and ethical dilemmas consistent with other types of cultural and creative work (see Banks, 2017; Belfiore, 2021). When it comes to AI, the concern of many digital staff is, as Steven Franklin, Social Media Manager at The Royal Institution, explains: “if you’ve got technology that is inherently designed to increase individuals’ productivity then the demands of the job will probably go up with it. So, you’re going to be expecting more people to do more” [Franklin S. “AI and Cultural Work”. (Personal interview, 2024 Jan 12) Brighton; 2024 (unpublished)].

A recent study of 142 workers explored the “dark side effects of digital working”, specifically levels of stress, overload, anxiety, and Fear of Missing Out on Information (Marsh et al., 2024). It found that “employees who are overloaded by information or worried about missing out on it in the digital workplace face risks to their wellbeing at work” (2024: 12) and that greater consideration of the digital workplace “is essential to not only employee productivity but also wellbeing in modern organisations” (ibid). Cultural workplaces similarly need to reflect on how AI will impact wellbeing, and on how the information overload of working with new technology carries emotional consequences for workers.

Driver 5: challenging systems of expertise

It is easy to suggest that AI poses a threat to those who have spent years honing their skills as cultural workers, drawing on data suggesting that those with higher qualifications are more exposed to AI (Department for Education, 2023: 18). We propose instead that cultural roles will be positively augmented by AI, creating opportunities for better discoverability and searchability in daily operations as well as for creative reuse and reimagination in the interpretation of collections and audience engagement. AI will need to be accommodated and understood within all job roles in the cultural workplace; it will become the responsibility – and the possibility – of all (and yes, this will require consistent resourcing of time, people, energy, and money, as does all digital maturity work).

More concerning for us is a wider existential issue: if AI, as many in the creative industries have argued, is devaluing creativity as a specifically human endeavour, will this de-prioritise the importance of the institutions that care for it? We have heard outcries in areas such as screenwriting, music production, video games, and the visual arts regarding the use of generative AI and other digital technologies to replicate human creative outputs (SAG-AFTRA, 2024; Weiss, 2023; The Art Newspaper, 2023). As the quality of generative AI develops, will there not be a moment when even experts are unable to distinguish between AI- and human-generated cultural artefacts? What might this mean for history itself, for the historical record? Nick Bostrom’s Deep Utopia (Bostrom, 2024) offers one answer: he invites us to rethink the complexity of utopia and confront the unintended consequences of our pursuit of better representation and justice, challenging the binary thinking of utopian vs. dystopian outcomes. He argues that true progress instead requires moving beyond fixed notions of perfection to consider multifaceted possibilities and the risks of future realities.

Pull of the future

Signals represent small, local innovations or disruptions that have the potential to grow in scale and distribution (Howard, 2021). Below are five signals, posed as “What if?” questions, that critically anticipate how cultural workplaces might tackle the present and reimagine a future with AI. “What if” questions encourage creative thinking, challenge assumptions, and open up alternative possibilities. For futurist Peter Schwartz, they are part of scenario planning, a strategic tool for preparing for uncertainty by imagining diverse future scenarios (Schwartz, 1991).

1. What if we developed greater “coopetition” – cooperating with our competitors to achieve a common goal – as we integrate AI into our cultural workplaces? Our sector must radically cooperate with other stakeholders on the ethics of AI – from corporations to governments, from other cultural organisations to grassroots bodies.

2. What if we embedded agility in the development of AI literacy in our cultural workplaces? The integration of AI in cultural workplaces has so far required on-the-job learning. New technological developments mean greater demand for new digital literacies; it is imperative to embrace lifelong learning and the upskilling of workforces.

3. What if we recognised the power dynamics inherent in the relationship between AI technology, climate change, and social justice, and acknowledged more openly how they shape decision-making in our cultural workplaces? These are linked trends, and thinking about them together can reshape methods and modes of decision-making and equity.

4. What if we ensured that the emotional and psychological impacts of digital work were central to employment practices in our cultural workplaces? By prioritising the wellbeing of cultural workers and audiences, and by recognising the human effort behind AI, cultural organisations will ensure that AI enhances both behind-the-scenes work and public interaction.

5. What if we examined the processes enabling AI, considering their direct impact on cultural workers, using our role as cultural custodians to lead the global conversation on responsible AI in ways that reflect and protect lived experience in our cultural workplaces? This would encourage us to evaluate the processes behind AI development and consider how cultural professionals can leverage their influence to guide responsible practices that safeguard the future of the cultural workforce.

What is needed now is more empirical research into how employee wellbeing and job quality are being considered in relation to AI innovation in museums, heritage, and cultural organisations; more collaboration with other parts of the cultural and creative industries as we operationalise AI; and more acknowledgement – especially at the level of leadership and boards – of how AI operates and circulates within a nexus of power, systemic inequality, and cultural legacy. Ultimately, we need to learn to better interpret and navigate this complex landscape, ensuring that the integration of AI supports and enhances the lived experiences of cultural workers rather than undermining them.

Author contributions

All authors participated in the design, interpretation, and analysis of the data and review of the manuscript. SF conducted interviews; LV supplied industry and adjacent-sector resources; SF and LV wrote the manuscript. All authors contributed to the article and approved the submitted version.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. The research and development of this paper originated during the AHRC-funded “One by One” initiative (2017–2022) (grant numbers AH/P014038/1; AH/T013192/1; AH/V009710/1), led by the School of Museum Studies at the University of Leicester (UK). Open access funded by the UKRI open access block grant (University of Leicester).

Acknowledgments

Additional thanks to John Stack, Dr Steven Franklin, and Jocelyn Burnham for their thoughtful contributions to this research.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Aiello, C. (2024). This futurist is predicting a supercycle of tech as AI biotech wearables converge. Inc. Available at: https://www.inc.com/chloe-aiello/this-futurist-is-predicting-a-supercycle-of-tech-as-ai-biotech-wearables-converge.html (Accessed September 5, 2024).

Banks, M. (2017). Creative justice. Maryland: Rowman and Littlefield.

Belfiore, E. (2021). Who cares? At what price? The hidden costs of socially engaged arts labour and the moral failure of cultural policy. Eur. J. Cult. Stud. 25 (1), 61–78. doi:10.1177/1367549420982863

Bostrom, N. (2024). Deep utopia: life and meaning in a solved world. Virginia: Ideapress Publishing.

BRAID (2024). BRAID is a UK-wide programme dedicated to integrating arts and humanities research more fully into the responsible AI ecosystem, as well as bridging the divides between academic, industry, policy and regulatory work on responsible AI. Available at: https://braiduk.org/ (Accessed September 5, 2024).

Buolamwini, J. (2023). Unmasking AI. New York: Penguin Random House.

Catanzariti, B., Chandiramowuli, S., Mohamed, S., and Natarajan, S. (2021). “The global labours of AI and data intensive systems,” in CSCW '21: computer supported cooperative work and social computing. Available at: https://web.archive.org/web/20211026014830id_/https://dl.acm.org/doi/pdf/10.1145/3462204.3481725 (Accessed September 5, 2024).

DataUSA (2022). Archivists, curators, and museum technicians. Available at: https://datausa.io/profile/soc/archivists-curators-museum-technicians?gender-and-age-employment=employedGA&gender-options=sex2 (Accessed September 5, 2024).

Department for Education (2023). Impact of AI on UK jobs and training. Available at: https://assets.publishing.service.gov.uk/media/656856b8cc1ec500138eef49/Gov.UK_Impact_of_AI_on_UK_Jobs_and_Training.pdf (Accessed September 5, 2024).

Department of Culture, Media, and Sport (DCMS) (2017). The Mendoza Review: an independent review of museums in England. Available at: https://www.gov.uk/government/publications/the-mendoza-review-an-independent-review-of-museums-in-england (Accessed September 5, 2024).

Department of Culture, Media, and Sport (DCMS) (2018). Culture is digital. Available at: https://www.gov.uk/government/publications/culture-is-digital (Accessed September 5, 2024).

European Commission (2024). Foresight: futures of Big tech in Europe: scenarios and policy implications. Available at: https://www.mediahuis.ie/app/uploads/2024/04/Futures-of-Big-Tech-in-Europe.pdf (Accessed September 5, 2024).

Frost, S. (2021). “People. Change. Museums.” podcast. Available at: https://open.spotify.com/show/66Pk1HVrHjMRiEftrTv1wM (Accessed February 7, 2024).

Frost, S. (2022). The hidden constellation podcast. Available at: https://open.spotify.com/show/7CYYN45Gb0tue2T6taayPl (Accessed February 7, 2024).

Gender Equity in Museums Movement (GEMM) (2019). Museums as a pink-collar profession: the consequences and how to address them. Available at: https://docs.wixstatic.com/ugd/434074_6549b5054a474ac99b64d5780bc012b7.pdf (Accessed September 5, 2024).

Gov.uk (2024). Generative AI framework for HMG. Available at: https://www.gov.uk/government/publications/generative-ai-framework-for-hmg/generative-ai-framework-for-hmg-html (Accessed September 5, 2024).

Gray, M., and Suri, S. (2019). Ghost work: how to stop Silicon Valley from building a new global underclass. Boston: Houghton Mifflin Harcourt.

House of Commons Culture, Media, and Sport Committee (2024). Content remuneration: fifth report of session 2023-24. Available at: https://committees.parliament.uk/publications/44143/documents/219382/default/ (Accessed September 5, 2024).

Howard, S. (2021). Drivers and signals: how are they different? Institute for the Future. Available at: https://legacy.iftf.org/future-now/article-detail/drivers-and-signals-how-are-they-different/ (Accessed September 5, 2024).

Inayatullah, S. (2023). The futures triangle: origins and iterations. World Futur. Rev. 15 (2-4), 112–121. doi:10.1177/19467567231203162

International Labour Organization (2024). Mind the AI divide: shaping a global perspective on the future of work. Available at: https://www.ilo.org/publications/major-publications/mind-ai-divide-shaping-global-perspective-future-work (Accessed September 5, 2024).

Leavy, S., O’Sullivan, B., and Siapera, E. (2020). Data, power and bias in artificial intelligence. arXiv Prepr. arXiv:2008.07341. Available at: https://arxiv.org/pdf/2008.07341 (Accessed September 6, 2024).

Marsh, E., Perez Vallejos, E., and Spence, A. (2024). Overloaded by information or worried about missing out on it: a quantitative study of stress, burnout, and mental health implications in the digital workplace. Sage Open 14 (3). doi:10.1177/21582440241268830

Muldoon, J., Graham, M., and Cant, C. (2024). Feeding the machine: the hidden human labour powering AI. Edinburgh: Canongate Books.

Murphy, O. (2024). “Power, data, and control: AI in the museum,” in AI in museums. Editors S. Thiel, and J. Bernhardt (Bielefeld: Transcript), 73–82.

Museums Association (2024). Decolonising museums. Available at: https://www.museumsassociation.org/campaigns/decolonising-museums/ (Accessed September 5, 2024).

Network of European Museum Organisations (NEMO) (2024). NEMO presents 3 recommendations addressing the development of AI technology in museums. Available at: https://www.ne-mo.org/news-events/article/nemo-presents-3-recommendations-addressing-the-development-of-ai-technology-in-museums (Accessed September 5, 2024).

Oates, E. (2023). Heritage pulse: spotlight on artificial intelligence. Arts Mark. Assoc. Available at: https://www.culturehive.co.uk/resources/heritage-pulse-spotlight-on-artificial-intelligence/ (Accessed February 7, 2024).

Policy and Evidence Centre for the Cultural and Creative Industries (PEC) (2024). State of the nation: arts, culture, and heritage: audiences and workforce. Available at: https://pec.ac.uk/state_of_the_nation/arts-cultural-heritage-audiences-and-workforce/ (Accessed September 5, 2024).

Pratty, J. (2019). A tangled web. Museums J. Available at: https://www.museumsassociation.org/museums-journal/features/2019/04/01042019-a-tangled-web/ (Accessed September 5, 2024).

SAG-AFTRA (2024). Video-game strike. Available at: https://www.sagaftra.org/contracts-industry-resources/contracts/interactive-media-video-game-strike (Accessed September 5, 2024).

Schwartz, P. (1991). The art of the long view: planning for the future in an uncertain world. New York: Doubleday.

The Art Newspaper (2023). DeviantArt and Midjourney deny wrongdoing in copyright infringement lawsuit. Available at: https://www.theartnewspaper.com/2024/05/10/deviantart-midjourney-stable-diffusion-artificial-intelligence-image-generators (Accessed September 5, 2024).

Vargas, L. (2020). “A ‘CALM’ approach to leadership in the digital age,” in Digital pathways. Available at: https://digipathways.co.uk/resources/a-calm-approach-to-leadership-in-the-digital-age/ (Accessed February 7, 2024).

Weiss, L. (2023). SAG-AFTRA’s new contract falls short on protections from Artificial Intelligence. Prism. Available at: https://prismreports.org/2023/12/05/sag-aftra-contract-falls-short-ai-protections/ (Accessed September 5, 2024).

Keywords: museums, heritage, digital labour, AI, future thinking, cultural work

Citation: Frost S and Vargas L (2024) Cultural work, wellbeing, and AI. Eur. J. Cult. Manag. Polic. 14:12825. doi: 10.3389/ejcmp.2024.12825

Received: 08 February 2024; Accepted: 21 October 2024;
Published: 11 November 2024.

Copyright © 2024 Frost and Vargas. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Sophie Frost, sophie.frost@uca.ac.uk; Lauren Vargas, vargaslmv@gmail.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.