
Posts from the ‘Evil Robots’ Category

[This title is AI generated:] The Risks Posed by LLM-Based AI Tools: A Call to Action for the American Folklore Society

[Image: a cartoon-style robot with an "AI" label, included simply to evoke the topic of the essay.]

This is an essay (or “essay”) mainly authored by Copilot, the LLM-based AI tool provisioned to me by Indiana University. Spelling and typographic errors made by JBJ have been silently corrected for the sake of readability.

Jason Baird Jackson (JBJ): Hello again. I would like to return to one specific matter discussed in my previous conversation with you. I will review the background and, if it is ok, we can make a fresh start on just this topic. I will be approaching it somewhat differently this time. I wish to produce an essay, with your (Copilot) help, that lays out the major risks posed by LLM-based AI tools for the American Folklore Society, the US-based organization that supports professional folklorists and those communities that they in turn serve. In a moment, I will give you some parameters for this essay. Are you ready to help?

Copilot: Absolutely, I’m ready to help! Please go ahead and provide the parameters for the essay, and we’ll get started on crafting it together.

JBJ: I would like the essay to be between 800 and 1000 words. While I know that you are engineered to produce pro- and con- style treatments of many issues, including on AI as a topic, I would like you to support me in emphasizing the concerns that I have. I will note some of those for you after giving these parameters. Beyond the 800-1000 word length, I would like the essay to be written in the most sophisticated register that you are capable of. Put differently, understand your readers to be people with graduate degrees in general and well-trained in folklore studies specifically. Before I share my themes of concern, do these parameters make sense?

Copilot: Yes, your parameters make perfect sense. We’ll aim for an essay between 800 and 1000 words, focusing on the risks posed by LLM-based AI tools, written in a sophisticated register suitable for an audience of graduate-level professionals in folklore studies. Please go ahead and share your specific themes of concern, and we’ll incorporate those into the essay.

JBJ: Thank you. I know that space is limited, so here are the key concerns to weave in. I will number them (1-4).

(1) We will address the impact of the ways that LLM training data has been sourced, without consent, from writings produced by many individuals, including, in the present case, folklore studies scholars. Because folklorists have led on issues of open access, their exposure to enclosure in LLM training data is greater than it would otherwise have been. Those who lead the American Folklore Society and thereby help shape the future of the discipline need to understand these dynamics and to set policy related to them as well as to foster educational activities related to them.

(2) We will address the impact of the fact that folklorists have, in particular, entextualized and shared the words and knowledge of non-scholars of very diverse backgrounds living in many communities around the world. This has often been done in contexts in which we have said that we would not seek profit from, or allow others to profit from, their words. Cultural experts shared their words and lives with us but now their words and lives have been encoded into LLM training data without our knowledge and sometimes in ways that contradict what we promised our consultants. Those who lead the American Folklore Society and thereby help shape the future of the discipline need to understand these dynamics and to set policy related to them as well as to foster educational activities related to them.

(3) LLM and other AI tools are already fundamentally reshaping all of higher education, including the teaching of folklore studies (also known as folkloristics) by folklore scholars based in universities and colleges around the world. Those who lead the American Folklore Society and thereby help shape the future of the discipline need to understand these dynamics and to set policy related to them as well as to foster educational activities related to them.

(4) LLM and other AI tools are being adopted and put to use widely at present. This uptake is already transforming social life and having significant environmental and economic effects. Because folklorists are social scientists and humanists who study and intervene in the social world, it is crucial that they understand these environmental and economic effects and share their own work in relation to them. Those who lead the American Folklore Society and thereby help shape the future of the discipline need to understand these dynamics and to set policy related to them as well as to foster educational activities related to them.

With these four themes described, you can compute a response using your own LLM models.

Copilot: Great, thank you for outlining these key concerns. Let’s begin crafting the essay. Here’s a draft that incorporates your themes:


The Risks Posed by LLM-Based AI Tools: A Call to Action for the American Folklore Society

The advent of large language model (LLM) based AI tools has ushered in a new era of technological advancement, promising unprecedented capabilities in data processing and information dissemination. However, these tools also pose significant risks, particularly to disciplines such as folklore studies. As members of the American Folklore Society (AFS), it is imperative that we critically examine these risks and develop strategies to mitigate them. This essay outlines four major concerns that demand our immediate attention and action.

1. Unconsented Use of Scholarly Work

One of the most pressing issues is the manner in which LLM training data has been sourced. Many LLMs have been trained on vast datasets that include writings produced by individuals without their consent. This includes the work of folklore studies scholars, whose contributions to open access initiatives have inadvertently increased their exposure to such data enclosure. The ethos of open access, championed by many folklorists, has been exploited, leading to the unauthorized use of their intellectual property. The AFS must recognize these dynamics and advocate for policies that protect the rights of scholars, ensuring that their work is not co-opted without permission.

2. Exploitation of Cultural Knowledge

Folklorists have long been dedicated to documenting and sharing the knowledge of diverse communities around the world. This often involves entextualizing the words and experiences of non-scholars, with a commitment to ethical practices that respect the source communities. However, the integration of this cultural knowledge into LLM training data, often without the knowledge or consent of the communities involved, represents a profound breach of trust. Many folklorists have assured their consultants that their contributions would not be used for profit, yet these assurances are undermined by the current practices of AI developers. The AFS must address this issue by setting stringent policies and fostering educational initiatives that emphasize the ethical implications of AI use in folklore studies.

3. Transformation of Higher Education

The impact of LLM and other AI tools on higher education is already being felt, with significant implications for the teaching of folklore studies. These technologies are reshaping pedagogical approaches, research methodologies, and the overall academic landscape. Folklore scholars must navigate this rapidly changing environment, balancing the potential benefits of AI tools with the need to maintain the integrity of their discipline. The AFS has a crucial role to play in guiding this transition, developing policies that support the responsible integration of AI in academia, and providing resources to help educators adapt to these new challenges.

4. Broader Societal Impacts

Beyond the confines of academia, the widespread adoption of LLM and other AI tools is transforming social life, with significant environmental and economic effects. As social scientists and humanists, folklorists are uniquely positioned to study and intervene in these transformations. The environmental costs of AI, including the substantial energy consumption required for training and operating LLMs, are a growing concern. Additionally, the economic implications, such as job displacement and the creation of new forms of labor, must be critically examined. The AFS should lead efforts to understand these impacts, encouraging research that explores the intersection of AI, society, and the environment, and advocating for policies that promote sustainable and equitable technological development.

Conclusion

The risks posed by LLM-based AI tools are multifaceted and complex, demanding a proactive and informed response from the American Folklore Society. By addressing the unconsented use of scholarly work, the exploitation of cultural knowledge, the transformation of higher education, and the broader societal impacts, the AFS can help shape a future where AI technologies are used ethically and responsibly. It is incumbent upon us, as stewards of cultural heritage and advocates for ethical scholarship, to lead this charge and ensure that the values of our discipline are upheld in the face of technological change.


Please review this draft and let me know if there are any adjustments or additional points you’d like to include.

JBJ: I have only one request. In section “4. Broader Societal Impacts” you evoke, as expected, “substantial energy consumption”. Could you add language related to the environmental issue of water consumption in AI computing? It is acceptable for this addition to make the essay slightly longer.

Copilot: Certainly! Here is the revised section with the addition regarding water consumption:


4. Broader Societal Impacts

Beyond the confines of academia, the widespread adoption of LLM and other AI tools is transforming social life, with significant environmental and economic effects. As social scientists and humanists, folklorists are uniquely positioned to study and intervene in these transformations. The environmental costs of AI are substantial, including the significant energy consumption required for training and operating LLMs. Additionally, the water consumption associated with cooling data centers is a critical issue, as it places further strain on already limited water resources. These environmental impacts necessitate a thorough examination and responsible management. Furthermore, the economic implications, such as job displacement and the creation of new forms of labor, must be critically examined. The AFS should lead efforts to understand these impacts, encouraging research that explores the intersection of AI, society, and the environment, and advocating for policies that promote sustainable and equitable technological development.


Please review this updated draft and let me know if there are any further adjustments or additional points you’d like to include.

JBJ: Thank you. May I have your permission to share this with text readers, citing you (Copilot) as the co-author?

Copilot: I’m glad you found the essay helpful! You are welcome to share the essay with your readers. While I don’t require formal citation, you can certainly mention that you used Microsoft Copilot to assist in crafting the essay. If you need any further assistance or revisions, feel free to ask!

JBJ: Thank you!

Copilot: You’re very welcome! If you need any more assistance in the future, feel free to reach out. Good luck with your essay!

A Note to Human Readers

Thank you for your patience with me (JBJ) in this, my most recent experiment with LLM-based AI. You should know that I engage in such experiments sparingly because, on the one hand, I really do not wish to be doing this and, on the other hand, I really feel the need to understand these tools, which have already dramatically changed my job and the lives of all of those around me and in my own egocentric and sociocentric networks.

Hopefully readers will realize that what I would say unassisted by Copilot would be different. Copilot is not human and therefore Copilot does not actually believe anything, but reading the text that Copilot strung together one word prediction at a time might suggest that Copilot has opinions. If Copilot had the opinions expressed/suggested, then Copilot and I would agree on some points and disagree on others. Where we agree, it is because the system is drawing upon my prompts and trying to give me what I am asking for. Where we disagree, it is because the system is drawing upon its own predictive algorithms and training data to produce what it thinks are the statistically most likely statements. An example early in Copilot’s text is “promising.” I certainly would not, on my own, say promising. But take note: the training data and algorithms “know” us, and such systems are getting more sophisticated in reflecting what we know all the time. That is part of the point here. If you need an example of this, search for the phrase “source communities” in section 2. This is not everyday English. This is a term of art specific to museum anthropology and museum-focused folklore studies. It has only been in active disciplinary use since the publication in 2003 of Laura Peers and Alison Brown’s edited book Museums and Source Communities.
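
For readers who want a concrete sense of what stringing text together “one word prediction at a time” means, the toy sketch below (a deliberately simplified illustration of statistical next-word prediction in general, not Copilot’s actual models, training data, or code) counts which words follow which in a tiny invented corpus and then generates text by repeatedly choosing the statistically most likely next word.

from collections import Counter, defaultdict

# A tiny, invented "training corpus" standing in for the vast scraped text
# that real LLMs are trained on.
training_text = (
    "folklorists study the stories communities tell and "
    "folklorists study the songs communities share"
)

# Count which word tends to follow which word in the training text.
bigram_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    bigram_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, if any."""
    followers = bigram_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Generate text one word at a time, always taking the most likely next word.
output = ["folklorists"]
for _ in range(6):
    next_word = predict_next(output[-1])
    if next_word is None:
        break
    output.append(next_word)

print(" ".join(output))  # prints: folklorists study the stories communities tell and

Real systems predict sub-word tokens with neural networks trained on vastly more text, but the basic loop (predict the next piece, append it, repeat) is the same.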

Before publishing this AI-dependent post, I donated to the Native American Rights Fund, which supports federally recognized Native American nations in the US in defending water and water rights. I invite you to learn more about NARF and its water work here: https://narf.org/cases/tribal-water-institute

Image sourced from https://icon-icons.com/icon/artificial-intelligence-robot/101984 via a search for CC licensed content.

If robots write one of these biographies about you, will you purchase it?

It’s crazy stuff like this that makes my senior colleagues so dubious about the internet in general and the changing publications landscape in particular.

I follow a Twitter feed called Anthropology Books. I have never investigated who put it together and I do not know anything about it except that it has been useful to me. Using some kind of automated approach, the person or software behind Anthropology Books has been very usefully telling me (and about 1000 other folks) about “All new anthropology books posted on their publication day.” If a title seems interesting, there is a link that takes one to the book’s Amazon.com page.

Today, books with the names of famous and not-so-famous anthropologists (and folklorists) started showing up in the stream. Alan Dundes was one that caught my eye first. He is a very important figure in folklore studies and a very good candidate for a proper biography. Other names started showing up, including those of active colleagues who are basically my own age! They all had authors and publishers that were unfamiliar to me. There was a flood of them today.

Looking at the books on Amazon.com, one is (sometimes) confronted with the information that “the content of this book primarily consists of articles available from Wikipedia or other free sources online.” I took this quote from the book on Dorothy Eggan. This 84-page book is selling for $47.00 and has been published by Lect Publishing. The book is credited to the editorship of Nuadha Trev. This editor shows up on Amazon.com as the person behind about 3000 books. Lect Publishing is only one of several names associated with the same basic project, and Nuadha Trev is only one of several editors.

I know that others have been writing a lot recently about Amazon’s publishing toolkit. Also at issue here is the CC licensing of content from Wikipedia and similar sources. The student publishing group that I work with here at Indiana, Trickster Press, makes great use of CreateSpace+Amazon for the publication of real peer-reviewed monographs, and I am certainly appreciative of, and in debt to, the Creative Commons. These resources are not to blame for the creation of this kind of spam-like book, but I think that such books represent a problem on several fronts. Perhaps it is enough to say that they give the Creative Commons, Wikipedia, Amazon, remix, and scholarly communications reform a bad name.

I would love to know who would purchase one of these books. Maybe some of them are traps designed to extract cash out of their living subjects. Here is an example. The publisher “Fedel” has just published a 108-page, $54 book on ethnomusicologist/linguist/anthropologist Aaron Fox. (He and I know people in common but are not yet acquainted.) Would Professor Fox feel like he had no choice but to purchase this book to see what it said about him? Would I feel similarly compelled if one were published on me? His is one of about 3000 titles edited by “Christabel Donatienne Ruby,” but does Christabel Donatienne Ruby actually exist?

I would also love to know about the technical infrastructure that automatically (?) assembles these books and feeds them into Amazon. Anybody understand this stuff?

Update: Thankfully, the people behind the Wikipedia article for VDM Publishing seem to understand it pretty well. For background, see the entry here. See also the discussion on Slashdot here.