Generative, analytical and assistive informatics – sub-areas of so-called artificial “intelligence” – threaten numerous jobs and fields of labour in the book sector and will replace some professions with machines in the medium term; be it in writing, editing, proofreading, production, cover design, illustration, translation, the selection and editing of original and translated works, audiobook production, or the promotion and distribution of books.
Already, numerous criminal and damaging “AI business models” have developed in the book sector – with fake authors, fake books and also fake readers. It has been proven that the foundations of large language models such as GPT, Meta’s LLaMA, StableLM and BERT were generated from copyrighted book works sourced from shadow libraries such as Library Genesis (LibGen), Z-Library (Bok), Sci-Hub and Bibliotik – piracy websites. Without legal regulation, generative technologies accelerate and enable the expansion of exploitation, the legitimisation of copyright infringement, climate harm, discrimination, information and communication distortion, identity theft, reputational damage, blacklisting, royalty fraud and collective licensing remuneration fraud.
At the same time, a close look and assessment are needed to categorise and regulate the individual aspects of advanced informatics, because not all smart software is “AI” and not every application is equally risky. We as a society, but especially as originators, as writers, need:
- A clear legal position on the text and data mining exception(s) in Articles 3 and 4 of the 2019/790 CDSM Directive, clarifying whether machine learning is covered by TDM at all – which is highly doubtful at present – and which would entail voluntary licensing of a new form of use, instead of opting out;
- A secured way to decide whether our writers’ and translators’ works may be used for scraping and as “training material” for machine learning and for competing products, rather than having to accept previous illegal use or uncompensated terms;
- A “clean slate”: the immediate shutdown of those generative AI applications whose development is based on violations of copyright and personality rights.
The success of Generative AI in the book sector is based on theft
The spreading, mostly uncritical enthusiasm for generative advanced informatics (“AI”) – such as large-scale language, image or audio models that produce culture-like output from text prompts – lowers the appreciation of human creative labour. This enthusiasm is blind to the origins of these systems, as well as to their medium- and long-term consequences. This analysis draws attention to the seven sins of generative AI, which must be considered a threat.
A distinction must be made for assistive or analytical informatics, as these are mainly supporting software and are not meant to replace human creativity and labour.
The invisible side effects of generative advanced informatics (AI) in the book sector and its impact on writers
(1) Generative “AI” is based on exploitation of human labour.
If all participants were adequately remunerated, none of the big twelve generative text or image (re-)generators (such as StableLM, BERT, GPT, Midjourney) could realistically sustain their business. For years, and long BEFORE the TDM exception of the 2019/790 CDSM Directive, works by citizens, authors and artists have been stolen and used to train the software. Only this makes their existence possible today. In order to categorise language, videos and images, “labellers” are also exploited – often for hourly wages of less than two euros. Eight percent of all Americans do ghost work, the work that makes so-called AI systems appear smart: data labelling, flagging, content filtering. This repetitive work is mostly outsourced from Silicon Valley, for cost reasons, to crowd and gig workers in Venezuela, Mexico, Bulgaria, India, Kenya, Syria or the Philippines, where there is neither a minimum wage nor trade unions.
(2) AI harms human authors, their income and reputation through fake authors, fake books, fake readers – and identity theft:
(a) Uncontrolled AI output is being pushed into the bestseller lists with click farms: For months, the global self-publishing provider Amazon has been flooded with bogus books by fake authors whose text and visual content have been mashed together by (re-)generative text and image software. AI bots from click farms “read” these nonsense works and pushed them into the bestseller lists. This led to a rapid decline in revenue for human authors paid through shared-revenue models such as Kindle KDP (a pot of revenue divided by pages read across all authors, similar to Spotify). At peak times, 80 out of 100 Kindle KDP bestsellers were AI editions. Only in September 2023 did the retail giant add a new section on AI to its KDP content guidelines, which since then includes definitions of “AI-generated” to label AI output.
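To make the dilution mechanism concrete, here is a minimal sketch of how a shared-revenue pot of the Kindle KDP kind is diluted when click-farm bots inflate the total pages read. All numbers are invented for illustration only – they are not Amazon’s actual figures.

```python
def payout_per_author(pot_eur: float, pages_read_by_author: int, total_pages_read: int) -> float:
    """Shared-revenue pool: each author's share is proportional to pages read."""
    return pot_eur * pages_read_by_author / total_pages_read

# Illustrative numbers only (assumptions, not real KDP figures):
pot = 100_000.0            # monthly revenue pool in EUR
human_pages = 50_000       # pages of one human author's books read
organic_total = 1_000_000  # total pages read without bot inflation
bot_pages = 4_000_000      # extra "reads" injected by click-farm bots on AI books

before = payout_per_author(pot, human_pages, organic_total)
after = payout_per_author(pot, human_pages, organic_total + bot_pages)
print(before, after)  # 5000.0 1000.0 – the same human readership, one fifth of the income
```

The pot stays the same size while the denominator grows, so every bot-read page is taken directly out of human authors’ payouts.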
(b) Identity theft and name deception: The world’s most important review platform Goodreads, like Amazon, is flooded with AI books published under the illegitimately used names of real human authors (or slightly altered spellings of real known names). These books are listed as new releases in the authors’ profiles and entice readers to buy them. However, the income from these AI books flows to unknown sources. Human authors who are cheated out of their earnings must spend money to defend themselves with lawyers. So far, neither Goodreads nor Amazon have stopped this identity theft, which damages the reputation of human authors when a (low-quality) AI product is associated with their name.
(c) Unauthorised machine translations open up foreign-language markets and channel sales to unknown sources: We observed cases of books being illegally translated – for example, from the original English into Spanish and Portuguese – by robot translation without a licence and published under a different name, usually via Amazon self-publishing and often even equipped with an AI cover. The author names, in turn, deliberately resemble well-known names. The revenues flow to unknown sources.
(d) Publishing services only against payment by the author: Publishers are increasingly using AI-generated covers. We have seen cases where authors requested human graphic designers and were asked to pay for them. This practice is considered indecent. However, the authors, as the weaker contractual partners, hardly dare to refuse, out of well-founded fear of being considered “difficult” or of being rejected by publishers for future cooperation. They are pressured into accepting a technology that harms their own profession at its core.
(e) Illegal remuneration claims to collective management organisations and media clients: It cannot be ruled out that automatically generated and machine-translated press articles and books, or even regeneratively produced AI images, already “enjoy” private copying remuneration from collective management organisations (CMOs), as there is no legal labelling obligation yet; or that generated texts, machine translations and generated images flow into the media on a royalty basis.
(f) Machine voices replace human narrators – and lead to the loss of licence fees for writers: DeepZen has been working on voice clones since 2013 and offers its repertoire to publishers to save on fees; numerous publishers, including renowned ones, have already used synthetic audiobook narration. The distortion continues in the question of revenue distribution: when there is no audiobook narrator, to whom does their calculated share go? The number of job calls for professional narrators is declining rapidly. Professionally producing a voice clone (of a human person) costs less than 2,000 euros in a professional studio. It is even cheaper with programmes like Murf, Lovo, Respeecher, Voice.ai or Overdub. After a few seconds of recording, a voice clone is generated with which you can make “anyone” say “anything”, no matter how immoral or fraudulent.
In 2022, Google introduced its synthetic-voice services for publishers in six countries; in early January 2023, Apple introduced a series of AI voices with names such as Madison and Jackson. Authors and publishers who sell their books through Apple Books are expected to make use of these (and to sign a confidentiality clause to this effect). The uses of clone or synthetic “voices” range from dubbing to audiobooks to scam calls for fraudsters or deep-fake interviews. Actors and audiobook narrators are increasingly confronted with having to agree to voice cloning in their contracts if they want to continue to be employed. This leads to the gradual elimination of voice actors and narrators. In addition, there are cases in which voice clones were created without the consent of the human speakers, or in which humans are replaced by purely synthetic device voices (example: the “Toniebox”, where synthetic robot voices read automatically generated goodnight texts to children). AI dubbing also becomes relevant when e-books are read aloud by devices and voice clones although the author has neither granted a licence nor receives remuneration.
All these new “AI business models” lead to a paradox: those who made the existence of generative programmes possible in the first place are not remunerated, while those who use the software profit monetarily. This transfer of value, a form of exploitation, cannot be what the legislator intended.
(3) “AI” is a high-risk communicator and unreliable source of information.
“Hallucinating” is the vocabulary currently used to describe generative text systems that completely invent or incorrectly piece together data, events, court decisions or biographies, contradict themselves when questioned, or need to be constantly corrected by users through reinforcement learning from human feedback (RLHF). In the process, users conveniently teach the system what its developers did not. At the same time, generative text software makes it easier for actors such as propaganda farms to spread disinformation and hate speech rapidly and cheaply, or creates fake authors who flood social networking platforms or market players such as Amazon with GPT output, artificial communication and automated chatbot “reviews” of books. Missing or inadequate safety checks to save costs, and the lack of test and correction series prior to publication, mean that generative text applications must be assessed as fundamentally untruthful. At the same time, the “faith” in digital content and the lack of sensitivity towards it among many of the over 100 million users is so high that they do not recognise these “hallucinations” – or do not even suspect that the output is false. Fundamentally, AI needs original, “fresh”, human texts in order not to go crazy, as Stanford University found out: if synthetic content (AI output) is used as training data, the system collapses.
(4) “AI” reproduces bias and reinforces discrimination.
Stable Diffusion, an image-generating (“text-to-image”) system, knows no Black members of any national European parliament and no female doctors, and depicts cleaners mostly as Asian women. Text generators reproduce sexist and gender stereotypes, as they draw on texts from a predominantly Western, male, white canon – or “learnt” misogyny from the comment sections of social media. A bias can refer not only to gender or skin colour, but to places, ages, social classes, professions, medical conditions, cultures, the classification of facts, concepts such as “success” or “happiness”, or political opinion.
Effect: Users of generative AI adopt the bias and reinforce it. As a result, people are pigeonholed even more quickly and, above all, unquestioningly; this can have an impact on social and professional access, education, housing, health care and credibility.
(5) “AI” companies fear the Brussels Effect of the upcoming AI Act – with good reason.
Stanford University surveyed twelve AI companies on 22 requirements of the proposed AI Act. Results: few companies disclose information on the copyright status of training data; hardly any provided information on energy consumption and emissions reductions; NONE were able to report on safety audits and mitigation strategies for structural or systemic risks. Microsoft and OpenAI have been lobbying for months against the planned AI regulation; they see their business models and previous billions in profits at risk – profits based on exploitation, theft of intellectual property, lack of transparency and risk ignorance.
It is therefore all the more important to take a clear stance and to insist on transparency, authorisation and remuneration in all regulations in an unambiguously understandable way.
(6) It is unclear whether the statutory permission for TDM in Articles 3 and 4 of the 2019/790 CDSM Directive allows the use of copyrighted works as “training data” for machine learning. If it does, the opt-out provided is not a workable option.
- Unclear legal situation: It is at least uncertain whether the legal permissions for TDM (based in national legislation on Art 4 of the 2019/790 CDSM Directive) allow the use of copyrighted works as “training data” (cf. below, at Dictionary: TDM) for machine learning. Moreover, machine learning for generative informatics is arguably an entirely new form of use and must in any case be handled within a voluntary, remunerated licensing system.
- In any case, however, the opt-out provided for TDM in the 2019/790 CDSM Directive is in no way practicable. This is not only due to the lack of contractual routines across Europe by which authors could declare the opt-out when transferring rights of use – there is no common practice for writers or translators to declare whether they agree to TDM. None of the contracts concluded up to 2022 includes a query on TDM, and it can be assumed that this use does NOT fall under electronic use or database storage.
- No sector standard for metadata: There is no standardisation for making an opt-out machine-readable within works that are “available online”; nor, to be quite sure, have any of the AI development companies so far asked for contractual permission. It is also unclear what “available online” means and where to draw the line.
- No technical application for the opt-out in sight: Even though the W3C group is working on developing solutions (see its July 2023 report), currently only for URLs and EPUB3 metadata, authors remain unprotected for an indefinite time. Meanwhile, W3C developers are themselves questioning whether the TDM exception covers machine learning at all. In addition, a new ISO standard, the ISCC, is being tested for approval (previous standards in the book sector are ISBN, ISSN, ISNI, ISTC and DOI); opt-out declarations attached to this new identifier could be machine-read by special software – if AI developers were interested in rights clearance …
- It is completely unclear how an opt-out can be declared for analogue works.
- It is also an open question whether an opt-out applies to works that have already been used for TDM in the past. Equally open is how to deal with out-of-print works when they are digitised by libraries or archives: who implements the opt-out there?
- Unlawful scraping: In addition, there is ample evidence that even machine-readable opt-out statements (e.g. in a site’s robots.txt) on HTML websites are simply ignored by scrapers and unsupervised machine-learning crawlers.
- No chance to exercise one’s right: In practice, it is impossible for an author to exercise the opt-out option.
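As background to the W3C work mentioned above: the draft TDM Reservation Protocol (TDMRep) proposes, among other signals, an HTML meta tag named tdm-reservation. The following is a minimal sketch, assuming that draft protocol, of how a compliant crawler could check the signal using only Python’s standard library:

```python
from html.parser import HTMLParser

class _TDMMetaParser(HTMLParser):
    """Collects the value of a <meta name="tdm-reservation" ...> tag, if present."""
    def __init__(self):
        super().__init__()
        self.reservation = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "tdm-reservation":
            self.reservation = a.get("content")

def tdm_rights_reserved(html_text: str) -> bool:
    """True if the page declares a TDM opt-out ("1" = rights reserved in the draft)."""
    p = _TDMMetaParser()
    p.feed(html_text)
    return p.reservation == "1"

page = '<html><head><meta name="tdm-reservation" content="1"></head><body>A novel.</body></html>'
print(tdm_rights_reserved(page))  # True - a compliant crawler should skip this page
```

Whether any commercial crawler actually honours such a signal is, as the evidence above shows, another matter entirely.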
AI companies have also been pulling copyrighted book works from BitTorrent piracy sites since 2013. The corpora Books3 and The Pile have been proven to contain 190,000 titles; volunteer research teams are investigating 1.2 million more copyrighted titles. At the end of September 2023, this led to a lawsuit by 17 US authors, including George R. R. Martin and Jodi Picoult, together with the US Authors Guild.
(7) Generative informatics (“AI”) is a climate threat
According to a study by the University of California, Riverside, training GPT-3 in US computing centres consumed 3.5 million litres of water, and Microsoft’s data centres in Asia consumed 5 million litres. ChatGPT(3) consumes 500 ml of water per 20 questions. A carbon emissions analysis by the University of California, Berkeley concludes that training GPT-3 consumed 1,287 MWh and resulted in emissions of over 550 tons of carbon dioxide equivalent.
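Taken at face value, the per-question figure normalises as follows – a back-of-the-envelope sketch using only the numbers cited in this section:

```python
# Water figures cited above: training in US + Asian data centres, in litres.
training_water_l = 3_500_000 + 5_000_000

# Usage figure cited above: about 500 ml of water per 20 questions.
ml_per_question = 500 / 20  # 25.0 ml per single question

# For scale: the training water alone equals this many answered questions.
questions_equivalent = training_water_l * 1_000 / ml_per_question
print(ml_per_question, int(questions_equivalent))  # 25.0 340000000
```

In other words, the cited training consumption alone corresponds to the water footprint of roughly 340 million answered questions.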
The energy consumption of so-called AI will be higher than that of all human workers by 2025; by 2030, machine learning will account for 3.5% of global electricity consumption.
Both the indifference to intellectual property theft and the attitude that digitally available works should be free or absurdly cheap are symptoms of the negation of the human authorship behind every work. What is disturbing is that big companies are now making billions in profits from theft – and this seems to outrage only a few political decision-makers.
If the future of technology is to be sustainable, innovative and equitable, then systems that cause harm must be shut down and regulations based on authorisation, remuneration and transparency must be put in place for the development of future artificial communication. If this does not happen, the future of AI is built on coercion and plunder.
Learn more about the EWC Campaign agAInstWritoids
About the authors.
This analysis paper was researched and written originally in German for the Netzwerk Autorenrechte (Authors’ Rights Network) by Nina George (EWC Commissioner) and André Hansen (VdÜ, German Literary Translators Association) | Editors: Dorrit Bartel, Tamara Leonard | Provider research: Monika Pfundmeier (EWC Board Member, Syndikat Board Member). The paper was examined by legal advisors.
The EWC has been granted permission to translate, adapt and share it. A full republication on your own website requires prior exchange with the authors via the EWC Secretariat.
The Authors’ Rights Network (www.netzwerk-autorenrechte.de) represents 16 associations and 16,500 writers and translators from Germany, Austria and Switzerland. Contact: email@example.com. Lobby-Register Nr. R005345