What does this mean in practice? As Maggie Harrison Dupré points out,

     when you feed synthetic content back to a generative AI model, strange things start to happen. Think of it like data inbreeding, leading to increasingly mangled, bland, and all-around bad outputs.25

Devin Coldewey presents an interesting practical example in relation to pictures of different species of dogs (which are reproduced in his article):

     [M]odels gravitate toward the most common output. It won’t give you a controversial snickerdoodle recipe but the most popular, ordinary one. And if you ask an image generator to make a picture of a dog, it won’t give you a rare breed it only saw two pictures of in its training data; you’ll probably get a golden retriever or a Lab.

     Now, combine these two things with the fact that the web is being overrun by AI-generated content and that new AI models are likely to be ingesting and training on that content. That means they’re going to see a lot of goldens!26
Hence the output of AI operations based on more and more content already generated by AI is likely to become more and more standardised – or, as Shumailov et al. put it in the passage quoted above, to exhibit a “very small variance”. Coldewey juxtaposes pictures created by an AI image generator fed only with “real” data with those of another generator significantly influenced by AI-generated pictures, and the latter does indeed display a very obvious lack of variety. The degenerative effect comes to the fore even more strongly if the process is repeated: an AI now trains on data already produced by AI, which was itself generated on the basis of a previous AI picture, and so on. The results become less and less distinctive, until they don’t look like a dog – any dog – at all anymore. As Tor Constantino elaborates,
     [a]fter the first two prompts, the answers steadily miss the mark, followed by a significant quality downgrade by the fifth attempt and a complete devolution to nonsensical pablum by the ninth consecutive query.27
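The statistical dynamic behind this shrinking “variance” can be made concrete with a deliberately simplified simulation. The following Python sketch is only a toy illustration under assumed parameters (a one-dimensional normal distribution, twenty samples per generation, fifty generations), not a reconstruction of any experiment cited here: each generation fits a trivial “model” consisting of nothing but the mean and spread of its training data, and the next generation is then trained solely on samples drawn from that fitted model. The estimated spread typically drifts towards zero, which is to say that the outputs become more and more uniform.

     import numpy as np

     rng = np.random.default_rng(0)

     # Generation 0: "real" data drawn once from a broad, fixed distribution.
     # (Sample size and number of generations are arbitrary illustrative choices.)
     data = rng.normal(loc=0.0, scale=1.0, size=20)

     for generation in range(1, 51):
         # Fit a deliberately trivial "model" to the current data:
         # just its mean and its spread (standard deviation).
         mu, sigma = data.mean(), data.std()
         # The next generation is trained only on samples produced by that
         # fitted model, i.e. on synthetic data; the original "real"
         # distribution is never seen again.
         data = rng.normal(loc=mu, scale=sigma, size=20)
         if generation % 10 == 0:
             print(f"generation {generation:2d}: estimated spread = {sigma:.3f}")

With each pass the model can only reproduce what the previous model happened to emphasise, so rare values disappear first and the spread narrows; in miniature, this is the same dynamic as the golden retrievers crowding out the rare breeds in Coldewey’s example.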
25   Maggie Harrison Dupré, “When AI is Trained on AI-generated Data, Strange Things Start to Happen,” Futurism, February 8, 2023, https://futurism.com/ai-trained-ai-generated-data-interview.
26   Coldewey, “‘Model collapse’.”
27   Tor Constantino, “Is AI Quietly Sabotaging Itself – And the Internet?” Forbes, August 26, 2024, https://www.forbes.com/sites/torconstantino/2024/08/26/is-ai-quietly-killing-itself-and-the-internet/.

