After the Hollywood writers’ strike, AI-generated television is inevitable


    Chris Pizzello/The Associated Press
    For Hollywood writers and their cohorts in the universe of chronically underpaid and under-appreciated ink-stained wretches, the tentative contract deal reached last week with major film studios and television producers is as good as it gets.

    The deal promises writers total pay hikes of 12.5 per cent over three years, staffing minimums for television productions and bonuses tied to viewership on streaming platforms. Most important, it seeks to limit the use of artificial intelligence in the creation of content and compels studios to disclose when AI is being used to generate material.

    Writers should enjoy the moment. They deserve it. Just look at how hapless actors and television comedians have been during the 148-day strike without the lifeblood of scripts and jokes written by these unseen wizards. And how grumpy audiences have been to have their favorite series or late-night shows interrupted.

    The sad reality is despite writers’ value and the promises of safeguards against AI, the new contract is simply postponing the inevitable, slowing but not stopping the decline of human-generated content. The knocking at the door they hear is I, Robot. And it’s getting louder.

    The reason is as old as the struggle between creativity and commerce itself. As long as writers are treated as commodities – almost as a utility keeping the industry’s lights on – rather than talent of comparable value to box-office stars, their employers will always look for cheaper ways to do the work they do. AI is the obvious, if odious, solution, and it is coming fast.

    The film and television industry is hardly alone in feeling the tectonic shifts in the way work gets done. IBM research suggests 1.4 billion people worldwide will be affected by AI and automation, and 40 per cent of workers will need new job skills over the next three years. Goldman Sachs predicts 300 million jobs will be lost or degraded by AI.

    The scope of that shift makes the AI provisions in the writers’ agreement more of a stop-gap than pennies from heaven. A key concern raised by writers in contract negotiations was how ChatGPT and other platforms were threatening screenwriting jobs by marginalizing the human element.

    The eventual concession seems pretty soft. Requiring studios to disclose when they are giving writers concepts that are AI-generated or contain AI-created materials leaves the door ajar for the AI wolf. Do they really think a major player like the Walt Disney Co., whose fumbles in its streaming and film properties seriously dented its market value, will submit to anything that limits its commercial success? It’s nothing personal, of course; it’s just business.

    What might the “inevitable” look like for Hollywood writers? It’s not inconceivable that formulaic TV shows like sitcoms and police procedurals could be written entirely by AI. The same goes for love-channel movies that follow well-worn storylines – boy meets girl, boy turns out to be a sociopath, boy bumps up against girl and her BFFs and ends up in jail.

    Over time, human-generated content may be limited to special projects and the idea of studios and actors choosing their favorite writers will slowly fade. A niche genre of human-only content could emerge, and be branded as a specialty product – and of course command a premium price.

    Perhaps platforms featuring human-only content will pop up, sort of quaint versions of cable channels now aimed at aging Baby Boomers featuring reruns of The Andy Griffith Show, The Beverly Hillbillies, Ironside and Mannix.

    For film and television writers, their job security lies as much in the hands of those on the other side of the screens as it does in any contract. The big question is whether viewers will care if their favorite shows and films are computer-generated or written by humans.

    There is probably a generational skew here. Old-schoolers appreciate a line of emotive dialogue from a human mind – and heart. Somehow Humphrey Bogart’s iconic, “Here’s looking at you, kid,” from Casablanca would not be the same from a machine. Could AI even create a line such as Lauren Bacall’s smoldering “Just put your lips together and blow” in To Have and Have Not? Doubtful.

    My college-aged kids would disagree and cluck at my dusty sentimentality. Their generation has become insatiable Olympic-class consumers of content across a wide waterfront of platforms, and they hardly care how it is generated. If they can get the next tranche of Stranger Things on Netflix sooner, they don’t give a damn whether it was produced by humans or machines.

    Call me old-fashioned, but I am in the camp that believes frankly, my dears, you should give a damn.

  • #2
    I don't watch much tv at all, less than an hour a day... But I have no issue with AI as long as it is identified as such just before a show or movie begins. I think ultimately that the studios and networks are going to find that people are less likely to watch AI stuff.



    • #3
      AI isn’t magic. All it does is try every possibility within a data set, at random, until it finds a solution that results in the highest scoring outcome.

      First, human programmers need to give the AI a data set that is relevant to the problem to be solved. Then the programmers need to supply parameters to score outcomes which give the desired result.

      The process is much more difficult than it looks on the surface. As the saying goes, “Garbage in. Garbage out.” (GIGO)

      I think AI is cool but it’s not the magical thing that people seem to think it is. AI programming has gotten a lot better in recent times but I think it still has a long way to go. There are plenty of times when AI produces wonky outputs that don’t make sense, but people seem to forget that.

      I have heard stories of AI suggesting things like garbage flavored ice cream.

      Just because something scores well in an AI doesn’t mean it actually makes sense in the real world.

      AI might, very well, suggest interesting stories for movies and TV shows but it still takes a human to make it into a viable script, suitable for production.



      • #4
        "AI" (it is neither artificial nor intelligent) is perfect for the Hollywood Studio mentality of "Make something just like this, but different." As long as you don't want anything too original, it can probably do the equivalent of an elevator pitch. Still going to have to hire the writers to fill in the blanks.



        • #5
          AI isn’t magic. All it does is try every possibility within a data set, at random, until it finds a solution that results in the highest scoring outcome.
          AI uses pretty much the same logic tree as a chess program. The "magic" is in the dataset that it searches when building that tree.

          It will continue to improve. I remember playing Sargon and Chessmaster way back when; it could take the programs several minutes to decide on the next move, and I could even beat them (sometimes). Today I can run Stockfish on my cellphone, it makes moves almost instantly, and nobody (including the world's top grandmasters) can possibly beat it when it's cranked up to the top level.

          Today: garbage-flavoured ice cream. Tomorrow: I'm sorry Dave, here's your garbage-flavoured ice cream.



          • #6
            But I have no issue with AI as long as it is identified as such just before a show or movie begins. I think ultimately that the studios and networks are going to find that people are less likely to watch AI stuff.
            As much of an incomprehensible narrative mess as many of the recent "superhero" movies have been, they could just as well have been written by AI. It might have made more sense if they were. But I'm with the author on one point: Entertainment consumers of tomorrow aren't going to give two hoots whether humans or a bunch of computers turn out future blockbusters. And the e-writers are just going to get better. The funny thing is, by "teaching" AI how to do its thing by pointing out its mistakes, we're helping it get better. We are hastening our own journey down the drain.

            "AI" (it is neither artificial nor intelligent) is perfect for the Hollywood Studio mentality
            When it comes to entertainment, they need a catchy new name for the process. I propose using "CGE" -- computer generated entertainment. This could be expanded to CGW (writing) and CGA (acting).

            My problem with "AI" is that in a lot of fonts, including the Film-Tech default, the I looks like a lowercase L, so I keep reading the word AI as if it was some guy's name.



            • #7
              Paging Vger?



              • #8
                The trick with AI is that it remembers (keeps track of) which combinations work (get high result scores) and which don’t. Then, it prioritizes attempts that scored well and uses those scores to decide how to start its next attempts.

                If we are playing a simple golf game whose inputs are azimuth, loft angle, distance to the hole and shot strength, an AI might try random azimuths for its first few shots, then record which ones landed closest to the hole. Once it finds a range of azimuths that work, it will try different combinations of the other inputs until it narrows down which shots go in the hole and which don’t.

                After many tries, the program will “teach itself” how to hit a golf ball into the hole, virtually every time. I’m not trying to tell people something they probably already know. I’m just pointing out how mundane the process actually is, compared to what it looks like to a casual observer.

                Many people might think that AI is some new, magical computer technology when it really comes down to the fact that computers are getting fast and powerful enough to make many attempts at solving a problem much faster than a human.
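                The golf-game loop described above can be sketched in a few lines of Python. Everything here is hypothetical – a made-up scoring function, purely to illustrate the try-at-random, keep-what-scores-well process:

```python
import math
import random

# Toy "golf" environment: the hole sits at a fixed point on a flat green.
HOLE = (100.0, 0.0)  # hole position, in metres

def landing_spot(azimuth_deg, strength):
    """Where the ball lands for a given direction (degrees) and strength (metres)."""
    rad = math.radians(azimuth_deg)
    return (strength * math.cos(rad), strength * math.sin(rad))

def score(shot):
    """Higher is better: the negative distance from the hole."""
    x, y = landing_spot(*shot)
    return -math.hypot(x - HOLE[0], y - HOLE[1])

def random_search(tries=2000, seed=0):
    """Try shots at random, remember whichever scored best so far,
    and sample ever closer to that best shot (crude hill climbing)."""
    rng = random.Random(seed)
    best = (rng.uniform(-90, 90), rng.uniform(0, 200))
    for i in range(tries):
        spread = max(1.0, 50.0 * (1 - i / tries))  # narrow the search over time
        candidate = (best[0] + rng.uniform(-spread, spread),
                     best[1] + rng.uniform(-spread, spread))
        if score(candidate) > score(best):
            best = candidate
    return best
```

                Run long enough, the loop “teaches itself” to aim roughly straight at the hole with roughly the right strength – no understanding involved, just bookkeeping of scores.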

                AI is cool and it’s getting better and better all the time but the day when computers can write entire movie scripts without human intervention is still a long way off.



                • #9
                  AI does well at finding methods to "win" in games such as Chess and Go, where there are rules to the game and proven strategies to win. Beyond that, we are back to talking about how long it might take for monkeys on typewriters to produce the works of Shakespeare (Infinite monkey theorem).

                  When it comes to AI writing movie/TV scripts, AI is going to have problems writing anything other than a script for another Marvel movie. The key phrase here is "stochastic parrot."

                  In machine learning, a stochastic parrot is a large language model that is good at generating convincing language, but does not actually understand the meaning of the language it is processing. The term was coined by Emily M. Bender in the 2021 artificial intelligence research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell.

                  Definition and implications

                  Stochastic means "(1) random and (2) involving chance or probability". A "stochastic parrot", according to Bender, is an entity "for haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning." More formally, the term refers to "large language models that are impressive in their ability to generate realistic-sounding language but ultimately do not truly understand the meaning of the language they are processing."

                  According to Lindholm et al., the analogy highlights two vital limitations:

                  (i) The predictions made by a learning machine are essentially repeating back the contents of the data, with some added noise (or stochasticity) caused by the limitations of the model.

                  (ii) The machine learning algorithm does not understand the problem it has learnt. It can't know when it is repeating something incorrect, out of context, or socially inappropriate.

                  They go on to note that because of these limitations, a learning machine might produce results which are "dangerously wrong".

                  Origin

                  The term was first used in the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (using the pseudonym "Shmargaret Shmitchell"). The paper covered the risks of very large language models, regarding their environmental and financial costs, inscrutability leading to unknown dangerous biases, the inability of the models to understand the concepts underlying what they learn, and the potential for using them to deceive people. The paper and subsequent events resulted in Gebru and Mitchell losing their jobs at Google, and a subsequent protest by Google employees.

                  Subsequent usage

                  In July 2021, the Alan Turing Institute hosted a keynote and panel discussion on the paper. As of May 2023, the paper has been cited in 1,529 publications. The term has been used in publications in the fields of law, grammar, narrative, and humanities. The authors continue to maintain their concerns about the dangers of chatbots based on large language models, such as GPT-4.
                  Source: https://en.wikipedia.org/wiki/Stochastic_parrot
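                  The "haphazard stitching according to probabilistic information" the excerpt describes can be illustrated with a toy bigram model – a hypothetical miniature corpus, nothing like a real LLM in scale, but the same idea of sampling from co-occurrence statistics with no representation of meaning anywhere:

```python
import random
from collections import defaultdict

# A miniature "stochastic parrot": learn which word follows which
# in a tiny made-up corpus, then sample from those counts.
corpus = ("the hero saves the city . the hero loses the city . "
          "the villain saves the day .").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def parrot(start="the", length=8, seed=1):
    """Emit words by sampling, at each step, something that has
    followed the previous word somewhere in the corpus."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)
```

                  The output looks plausible ("the hero saves the day", perhaps, or "the villain saves the city") precisely because it is stitched from real sequences – but the model has no idea what a hero or a villain is.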



                  • #10
                    AI-written and AI-generated television programming is probably inevitable, and to some extent can be blamed on the audience. While overall ratings are important to networks and content providers, they also keep their eyes on what is called "viewer engagement". Ask yourself: when was the last time you sat through a one-hour network TV drama, NCIS or Law & Order or something, and didn't look at your phone? Producers are very well aware of the fact that most viewers are not paying close attention, and feel, perhaps rightfully, that anything that barely hangs together story-wise will suffice for the average viewer, whose TV is basically background noise while they are busy watching TikToks.

                    I'm hoping that this effect doesn't slop over into the movies, but I'm not optimistic. I saw The Creator at a large suburban multiplex yesterday and the young gal sitting farther down the same row as me didn't put her phone down once, stopping to look at the screen from time to time when the music or sound effects cued her that something big was happening. What does she care if a background extra is real or AI generated? I'd be surprised if she even notices.



                    • #11
                      The Wikipedia page on "infinite monkeys" is interesting. I especially like this:

                      By 1939, the idiom was "that a half-dozen monkeys provided with typewriters would, in a few eternities, produce all the books in the British Museum." (To which Borges adds, "Strictly speaking, one immortal monkey would suffice.")

                      One way to speed up the work of the monkeys is to do a check after each character or word is typed. Does it make sense? If not, throw out the new character or word and try another instead of leaving the nonsensical word in the string. I see this infinite monkey with testing as an analogy to evolution. Mutations happen all the time. Some have no effect, some have bad effects, and others may have beneficial effects. Those with very bad effects are thrown out (the offspring does not survive).
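                      The "monkeys with testing" idea above is essentially Richard Dawkins's classic weasel program, which can be sketched in Python (parameter choices here are arbitrary, just for illustration):

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"   # Dawkins's target phrase
ALPHABET = string.ascii_uppercase + " "

def fitness(attempt):
    """Count of positions that already match the target."""
    return sum(a == t for a, t in zip(attempt, TARGET))

def weasel(mutation_rate=0.05, offspring=100, seed=42):
    """Each generation, make mutated copies of the current string and
    keep the best one -- monkeys with a test after every attempt."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        def mutate(s):
            return "".join(rng.choice(ALPHABET) if rng.random() < mutation_rate
                           else c for c in s)
        parent = max((mutate(parent) for _ in range(offspring)), key=fitness)
        generations += 1
    return generations
```

                      With selection after each generation, the target phrase turns up in tens of generations rather than "a few eternities" – which is exactly the point of the evolution analogy.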

                      It will be interesting to see what AI does in the future. So far, it seems to be good at writing text, but often the text is wrong. Google is including AI-generated content in its search results. So far, it seems useless.



                      • #12
                        Honestly, I think the governments of the world need to decide just how to handle AI in all of its various forms. I think people inherently own their likeness, persona, and sound. So, generating fake people based upon real people should be a no-go. But the written word is at stake too. I could see a whole new set of copyright lawsuits where AI is used to check whether other AI has, in effect, stolen from the works of someone else. It could be a full-time job just keeping up with those stealing from one's past work, with AI scooping up previously published material.

                        Where I see AI helping will be in things like computer coding, where the end result is primarily what is judged...not every line of code within it.



                        • #13
                          I think the intellectual rights to one's likeness and one's voice are pretty well represented already in current law and precedents. You can't simply copy someone's face and/or voice and slap it on something else without the consent of those you're copying. AI will not really change anything about this concept, other than that it will allow almost anybody to do it, which will, in turn, impact the price of those kinds of rights significantly, unless you're some kind of super star.

                          The biggest problem with AI will be copyright law in general. The ongoing consensus right now seems to be that AI-produced material simply isn't covered by any current copyright laws, as copyright only protects works generated by humans and not by machines. So, essentially, if I task ChatGPT to write me a book, while I can still sell it, others could freely use the text within, as long as it has been generated by AI, because I can't claim any copyright on it and neither can anyone else. This will make it especially complicated for hybrid productions: an AI-generated story which I did edit, for example. In this case, the edited parts will be subject to my copyright, but the rest is not. As you can see, this easily creates a complicated mess of IP rights. Nothing new, but yet another dimension to the puzzle.

                          Another ongoing problem is that of copyright and the material an AI is trained upon. There is a big gap in legislation here, as we simply don't know how to handle this. If you read a lot of books and write a new book based on the concepts of what you've read throughout all those books, as long as it is sufficiently different and only "inspired by" the concepts of those books, you're perfectly fine and your work will be considered as new and original under current copyright law. How does this work in the world of an AI? If I let it read all the works of modern horror writers like Stephen King and Jonathan Maberry and ask it to come up with a new horror story, it will certainly use elements of what was found in those horror stories it was trained on. But we modeled those AI systems to operate like human brains. So, how is this different from how a human writer would operate?

                          Over here in the Netherlands, there is a temporary regulation in place that all publicly accessible works, whether or not they're protected by copyright, can be used to train AI models, unless the owner of the material has officially withdrawn consent for such use. How you do the latter isn't yet formally codified anywhere, but at least it's a start. It's obviously pretty lenient towards AI development, but without such guardrails, nobody knows which side of the law they may find themselves on.

                          Originally posted by Harold Hallikainen
                          The Wikipedia page on "infinite monkeys" is interesting. I especially like this:

                          By 1939, the idiom was "that a half-dozen monkeys provided with typewriters would, in a few eternities, produce all the books in the British Museum." (To which Borges adds, "Strictly speaking, one immortal monkey would suffice.")
                          What about this one:
                          • You can represent any text, image or pretty much everything as a string of numbers, that's how we save files on a computer anyway.
                          • The general rule is that there is no such thing as copyright on mathematics and as such, there is no copyright on the number Pi for example.
                          • It is widely believed, though not actually proven, that every possible finite sequence of digits appears somewhere in the number Pi.
                          • So, practically anything that can be represented as a string of numbers cannot be copyrighted, as it is already part of the number Pi.
                          Obviously, that's not how copyright law is usually interpreted.
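                          The first bullet, at least, is easy to demonstrate: any text round-trips through a single (very large) integer, which is essentially what saving a file means. A quick Python sketch:

```python
# Any text corresponds to exactly one integer, and vice versa.
def text_to_number(s: str) -> int:
    return int.from_bytes(s.encode("utf-8"), "big")

def number_to_text(n: int) -> str:
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")
```

                          A 27-character line of dialogue becomes a roughly 65-digit number, and decoding that number recovers the text exactly.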

                          Originally posted by Harold Hallikainen
                          It will be interesting to see what AI does in the future. So far, it seems to be good at writing text, but often the text is wrong. Google is including AI-generated content in its search results. So far, it seems useless.
                          Keep in mind that the Google AI results are pretty low-effort, because they currently can't spend the resources necessary. If there is one reason why AI may not take off as fast as it could, it's because it's currently VERY resource intensive, like a factor of millions more intensive than your average search query. While Google has the added advantage of volume and caching, generating really relevant AI output for all the queries they usually perform is probably not achievable right now.

                          That being said, I've switched from being an AI sceptic to more of an AI alarmist and advocate. While I studied rudimentary machine learning concepts back in university, I always remained skeptical about whether machines could ever come close to the intelligence of humans. I'm now convinced that, although there is a lot of hype, the end of humans being the dominant "species" regarding intelligence is in sight. I'm not sure what that means for us as a human race. I'm not afraid that the Terminators will get me any time soon, but I was always convinced that if the human race would eventually meet its demise, it would probably be something stupid and would look nothing like the movies.



                          • #14
                            Originally posted by Harold Hallikainen

                            It will be interesting to see what AI does in the future. So far, it seems to be good at writing text, but often the text is wrong. Google is including AI-generated content in its search results. So far, it seems useless.
                            Skynet, eventually. I don't think I'll be around to see it.



                            • #15
                              Originally posted by Ed Gordon
                              The key phrase here is "Stochastic parrot"
                              Yes! That's the term I was fishing for but couldn't remember! Thank you!

                              You can, for example, program a computer to speak perfect Chinese but you can't say that the computer actually understands Chinese.
                              You can only say that the computer stochastically generates words and phrases in Chinese.

                              In the same way, we can say that a computer can stochastically generate (parrot) a movie script but it's impossible to say that it actually understands how to make a movie.

                              Correct?


