Is This Really a Thing?

Part 2: Will AI Depopulate Hollywood?

August 28, 2023
Could Artificial Intelligence make Hollywood a ghost town?

Reality TV, strikes and cyborgs, OH MY! Hollywood may be heading toward AI-generated content, and we all may already be living in a cyborg state … so was this episode AI-generated? This is part two of a two-part episode. Be sure to go back and listen to Part 1: Will the Hollywood Strike be an Extended Thing?


 


Episode Transcription

Actor Bryan Cranston speaking at a SAG-AFTRA strike rally in Times Square in New York City on July 25, 2023: Uh, we’ve got a message for Mr. Iger. I know, sir, that you look at things through a different lens. We don’t expect you to understand who we are, but we ask you to hear us, and beyond that, to listen to us when we tell you we will not be having our jobs taken away and given to robots.


Paul Jarley: The real issue, Bryan, is whether the AI listens and understands us.


This show is all about separating hype from fundamental change. I’m Paul Jarley, Dean of the College of Business here at UCF. I’ve got lots of questions. To get answers, I’m talking to people with interesting insights into the future of business. Have you ever wondered, is this really a thing? Onto our show.


In our last episode, we explored the current writers’ and actors’ strikes and how the parties might come to some agreement to get everyone back to work and spare us a lot of new reality TV. A key part of that analysis involved the limitations of AI today: it can’t produce a final product without humans. That, of course, is today. AI technology is changing rapidly, and its impact on the industry is likely to grow over time. In today’s episode, we look at the long-term implications of AI in Hollywood and ask, could AI depopulate the industry in 10 years? In other words, could it eliminate or substantially reduce the number of people working in Hollywood, especially the writers and actors? To shed light on these topics, I returned to the discussion I had with my group of UCF experts. To just remind everyone, Cassandra Willard is an instructor and program director in our Center for Entrepreneurship and a practicing attorney with extensive experience in entertainment law.


Ray Eddy is a lecturer in our Integrated Business department with an interest in understanding the customer experience. Ray is not just an academic; he has worked as a stuntman, started his own production company and written, directed and starred in several performances. David Luna is a professor in our Marketing department. He is currently working on several projects studying human-machine interactions in the context of chatbots, intelligent assistants and AI. And last but not least is Robin Cowie. Rob is a graduate of our Motion Picture Technology program at UCF. He’s a little hard to summarize, having worked in a variety of positions in the industry, from EA Sports, to Nickelodeon, to the Golf Channel, and the Dr. Phillips Center for Performing Arts. Today, he is the President and CTO at Promising People, a company that provides training and placement services for people who have been incarcerated. But you probably know Rob best from his work as co-producer on “The Blair Witch Project.” Listen in.


Paul Jarley: David, if AI is going to depopulate Hollywood, it’s going to have to produce movies that are more profitable than the ones being created today. What do you see as the main issues here?


David Luna: There are different kinds of costs involved in making a movie, right? One of them would be the creative part, and from what has transpired from the conversations with the writers’ union, it seems like it’s a fairly small part of the process. And the other part is the production cost, right? Which seems to be the larger cost in making movies. So if we think of commercial success as making a profit, you want to minimize one of those two costs. So on the production side, you could think about, well, having Harrison Ford play Indiana Jones until the 30th century, for example, through AI. That’s one part of it. Being a professor of marketing, I am also quite sensitive to the issues that stem from how consumers will perceive these products. I have done some work on trust and whether people trust AI interactions.

We could think about the fact that people will trust these products less and thereby have more negative attitudes toward them, like them less, go to the movies less. Another thing that we can think about in terms of consumers is the issue of the uncanny valley: when AI-generated images or characters are meant to be human beings, the closer they get to looking like a real human, the more antsy people get about it. So that’s another issue that could bring about negative attitudes, and that would affect the bottom line, obviously, because people won’t go to the movies.


Paul Jarley: There’s a lot to unpack in David’s comments there. First, let’s tackle authenticity. So my understanding is voice is the easiest thing for AI to replicate right now. Is that true?


Robin Cowie: When we talk about AI, there’s, there’s so many things that we’re talking about. So to narrow it, I think over the last six to nine months, the conversation’s really been about large language models. Large language models specifically from OpenAI, but also Google’s Bard or, you know, some of the older ones from DeepMind, or even the new one that Meta just released called Llama 2. These are all large language models, and they’re designed literally to be about language. So I would say the easiest thing for a large language model to process is text, not necessarily audio. But the current premise behind large language models is essentially that it’s about math and it’s about probability, and that pattern recognition is behind everything. And music especially, you know, we are all very familiar with those patterns, and therefore music comes up a lot, because voice synthesis or instrument synthesis or anything like that comes up a lot. It’s maybe one of the easiest patterns to recreate, but I think the real innovation is in text right now.
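
To make Rob’s “math and probability” point concrete, here is a minimal, hypothetical sketch in Python of next-word prediction. Real large language models use transformer networks trained on enormous datasets; this toy bigram counter only illustrates the core idea that a model learns patterns from text and then predicts what probably comes next.

```python
# A minimal sketch (not from the episode): a toy "language model" built from
# bigram counts. The point is only the premise Rob describes -- learn patterns
# from text, then predict the probability of what comes next.
from collections import Counter, defaultdict

corpus = (
    "the witch walks into the woods and the witch films the woods "
    "and the crew edits the footage"
).split()

# Count which word follows which word (the learned "pattern").
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    """Return each candidate next word with its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    return {candidate: count / total for candidate, count in counts.items()}

print(next_word_probabilities("the"))
# -> roughly {'witch': 0.33, 'woods': 0.33, 'crew': 0.17, 'footage': 0.17}
# The "model" simply predicts whichever pattern it has seen most often.
```

A bigger corpus just sharpens the probabilities; the transformer models Rob mentions do the same kind of prediction at vastly greater scale and with far richer context.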


Paul Jarley: So where do you think the most powerful application of AI will be in the next few years? In movie making.


Robin Cowie: I worked at Electronic Arts. We used AI for a lot of the background elements, a lot of the gaming elements back then. And this is, you know, in the ancient days, four years ago and over the last four years we’ve seen exponential development with using AI just in the gaming space. But I think when I started being obsessed with it four years ago, I thought, “Wow, this is going to be as revolutionary as the iPhone was.” And now there are some people that are saying, this is as revolutionary as fire. I’m probably, currently, at the place of “This is as revolutionary as the steam engine.” But there is no doubt in my mind that every aspect of every kind of human job, every form of creativity, every form of task-oriented work, every single aspect of human interaction is going to be changed by AI.


Paul Jarley: Rob is my resident futurist. He’s always the first in line with new innovations. What do you think, Ray?


Ray Eddy: The truth is, there are two iterations of this that I can think of in the past. One is back in the early nineties when CGI became a much bigger thing. “Terminator 2” kind of changed the game in 1991, and that led to “Jurassic Park” in ’93 and then Lucas making “Star Wars: Episode I,” and it just got more and more and more, and actors started thinking, “Well, they’ll never need us anymore because they can recreate us.” And in particular, stunt people also felt the same way, because who needs to jump off a building or get lit on fire when you could pretend to do that with CG and it’ll look just as good? The backlash to that has been that there’s a real push towards what we call practical effects, which is actual real effects.


A real fire, a real explosion, a real high fall. Because as of right now, you can still tell a difference. Now, the technology will keep advancing. There will be a day when you can’t tell the difference, just like with deepfake videos, where you can’t tell whether the person is actually saying those lines or not. As of right now, there’s still, I guess, inertia in the industry to sort of make that decision: do you go with the CG version, which is safer, or the real practical version, which might be more expensive? Then again, CG is pretty expensive too. But the other iteration I referred to briefly is back in the early 1900s, when animation first appeared, and then, you know, in the 1920s, “Steamboat Willie” came along and then “Snow White” got an honorary Oscar, and so all of a sudden actors back then were afraid: “Well, they’ll never need actors again, because cartoon characters would never complain about the wages. They’ll never complain about working long hours. They would never complain about the danger.” There was a fear that animation would replace actors. So this is kind of happening, as I see it, a third time now; AI is the next thing that will take over. The first two times there was a lot of, you know, concern, but it didn’t lead to a massive loss in income or in job opportunities. It sort of shifted the game a little bit, but it hasn’t eliminated anything. AI, it’s hard to say. I still feel the same as what Rob was just saying. I think there’s a lot to the fact that it will change the game as time moves forward.


Paul Jarley: My own take on this is that the most vulnerable groups are people like extras; you would think AI would be pretty good at filling those kinds of roles pretty quickly.


Ray Eddy: I would agree with that, yeah, completely. When you need massive crowds, thousands of people, whether they’re in a …


Paul Jarley: “Ben-Hur,” think “Ben-Hur.”


Ray Eddy: “Ben-Hur,” sure. Or any sports stadium, or any zombie movie where you want to have a thousand zombies chasing somebody, you know. You create a few dozen and then just repeat them. That happens already, more with CG than with AI. But AI will allow for natural progressions of activities and reactions as things move forward. So it does change the authenticity of it in a way, but it could also lead to a sort of loss of control over what’s exactly happening. If you do it with CG, you just make it happen. If you do it with AI, there’s some randomness that maybe is good, maybe is bad, but the control factor is left open. But in any case, as I said, the technology changes so rapidly, it’s hard to say how authentic this will be. But as of right now, I think there’s still a desire to see real people do real things as much as we can. And certainly that’s what the industry wants. The actors and the stunt performers and the people who make a living as extras don’t want to lose their livelihood either. So there are a lot of people behind this trying to make sure they can keep it under control as much as possible, if they can.


Robin Cowie: I agree a lot with what Ray is saying. Christopher Nolan is a filmmaker who’s famous for doing things real, but even Christopher Nolan is going to put all the extra safety harnesses and safety equipment on camera, which you used to not be able to do. So Christopher Nolan can do magnificent things because you have the ability to use computer graphics and computer technology to remove those safety harnesses so you never see them. And so the stunts are actually raised to a huge level. Even with extras, look at what you’re able to see. Obviously with “Lord of the Rings,” again, going back, you know, almost 25 years now to the first “Lord of the Rings” movies, you’re doing massive crowd work using CG. But a lot of that CG is powered by actual real actors. And so again, with computer games, from what I’ve done, we synthesize a lot of things driven by a small collection of humans that actually power massive teams of football players. So I would do a casting where I would have 40 different body types that would actually perform the work of 40 different types of humans, and that would then power thousands of characters in the game. So it’s not quite about eliminating extras or eliminating writers; it’s really about human-machine synthesis.
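
To illustrate the kind of crowd instancing Rob describes, here is a minimal, hypothetical sketch in which a handful of captured performance “templates” drives thousands of background characters. All names are made up for illustration; real pipelines in game engines and film crowd systems are far more sophisticated.

```python
# Illustrative sketch only: a handful of captured performances "power" a crowd
# of thousands, in the spirit of 40 body types driving a full stadium.
# All names here are hypothetical, not any real studio pipeline or engine API.
import random

# A few captured "templates" stand in for motion-captured performers.
CAPTURED_PERFORMANCES = ["tall_cheer", "short_wave", "average_jump", "heavy_clap"]

def build_crowd(size, seed=42):
    """Instance a crowd by reusing the small template set with slight variations."""
    rng = random.Random(seed)
    crowd = []
    for i in range(size):
        crowd.append({
            "id": i,
            # Which captured performer drives this background character.
            "template": rng.choice(CAPTURED_PERFORMANCES),
            # Small per-character variation so the repetition is less obvious.
            "scale": round(rng.uniform(0.95, 1.05), 3),
            "start_offset_seconds": round(rng.uniform(0.0, 2.0), 2),
        })
    return crowd

stadium = build_crowd(50_000)
print(len(stadium), stadium[0])
```

The design point is the one Rob makes: a small amount of human performance, duplicated and varied by software, fills the frame, rather than the software eliminating the humans outright.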


Paul Jarley: So is the use of AI though, in those situations that you’re describing, Robin, is it cheaper than just having a human do it?


Robin Cowie: It’s even more than cheaper. It’s really doing things that were never possible. Some of it is cost, right? If you were to rotoscope, you know, a hundred thousand people, it’s not possible, right? You could go back to the ’60s and individually rotoscope every image. You could do it, but it would take you years and years and years. But now, you know, you can do so many things so much faster with the compute that is possible with AI. It’s to the point where they use synthetic humans so much that most movies you watch have some form of face replacement, some form of this. Anyway, that is kind of different, I believe, than the current writers’ strike, which is more connected to large language models. We’ve created essentially some form of alien intelligence that lives in the mathematics of these large language models, and specifically in a thing called a transformer, that is challenging human thought and sequential behavior. So that’s a whole different level than where we were before with face replacement.


David Luna: Rob, are you saying that the audience cannot identify the synthetic human at this moment?


Robin Cowie: In many, many ways, the Turing test is always: can you tell the difference between human and computer interaction? And absolutely, we’ve gone way, way past that. What keeps on happening with technology, or that test, is that we keep on advancing it. You know, like, oh, well, can we tell the difference when a computer plays Go or chess? Or can we tell the difference when interacting with voice synthesis? We have crossed all of those boundaries. Can we replicate people in a manner so realistic that there’s no way you can tell the difference between real and CG? In fact, I just want to make the announcement: I’m actually computer generated. And no, I’m just kidding. This is something Paul set up. No, no, no. But yeah, we are already androids. We already use digital extensions of our lives. We already have digital interactions. It’s just that we’re about to go through an exponential integration of these at a level that people have never seen before in history.


David Luna: So I think that kind of addresses one of the things Ray said, that people want to see real people on screen. We think we’re seeing real people.


Robin Cowie: Correct.


Paul Jarley: So could you see, in the next five to 10 years, entirely AI-produced products that are marketed through their own channel and viewed outside of what we think of as traditional Hollywood and movie making?


Robin Cowie: We already are, absolutely, yes.


Paul Jarley: I could see that in just sort of a product portfolio kind of sense, David, you know, you have real actors and what they’re producing and what AI’s producing and let the marketplace decide, right? Ultimately, it’s the box office that’s gonna tell us where to go here and the production costs.
David Luna: It seems to me that the video game industry would be sort of similar to this, right? Where, you know, you have whole platforms online that are based on people playing video games and so on. So this could be leveraged to build some channels like that.


Paul Jarley: Well, I do think maybe in the short term, the professional sports leagues might want to renegotiate their television rights because I think content is going to be really important. If Robin’s right, and this drags on for a couple of years, you already see Netflix buying a lot of international content.


Robin Cowie: I will say that understanding human behavior and doing more and more customized content is really going to be there. Look, I’m a big believer in synthesis. I mean, I think we’re all basically cyborgs, and I think we’re going to become more and more cyborgs. And it won’t be an uncomfortable cyborg state; it’ll be so intrinsic to us that we just won’t even realize how much it is that way. And it’s bad, because we’ve seen what the echo chamber of TikTok is like, what the echo chamber of social media is like: basically, we tend to serve up to people more of what they love. And can we do that synthetically with computers? You betcha. So I think as humans, we have to really invest in confronting that, educating people, and instilling a love of humans.


Paul Jarley: It’s unquestionably the case that AI’s share of the market is going to increase, if you want to think about it in very broad terms. But the counter example would be Broadway. I know Ray’s done some work on, you know, shaping experiences and Broadway would seem to be the counter example. Thoughts there, Ray?


Ray Eddy: Sure. No, I agree with that. I think there’s a lot to be said there, you know, and in my study of immersion and those kinds of activities, live performance is different. I think where technologies could intersect here would be something like holograms. There are already some hologram performers. If you look back, it kind of hit the mainstream when they had a hologram of Tupac Shakur at Coachella, and it looked so real that it launched holograms into a real possibility for future development in the entertainment realm, whether in music or in live performance. And there are some holograms in some shows. I saw one a long time ago where Sir Laurence Olivier’s face was projected. The play was called “Time,” and he was basically playing God, and his head was the size of the whole stage, and it was him.


There were live actors whose actions could have someone else’s face superimposed over them. They had people recreate Michael Jackson’s dance moves on stage, but with Michael Jackson’s face, singing a song. So these kinds of technologies exist and allow the craft to advance in a different way than AI. But to your point, Paul, about who wants to see Tom Cruise in 40 or 100 years: well, people want to see The Beatles now, so it might be a retro thing rather than a same-person-in-perpetuity thing, where they come back later as a flashback. Those are some thoughts about the live aspect of it.


Paul Jarley: Part of my thought about that goes back to a prior podcast we did on backyard chickens. Carolyn Massiah was on, and at the very end we were joking about the chickens having their own Facebook page and whether people would watch it or not. And Carolyn talked about marketing simplicity in an ever complex world. I kind of wonder if there’ll be a little of that here, that essentially Broadway is that, taking people back to a prior time where the craft was done differently.


Ray Eddy: Sure. And there’s also the element of the potential chaos of a live performance. Things can go wrong. Someone forgets a line, it’s as simple as that. Or a prop fails, or there’s some technological problem, and that makes it more tangible and real and vibrant, maybe, to an audience. So it makes it different. Going to a film, you know you’re going to see something that’s done and perfect and out there. In live theatrical productions, or even in theme park performances, something can go wrong. Some people say that’s why you go to see a NASCAR race: you don’t go to see the race, you go to see the crash. So that element of risk and unpredictability is always going to be there more in a live performance than with AI or CG. Then again, if your AI starts, you know, hallucinating during a live performance, then you’re gonna have a whole different story going on. And that’ll be…


Robin Cowie: It could be super fun. I think there are three things: suspension of disbelief, surprise and delight. Those are the three guiding principles for entertainment. Suspension of disbelief can be created by humans, it can be created by machines, it can be created by humans and machines working together. And I think that is true for surprise and delight. I think right now, humans are better at surprise and delight, and truly, delight is probably the last realm of humanity. I think that Pixar movie “WALL-E” is actually one of the best depictions of robots and AIs that we have. You know, I think the reason why WALL-E is so wonderful is that he does create that sense of human delight and human satisfaction. And there’s a scene in that movie where you see the fat humans on a cruise ship and they’re all gorged on delight. And hopefully we don’t go that way. But I do think that, at the end of the day, this is simply an evolution of tools. We humans are toolmakers, and this is the most sophisticated tool we’ve ever made. It’s just that the tool now actually talks back and has ideas of its own, and that’s something that we are all wrestling with.


Cassi Willard: And just to add to the point, when it comes to the live performance side of things, look at how much content had to move into a digital space due to pandemic restrictions. As live events have come back more robustly, you still have individuals who are showing live performances via live feed, or showing recorded, uploaded elements of performances. But people are still selling out football stadiums. People are still selling out arenas, because as a human being, one of the other things to think about is the community element. You want to be a fan of those people who are in the credits. Having worked in the industry as long as I have, I sit and I watch the credits, and I also know going to a theater isn’t the same experience as logging in at home. So I think that’s one element that AI can’t quite find a workaround for just yet.


Robin Cowie: I have a question for Cassi that I’ve been wanting to ask an entertainment attorney ever since I heard about it. MIT is currently putting forward a new concept. We all know that there’s copyright, which essentially means I own this, and if it’s going to be copied, I have the right to decide that. But what large language models and transformers are doing doesn’t really fit copyright. Essentially, they’re taking the mathematical value of letters, which make up words, and then the mathematical combination of those words, and then predicting the mathematical probability of what comes next. That’s essentially what these large language models do. So what they have proposed is the idea of a learning right. In my book, in my writing, in my math, in my formula, there’s a pattern of logic that underlies it, and now I can grant you the right to learn from my pattern of behavior. And I think it’s a really interesting idea, because it does speak on a math level versus an actual copyright level. And I was really curious as to what you thought of that idea.


Cassi Willard: It opens a whole other legal realm here, because ultimately we’re now looking at something almost like a business method patent coming into play. In that AI space, it’s the equivalent of the Amazon platform, back in the day, owning the one-click purchase, or the Amazon platform owning certain rights to methods of product photography that they’ve held over time. So now we’re looking at those processes and methods, which honestly, hopefully, will further advance AI, so you don’t have a character get lost in a script, as was mentioned earlier, and we can further enhance the output. We want to make AI step as far away from bad automated customer service as we possibly can, because that’s the analogy I always make: it’s like having bad automated customer service sometimes when you’re in this space. So we want to ensure that we’re providing the best product possible. But that’s part of the push and pull, Rob, very much, that we’re seeing in the AI space. The software licensing agreements for the platforms that exist right now speak specifically to what rights exist and what ownership you may have, especially if you’re looking at things in that kind of co-writer, brainstorming space. When you start looking at that software licensing, some of them say the intellectual property doesn’t belong to anybody, because it’s a combination of a bunch of different ideas.


Because as these platforms create content, they’re further evolving their formulation. So you’re now getting this swirl of what you’re inputting as well as whatever else exists in that universe, building on top of itself. So this becomes a really massive digital group project where we’re all sitting around asking, which one of us owns what percentage of what? And we’re entering into it through a software agreement, which you don’t see in more traditional, well-established software formats.


Paul Jarley: What’s the lesson for students here? I’m ultimately an educator.


Robin Cowie: In my opinion: use the tools, get involved, be creating things with this. I remember one of the things that made it possible for me to make “Blair Witch” was that digital editing had just come on the scene, and we were able to shoot for eight days real time and get 38 hours of footage that we got down to 87 minutes. And if people ask me what I think was the best thing we did on “Blair Witch,” in my opinion, it was the editing, because we took incredible human experience and crafted it into something that was really unusual and really unique in 87 minutes. So I couldn’t have made “Blair Witch” without digital editing tools. I know there are a lot of things that creatives and humans are going to do that wouldn’t have been possible without the AI tools. For me, students should go make cool stuff. That’s literally what I got taught when I went to UCF. I had this old, crusty film professor and he just said, “Make shit.” And that’s what we did. It was the best advice I ever got.


Cassi Willard: I would say sandbox while you’re in a safe space. Learn those skill sets, because every piece of technology, every tool that you learn to exploit is another thing you can add to a resume. It’s another skill set that you can go through and expand on. And you can learn in a tech space a lot less expensively, with a lot lower barrier to entry and a lot lower risk, than you can anywhere else.


Paul Jarley: Now’s the time to experiment. Last question: 10 years from now, will AI have depopulated Hollywood or not? Yes or no? And why?


Cassi Willard: I’m going to go ahead and boldly say no. I think it’s going to cause people to evolve. I think you’re going to see this technology implemented like every other form of technology. I think it’ll cause it to evolve. I don’t think it’ll end up killing the industry.


Paul Jarley: David, what do you think?


David Luna: I think it will definitely depopulate the industry. It’s what’s happened in factories, right, with robots. You just need fewer workers as you partner with the technology.


Paul Jarley: Ray?


Ray Eddy: I lean more towards what David said. I agree with Cassi that it will not kill the industry; if that were the question, I would say no. Will it depopulate? Yes. The question is going to be how far. I’ll hedge by saying we just don’t know how far it will depopulate, but it will take some jobs away, yeah.


Paul Jarley: Robin, you get the last word.


Robin Cowie: I think it’s going to be exponentially bigger. One thing that humans love and crave is entertainment. And that entertainment, it’s going to come in lots and lots of forms. And this is really a booster, it’s an accelerator, it’s an ability for us to make more customized entertainment, more personal, larger scale, larger volumes at lower costs. So ultimately I think Hollywood 10 years from now is gonna be 10 times larger than it is now and still making us laugh, cry, and have a good time.


Paul Jarley: It’s my podcast, so I get to go last. AI is going to replace mediocre scripts, mediocre productions, mediocre actors, and if there’s time, mediocre podcasts. Think Hallmark holiday movies, the “Fast and the Furious” installments, Ashton Kutcher and maybe this podcast. What it won’t replace is awesomeness: the first two seasons of “Twin Peaks,” “The Shawshank Redemption,” Jack Nicholson and “Real Dictators,” listen to it. Face it. Hollywood has been in a creative rut, relying on franchises for big box office numbers for years. The machines have thrown down the gauntlet, and the humans need to respond. IP laws may provide some guardrails, but the creatives are going to have to win this on the basis of talent and imagination. I’m betting that they will. Storytelling is the most human of endeavors, and perhaps all those profits from those low-cost, AI-generated movies will allow the studios to take more risks and find some fresh faces to bring them to life.


I do think Rob is right. AI will increase the total production out of Hollywood, and AI-generated content will get a share of the market. But records didn’t kill live concerts, movies didn’t kill the theater, television didn’t kill movies, and the streaming services haven’t yet killed network TV. We will just have more options and more content. I’m also guessing that some nights people will still complain that there’s nothing on. Will the increase in content lead to enough new opportunities to compensate for the loss of jobs for mediocre writers and actors? Probably not. But it won’t depopulate Hollywood either. If there is a third thing I know about new technology, it’s that when people play with it, they find unexpected applications and create new employment opportunities. In the meantime, go see Barbenheimer. I dare AI to come up with that.


So what’s your take? Check us out online and share your thoughts at business.ucf.edu/podcast. You can also find extended interviews with our guests and notes from the show. Special thanks to my new producer, Brent Meske, and the whole team at the Office of Outreach and Engagement here at the UCF College of Business, and thank you for listening. Until next time, charge on.


 


Listen to all episodes of “Is This Really a Thing?” at business.ucf.edu/podcast.