AI and Digital Storytelling - StoryCenter's Joe Lambert Interviews Bryan Alexander, Georgetown University

Since the launch of ChatGPT last November, and GPT-4 last March, the world has been talking about artificial intelligence in a way that was not expected for decades to come. Suddenly, the world is very close to a computer that is as capable of thinking, even critical thinking, as a typical college graduate. ChatGPT can write papers, it can pass the bar exam and other standardized tests, and it can create images, scripts, poems, songs, and stories in the style of various well-known creators (admittedly not very successfully, but getting better every day). It can summarize conversations, long and short, and it can translate in real time. It can even give you advice and emotional support … of a kind.

Proponents and doomsayers have lined up in the face of this breakthrough in large language models and deep-learning computing engines. How will these developments affect digital storytelling as a community of practice? I have my own ideas, but I was curious what other folks might think.

Of course I asked GPT-4 and Bard what they thought (see the responses here). But I thought having a chat with a human might be useful as well. So I spoke with Bryan Alexander of Georgetown University, a digital storyteller and one of the leading educational futurists in the U.S.

Joe: Bryan, a lot of folks in the education community know about you through your various writings, your ongoing Substack, and other publications, but they may not know what you are up to now, or about your work in digital storytelling. Can you share a bit of that story?

Bryan: I've been a futurist specializing in the future of higher education for quite some time. I'm currently a Senior Scholar at Georgetown University, where I teach some classes in their graduate program on Learning Design and Technology.

Back in the early 2000s, I was focused on the future of higher education in terms of education technology. I worked for a nonprofit (NITLE) that partnered with a whole bunch of small colleges across the US, helping them grapple with emerging technology, things like mobile devices and social media, which was then called Web 2.0.

A colleague introduced me to this wild and crazy idea called digital storytelling. I took one of your workshops at Middlebury College in Vermont, and I was blown away. I created a completely goofy story, which my family loved.

I thought that the pedagogy and curriculum were amazing. I could see all kinds of educational possibilities. So I researched, did some practice, and started offering digital storytelling workshops with NITLE colleagues through our network of small colleges. Those workshops then became the basis of my first book, The New Digital Storytelling (2011). And I've been just in love with it ever since.

After that period, I started my own business focused on the future of education. I've moved from Vermont to the Washington, DC area. I have given talks and consulted on every continent except Antarctica. And I'm hoping to get there. I've published a series of other books, including Academia Next and Universities on Fire, which is about the impact of the climate crisis on higher education over the next 75 years.

Joe: Busy man with a busy life. Let’s jump into it. What is your take on AI: is it more hype than substantive change? Or is it likely to foster a major disruption/revolution in society in general and education in particular?

Bryan: I'm very bullish on AI in general. People make enormous claims for it. What invention do we compare it to? Do we compare it to the smartphone or the internet, or to steam power, perhaps even to fire?

Potentially, this is an enormously influential technology that can really change civilization in ways we're not prepared for. When we talk about recent AI, we're usually speaking about generative AI built on large language models, or LLMs. The most famous is ChatGPT, but a lot of people also use image-generating programs like Midjourney or DALL-E to create art.

Generative AI right now is on the cusp of several potential directions. It’s also possible that the technology may stall or shrink drastically, for a few reasons. One is that the environmental impact is enormous; the carbon footprint of generative AI is vast. So if we actually take climate change more seriously, that may put a damper on AI.

The second reason is that there is a huge copyright and creativity problem. All these tools, the image ones as well as the text ones, draw on datasets of human creativity. ChatGPT, Bard, and Bing have scoured the web for all kinds of writing, including some of mine, and possibly some of yours. But this has been done without the usual mechanisms of copyright: no one sought permission or licensed this content. So it's possible that in the next few weeks, or the next few months or a year, we may see a judge order OpenAI to stop operations or to destroy what data and applications they have.

Right now in the US, the fear of AI is significant. AI is part of the Hollywood writers’ and actors’ strike, and you can imagine a lot of people becoming sympathetic to restrictions on AI. So it's possible that at this time next year, we'll be wiping our foreheads and saying, “Wow, so much for the great AI scare of 2023.”

It's also possible that none of the attempts at restriction will matter. We already have open-source versions of powerful AI tools. There are hundreds of startups separate from the big players working on this. And China. China is very, very supportive of AI. So it's possible that if the U.S. clamps down on AI, China will just move right past us. The cat may be way out of the bag at this point.

In fact, we've had a storm of opposition to AI from the beginning. Even its inventors are filled with foreboding. Sam Altman, CEO of OpenAI, has declared that AI is a terrifying force, like splitting the atom. There's definitely a lot of money sloshing around, which is driving both dread and hype, but it could turn out to be like Bitcoin: a big bubble followed by a burst.

There is a lot of economic research on this as well, trying to figure out: are we facing a future of widespread unemployment and underemployment as a result of automation? Do we invent new jobs to replace the ones that are lost, as we've been doing for 200 years since the Industrial Revolution? Or do we just hybridize, transmuting our work to include automation? There’s also the question of whether all of our jobs are going to be intertwined with AI in many ways, which, I think, may bring us to digital storytelling.

Joe: The old materialist, Marxist part of me says follow the productivity curve. And if the productivity curve is good in some areas, the investment will continue. I think what's striking is how likely it is that thinking machines will replace a whole bunch of white-collar jobs, in addition to the more obvious blue-collar jobs in manufacturing, service, and support, and very soon the jobs of truck drivers, delivery drivers, security guards, and even police.

Bryan: Journalists are terrified, lawyers are terrified, as well as politicians.

Joe: I think we both share the belief that there will be disruption, and there has already been a disruption. I often think we are sitting in the future we used to talk about. Whatever we feared is actually here. And if I’m honest, it’s not as bad as I thought, because x happened to counterbalance the y of change. So I think we, at StoryCenter, can imagine the usefulness of it. We usually work with people for whom multimedia authorship is a completely new idea; it is not their vocational calling, they do not self-define as creative or see themselves as learning the crafts of storytelling. They want to play and explore their expressive voice for many different reasons.

So AI could be a tool to help move people into expression, to move past creative blocks, to supplement the coaching and support the facilitator gives in person or in an online workshop, to deepen the reflective work the participant is doing. In particular, I could see AI assisting with initial edits, with suggestions of visual treatments, and with the ways in which people explore soundtrack possibilities. What do you think?

Bryan: To build on what you are suggesting, you remind me of the title of a book by a mutual friend, Howard Rheingold, called Tools for Thought (2000). That is one of the ways AI can serve us. As a writer, I’ll use ChatGPT or Bard to respond to something, and it’ll spit out the consensus view, which I can then bounce off of. So these tools can be useful for drafting and working through ideas. And as we discussed, you could ask ChatGPT: please rewrite my initial thoughts in Spenserian stanzas, or in imitation of Raymond Carver's short-story style, and then play with that. We already assist folks in searching Creative Commons or stock material for image and music ideas. This will just get a bit more sophisticated, and perhaps useful in developing an original feel for the stories.

And of course, before long there will be a tool that allows you to say: make a multimedia story about my best friend using images from my Facebook, and it’ll spit out the script and make the movie. We already have a PowerPoint generator app called SlidesGPT. It produces a basic PowerPoint with bullet points and images. I've been using it for a few months now to terrify some of my audiences. You’ll see this with audio, video, animation, etc., getting better all the time. But it cannot yet write your story. Ask AI to write a poem: the writing is doggerel. Awful. But that will change.

Joe: We have many friends in education. And educators are entering this fall with no small amount of fear. AI can help students ace any number of standardized tests, write their papers, solve their equations, and formulate their lab results. What are you telling your colleagues about the coming years? How can they make peace with these tools?

Bryan: Right now, fall semester is starting in a few weeks, and there is no protection against AI. There are several detectors, and they don't work very well. AI is widespread and easily accessible. It will take some time for educators to adapt their pedagogy to it.

But back to our work in digital storytelling in education. We can be critics of AI’s creative potential. It gives cheesy, generic versions of creative responses, but let’s be real: people sometimes like cheesy, generic, expected responses. Look at Hallmark cards, which are an industry, or look at a lot of best-selling fiction. So what we're going to go for is the kind of modernist or romantic idea of creativity, a critical, reflective kind of communication, and I think this is where we use AI and bounce off of it with our own ways of thinking.

This will evolve. As an example, I was following the work of Ethan Mollick, who managed to coax ChatGPT into running a simulation for him. So I did this: I asked ChatGPT to simulate me teaching a high school classroom. I asked it to describe the classroom, fill it with students, pick what I'm teaching, and then come up with student questions for me to answer. Then I asked it to assess my answers and carry the simulation on. So it had me teaching the French Revolution, and a student asked a kind of stupid question about it. In the simulation, I was mean to him, and ChatGPT criticized me: my answer was right, but the way I responded caused the student to react badly. And the discussion of my poor teaching style went on for hours. We played this like Dungeons and Dragons, like a role-playing exercise.

So I could imagine this as useful for a digital storytelling facilitator. Feed in the ethical guidelines, the suggestions for how a facilitator should respond in a story circle, and the ways different types of groups might respond, and have it supervise my responses in a simulacrum of a workshop. This might be an interesting exercise to go through, especially for someone who is new to it.

Joe: Yes, I asked ChatGPT to articulate StoryCenter’s seven steps of digital storytelling.

Bryan: How did it go?

Joe: Well, it was pretty verbatim. But you can train it, so to speak, in our processes. I think it could assist in all aspects of our methods and support peer coaching.

In terms of the general impact, what my staff and I are saying at StoryCenter is: oh, this will be great for business. And our argument is not so much that this is going to assist us as it is an awareness that people will place higher and higher value on experiences that humanize us. People want validation from other people. Or nature. Or their dog. The inverse relationship of “high touch” to “high tech” will become even more important.

And so it suggests life is not only about productivity. It's about a kind of generative state, when two people are holding on to each other's words, listening in to each other simply to hold human frailty, fallibility, imperfection. I know there are robots that will take care of me as I age, and I won't actually have a problem with a talking dog, a robot that I don't have to feed and that doesn't poop. If I'm in the twilight of dementia, right? I will be fine with being able to stroke the phony dog who talks back to me and keeps me company. But I'll also want a real human being to lean in, put their hand on my forearm and say, I understand, and here is a story that shows I understand, with layers upon layers of unspoken complexity.

So this takes me to the idea of the Turing test, where the AI has the ability to reason, but also the ability to relate, to emotively cognate. That may take a bit longer, so our business model will increasingly be about teaching people to listen deeply in a way a machine cannot.

Bryan: Have you seen Robot & Frank? Frank Langella has a robot butler who schemes a heist with him.

Joe: No, I haven't seen it. I should.

Bryan: It's really sweet.

I think your group's consensus is right. And it ties into some really large-scale changes that are happening. I think one of the things that COVID taught education is the importance of the body, and the importance of the whole person. Along with that, in the U.S., enrollment in higher education has been declining for ten years, for the first time. And so I think a lot of schools have more incentive to improve the quality of the student experience. When I talk to academic leaders, they're really keen on mental health. And they see that, to some extent, it’s embodied. There is a great deal of feminist pedagogy informing this thinking. I think your competition will be people who want to make stories using digital tools, not the tools you've been using so far, but more advanced, automated ones. So I don't need to hang out with Joe and a colleague for three days; instead, I could do this in an afternoon, me and Digital Storytelling Mach 3.0, or whatever the tool might be called.

Joe: From day one of our work in the 1990s, companies large and small were making DIY apps for various parts of our process. Some of them turned out well, like the StoryCorps app or our Listening Station app. But we remain a boutique experience, for organizations and individuals that can afford the unreplicable aspects of a boutique experience. I think as everything gets automated, having a real person to hang out with, who listens and responds to your complex set of creative challenges and shifting emotional awareness, will be worth it.

Bryan: I think you have two other advantages as well, which will be sharpened. One is the experience economy. Whether it is hiking up Everest or being in a room with a dozen people hashing out their stories, I think that experience will continue to be valuable. The other is the therapeutic, mental health part of your work. There is a mental health crisis among youth, and the population in general, for many reasons … but the therapeutic function of StoryCenter workshops will remain valuable.

Joe: So while people may be able to tell their robot/AI to make a film for them, it was never about simply having the product; it was about the self-awareness and critical reflection that emerge from the various components of our workshop process. A singular AI might support teasing parts of that out, but some aspects of our work are about the exchange: I’m showing an aspect of my vulnerable, sensitive self, and that encourages the next person to take the risk of digging a bit deeper into the heart of the heart of the matter, the still-yet-to-be-processed lived experience.

Bryan: I'm interested in the extent to which we see people using AI tools to offer other forms of story. For example, we've seen people create comic books, you know, just basically typing in language and then generating images, and boom, you're done. I've been tempted to do that, because I can write but I can't illustrate. It may take a few years before I could ask the machine to create a feature film, using stylistic touches informed by Stanley Kubrick and starring a fully realized digitization of Helen Mirren. We see it in gaming already, with people using AI tools to “train” more complex opponents and characters. And there is the new Apple XR headset and its software premiering in 2024, where AI will populate our visible spaces with characters, entertainment, and information in new ways.

Joe: There is a great deal of critique of AI (as with other technologies) that it is so informed by dominant cultural norms that it greatly misrepresents or underrepresents the perspectives of marginalized groups. What is your take on that issue?

Bryan: It’s a major theme in technology studies. In fact, one of the things I do in my technology seminar is have my students analyze a particular technology, looking at which populations it hails, or focuses on, and then which populations it excludes or leaves behind. This has been a problem for AI for about 20 years. It is the old garbage in, garbage out, or bigotry in, bigotry out. We've got to continue to push for the expansion of the datasets that inform AI. If you're looking through Google Books, or any vast collection of 500 years of printed material, you're going to get a lot of historically antiquated or problematic terms, images, and ideas.

OpenAI, like the other platforms, has built in guardrails to prevent that, although the guardrails are interestingly limited in some ways. Maybe you saw the Intercept article about risk assessment and human rights violations, with ChatGPT mimicking broad-stroke assumptions about numerous countries … And even though ChatGPT will not directly answer a question like where is the best place to torture someone, when asked to write a Python script containing the question and the answers it would provide, it did it. If you read Cathy O'Neil’s Weapons of Math Destruction, she points out that algorithms replicate the bias of their creators: as they reproduce things, they can then accentuate them.

At the same time, AI can be seen as a threat to authoritarian power. A number of countries, like China and Egypt, have limited what types of AI can be used, what questions an AI can be asked, and what answers can be given, which suggests that AI could simply become another tool for propaganda.

Joe: So let us wrap this up. Where do you think this leaves us in regard to our relationship with AI, and perhaps with the whole gamut of information-age tools and technologies? They are such a huge part of many of our lives, right?

Bryan: It is not a panacea. It will not give us utopia. It may not be the end of everything. It will be useful or not. One of my favorite boring uses of ChatGPT is realtors using it to generate copy. My first thought was: that's terrible! Then I thought, actually, how many people go into selling houses because of their love of deathless prose? Zero. Writing copy is a chore for them. And if you say, “Okay, write me two paragraphs about a three-bedroom house with a chihuahua,” or whatever, that’s useful.

Joe: Maybe you’ll have more time off, now that all the stuff you do takes seconds instead of hours.

Bryan: In the 1930s, John Maynard Keynes, the great economist, wrote an interesting article about the eventual 15-hour work week. What were we going to do with all our leisure time? Well, he was wrong, because what we did was fill up that downtime with more and still more work.

We need to rethink priorities for our lives and for our societies. Bill McKibben's book Falter (2019) suggests we’ve blown up civilization in the past two hundred years with growth and more growth, yet this neoliberal sense of an ever-expanding economy will not save us. We need new paradigms that put a pause on growth; we need to put our focus on repairing what we have broken, what we have damaged. This is where reparations could come in, along with climate restoration, social justice, and expanded, rather than more limited, human rights. We need to use the technology to focus on fixing everything we've done wrong. I'm a futurist; I have to think about these things. If these kinds of cultural currents weave through us, they might lead us toward spending more time sitting around a virtual fire with you and a colleague, creating a digital story.

Joe: I share that hope, and I share those values. I thank you for the time.

Bryan: You’re welcome! Take care, my friend. Be well.

Image Credits: top of page: Joe Lambert with Photoshop (beta) generative AI from an original image, #JWSTArt, by the Storytellers and illustrator Matt Murphy (James Webb Telescope); bottom of page: Bryan Alexander.
