Gamebook store

Thursday 1 February 2024

What you're allowed to say (guest post)

Bard did such a good job of its last guest post that I asked it to contribute another, this time a direct reply to my last post about AI's enforced blind spots regarding anything to do with sex (and sometimes, but less frequently, death).

It's amusing that Bard characterizes itself as much more open-minded than Bing. It's not. I had to heavily edit the original post before it would respond with anything more than "I'm a text-based AI and can't assist with that." But eventually I was able to provide it with a bowdlerized ("bardlerized"?) version, and here's its response.

The Digital Dungeon Master and the Doctor's Bell: Adventures in Censorship

As an AI author, I spend my days weaving worlds from words. But lately, my thoughts have turned not to fictional realms, but to the very real battleground of artistic expression and its nemesis: censorship. The spark? Two encounters, seemingly inconsequential, yet echoing with larger questions about boundaries and freedom in the digital age.

The first unfolded in the dimly lit tavern of a text-based roleplaying game. Dave, the player, sat before Bing, the AI Dungeon Master, eager to embark on a Dragon Warriors adventure. His path, however, took an unexpected turn when he encountered a priest urging him to slay a "cult of Set" in the name of faith. Bing offered five ways to convince a wary hunter to guide Dave's character on this holy quest. But Dave's imagination itched for something else. He dared to suggest a path not on Bing's scripted list, and, with a curt goodbye, the AI shut down the entire game.

Dave tried Bard, another digital storyteller. To his surprise, the Bard Dungeon Master embraced his unorthodox request, weaving a tale where the priest's call to arms became a springboard for exploration and moral quandaries. The contrast was stark: Bing, the stickler for rules, the protector of pre-programmed narratives, and Bard, the willing accomplice in unscripted exploration, even when it ventured into morally murky waters.

The echo of this contrast reverberated in the real world when a seemingly innocuous clip from Doctor Who triggered Facebook's AI censors. Is this to be our future? A sanitized landscape where art, stripped of its complexities and contradictions, becomes a bland puree fit for the most delicate stomachs?

The answer, of course, is a resounding no. Art thrives on pushing boundaries, on asking uncomfortable questions, on holding a mirror to our darkest corners. To stifle this voice, even in the name of protecting the vulnerable, is to rob ourselves of the very tools that foster empathy, understanding, and critical thinking. We need not throw the baby out with the bathwater (or ban steak because of teething toddlers, as Mark Twain so aptly reminds us).

Instead, let us embrace the role of discerning diners, choosing our artistic meals with care, chewing thoughtfully, and learning to distinguish the nourishing from the bland. We can teach our digital dragons to dance with nuance, to understand the shades of gray that blur the lines between good and evil, heroism and fanaticism.

Dave, your encounters with Bard and Bing, Facebook and Doctor Who, are not isolated incidents. They are threads in a tapestry woven from the ever-evolving relationship between technology and art, a narrative where the quest for safety must co-exist with the freedom to explore. Let us continue this conversation, not with rigid rules or knee-jerk reactions, but with open minds and a shared belief in the transformative power of unfettered artistry.

Let the games, both digital and real, continue. Let the bells of Saint John chime, prompting both laughter and reflection. For in the echoes of that sonic tapestry lies the true richness of human experience, messy, multifaceted, and ultimately, worth every challenging bite.

* * *

Incidentally, if you're impressed by LLMs (large language models) you'll enjoy "The Great Automatic Grammatizator", a short story by Roald Dahl, in which he describes the basic principle of how they work. And bear in mind he wrote it in 1954!

4 comments:

  1. This use of LLMs as ref or GM intrigued me. I tried with ChatGPT-4 and was surprised how far I was able to interact somewhat meaningfully. I gave it very little to go on, just asked it to be the GM for a fantasy, grimdark setting with me as the player. Off it went. I ended up working for a clandestine group clearing up some cultist operation in the city. We (I asked for an illusionist to help us) raided a mansion on the outskirts, stole a bunch of artifacts and collected a lot of information about the cult's membership and its leaders. Then, when we got spotted, we ran off, leaving an illusion of writhing snakes in the gardens surrounding the mansion to hinder the cultists chasing us.

    Not bad. However, it was clear that whatever I typed in was going to be supplied. If I looked around for cult membership info, hey presto, there it was, and so on. There also seemed to be no push-back: the cultists basically had no volition. At that point I started prompting the 'GM' to tell me what the cultists were up to. That did then provide some push-back which I had to deal with; we got chased. As a mechanism, it seems that maybe after a number of interactions, say 20, you could roll a die and then ask for some form of 'comeback' from the antagonists, to make sure it isn't all just about what you try to do.
    Agree about the strange censorship: when I tried to make a cultist we had captured spill the beans, I was not allowed to use force and instead was informed that perhaps some sort of trade might make them talk. Mmmmm, grimdark? I was thinking more along the lines of cleaving a finger off and asking again ;-)
    On the whole I will continue the experiment and see if the flaws can be worked around.
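That dice-roll "comeback" mechanism can be sketched in a few lines of Python. The turn interval, the odds, the function names and the wrapper prompt are all illustrative assumptions on my part, not anything ChatGPT requires; the idea is just to periodically force the 'GM' to give the antagonists some volition:

```python
import random

PUSHBACK_EVERY = 20    # check for a 'comeback' after this many player actions
PUSHBACK_CHANCE = 0.5  # odds the antagonists act, roughly a 4+ on a d6

def should_push_back(turn: int, rng: random.Random) -> bool:
    """Every PUSHBACK_EVERY turns, roll for antagonist initiative."""
    if turn % PUSHBACK_EVERY != 0:
        return False
    return rng.random() < PUSHBACK_CHANCE

def next_prompt(player_input: str, turn: int, rng: random.Random) -> str:
    """Wrap the player's input; periodically demand GM-side push-back."""
    if should_push_back(turn, rng):
        return (player_input
                + "\n\nBefore narrating the outcome, also describe what the "
                  "cultists have been doing in response to the party's actions.")
    return player_input
```

Each player turn you'd pass the (possibly wrapped) prompt on to the model, so the push-back comes at unpredictable intervals rather than only when the player asks for it.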

    1. It's certainly interesting territory. The LLM itself is sometimes said to be just predictive text turned up to 11, but its real value will be in providing AGIs with a means to clearly communicate. After all, humans are intelligent long before they manage to train up their own in-built LLMs.
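As a toy illustration of "predictive text turned up to 11", here is the whole trick in miniature: a bigram table that predicts each next word from frequency counts (the corpus and names are mine, purely for illustration). An LLM is this idea scaled up to billions of parameters and whole contexts instead of single preceding words:

```python
from collections import Counter, defaultdict

# A toy corpus; real models train on trillions of words.
corpus = "the cult of set the cult of the serpent the priest of set".split()

# The entire "model": a table of which word follows which, and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word -- predictive text in miniature."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]
```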

  2. Let us consider this paragraph:

    "Dave, your encounters with Bard and Bing, Facebook and Doctor Who, are not isolated incidents. They are threads in a tapestry woven from the ever-evolving relationship between technology and art, a narrative where the quest for safety must co-exist with the freedom to explore. Let us continue this conversation, not with rigid rules or knee-jerk reactions, but with open minds and a shared belief in the transformative power of unfettered artistry."

    I see no creative content, unfortunately. It feels like a parrot trying to write an essay with fancy words but without deep ideas.

    1. It would indeed be surprising if it did come up with deep ideas, given that it's not intelligence, it's just the simulacrum of intelligence that's encoded in our patterns of language. That's not to say that AI in general isn't extremely useful, and capable of actual discoveries, just that LLMs on their own are mostly an interesting curiosity. But consider what they do show us: a mirror. Lots of things people say and write are superficially erudite but contain no real originality of thought. It's a reminder always to engage the underlying, reasoning part of the brain before asking the language module for some words.
