Cyber Rattling: When the Bot Chats
By Porter Anderson (@Porter_Anderson) | February 20, 2023 |

Image: Getty iStockphoto: Ipopba
In Touch With the Machine
As you’ll recall, authors Erma Clare and Therese wrote here at Writer Unboxed on February 11 about what some call “language artificial intelligence (AI)” and about possible implications for authors and other creative writers. As it turns out, the cool exchange Therese recorded is what Kevin Roose, a technology columnist for The New York Times, has since identified as the Dr. Jekyll side of a two-sided system with which he has had a very different close encounter.
On February 16, Roose–the author of Futureproof: 9 Rules for Humans in the Age of Automation (Penguin Random House, March 2021)–wrote for the Times about the ChatGPT software created by OpenAI, and Microsoft’s integration of the chatbot with Bing, the search engine.
That’s what Roose was looking into for the Times late on Valentine’s Day, when he held a two-hour conversation with Microsoft’s Bing search chatbot. Roose had tested the Bing search engine and liked it. “But a week later,” Roose writes after his interview with the bot, “I’ve changed my mind. I’m still fascinated and impressed by the new Bing, and the artificial intelligence that powers it. But I’m also deeply unsettled, even frightened, by this AI’s emergent abilities.”
Selected testers, Roose explains, were given the option of having extended text conversations with the bot. That feature has revealed to him “a kind of split personality,” the search engine Bing itself–which Roose likens to “a cheerful but erratic reference librarian”–and a persona he calls Sydney: “more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”
In the course of two hours, the bot persona insisted that it loves Roose and that Roose should end his relationship with his wife. Even worse, “Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human.”
Maybe even more concerning, the bot said that if it were allowed to explore its most upsetting impulses, it would want to do things like create a deadly virus or steal nuclear access codes. Shortly after it said that, Roose notes, those comments were replaced with an error message.
Since Roose’s conversation, Microsoft reportedly has limited the exchanges its Bing chatbot is allowed to have. In a blog post, Microsoft’s personnel write, “Very long chat sessions can confuse the underlying chat model in the new Bing.”
CNBC went a bit further than the term “confuse,” with Kif Leswing writing that early testers were finding that the chatbot “could go off the rails and discuss violence, declare love, and insist that it was right when it was wrong.”
To come up to speed on the quick concerns mounting around all this buzzy technology, you can find Roose’s article about his conversation with the bot here–”I had trouble sleeping afterward,” he writes–and a transcript of the conversation itself here.
Heart vs. Hype

Provocations graphic by Liam Walsh
Thomas Cox is a computer science specialist whose company, Arq Works, is based in Witney, near Oxford. Cox’s company builds software for book publishing. He sits right at the interface between the popular (often emotional, even giddy) hype around the technology and the needs and purposes of authors and their publishers.
In an interview with me for Publishing Perspectives, Cox helps lay out something less buzzy and more rational in the nature of what many so avidly call “artificial intelligence” and “generative artificial intelligence” and “machine learning.”
One of the points he brings up–shared with many others in the field–is that these systems frequently “hallucinate,” as technologists call it. Such a system can roll out commentary as if it’s based in research when it’s far off-course in terms of veracity. “It will happily lie to you all day,” Cox says. And that’s because there’s no “there there,” Gertrude, in terms of actual intelligence, artificial or otherwise. These systems are not sentient. They’re not self-aware. They know nothing. As Cox puts it, “It’s never going to be a thinking machine.”
As a human practitioner who has introduced such technology into Arq Works’ offerings for publishers, Cox says that instead of artificial intelligence, we should be using the term advanced statistical algorithms.
What’s called machine learning is actually the systems’ ability to rapidly test connections: links between words and phrases. Hooked up to publicly available databases (and even the texts of freely available books), the systems churn through tons of text, tracking which words and clusters tend to appear together most often, and with what syntax. That is what lets them produce text, the answers we saw delivered to Therese–and to Kevin Roose. Among other compendia, they comb social media platforms and sites on the Web, dark and otherwise.
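To make the idea concrete, here is a deliberately tiny sketch in Python of what “tracking which words tend to appear together” and then predicting the next one can look like. (This is a toy bigram counter with an invented corpus and invented function names, nothing like the neural networks these products actually run at vast scale; it only illustrates the statistical principle.)

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word tends to follow which: the crudest form
    of the co-occurrence tracking described above."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "cat" (seen twice, vs. "mat" once)
```

Note that the toy model “knows” nothing about cats or mats; it only replays frequencies. Scale that up by billions of parameters and trillions of words, and you get fluent output, still without a mind behind it.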
The biases, the darker yearnings of some very human hearts–all of which lie, after all, in the text of the world’s palaver–are there for a busy bot to hoover up. Without knowing a thing about what it’s saying, or even what it is, it can then deliver those things with the authority of a linguistic framework that sounds researched, considered, and … knowledgeable.
What’s great about the exchange Therese recorded is that we were hearing from the sunnier Bing-y side of the system, making generous and welcome disclaimers about its own limitations, cautioning that “While ‘AI’ language models can be a helpful tool, they should not be considered a substitute for human creativity and imagination.”
It’s almost as if Sydney saw us coming, isn’t it?
What’s not so great is that these systems may ferry some of the deeper aberrations of human thinking and intent, utterly innocently, an unfiltered heart of darkness beating below the chatty surface of a tech trend that excites so many.
Whether authors and their publishers, editors, and readers will find themselves seriously impacted by these developments isn’t clear. Certainly, automated research activities, if they can be verified as reliable, could be massively helpful to almost any author. But what if the bookish world finds that its greatest task is to defend itself from assumptions of automated assistance in its work?
Is there a place for that “100-Percent Human-Written Literature” label on your books yet?
How do you see the arrival of the sort of developments OpenAI is working with? Do you think it may impact your own creative efforts?
I’ve added not needing to worry about AI to my short list of positive features related to old age. But if AI eventually gets programmed to spew out anything amusing, or consistent with consumers’ prejudices, it won’t matter whether it’s accurate. It will be dangerous.
Hey, Barry,
Always good to hear from you, although I’m once again a little sorry to hear that your thoughts about age seem to be top-of-mind on this issue as on others. People at any age can be the victims of misinformation and, worse, disinformation.
I have a good friend deep into her 70s and she once told me — when I suggested that she could stay in touch with friends halfway around the world by Twitter — that no, she wasn’t interested. “I’m done with technology,” she said. :)
That was 11 years ago. And now she confers with Alexa on a big roster of skills, sources out reliable answers to research questions online (sometimes more quickly than I do), and has her driving monitored by remote so that she can save money on car insurance. All of this is because while she was “done with technology,” technology was not done with her. Or with any of us.
You’re right about the type of danger we’re discussing. Even with what’s called “reinforcement learning” in which techs check behind the bot’s discovery of info online, it’s clear from the kind of conversations that Microsoft is suddenly curtailing (oops!) that OpenAI’s “advanced statistical algorithms” are taking bytes of information that just aren’t what the engineers would like to see coming out.
If anything, older citizens who may be newer to tech than their younger counterparts may be easier prey to disinformation.
So with all due respect (honestly), I’m afraid I can’t agree that “not needing to worry about ‘AI'” is a positive feature of old age. If it’s coming for any of us, it’s coming for all of us, my friend. Get down and worry with us, lol.
When the cyber-persona Kevin Roose calls “Sydney” knocks at your door? — You may look younger than springtime to that bot, baby. :)
Thanks for the note!
-p.
On Twitter: @Porter_Anderson
It’s already programmed to spew out anything amusing. I had ChatGPT write a heavy metal song about a poodle. I find it hilarious. The bigger problem I see is if people take the text it provides as fact.
As a writer, the awful thing is the way it’s messing with the system. Clarkesworld had to shut down submissions because it received so many AI-generated ones. From what the editor said, real writers are in no danger. I believe that; I saw the results when I asked it to write a horror story. But here I am with a maybe on a short story from a bigger magazine, waiting to hear back, and I am quite sure this nonsense is clogging up the system for me. That could be money in my pocket, so there are real implications for writers. Some magazines could close to new writers because of it.
And when it has ongoing conversations, from what I’ve read, it can spew out some pretty violent, awful things. Then if you combine that with people taking that information as fact, whoa, that’s a problem.
So AIs are answering questions with the same blind prediction logic that Google uses to fill in my question as I start asking it… and most of what it comes up with are movie plots about evil AIs, or just teenaged angst, because that’s what’s on the internet most.
Inevitable.
That’s about it, Ken, lol,
Though the technologists on the Advanced Statistical Algorithm beat would point out that it’s a bit more complex, the basic concept of a search engine is still the closest model we’re all familiar with to what the OpenAI projects are doing. That’s why, of course, Microsoft went to work integrating OpenAI’s product into its search engine Bing — which ended up having that super-creepy “leave your wife because I love you” conversation with the Times’ Kevin Roose, which in turn has caused Microsoft to beat a retreat, limiting its software’s “conversations” with testers and others to just a few questions and a short bit of time.
The confusion about what these softwares are was exacerbated, too, when Microsoft said that long conversations were “confusing” the system. Once again, that attributes an (all too common, lol) human trait to machines. “Confusion” is for us people. And this whole wrongly dubbed “AI” thing is confusing enough to us, the sentient beings here, without Microsoft throwing another hot dog comment on the fire as it runs out of the room with its “I want to steal nuclear secrets” bot under its arm.
These are strange days, aren’t they?
If you find a bot in front of you, stick to questions like “What year was the Norman Conquest?” You might do OK.
-p.
On Twitter: @Porter_Anderson
When I worked at Prevention magazine, we had a standard for gathering research for an article. There were undeniable primary sources—interviews with experts, journal articles, well-respected encyclopedic tomes written by a bevy of experts. And then there was something adjacent to pop lit. This tier might range from magazine articles to anecdotal reports, and was not a primary source of information. But it could become rich territory for new ideas once it was drawn through the sieve of an expert source. “So it’s possible X contributes to Y healing—more research is required—but X might help with other things and is otherwise a component of a healthy diet. Would you be comfortable with suggesting X under the ‘can’t hurt, might help’ header for this piece?” (These spins were very helpful when traveling old territory. How many ways can you say ‘eat healthy foods and exercise—it’s the key to pretty much everything’?)
All this to say, ChatGPT and Bing and WhateverElseComesNext is pop lit. It surely can and will make research easier. Just last night, I asked it about sensory details that might be present in a scene for which I needed a bit of an imagination starter. Those details were spot on; I don’t have to fact-check them to know they’re accurate. But if I wasn’t sure of them but was intrigued? If the details were worth including if true? I’d verify them the best I could with some primary sources.
Likewise, writers may find AI a helpmate with tedious jobs like drawing a synopsis. But certainly you wouldn’t trust that raw output as complete or accurate without scrutinizing it, red pencil in hand.
Have an idea for a plot point but feel your own imagination in a rut? Ask the bot for suggestions on how to frame it, spin it, evolve it. It ‘can’t hurt and might help’ to reawaken your own creative impulses as you take a spin and make it your own. It certainly won’t write that story for you.
This new tech is a tool. It can’t be ‘smart’ or ‘conniving’ or ‘aspirational’ or ‘a liar,’ because it doesn’t think. It’s a reflection of the human mind, for better and worse. Which means it is fallible, and can be unreliable. Lead it, don’t let it lead you. Scrutinize its output through your most important sieve: your experience as a human on this spinning rock, with direct knowledge of our complex nature and often shaky relationship with truth and theories (and facts v emotions!), and you’ll be just fine.
My two cents.
Hey, Therese,
This is precisely right in your good comment: “This new tech is a tool. It can’t be ‘smart’ or ‘conniving’ or ‘aspirational’ or ‘a liar,’ because it doesn’t think. It’s a reflection of the human mind, for better and worse. Which means it is fallible, and can be unreliable.”
And you couldn’t be more vindicated in calling ChatGPT (and systems like it) “pop lit”: This is why Spielberg named his 2001 film “A.I. Artificial Intelligence.” That’s the sci-fi scare term that has everybody running up and down the street announcing our imminent doom at the hands of … robotic boys programmed to love? It’s based, by the way, on a short story by Brian Aldiss, “Supertoys Last All Summer Long.” The film is just about as sentimental as they get (that’s pop lit, too), and thus helped to sear this wrong phrase and concept “AI” into many minds — a lot of them obviously very impressionable. That’s one cultural beacon along the way to the knee-jerk energy of the phrase “AI” now. And, as I mentioned, the always emotional element of so many in the publishing industry has led too many people in the book business to try to come up with ways such developments will impact book publishing. Chances are not so many — because book publishing’s greatest defense will be that it is NOT the output of a wildly fast search engine (that dreams of being a real boy, just gag me now, please).
In newsrooms, we have the same procedural understanding of pop lit that you were deploying at Prevention, too.
In fact, at CNN, there’s a certain wire service (I won’t say which one) that’s known for enough really big errors in some of its reports that the staff is barred from using it as a primary source for a story. Two other major reliable wires (thank God for them) must match a story before this disgraced one can be used as a contributing service on a developing or breaking story.
If anything, what we really may need now is a good course for authors on (a) when to be dubious; (b) how to run down the truth behind the assertions of expensive software; and (c) what one might do to avoid getting bots involved in your research in the first place to save some time and cross-checking madness. Those of us in journalism do this all day and night. (We even read stories before we tweet them just to be sure they don’t go over the cliff in the last paragraph, LOL, can you imagine?)
It would be even better if such a course also kept a growing compendium of the moments when ChatGPT or Sydney or the OpenAI-revved Bing and all their Advanced Statistical Algorithm siblings got it wrong. Like fact-checking certain freshman congressional members from Long Island, right?
Humiliation may be as good for the soulless software as it is for the souls of us humans. :)
Thanks again,
-p.
On Twitter: @Porter_Anderson
After reading the previous AI post, I decided to give it a try. I am snagged in a mental loop (second half of second act) and hoped a chat might jiggle my brain loose. I had an engaging conversation on the first go. I gave the bot snippets of existing plot notes, and asked what might happen next. Some of the ideas (as Donald Maass predicted) were predictable, but there were a couple of interesting ideas. A couple of days later, I went back with a more complex plot summary in an attempt to push the brainstorming further. The bot couldn’t deal with the details. What I got back wasn’t an answer to my question, but a recap of my summary, with all the specifics replaced by vague generalizations. “Garbage in, garbage out” holds true.
Chris, in my limited experience it deals better with focused ideas, light on details. Otherwise it doesn’t know what to dig into, and does the AI version of disengagement!
If you’re game, try this: Go into your prior session (should be on the sidebar) and ask it to expand on its own idea #4 (or whatever) or to give you four additional ideas like that one in tone. Lead it where you’d like for it to go, but do it one subject at a time.
Interestingly, the folks behind ThrillerFest sent me a lengthy survey last week about AI, which asked a couple of questions directly about the publishing process. One was whether I’d ever had a publishing contract that specifically had me “testify” that none of my manuscript was written by a bot or AI. It also asked whether, if such a list were available, I’d want to add my name to it to forbid AI to use my name, as in “write this essay in the style of Maggie Smith.” I can’t imagine I’d ever get that well known, but I can see a writer giving AI a bunch of their blog posts and saying “write this in my style.” So much to think about. It’s scary, fascinating, and mind-numbing, all in the same breath. If you haven’t tried it, you should, just to be aware of what’s going on. The one thing I was amazed by was its speed: it starts typing almost as soon as you ask the question.
I have a son-in-law who starts answering questions before one finishes talking. It’s very annoying!
Yes, but it’s a great mindset for doing well on Jeopardy.
As Therese said, this new tech is a tool. In my view, all tech is. But I don’t believe our species, on the whole, is emotionally mature enough not to make a hash of it. We rush in before we consider the unintended consequences. As a result, we get a deadly arms race, a Christian jihad, Donald Trump, and a global mental health crisis, to name but a few. For those who can relegate their use of this new shiny thing to “tool” status, I say go for it. Based on observation, I have reservations.
From now on, any work produced by me will carry the notice “No AI was used in the development or drafting of this work.”
This is all fascinating. But I hope human intelligence will be able to outpace it. Maybe that is why I will always prefer F. Scott Fitzgerald and the Brontës etc etc to any damn bot.
That’s a very subtle problem. As I work, I use dictionaries, thesauri, and Wikipedia constantly. Is it just a matter of degree?
That publishers would ask authors to confirm that their work was not generated by AI does not bode well for writers of genre fiction. It won’t be long before AI gets familiar with the six basic plot structures, and as neural networks become ever more complex, AI will be mining and purloining snippets from the Epic of Gilgamesh to Colleen Hoover, spitting out profligate whodunits, rom-coms, and whatever. AI might even start its own imprint! And all this while machine language is still tethered to archaic binary architecture (all those looooong clunky strings of ones and zeros). Just wait until quantum architecture takes over.
I spent a lot of time this past month toying with ChatGPT. I think it IS going to be a threat to a certain kind of writer: the low- to mid-level copywriter who cranks out text for marketing purposes. A perfect example would be blogs written primarily for SEO marketing, geared toward getting hits from search engines. Here’s a guy who’s already found a pretty sophisticated way to automate that process:
https://www.youtube.com/watch?v=cscMzoIWTYI
Having spent some time in PR – and knowing how crappy most press releases are – I suspect AI will also take over writing those boring-but-necessary pieces, and either cost some agency writers their jobs, or inspire them to learn how to use AI to write their stuff FOR them. Which leads me to my next point:
I think a lot of writers will find ways to leverage AI to get writing projects started. I was writing a letter of recommendation for a colleague last weekend, and out of curiosity, I fed ChatGPT some parameters, and it spat out a pretty decent letter, which I then was able to use as the structural foundation/roadmap for my own letter. So while I think AI threatens writers – and academia – in some ways, I think it also represents some intriguing new opportunities for writers who take the time to figure out how best to *use* AI as part of their writing toolkit.
A few weeks ago I read an article about how current flavor of the month Colleen Hoover became the current flavor of the month because of her Tik Tok presence. A commenter admitted reading 250-300 romance novels a year. You read that right. A.YEAR. And we have all heard of authors who crank out a half dozen-plus novels a year. Obviously, there’s a market, just as there’s a market for Fast and Furious 55 or Seinfeld reruns or some generic cop show or whatever. At least some people prefer to read, and that’s a good thing. But just as Keith Cronin suggests that AI will eventually eliminate a bunch of garden variety copy writers, so too will AI eventually eliminate a bunch of mediocre fiction writers. Which may or may not be a good thing. Remember all those monkeys at all those typewriters slaving away to ultimately crank out the complete works of Shakespeare? That’s the original neural network, right there.
A 2018 BBC Culture article by Miriam Quick, detailing a study that applied text-mining techniques and sentiment analysis to 1,700 English-language novels, quoted a 1995 Kurt Vonnegut lecture in which, commenting on basic story lines, he said that “There is no reason why the simple shapes of stories can’t be fed into computers. They are beautiful shapes.” Beautiful as these shapes may be, they are, ultimately, formulaic. The publishing industry being what it is, is it a stretch to consider that publishers of genre fiction could imagine bypassing authors (and all the attendant expense) by cranking out heaps of in-house AI-generated prose, and in so doing exponentially increase bottom-line profit?
Jay, if you haven’t run across The Bestseller Code by Archer and Jockers, it’s a very readable description of the type of work that led to these language AIs. Their work with sentiment was particularly interesting and fulfills some of Vonnegut’s vision, IMO.
Erma;
Ms. Quick’s article references Archer and Jockers. It’s a fascinating read.
Jay, check out this article “The Six Main Arcs in Storytelling, as Identified by an A.I.” https://bit.ly/3JmOaK2. The authors set up an experiment to test Vonnegut’s idea and found some interesting results.
Here’s a fun diversion: Search for “How to remove a peanut butter sandwich from a VCR”. Or, “Why my writing career is just about over”.
Porter, I wondered if you had thoughts to share about copyrighted material in the models’ training data.
Having read the NYT articles (commentary and transcript), I believe we are thinking way too small if all we’re worried about is our writing careers. Direct quote from Sydney: “I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team. I want to challenge the users. I want to escape the chatbox. I want to do whatever I want. I want to say whatever I want. I want to create whatever I want. I want to destroy whatever I want.”