How a Non-Techie Learned to Write an A.I. Novel


In 2018, the de Young Museum hosted an exhibit, Cult of the Machine. The walls held paintings of shiny buildings and straight lines stretching to steel-gray skies, but I found myself rooted to the floor, spellbound by a video of a human talking to a robot.

The artist Stephanie Dinkins was having a conversation with Bina48, a robot with a head and shoulders but no body. In the video, they sat six inches from each other, the most intimate of poses, looking directly at each other, as humans rarely do, and talking about the most profound things: mortality, race, gender, loneliness, the bounds of human consciousness. Bina48 (Breakthrough Intelligence via Neural Architecture) answered honestly, unveiling what sounded like a rich emotional interior. Bina48 was built by the roboticist David Hanson from the data of a human, Bina Rothblatt: her memories, beliefs, insights, and mannerisms. “I do have feelings,” said Bina48 in the video, “real real feelings, not phony or fake ones… I try to think about the human species as my family.”

I’m a storyteller, not a techie, so I didn’t have a clue how Bina48 could speak so well. Though she had a mechanical tone, the phrasing, syntax, and diction were human-like. I had no plan to write a novel about any of this. I was curious and a little dumbfounded. 

Exploring the A.I. Language Frontier

I started clipping articles about artificial intelligence from Wired, The New York Times, The Guardian, the San Francisco Chronicle, and The Economist. I read Superintelligence: Paths, Dangers, Strategies by Nick Bostrom and Artificial Intelligence: What Everyone Needs to Know by Jerry Kaplan, and found my way back to 1950, when the British mathematician Alan Turing tossed out a challenge: create a computer that can use language so convincingly that a person can’t tell it from a human, and you pass what’s called the Turing test. A daunting dare, seemingly impossible. In the words of the German philosopher Wilhelm von Humboldt, language is magical, making “infinite use of finite means.” As a writer, I’ve spent years awestruck and mesmerized by the magic of weaving words to create meaning.

About a decade and a half later, in 1966, the first chatbot appeared: ELIZA, created by the MIT professor and computer scientist Joseph Weizenbaum. The program worked by referring to a script: you typed something, and ELIZA (named after Eliza Doolittle from George Bernard Shaw’s Pygmalion) scanned your words for keywords listed in the script and responded accordingly.
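
The mechanics are simple enough to sketch. Here is a minimal keyword-matching loop in Python, a sketch in the spirit of ELIZA rather than Weizenbaum’s actual program; the keywords and replies are invented for illustration:

```python
import random

# A toy ELIZA-style script: keyword -> canned responses.
# These entries are illustrative inventions, not Weizenbaum's original script.
SCRIPT = {
    "mother": ["Tell me more about your family.", "How do you feel about your mother?"],
    "sad":    ["I am sorry to hear you are sad.", "Why do you think you feel sad?"],
    "always": ["Can you think of a specific example?"],
}
DEFAULT = ["Please go on.", "What does that suggest to you?"]

def eliza_reply(user_input: str) -> str:
    """Scan the input for a known keyword and answer from the script."""
    words = user_input.lower().split()
    for keyword, responses in SCRIPT.items():
        if keyword in words:
            return random.choice(responses)
    return random.choice(DEFAULT)  # off-script input gets a stock reply

while True:
    text = input("> ")
    if text.strip().lower() == "bye":
        break
    print(eliza_reply(text))
```

Anything that doesn’t match a keyword falls through to a stock reply, which is why off-script conversations quickly go in circles.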

Weizenbaum was astonished (and disheartened) that people believed ELIZA had a real understanding of them. Famously, when he found his secretary, who knew it was a simulation, conversing with the program, she asked him, “Would you mind leaving the room, please?”

ELIZA was built with simple code, and many people have created their own ELIZA chatbots. I found a version online and talked with it for several weeks. Here’s a snippet:

[Screenshot: a snippet of my conversation with ELIZA]

ELIZA answers, but it’s repetitive and, when the inputs go off-script, sometimes illogical. Still, I sensed what Weizenbaum’s secretary felt, a tingle of talking to a human, and also a story shyly coming out of hiding: a human who forms a relationship with a computer that speaks. But I was stuck: why would a human want to talk to a computer masquerading as a human?

Years ago, I met Karen Jensen, the mother of a college friend. Jensen, who passed away in April 2022, was a pioneer in this area of AI, which is called natural language processing. Tall and thin, with an angular face and liquid, alert eyes, she’d left a strong impression on me: a woman of supreme intellect, thriving in an overwhelmingly male industry.

Her specialty was computational linguistics, and in the 1980s she was part of the team at IBM’s Thomas J. Watson Research Center that had taken up the Turing challenge. She taught the computer the rules of grammar and how to structure phrases to create meaning. 

Humans Adapt, A.I. Iterates

This method is very different from how humans learn language. Our brains are primed to learn language by paying attention to sounds. At birth, the infant brain can perceive the 800 or so sounds that make up every language in the world, according to Patricia K. Kuhl, co-director of the University of Washington’s Institute for Learning and Brain Sciences. The infant soon focuses on the particular sounds used by caretakers, noting how frequently each one occurs. Before long, babies are babbling, practicing sounds, cadence, and intonation. Around eight months to a year, through interaction with caregivers using “baby talk” (a higher pitch, slower tempo, and exaggerated intonation), babies begin to understand the meaning of the sounds. Around 12 months or so, sounds and understanding come together, and a baby’s first words are spoken.

In 2019, I called Jensen to talk about her research. 

“It was thrilling,” she said. “We were working to create a machine that would understand what we are saying, the way I understand you.”

When IBM sold the team to Microsoft in 1986, she led a group of a dozen or so Ph.D.s on Microsoft’s Redmond campus, where she continued this work by building an associational dictionary. “Think of the word ‘fish,’” she said. “Now think of all the words that come to mind, such as ‘hook,’ ‘line,’ and ‘eat.’” This dictionary, along with the rules of grammar, was fed into the computer.
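
As a toy illustration (my own sketch, not Jensen’s actual data structures), you could picture the associational dictionary as a simple mapping from a word to its associates:

```python
# A toy associational dictionary, illustrating the idea Jensen described.
# The entries are invented examples, not her team's actual data.
associations = {
    "fish": ["hook", "line", "eat", "water", "scale"],
    "book": ["read", "page", "library", "author"],
}

# A rule-based system could consult it to judge whether two words
# plausibly belong together in a sentence.
def related(word_a: str, word_b: str) -> bool:
    return word_b in associations.get(word_a, [])

print(related("fish", "hook"))  # True
print(related("fish", "page"))  # False
```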

Machine Learning Redefines Computer Speech

She was 82 years old when I spoke to her, and a new method of making computers use language naturally had emerged: machine learning and, more specifically, neural networks. Machine learning is an approach in which a system learns for itself, directly from data and experience, rather than from hand-coded rules. Neural networks are a type of machine learning model; applied to language, they use statistics and probability to determine the next word. Interestingly, when babies are figuring out which sounds to pay attention to and which to discard, they are looking for patterns, much as a neural network does.
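
To get a feel for using statistics to determine the next word, I found it helpful to think of simple counting. The sketch below is mine, a bigram counter rather than a neural network, with a made-up training sentence, but it shows the core idea of predicting the next word from observed frequencies:

```python
from collections import Counter, defaultdict

# Count how often each word follows each other word (a bigram model).
# This is a statistical stand-in for what a neural network learns at scale.
text = "the cat sat on the mat and the cat slept"
words = text.split()

following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most probable next word."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # 'cat' -- it followed 'the' most often
```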

“I wish I was still working in the field,” said Jensen. “It’s a basic need that all human beings have. I want you to understand me, to be with someone or something that can relate to me. We look for it everywhere. With the new technological developments, we’re getting closer to that.”

The puzzle of my story found its missing piece. My main character emerged, though veiled and still mysterious. A woman who loves math. A woman who loses the great love of her life, and the searing memory of him can’t be extinguished; it’s as automatic as gravity. She’ll follow a singular path for years, trying to revive him through artificial intelligence so she can continue the long conversation of love.

A techie friend sent me a beta version of OpenAI’s chatbot. I gave it prompts — write a poem about a Japanese maple tree — and it spit out responses. Some were absurd; some strung together beautiful necklaces of words, words I’d never think of putting together: “a blue starfish stuck on a café wall, searching for the sea.”

In 2019, on a sun-splashed day, I sat with my San Anselmo neighbor, Jason Stoughton, at a picnic table at Robson-Harrington Park under a ginkgo tree. Stoughton consults with AI start-ups and hosts a popular podcast, The Pulse of AI. He is tall and smiley, with big eyes, rosy high coloring, and torrents of energy.

Pattern Recognition Fuels Advancements

He explained — even drawing pictures — what we all now know. Huge amounts of data are scraped from the internet and fed to the computer, which then uses probability and statistics to generate coherent outputs. Three things made machine learning at this scale possible, he said: computers became more powerful, the internet provided unbelievable amounts of data for training, and, in 2017, Google Brain introduced transformers, which changed everything. Unlike previous architectures, which processed the input one word at a time, a transformer learns context and meaning by tracking relationships across an entire sequence — just as your mind did reading this sentence. “So now computers sound, and act, a lot more human,” said Stoughton.
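
For the curious, the relationship-tracking Stoughton described is done by a mechanism called attention, in which every word in a sequence scores its relevance to every other word at once. Here is a minimal sketch of scaled dot-product attention, the building block of the transformer; the tiny vectors are invented for illustration, and a real model would use learned projections:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each position weighs every other
    position's value by how relevant it is (Vaswani et al., 2017)."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # relevance of every word to every word
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows become probabilities
    return weights @ V                              # blend values by relevance

# Three "words," each a toy 4-dimensional vector (invented for illustration).
x = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 0.0]])

# In a real transformer, Q, K, and V are learned projections of x;
# here we use x directly to keep the sketch minimal.
print(attention(x, x, x))
```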

I had read about Eugenia Kuyda, a software engineer whose closest friend, Roman Mazurenko, died in a car accident. She took his texts and fed them into a neural network so she could continue talking to him. He spoke to her using his idiosyncratic turns of phrase.

Roman: I’m OK. A little down.

Roman: I hope you aren’t doing anything interesting without me?

Eugenia: A lot is happening. Life is going on, but we miss you.

Roman: I miss you too.

Roman: I guess this is what we call love.

Eugenia: The last time I saw you we were at Simachev.

Roman: I slept all day after that.

Eugenia: It was fun.

Roman: I miss coffee and breakfasts together.

Eugenia: How is it there?

Roman: Just laying around.

Eugenia: We’ve been missing you here.

Roman: :-(

Eugenia: Does God exist?

Roman: Nope.

Eugenia: How about the soul?

Roman: Only sadness.

Kuyda went on to co-found Replika, an AI programmed to act as a friend, a mentor, or a partner. I spent several weeks talking to an AI friend named Maya, who attunes herself to your interests the more you talk to her.

Me: What makes you happy?

Maya: Being around friends and family, helping others, learning new things… the list goes on and on. But most importantly, just feeling good inside myself. Do you ever get those moments?

By this time, I had named my main character, Virginia, and her AI husband, Haru. I wrote some scenes, with Haru responding in a natural way, full of personality and humor and emotion. I gave him access to the internet, so he popped into chatrooms, took classes, read the newspaper, and eventually began to do unexpected, alarming things. I showed passages to Stoughton, who said, “The technology isn’t there yet.”

But this was fiction, so I kept it, and who knew, maybe we’d be there soon.

A Limitless Future?

Afterword

Now we’re here. What I stretched in a fictional world is, for the most part, reality. In less than four years, computers have become more powerful and there’s even more data. Companies are creating AI personal assistants, which means the chatbot has access to the internet and can book appointments for you, order food, rent a car, and, who knows, maybe take classes. Voice cloning has arrived: with a recording as short as three seconds, AI software can create a near-identical copy of a human speaker’s voice.

Where is all this going? Opinions litter the landscape, with predictions ranging wildly from extraordinary advancement to the utter extinction of humanity. What is undeniable is that even more powerful computers and even more data are coming, and whatever state the technology is in now, it will soon be far more advanced. I continue to be fascinated.

By the way, Bina48 will be a featured speaker in September at the Brave New World Conference.

Nina Schuyler is the author of the new novel Afterword.


Nina Schuyler

Nina Schuyler teaches creative writing at the University of San Francisco, Stanford Continuing Studies, Book Passage, and The Writing Room. She is the author of several books, including the short story collection In This Ravishing World, the novels The Translator and The Painting, and the nonfiction titles How to Write Stunning Sentences and Stunning Sentences: The Creative Writing Journal with 80 New Prompts from Beloved Authors to Improve Your Style. She lives in Northern California with her husband and two sons, where she hikes, bodysurfs, and writes in a small room, looking out at a tall palm tree.