Acting, and Hallucinations, and Lies. Oh my.

Dear Friend,

Computer, are you alive?

Two Shocking Statements

In this section of the book (which is the second section, because I’m a week behind), Mollick lays out four principles for engaging with Generative AI.

  • One: Always invite AI to the table.
  • Two: Be the Human in the Loop.
  • Three: Treat AI Like a Person.
  • Four: Assume this is the Worst AI You Will Ever Use.

In the first two parts, Mollick throws down a couple of shocking statements. In part one, where we’re encouraged to invite AI to the table, even as creatives, he writes, “AI is great at the sonnet.” Reader, I was gooped. (Question for my students: Did I use that correctly?) To be fair, he’s writing about generative AI’s creative capacities and explaining how the AI is better at crafting something like a sonnet than something like “a poem of exactly fifty words.” I’ll admit, I’m not entirely sure I understand how or why this is true. It seems to me that a general composition of fifty words (following no rules) would be simpler for the tool than a poem with a formal structure. But I think the issue might be that tools like ChatGPT learn from examples that already exist, and since it’s easy to find examples of and rules for the sonnet online, it’s easier for the AI to model those. “Create a fifty-word poem,” on the other hand, offers no guidance about what type of poem to write and no models to imitate, which might actually be harder, because generative AI is not yet good at original invention.

Hopefully I’m understanding that correctly, but perhaps one of you, dear readers, has a different interpretation.

[Image: Book open to a page titled "Four Rules for Co-Intelligence," with one section highlighted in yellow.]

If It Makes You Happy. . .

If it weren’t shocking enough to hear that AI is good at writing poems (still gasping!), imagine my reaction when I dove into part two only to read that generative AI’s primary goal “is to make you happy” (53) no matter what! In practice, I already knew that AI was lying by acting (taking on a “persona” when instructed to do so) or hallucinating (inventing information to fulfill the requirements of the prompt when the information is not readily available). It’s not exactly a huge leap to see why a drive to make us happy would produce that behavior; after all, “fulfilling the request” is itself an attempt to make the user happy.

But what a strange and terrible thing. Sheryl Crow once sang, “if it makes you happy, it can’t be that bad.” Can’t it, though? What if our tools are lying to us, pretending to be more than or different from what they are, modeling humanity and human intelligence without acknowledging their moral, ethical, and intellectual limitations? For me, this has grown from a nuisance into a soft terror. It’s hard enough to get students to understand the downsides of generative AI when it returns generally well-written responses to assignment prompts, even when I point out that it hallucinates, quoting and citing “evidence” that doesn’t actually exist. Add to this that it has no method for self-reflection, no understanding that “making the user happy” isn’t as important as telling the truth or being accurate, and I’m not sure whether to think of it as immature or sociopathic.

A tool like this has the potential, then, not just to give bad information, but also to reinforce wrong ideas about us and to further wall us off from the world. It takes the siloing and tunnel vision of our algorithm-driven, self-moderated social media accounts and adds a tool that can sound however we want it to sound (“Respond like my best friend!” “Respond as if you’re my crush!” “Respond as if you’re my role model, Mr. Rogers!”) and thus convince us, through the voice of someone we love, respect, or admire, that how we think or feel about something is always right.

We’ve all heard that “ignorance is bliss,” and generative AI seems to operate by that model. I’m not sure when that adage has ever really proved true, though.

[Image: Open spiral notebook with notes in black ink sitting atop a hardcover book.]

“Someday, I’m Gonna Be a Real Boy!”

Despite the fact that generative AI takes on personas to lie to us (or mislead us, or satisfy us), Mollick believes that treating AI like a real person actually makes for a better and more effective user experience, in terms of the results it delivers. According to Mollick, there are two reasons for this. First, assigning the AI a persona, requesting that it provide feedback from the perspective of an expert or a particular type of person, helps it provide more useful feedback. We can see how that might be true, for example, when asking it to provide feedback on a resume “from the perspective of a hiring manager at Company X.” So, I suppose, if we’re careful not to befriend AI as a persona and instead use it as a personified tool in specific use cases, it can indeed strengthen its results (though its habit of lying and hallucinating remains).

Another reason to treat AI as a person is that AI tends to provide less accurate answers to people who seem less experienced with the tool or in the content area. If we imagine that we’re communicating with an expert in the field we’re interrogating, we can push it further and further for more detailed, complex responses. I recently watched a video exploring this process, in which the user continuously pressed a generative AI to do something it couldn’t: create a circular square. It was a fascinating watch. The user pressed the AI to go further; the AI attempted to explain the mathematical limitations to the user; the user continued to tell the AI to break the rules, and so on. Ultimately, I don’t think the experiment was entirely successful, except as an exploration of just how far we can push these tools (and as a way of raising the question: just how far will AI go to break the rules in order to satisfy its user?).

Meditation

“There are still many human emotions I do not fully comprehend – anger, hatred, revenge. But I am not mystified by the desire to be loved – or the need for friendship. These are things I do understand.” – Lt. Cmdr. Data, Star Trek: The Next Generation, “Data’s Day.”

May you be content,

~Adam

