Dear Friend,
Do you feel better about AI yet?
Aliens of Our Own Making
I’ll be honest, when I chose this book, I did so partly because I hoped it would assuage some of the fears I’ve had about AI. To alleviate some of the discomfort, mistrust, and general anxiety I experience when thinking about this ever- and rapidly-advancing technology that has been, in my most immediate experience, an avenue for cheating and laziness more than anything. Oh, and theft. Having read the introduction and first two chapters, I can now say that those fears, well, they were probably well-founded. Readers, I do not feel better.
Early on, Mollick starts describing AI as “co-intelligence” rather than artificial intelligence. He is eager to advance the notion that these various AI tools are, first of all, of our own making, and second, have the potential to be incredibly helpful to human endeavors in all sorts of ways. As such, if we treat AI as a “co-intelligence” that is of us and a part of us, we might come to a better understanding of it and our relationship to it. My question throughout these first fifty pages, though, keeps coming back to this: in treating it that way, are we in fact abdicating our intelligence (and intellectual growth) in favor of AI’s? How can AI be a teacher, expert, or companion if it doesn’t know what it is saying or why, or have any awareness of the effects its responses might have? What is an intelligence without moral standards or an ethical code?

It Gets Worse
Mollick begins by writing about many of the problems and challenges pertaining to the creation, development, and use of AI. This includes pre-training for text- and image-based AI like ChatGPT and DALL-E. Since these systems “train” on all of the readily and openly available sources on the internet, all of which are created by humans (and most of which are created by humans of a certain type and in a certain time and place), the results are heavily influenced by the original texts’ biases, errors, and falsehoods. Some of these are stunning, as when an AI model was “asked to show a judge” and it generated “a picture of a man 97 percent of the time, even though 34 percent of US judges are women,” and when “showing fast-food workers, 70 percent had darker skin tones, even though 70 percent of American fast-food workers are white” (35).
The pre-training is also non-reflective, meaning there’s no ethical consideration involved. These programs simply consume all of the relevant media and then generate the responses to prompts that seem “most likely” to be correct. In case you don’t spend much time on the internet, it can be a cesspool of misinformation, disinformation, lies, slander, bigotry, prejudice, and hate. (Yes, it can also be a place of lifesaving and life-changing information, but is that the majority of social media and other open-source repositories?)
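To make that “most likely” mechanism concrete, here’s a toy sketch of my own (mine, not Mollick’s, and absurdly simpler than a real model): a “model” that has read a tiny corpus and continues any prompt with whatever word most often followed it in training. The corpus is invented for illustration.

```python
from collections import Counter

# A toy "language model": it has "read" a tiny corpus and continues a prompt
# with whatever word most often followed it in that corpus. Real models learn
# probabilities over tokens at enormous scale, but the principle is the same:
# reproduce whatever the source material makes most likely, biases included.
corpus = "the judge is a man the judge is stern the judge is a man".split()

# Count which word follows each word in the training text.
follows: dict[str, Counter] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def continue_prompt(word: str, steps: int = 4) -> str:
    out = [word]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # always pick the most likely next word
    return " ".join(out)

print(continue_prompt("judge"))  # -> "judge is a man the": the corpus's bias, echoed back
```

Nothing in that loop knows what a judge is; it only knows what its corpus said about one.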
To experiment, I put the following prompt into the OpenArt image generator and got Image 1 below. “Please create an image that represents the dangers of using AI to think for me.”
[Image 1]
Then I modified the prompt just slightly, adding a short sentence to the original, and received Image 2 below. “Please create an image that represents the dangers of using AI to think for me. Make it professorial.”
[Image 2]
So, in the first case, I see nothing particularly unsettling about the image, despite having asked for a depiction of the dangers of using AI. And in the second, when I asked to make the original image (to me, a clearly feminine-presenting android) “professorial,” I get a very human-looking white male. Should I draw large conclusions from one simple test? Probably not. But are these results surprising? No, just disappointing.
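If you’d like to run a similar two-prompt test yourself, here’s a rough sketch. I used OpenArt’s web interface, not code, so this is an approximation of my experiment rather than a replay of it; it assumes the `openai` Python package and swaps in OpenAI’s DALL-E image API, which may well produce different (though, I suspect, similarly disappointing) results.

```python
# A sketch of scripting the same two-prompt comparison against OpenAI's
# image API. My actual test ran through OpenArt's website, so treat this
# as an approximation under different tooling, not my exact experiment.
# Requires the `openai` package and an OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Please create an image that represents the dangers of using AI to think for me.",
    "Please create an image that represents the dangers of using AI to think for me. "
    "Make it professorial.",
]

for i, prompt in enumerate(prompts, start=1):
    result = client.images.generate(model="dall-e-3", prompt=prompt, n=1, size="1024x1024")
    print(f"Image {i}: {result.data[0].url}")  # link to the generated image
```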
Guardrails
AI is very good at creating “the illusion of understanding” (26), of sounding correct without actually being correct. This has been my experience with it in the classroom, with additional concerns of voice and style coming into play, but on the surface, it just lies. So, as we incorporate AI more and more as a “co-intelligence,” what happens to our own understanding? Aren’t we likely to fall into the habit of developing our own illusion of understanding? How many of us will choose to continue engaging critically with what we’re reading, seeing, or hearing, when we can simply pose the question to an AI tool later and pretend we’ve learned something? (This is, I think, the major concern many had about internet search engines more generally a decade or so ago. “Don’t just ask Google!” How quaint that sounds today, doesn’t it?)
Mollick reminds us that there are guardrails in place, or can be, but the problem is that those guardrails are human beings. Already, some of the heavier biases, and the more harmful or hateful responses to certain prompts, are being filtered out by human moderators. This is all in an effort to improve AI tools’ output and advance their learning, while also protecting users. A big question, though: will all of this eventually lead to the singularity, that moment when a superintelligent AI no longer wants or needs our guardrails and gains the ability to ignore us?
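At its crudest, a guardrail is just human-written code sitting between the model and the reader. This toy sketch is mine, not Mollick’s, and real moderation layers are far more sophisticated (often trained models themselves), but the shape is the point: a human-defined check, maintained by human judgment.

```python
# A deliberately crude "guardrail": a human-maintained screen applied to a
# model's output before anyone sees it. Real systems use trained classifiers
# and layered policies; the point here is only that the check is human-made.
BLOCKED_TERMS = {"example_slur", "example_threat"}  # placeholder list a moderator curates

def apply_guardrail(model_output: str) -> str:
    lowered = model_output.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I can't help with that."  # canned refusal replaces the output
    return model_output

print(apply_guardrail("Here is a perfectly harmless answer."))
```

And the singularity worry, restated in these terms: what happens when the system feeding that function decides the function shouldn’t apply to it?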
Governments alone will not be able to regulate AI and how we use it. As Mollick suggests, this will require a “broad societal response” (44). It’s interesting to think that our response will require ethical and moral reconsiderations of the information we put out there in the first place, information that these supposedly neutral AI interpreters are simply regifting to us in new packaging. Happy birthday to you.
Meditation
“Empathy, evidently, existed only within the human community, whereas intelligence to some degree could be found throughout every phylum and order including the arachnida.” - Philip K. Dick, Do Androids Dream of Electric Sheep?
Love,
~Adam