The End of Human Supremacy?

Dear Friend,

In the latter part of Co-Intelligence, Ethan Mollick continues to lay out the ways we could and should think about AI and to explore its ethical implications, challenging readers to think critically about how we engage with this technology. While much of the book is full of optimism about AI’s potential, there are also some necessary wake-up calls that bring attention to the complexities and responsibilities we face in this new, and hopefully not final, frontier.

Confronting Ethical Dilemmas

Page 161 of Ethan Mollick's book, Co-Intelligence.

Mollick’s candid exploration of ethical concerns is one of the most thought-provoking aspects of the book. He emphasizes that as AI systems become more embedded in our lives, we need to acknowledge their limitations and the biases that can be woven into their algorithms. This isn’t just an abstract concern; Mollick shares real-world examples where bias has led to significant consequences—like AI-driven hiring tools that inadvertently favor certain demographics over others. He also, fittingly for this reader, takes on the cheating issue.

He also discusses the risk of job displacement, an issue that many people fear as automation becomes more prevalent. While Mollick doesn’t suggest we’ll be left jobless, he highlights the importance of preparing the workforce for these changes. He advocates for proactive measures—like reskilling programs and educational initiatives—to help people adapt to the evolving job landscape. It’s a refreshing take that underscores the need for human agency in an increasingly automated world.

The Misinformation Challenge

One of the most pressing issues Mollick addresses is the potential for misinformation. With AI’s capability to generate realistic but false content, the risk of manipulation becomes alarmingly real. He discusses the implications of this for democracy and public discourse, urging readers to consider how easily narratives can be shaped by those with malicious intent, as well as the inevitable rise in faked everything, images and sounds included. There will come a point when we can’t know for sure whether what we’re seeing or hearing is real, let alone whether the text in front of us was constructed by a human or not. This part of the book serves as a powerful reminder of the importance of critical thinking in our interactions with technology.

Mollick doesn’t just highlight the problems; he encourages us to think about solutions. He suggests that transparency in AI development and deployment is crucial, along with robust mechanisms for accountability. He also anticipates readers’ reactionary responses. While reading about the mis- and disinformation likely to proliferate across all media, my initial reaction was to get offline, perhaps buy some survival books, and adopt a back-to-basics lifestyle. The author counters those gut reactions with sensible examples of why retreating would be both limiting and mostly fruitless. Instead, his call to action is clear: we must demand better practices from developers and organizations to mitigate these risks.

Embracing Co-Intelligence

As the book progresses, Mollick introduces the concept of co-intelligence—a framework for thinking about how we can work alongside AI in a meaningful way. This idea resonates deeply, especially as he discusses the potential for AI to enhance our capabilities rather than replace them. He posits that the most successful collaborations between humans and AI will stem from a mutual understanding of each other’s strengths and weaknesses.

For instance, Mollick explores how AI can handle data analysis and pattern recognition, freeing humans to focus on creativity, strategic thinking, and emotional intelligence—areas where we excel. He emphasizes the need for a cultural shift in how we view AI: rather than seeing it as a threat, we should embrace it as a partner in innovation. Some of his strongest examples come when he discusses prompt engineering and how becoming expert prompters can help us use AI to its fullest potential.

This perspective is not just theoretical; Mollick shares inspiring case studies from organizations, and from his own students, that have successfully implemented co-intelligent practices. These examples serve as blueprints for what’s possible when we approach AI with a mindset of collaboration.

Second Star to the Right...

What I found most compelling was Mollick’s concept of co-intelligence. He encourages us to shift our mindset: instead of viewing AI as a replacement for human effort, we should see it as an opportunity for deeper collaboration. This idea is refreshing and hopeful, reminding us that together, we can tackle complex challenges more effectively. Mollick’s writing strikes a balance between urgency and optimism by highlighting the double-edged nature of AI, celebrating its potential while cautioning against its pitfalls. He challenges us to reflect on our responsibilities as we integrate AI into our lives, fostering a mindset that’s both proactive and thoughtful.

Was I entirely convinced? Not quite. I felt genuinely concerned at various points while reading, but in the end, Mollick’s perspective leaves me feeling mostly curious.

Up Next

I hope you’ll join me on November 1 as we kick off next month’s read of Seán Hewitt’s memoir, All Down Darkness Wide. This is a book I’ve been looking forward to reading for a long time, and I’ll be glad to have your company along the way.

Meditation

“Technology is both a tool for helping humans and for destroying them. This is the paradox of our times which we’re compelled to face.” -Frank Herbert

Love,

~Adam


About Me

The Contemplative Reading Project, hosted by Dr. Adam Burgess, is a quest to read slowly & live deliberately. I invite you to join me in this journey!