I finished Co-Intelligence by Ethan Mollick. It’s the only book I’ve read about AI. It’s a good primer on the subject, examining key technical, social, and ethical aspects of this technology. Much of the book centers on large language models (LLMs) like ChatGPT, but it touches on image generation too.

Mollick has four principles for living and working with AI:

1. Always invite AI to the table. You should work with AI tools to see what they’re good at and how they might help you achieve your goals. The “Jagged Frontier of AI” means this tech will be amazing at some things, and quite bad at others (for now).

2. Be the human in the loop. AI tools don’t “know” anything, but they can seem convincingly smart. Apply your curiosity, skepticism, and general knowledge to evaluate output.

3. Treat AI like a person (but tell it what kind of person it is). AI works best with constraints, so asking it to act as if it’s a specific person will generate better results. For instance, if I were researching a social phenomenon I might say, “You’re Malcolm Gladwell, and you’re investigating the unlikely rise of Crocs as fashionable footwear. How could things go so wrong?” (There’s a small sketch of this after the list.)

4. Assume this is the worst AI you will ever use. AI’s capabilities can only grow. You should imagine a future where it’s very good at things it may be very bad at right now. Plan accordingly.
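To make principle 3 concrete, here’s a minimal sketch of persona prompting using the OpenAI Python SDK. The model name and the split between a system message (the persona) and a user message (the task) are my assumptions for illustration, not something the book prescribes.

```python
# A minimal sketch of persona prompting (principle 3), assuming the
# OpenAI Python SDK (v1.x). Model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message constrains the model with a persona, which tends
# to focus its tone and framing; the user message carries the task.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are Malcolm Gladwell, a journalist who explains "
                "social phenomena through surprising, counterintuitive stories."
            ),
        },
        {
            "role": "user",
            "content": (
                "Investigate the unlikely rise of Crocs as fashionable "
                "footwear. How could things go so wrong?"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```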

There are loads of surprising and unsettling lines in the book, including four potential scenarios for the future of humanity and AI. I won’t get into that now, but I will say: our world has changed (again), and we’re not going back.

Note: This post is 100% human-generated.

Shawn Romano @romano