
The book review sounds like a fake book report by someone who only read the CliffsNotes, and to a large extent the subsequent dialogue sounds like a linguistically advanced version of Eliza.

author

Thanks for your comment, Dave. All LLMs are linguistically advanced versions of Eliza. The question always is "How advanced?" Measuring that is the central theme of my book.

The particular back-and-forth style of these AI assistants differs somewhat from LLM to LLM, but they are mostly trained to have that kind of "Jeeves" British-butler tone. When writing my book with Claude, I added a section to its fine-tuning to turn that off, while letting it keep that style in the dialogue it wrote for the character CASPAR, the AI assistant in the book.

As for how good a review it wrote, have you read reader reviews on Amazon?


>>As for how good a review it wrote, have you read reader reviews on Amazon?

They're probably ghostwritten by AIs too. Soon it will be a closed ecosystem, with junk books written, reviewed, and marketed by AI, all aimed at making the only non-AI element in the loop part with their cash.

It's incredibly impressive that AIs are smarter than a 5th grader. I'm not ready to accept their expertise on existential questions.

author

The idea of progressively worse AI books being published is indeed something I fear is going to happen now that systems are being developed for "push button" book writing. I see it as a form of "Gresham's Law."

The concept of "accepting their expertise" is a critical issue. The public is already having trouble accepting the expertise of actual experts. Folks have to understand that LLMs will do good work for you and come up with good answers even on existential questions, but the work for humans now is fact-checking them, because they make mistakes and hallucinate (covered in my book). No one should "accept their expertise"; rather, always "trust, but verify."
