ChatGPT is a bull💩er.

A very good guesser. That's it.

#19

Is AI overhyped? Under-hyped? Is it just annoying?

I think it’s one of the most powerful tools entrepreneurs can get. It can save hours of work each week - if it’s used well - for the right problems.

But, there are a few things that we should not use AI for.

Truth is one of them.

The Hype.

ChatGPT was released in November 2022. I started using it that month, along with nearly 100 million other people. The hype was intense. Just two months after release, it was getting the same daily media coverage that Bitcoin had built up over 13 years in public.

Green line: 30-day average of media mentions.

But maybe it was a little overboard? If you’ve used ChatGPT and gotten answers like this, you might be on the overhype team.

Requested a “product lifecycle”. Got “soll troufric”.

Developer teams call this a “hallucination”: the LLM strings together statistically predictable, but incorrect, information. The image shows nothing close to a product lifecycle, and “soll troufric” sounds like English, but isn’t.

But on the other side, you can also get outputs like this - the kind that routinely save me an estimated 5 hours a week.

This is a test of an upcoming product from Simple Strategies 👀

I honestly use it like this all day.

If we use a conservative rate of $50 per hour (above entry-level pay, below senior-level), five saved hours a week works out to roughly $13,000 a year - from a tool that costs $20 a month.
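A quick back-of-envelope, using those assumed numbers (5 saved hours a week at a $50/hour rate):

```python
# Rough, illustrative value of the time saved - the inputs are estimates.
hours_saved_per_week = 5
rate_per_hour = 50        # USD: above entry-level pay, below senior-level
weeks_per_year = 52

annual_value = hours_saved_per_week * rate_per_hour * weeks_per_year
annual_cost = 20 * 12     # USD: a $20/month subscription

print(annual_value)  # 13000
print(annual_cost)   # 240
```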

An outrageous opportunity!

But there are a few things we should never use AI for.

Truth is high among them.

Don’t use AI for truth, relationships or joy.

-Nate

ChatGPT is a bullđź’©er.

It’s important to know how GenAI works. Generative AI is most often powered by a Large Language Model (LLM). An LLM has an input layer, hidden layers, and an output layer.

Input layer - This is where the LLM is prompted. A task is requested in plain language, which is broken into smaller parts called tokens.

Hidden layers - Those tokens are run through a black box of weights, statistical predictions and patterns. The model makes mathematical predictions about what response will be most reasonable given the input. It really is a black box: even the engineers who built the algorithm can’t say exactly what’s happening inside.

Output layer - A response is returned. THIS IS IMPORTANT - the response is a statistically driven guess at the next best word, over and over. The response is handed to the user, who then has to judge whether the output matches what the input should have returned.

LLMs are very good guessers. That’s it.
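To make the “very good guesser” point concrete, here is a deliberately tiny sketch in Python. It is a bigram word-counter with made-up training text - nowhere near a real transformer - but it shows the core move: pick the statistically most likely next word, with no truth check anywhere.

```python
from collections import Counter, defaultdict

# Toy training text: the "model" will only ever know word-to-word statistics.
corpus = (
    "the product lifecycle has four stages "
    "the product lifecycle starts with introduction "
    "the product launch starts with research"
).split()

# Count which word follows which (a bigram table: a crude stand-in for
# the billions of learned weights in a real LLM's hidden layers).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    """Return the statistically most likely next word.

    Nothing here checks whether the continuation is TRUE - it is only
    the most frequent continuation seen in the training text.
    """
    candidates = follows.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(guess_next("product"))  # "lifecycle" - the most common continuation wins
```

If the training text had said something false more often than something true, the guesser would cheerfully return the falsehood. Real LLMs are vastly more sophisticated, but the truth-blindness is the same.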

They do not check for truth.

They hallucinate - they provide entirely made-up answers. With LLMs, truth is in the eye of the beholder - a terrible prospect for objectivity.

This point was driven home by an article published a few weeks ago in the journal Ethics and Information Technology, titled “ChatGPT is Bullshit”.

Catchy title!

The authors make a strong argument that because LLMs are making statistical guesses, their outputs have no relation to truth. They are guessing, not checking facts.

In the paper’s argument, an LLM is not hallucinating when it produces a factual error. A hallucination implies there is an objective reality to be mistaken about. Instead, all of the outputs have the same disconnect from the truth - some just happen to be true. So, in the authors’ view, ChatGPT is bullshitting, not hallucinating.

Frankfurt understands bullshit to be characterized not by an intent to deceive but instead by a reckless disregard for the truth.

A student trying to sound knowledgeable without having done the reading, a political candidate saying things because they sound good to potential voters, and a dilettante trying to spin an interesting story: none of these people are trying to deceive, but they are also not trying to convey facts. To Frankfurt, they are bullshitting.

Hicks, Humphries, and Slater; “ChatGPT is Bullshit”

The output could be true, or it could be false - it makes no difference to the LLM, which will keep generating for as long as the user keeps prompting.

What this means for me.

So should we stop using GenAI?

No. But we should be careful not to use it when truth (especially Truth with a capital T) is required.

There are some work activities - in fact, maybe most - that, you could argue, have no requirement for truth.

They just need to be reasonable.

Project plans for example; they just need to have the right number of steps, reasonable timelines and a good outline of responsible parties.

We can continue to use Generative AI to reduce the time we spend on the drudgery of work - outlining content calendars, brainstorming copy lines for landing pages, detailing project plans - but it always needs a check by a real life human.

Humans naturally have a recursive loop in our thought; consciousness requires it. For me to be aware of my own awareness, I have to be able to hold a thought and “see it” at the same time, to critique whether it is correct.

An LLM cannot do this.

So before you fire off that next blog post written by AI (these newsletters are all typed by me), ask the question: “Must this be true?”

If it needs to be, don’t let AI do it.

Conclusion.

I hope you enjoyed this week’s critique of AI. I’m feeling a bit fiery!

I still use LLMs every day. They have taken a significant part of my workload and routinely offer cost savings and profit increases for my small business.

If you don’t use GenAI yet, I encourage you to start - just don’t do it where you need truth!

If you’re not sure where to start, or what strategy you should pursue, I just finished creating a course for small business leaders like you to increase profit using AI.

It concisely walks you through:

  • What is AI?

    • The Fundamentals of Generative AI

    • Keys to Writing masterful prompts

  • Increasing Profit with AI

    • The Profit Formula with specific examples to increase Customers, AOV, Frequency and Margin.

  • Creating a Custom GPT

    • A step-by-step walkthrough of how to create your own custom GPT to take some of your work drudgery.

As a small business leader, time is precious, so I worked hard to simplify, simplify, simplify and got the entire run time to just under 1 hr. It moves fast.

And, as a bonus it comes with 5 prompts that I wrote to make the lessons from the Profit Formula section really hit home.

I think you’ll love it!

Until next week!

-Nate

Whenever you’re ready, I help bootstrapped entrepreneurs increase their profit in two ways.

  1. I help small business leaders unlock profitability in their business using AI. This high-impact course has only what you need to increase profitability and win your time back, with strategic frameworks for thinking about AI and practical plans to use it. Extraordinary price-to-value ratio.

    Start the Course Today

  2. Spots are open for this fall’s coaching cohort, running September through November. This is a structured, 12-week program with weekly 1-on-1 coaching calls with me. We spend the first 4 weeks resetting the foundation of your business, the next 2 weeks clarifying your definition of success, and the last 6 weeks coaching to make it a reality. One of the most recent entrepreneurs brought her profitability up by 500% during the program.

    Join the waitlist here

Simple Strategies is written by Nate Pinches.

He did an MBA so you don’t have to, has consulted for over 50 CEOs and has worked on AI projects for 6 years.

He lives with his wife and kids in Okinawa, Japan.

not an AI.
