I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier, especially, is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • TheBananaKing@lemmy.world · 7 months ago

    Imagine making a whole chicken out of chicken-nugget goo.

    It will look like a roast chicken. It will taste alarmingly like chicken. It absolutely will not be a roast chicken.

    The sad thing is that humans do a hell of a lot of this, a hell of a lot of the time. Look how well a highschooler who hasn’t actually read the book can churn out a book report. Flick through, soak up the flavour and texture of the thing, read the blurb on the back to see what it’s about, keep in mind the bloated over-flowery language that teachers expect, and you can bullshit your way to an A.

    The only problem is that you can’t use the results for anything productive, and productive work is exactly what people try to use GenAI for.