I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier is especially difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • BlameThePeacock@lemmy.ca · 7 months ago

    It’s just fancy predictive text, like the suggestions you get while texting on your phone. It guesses what the next word should be, just for far more complex topics.
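
    For anyone who wants to see that idea in code: below is a minimal sketch of “fancy predictive text”, a toy bigram model in Python. The corpus and the generate() helper are invented for this example; a real LLM is a huge neural network rather than a word-pair table, but the basic loop is the same: guess a likely next word, append it, repeat. No understanding involved.

    ```python
    import random
    from collections import Counter, defaultdict

    # Invented toy corpus -- a real model trains on billions of documents.
    corpus = (
        "i can go to work and then i can go to the park and then "
        "i can get the kids to the park and then go to work"
    ).split()

    # Count which word follows which (a bigram table).
    next_words = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current][following] += 1

    def generate(word, length=15):
        """Repeatedly pick a likely next word -- pure statistics, no meaning."""
        out = [word]
        for _ in range(length):
            candidates = next_words[out[-1]]
            if not candidates:
                break
            words, weights = zip(*candidates.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("i"))  # e.g. "i can go to the park and then go to work ..."
    ```

    The output often looks locally fluent while drifting nowhere in particular, which is the point: fluency comes from the statistics, not from any model of what the words mean.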

    • kambusha@sh.itjust.works · 7 months ago

      This is the one I got from the house to get the kids to the park and then I can go to work and then I can go to work and get the rest of the day after that I can get it to you tomorrow morning to pick up the kids at the same time as well as well as well as well as well as well as well as well as well… I think my predictive text broke
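
      That runaway “as well as well as well” is exactly what a next-word guesser with no memory of the bigger picture does. A hedged illustration, reusing the toy bigram idea from the sketch above (the phrase is invented): if decoding always takes the single most likely next word, and “as” and “well” happen to be each other’s top successors, the model cycles forever.

      ```python
      from collections import Counter, defaultdict

      # Invented phrase where "as" -> "well" and "well" -> "as" dominate.
      phrase = "pick up the kids as well as well as planned".split()

      follows = defaultdict(Counter)
      for a, b in zip(phrase, phrase[1:]):
          follows[a][b] += 1

      word, out = "as", ["as"]
      for _ in range(10):
          word = follows[word].most_common(1)[0][0]  # always take the top guess
          out.append(word)

      print(" ".join(out))  # as well as well as well as well ...
      ```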

    • k110111@feddit.de · 7 months ago

      It’s like saying an OS is just a bunch of if-then-else statements. While that’s true, in practice it’s far, far more complicated.