I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find especially that the emotional barrier is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • Zos_Kia@lemmynsfw.com · 7 months ago

    I think a flaw in this line of reasoning is that it assigns a magical property to the concept of knowing. Do humans know anything? Or do they just infer meaning from identifying patterns in words? Ultimately this is a spiritual question, and it doesn’t hold water in a scientific conversation.

    • bcovertigo@lemmy.world · 7 months ago

      It’s valid to point out that we have difficulty defining knowledge, but the output from these machines is inconsistent at a conceptual level, and you can easily get them to contradict themselves in the spirit of being helpful.

      If someone told you that a wheel can be made entirely of gas, would you have confidence that they have a firm grasp of a wheel’s purpose? Tool use is a pretty widely agreed-upon marker of intelligence, so not grasping the purpose of a thing they can describe at great length and in exhaustive detail, while also making boldly incorrect claims on occasion, should raise an eyebrow.
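      One way to make the “fluent but ungrounded” point concrete is a toy next-word sampler: it chains together statistically likely words with no model of what any word refers to. The tiny corpus and the `babble` helper below are made up purely for illustration, not how any real LLM is built, but the underlying move of predicting a plausible next token is the same kind of thing:

      ```python
      import random
      from collections import defaultdict

      # Tiny toy corpus. The sampler only learns which words tend to follow which;
      # it has no concept of what a "wheel" or "gas" actually is.
      corpus = (
          "a wheel is made of metal . a wheel is made of wood . "
          "a balloon is made of rubber . a balloon is full of gas ."
      ).split()

      # Bigram table: for every word, the words observed right after it.
      follows = defaultdict(list)
      for current, nxt in zip(corpus, corpus[1:]):
          follows[current].append(nxt)

      def babble(start="a", max_words=8):
          """Chain together statistically plausible next words until a period."""
          word, out = start, [start]
          for _ in range(max_words):
              word = random.choice(follows[word])
              out.append(word)
              if word == ".":
                  break
          return " ".join(out)

      print(babble())  # can happily produce "a wheel is made of gas ." -- fluent, but ungrounded
      ```

      Every sentence it emits is locally plausible because each word really did follow the previous one somewhere in the data, yet the claim as a whole can be nonsense, which is exactly the “wheel made entirely of gas” failure mode.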