I hear people saying things like “chatgpt is basically just a fancy predictive text”. I’m certainly not in the “it’s sentient!” camp, but it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

Even if it’s predicting word by word within a bunch of constraints & structures inferred from the question / prompt, that’s pretty interesting. Tbh, I’m more impressed by chatgpt’s ability to appear to “understand” my prompts than I am by the quality of the output. Even though its writing is generally a mix of bland, obvious and inaccurate, it mostly does provide a plausible response to whatever I’ve asked / said.

Anyone feel like providing an ELI5 explanation of how it works? Or any good links to articles / videos?

  • dumbcrumb@lemmy.world · 11 months ago

    Because when you ask it to solve a math problem, it isn’t actually solving the problem like you and I would. It’s taking your input and comparing it to the data it was trained on to give the most likely response. If it were actually solving the problem, it wouldn’t mess up on simple stuff. You can see this if you give it a math problem that isn’t commonly found in text, like a few multiplications of large numbers in a row. It will just spit out some random large number that isn’t even close.
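
    The “most likely response” idea can be sketched with a toy bigram model — a vastly simplified stand-in for a real LLM (which uses a neural network over tokens, not raw word counts), but it shows the core mechanic of picking the statistically most common continuation rather than reasoning:

    ```python
    from collections import Counter, defaultdict

    # Toy "predictive text": count which word follows which in a tiny corpus.
    # (Illustrative only -- real models learn probabilities with a neural net.)
    corpus = "the cat sat on the mat the cat ate the fish".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def most_likely_next(word):
        # Pick the most frequent follower -- no "understanding", just counts.
        return following[word].most_common(1)[0][0]

    print(most_likely_next("the"))  # "cat" (seen twice after "the", vs. once each for "mat"/"fish")
    ```

    A model like this will happily emit fluent-looking output for anything resembling its training data, and confidently wrong output for anything that doesn’t — which is roughly what happens with the unusual multiplication problems.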