Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

  • Zexks@lemmy.world · +21/-14 · 3 months ago

    Lemmy is full of AI luddites. You’ll not get a decent answer here. As for the other claims: they are no more “just next-token generators” than you are when speaking.

    https://eight2late.wordpress.com/2023/08/30/more-than-stochastic-parrots-understanding-and-reasoning-in-llms/

    There are literally dozens of white papers like this that everyone on here chooses to ignore. An even better point: none of these people will ever be able to give you an objective measure by which to distinguish themselves from any existing LLM. They’ll never be able to give you criteria that would separate themselves from parrots or ants while including humans but excluding LLMs, other than “it’s not human or biological,” which is just fearful, weak thinking.

    • chobeat@lemmy.ml · +12/-1 · 3 months ago

      You use “luddite” as if it’s an insult. History proved the Luddites were right in their demands, and they were fighting the good fight.

    • jacksilver@lemmy.world · +12/-2 · 3 months ago

      Here’s an easy way we’re different: we can learn new things. LLMs are static models; that’s why OpenAI publishes knowledge-cutoff dates for its models.

      Another is that LLMs can’t do math. Deep-learning models are limited to their input domain, so when you ask an LLM to do math outside of its training data, it’s almost guaranteed to fail.

      Yes, they are very impressive models, but they’re a long way from AGI.

      • DavidDoesLemmy@aussie.zone · +4/-8 · 3 months ago

        I know lots of humans who can’t do maths. At least I think they’re human. Maybe they’re LLMs, by your definition.

        • jacksilver@lemmy.world · +1/-2 · 3 months ago

          I think you’re missing the point. No LLM can do math; most humans can. No LLM can learn new information; all humans can and do (maybe to varying degrees, but still).

          And just to clarify what I mean by “not able to do math”: there’s a lack of understanding of how numbers work, so combining numbers or values outside of the training data can easily trip them up. Since it’s prediction-based, exponents, trig functions, etc. will quickly produce errors when using large values.
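          A quick illustration of the point above (my own sketch, not from the comment): the exact digit string of a large exponentiation almost never appears verbatim in web text, so a next-token predictor has nothing memorized to copy from, while a symbolic system computes it trivially.

          ```python
          # Python's arbitrary-precision integers compute this exactly.
          # A model predicting the next token from training data would
          # have to guess all 37 digits, and typically gets some wrong.
          x = 7 ** 43
          print(x)             # the exact 37-digit result
          print(len(str(x)))   # 37
          ```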

          • Zexks@lemmy.world · +1 · 3 months ago

            Yes. Some LLMs can do math. It’s a documented thing. Just because you’re unaware of it doesn’t mean it doesn’t exist.

    • vrighter@discuss.tchncs.de · +11/-2 · 3 months ago

      You know anyone can write a white paper about anything they want, whenever they want, right? A white paper is not authoritative in the slightest.

    • gravitas_deficiency@sh.itjust.works · +8/-2 · edited · 3 months ago

      Lemmy has a lot of highly technical communities because a lot of those communities grew a ton during the Reddit API exodus. I’m one of those users.

      We tend to be somewhat negative and skeptical of LLMs because many of us have a very solid understanding of NN tech, LLMs, and the theory behind them, can see right through the marketing bullshit that pervades the domain, and are growing increasingly sick of it for very real and specific reasons.

      We’re not just blowing smoke out of our asses. We have real, specific, and concrete issues with the tech: the jaw-dropping energy inefficiencies it requires, what it’s being billed as, and how it’s being deployed.

      • Zexks@lemmy.world · +1/-1 · 3 months ago

        Yes, many of you are. I’m one of those technical people you speak of. I work with half a dozen devs who all think like you. They’re all failing to keep up, by their metrics, with those of us capable of using and finding uses for new tech, including AI. The others are being pushed out, as will most of those in here complaining. The POs notice; you’ll be outpaced, like the people still clinging to their Ask Jeeves favorites when Google first dropped.