The subjects that you can’t even bring up without getting downvoted, banned, fired, expelled, cancelled etc.

  • Mothra@mander.xyz
    1 month ago

    “I’ve asked ChatGPT about xyz” and “how to use ChatGPT for xyz” in my experience get me downvotes fast.

    People are quick to presume you have no ability to fact-check anything and that you will follow its advice blindly (which, mind you, you were never asking for in the first place) instead of ever asking a human, for example about medical conditions, though it’s not limited to that topic. People presume you are trying to eliminate the human factor from the equation completely and are quick to remind you of your sins. God forbid you ever use a chatbot to test ideas, ask for a summary of a topic so you can expand your research later, or get creative with it in any way. If you do, most people don’t want to know.

      • UlyssesT [he/him]@hexbear.net
        1 month ago

        I think the bigger problem is that each answer it gives basically destroys a forest

        That, and it’s filling once-useful search engines with useless and even dangerous gibberish.

      • MonkeMischief@lemmy.today
        1 month ago

        To be fair, “for each answer it gives” is an exaggeration. You can even run a model on your home computer. It might not be so bad if we just had an established model and asked it questions.

        The “forest destroying” is really in training those models.

        Of course at this point I guess it’s just semantics, because as long as it gets used, those companies are gonna be training those stupid models non-stop until they’ve created a barren wasteland and there’s nothing left…

        So yeah, overall pretty destructive and it sucks…

        • wuphysics87@lemmy.ml
          1 month ago

          Training a model takes more power than what? Generating a single poem? Generating an entire 4th grade class’s essays? Answering every question in Hawaii for six months? What is the scale? The break-even point for training is far, far below total usage.

          Have you ever used one locally? Depending on your hardware, it’s anywhere from glacial to morgue’s-AC slow. To the average person on the average computer it is nearly unusable, relative to the instant gratification of the web interface.

          That gives you a sense of the resources required to do the task at all, but it doesn’t scale linearly. Two computers aren’t twice as fast as one; the scaling is logarithmic, with diminishing returns. In the end, this means one 100-word response uses the equivalent of three bottles of water.

          How many queries are made per hour? How does that scale over time as usage of the same model increases? It adds up to more than training the model. A lot more.
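The break-even argument above can be made concrete with a back-of-envelope sketch. Every number here is a placeholder assumption chosen purely for illustration, not a measured figure:

```python
# Back-of-envelope: one-time training cost vs cumulative inference cost.
# All constants are illustrative assumptions, NOT real measurements.
TRAINING_KWH = 1_300_000      # assumed one-time training energy (kWh)
KWH_PER_QUERY = 0.003         # assumed energy per chat query (kWh)
QUERIES_PER_DAY = 10_000_000  # assumed global daily query volume

# Number of queries at which cumulative inference energy
# equals the one-time training energy.
breakeven_queries = TRAINING_KWH / KWH_PER_QUERY
days_to_breakeven = breakeven_queries / QUERIES_PER_DAY

print(f"inference matches training after {breakeven_queries:,.0f} queries")
print(f"that is about {days_to_breakeven:.0f} days at the assumed volume")
```

With these made-up inputs, inference energy overtakes training in a matter of weeks; the point of the sketch is only that for any heavily used model, cumulative query cost dwarfs the one-time training cost sooner or later.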

          • MonkeMischief@lemmy.today
            1 month ago

            Yeah you make a really good point there! I was perhaps thinking too simplistically and scaling from my personal experience with playing around on my home machine.

            Although realistically, it seems the situation is pretty bad because freaky-giant-mega-computers are both training models AND answering countless silly queries per second. So at scale it sucks all around.

            Minus the terrible fad-device-cycle manufacturing aspect, if they’re really sticking to their guns on pushing this LLM madness, do you think this wave of onboard “AI chips” will make any impact on lessening natural-resource usage at scale?

            (Also off-topic, but I wonder how much of a sweet, juicy exploit target these “AI modules” will turn out to be.)

            • wuphysics87@lemmy.ml
              1 month ago

              It’s really opaque. We won’t know the environmental impact right away. Part of the larger problem is that, while folks like you and I make a sizable impact, it’s nothing compared to enterprise usage at scale. Every website, app, and operating system with an AI button makes it even easier for users to interface with AI, leading to more queries. Not only that, those queries and responses are collected and used for further training.

              Should the usage of AI stay stable, improved hardware would decrease carbon output, but we should be cautious about drawing that conclusion. What is more likely is that increased efficiency will lead to increased usage, perhaps at an accelerated rate with the anticipation of even more technological breakthroughs down the line.

              All that said, I’m really not a doomer. It’s important we all consider the cost of our choices. The way I see it, we are all going to die eventually. I’m old enough it will probably be from something else.

    • Skua@kbin.earth
      1 month ago

      If you have fact-checked it, why not just say that wherever you did that is where you got the answer from? People are right to be skeptical of “ChatGPT says so”, and if you’ve used it as the start of your research rather than as your entire research then just saying “I asked ChatGPT” is no different to “I googled it”, and nobody would much like you saying that either. How you found the information is less important than where you found it.

      • Mothra@mander.xyz
        1 month ago

        These are precisely the kinds of presumptions people make. I’m never making an argument “because ChatGPT says so”. And yes, you are absolutely right: chatbot answers are on par with search-engine results, if not even less reliable on occasion. My point is that I’m not using any of the information as evidence, counterpoints, or even advice. People take a stand as if I were.

        For example, I once asked ChatGPT about a sensation I feel on my skin after heavy exercise, because googling didn’t give me satisfactory results. GPT didn’t either, but it gave me a list of close matches. The sensation itself was never a problem for me, never something I intended to change, and never something I would consider seeing a doctor about; if I never found out what was causing it, my life would carry on just the same. I was simply curious.

        And out of curiosity I asked here, and the majority of the answers were “you shouldn’t be asking randoms online, how dare you” and “this is a question for a doctor, don’t ask a chatbot for medical advice”. Both stances baffled me. Nowhere in my post did I say anything suggesting I was in pain or discomfort, that I wanted to change anything about it, or that I expected people to tell me how to make it go away. Nothing. I just wanted to know what it was, period. People presume.

    • UlyssesT [he/him]@hexbear.net
      1 month ago

      and are quick to remind you of your sins

      On the other hand, it’s totally cool and good to drag around a big cross of contrarianism in a totally-not-self-righteous way because your treat printers were criticized, amirite? smuglord