Imagine an AGI (Artificial General Intelligence) that could perform any task a human can do on a computer, but at a much faster pace. This AGI could create an operating system, produce a movie better than anything you’ve ever seen, and much more, all while being limited to SFW (Safe For Work) content. What are the first things you would ask it to do?

  • j4k3@lemmy.world · 1 year ago

    Personalized open-source education for everyone, running on fully documented, open RISC-V hardware designed by and for the AGI to run completely independently, with no outside connections needed and with full transparency. The model weights would be open source and freely available. The hardware would be easy to fabricate with high yields on trailing nodes, and all of the software, lithography masks, and digital tooling would be GPL-3 licensed, so anyone could produce the hardware profitably but no one could control it.

    Also, open-source all of global politics: show the underlying motivations and patterns objectively, in an entertaining, easy-to-watch visual format that appeals to the majority of humans and motivates nonviolent change.

  • LogicalDrivel@sopuli.xyz · 1 year ago

    Tell it to figure out zero-point energy or whatever other sci-fi-style free energy is actually possible. Then tell it to figure out the cheapest, easiest way to implement the technology. Then have it disseminate those plans worldwide, to everyone.

    • InternetPirate@lemmy.fmhy.ml (OP) · 1 year ago

      This sounds like science fiction. Even if the AGI were capable of drawing up plans for, say, a fusion reactor, someone would still need to execute those plans. So what’s the point of everyone having access to them if the same electrical companies would likely be the ones constructing the reactor?

      • LogicalDrivel@sopuli.xyz · 1 year ago

        If everyone has the plans for a super-efficient, easy(ish)-to-build free-energy device, then its existence couldn’t be covered up, and the big energy companies and governments around the world would be forced to implement those plans or face civil unrest or revolt.

  • nxfsi@lemmy.world · 1 year ago

    The obvious answer is to use it to create an AGPL-3.0-or-later clean-room implementation of itself, then use that to do whatever I want.

  • quotheraven404@lemmy.ca · 1 year ago

    I’d want a familiar/daemon that was running an AI personality to act as a personal assistant, friend and interactive information source. It could replace therapy and be a personalized tutor, and it would always be up to date on the newest science and global happenings.

    • InternetPirate@lemmy.fmhy.ml (OP) · 1 year ago

      I honestly think that, given an interesting personality, most people would drastically reduce their Internet usage in favor of interacting with the AGI. It would be cool if you could set the percentage of humor and other traits, similar to the way it’s done with TARS in the movie Interstellar.

      • quotheraven404@lemmy.ca · 1 year ago

        Exactly! I think mental health issues would be reduced drastically if everyone had a devoted friend for support at all times.

        Things like misinformation and radicalization would go down too, if the AI always had global context for everything.

    • SirGolan@lemmy.sdf.org · 1 year ago

      That’s possible now. I’ve been working on such a thing for a while, and it can generally do all of that, though I wouldn’t advise using it for therapy (or medical advice), mostly for legal reasons rather than ability. When you create a new agent, you can tell it what type of personality you want. It doesn’t just respond to commands; it also figures out what needs to be done and does it independently.
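
      To make the “figures out what needs to be done” part concrete: the core of such an agent is a plan-act-observe loop rather than a single prompt-response. A minimal sketch with entirely hypothetical names (`llm` stands in for whatever chat-completion call the system uses, `tools` for whatever the agent can actually do; this is not the commenter’s actual code):

```python
# Minimal plan-act-observe agent loop. All names are hypothetical:
# `llm` is any text-in/text-out model call, `tools` maps action names
# to callables the agent may invoke.
def run_agent(llm, tools, goal, max_steps=10):
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model decides the next step itself instead of waiting for a user prompt.
        decision = llm("\n".join(history) + "\nNext action? ('<tool> <arg>' or 'DONE <result>')")
        verb, _, arg = decision.partition(" ")
        if verb == "DONE":
            return arg  # The agent decided the goal is met.
        tool = tools.get(verb)
        observation = tool(arg) if tool else f"unknown tool: {verb}"
        history.append(f"{decision} -> {observation}")
    return "step budget exhausted"
```

      The loop is what separates this from Siri-style assistants: each tool result is fed back into the transcript, so the model can react to what it just observed.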

      • quotheraven404@lemmy.ca · 1 year ago

        Yeah, I haven’t played with it much, but it feels like ChatGPT is already getting pretty close to this kind of functionality. It makes me wonder what’s missing to take it to the next level over something like Siri or Alexa. Maybe it needs to be proactive rather than just waiting for prompts?

        I’d be interested to know whether current AI could recognize the symptoms of different mental-health issues and apply the known strategies for dealing with them. Like, if a user shows signs of anxiety or depression, could the AI use CBT tools to conversationally challenge those thought processes without it really feeling like therapy? I guess, just like self-driving cars, this kind of thing would be legally murky if it went awry and accidentally ended up convincing someone to commit suicide or something, haha.

        • SirGolan@lemmy.sdf.org · 1 year ago

          That last bit already happened. An AI (allegedly) told a guy to commit suicide, and he did. A big part of the problem is that while GPT-4, for instance, knows all about the things you just mentioned and could probably do what you’re suggesting, nobody can guarantee it won’t get something horribly wrong at some point. It’s sort of like how self-driving cars handle maybe 95% of situations correctly, but the 5% of unexpected stuff that takes extra context a human has, and the car was never trained on, is very hard to get past.

          • quotheraven404@lemmy.ca · 1 year ago

            Thanks for the link, that sounds like exactly what I was asking for but gone way wrong!

            What do you think is missing to prevent these kinds of outcomes? Is AI simply incapable of categorizing topics as ‘harmful to humans’ on its own, without a human’s explicit guidance? It seems like the philosophical nuances of things like consent, dependence, or death would be difficult for a machine to learn if it isn’t itself sensitive to them. How do you train empathy into something so inherently unlike us?

            • SirGolan@lemmy.sdf.org · 1 year ago

              In the case I mentioned, it was just a poorly aligned LLM. The ones from OpenAI would almost certainly not do that, because they go through a process called RLHF (reinforcement learning from human feedback) in which those sorts of negative responses are mostly trained out of them. Some things still get through, but unless you’re really trying to get it to say something bad, it’s unlikely to do anything like what’s in that article.

              That’s not to say they won’t say something accidentally harmful. They’re really good at telling you things that sound extremely plausible but are actually false, because by default they have no way of checking. I have to cross-check the output of my system all the time for accuracy. I’ve spent a lot of time building in systems to make sure it’s accurate, and it generally is on the important stuff.

              Tonight it did have an inaccuracy, though I sort of don’t blame it, because the average person could have made the same mistake. I had it looking up contractors to work on a bathroom remodel (a fake test task), and it googled for the phone number of the one I picked from its suggestions. Google proceeded to give a phone number in a big box, with tiny text showing a different company’s name. Anyone not paying close attention (including my AI) would call that number instead. It wasn’t an ad or anything; somehow this company just came up in the little info box any time you searched for the other one.

              Anyway, as to your question: they’re actually pretty good at knowing what’s harmful once they’ve been trained with RLHF. Figuring out what’s missing to prevent them from saying false things is an open area of research right now; in effect, nobody knows how to fix that yet.
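
              For what it’s worth, the heart of that RLHF step is small enough to sketch. A reward model is trained so human-preferred responses score higher than rejected ones, via a pairwise (Bradley-Terry) loss; this toy scalar version is illustrative only, not any lab’s actual code:

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise RLHF reward-model loss: -log sigmoid(r_chosen - r_rejected).
    Small when the model already ranks the human-preferred response higher;
    large (a strong training signal) when it prefers the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

              Minimizing this over many human-labeled comparison pairs is what “trains out” the bad responses; the chat model is then optimized against the learned reward, which is also why there are no guarantees for cases humans never labeled.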

  • kromem@lemmy.world · 1 year ago

    Discuss the notion of, and the evidence for, us being in an approximate recreation of Earth circa the 2020s, as recreated by a future version of said superintelligence.

    If I’m having an existential crisis, it should too.

      • kromem@lemmy.world · 1 year ago

        Clever thinking.

        The generator’s aim is to create a world so convincing that the discriminator can’t distinguish it from a ‘real’ world. This mirrors the GAN architecture where the generator tries to trick the discriminator into believing its generated instances are real.
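
        For anyone who hasn’t seen the GAN setup being referenced: the two networks are trained against each other with opposed objectives. A toy sketch of just those objectives, using scalar discriminator scores in place of real networks (illustrative only):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# d_real / d_fake are the discriminator's raw scores (logits) for a
# real sample and a generated one, respectively.
def discriminator_loss(d_real, d_fake):
    """Discriminator wants real scored as real and fakes as fake:
    -log D(real) - log(1 - D(fake))."""
    return -math.log(sigmoid(d_real)) - math.log(1.0 - sigmoid(d_fake))

def generator_loss(d_fake):
    """Generator wants its output mistaken for real: -log D(fake).
    Its loss only vanishes when the discriminator is fully fooled."""
    return -math.log(sigmoid(d_fake))
```

        Training alternates between the two until, ideally, the discriminator can do no better than chance, which is the “world so convincing it can’t be distinguished from real” equilibrium described above.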

        While I do think the generator and discriminator perspectives on reality are a great way of thinking about things, I think the details are too obvious, once you see them, for the purpose to have been staying hidden.

        In many ways it seems like the nine dolphins illusion.

        You have very interesting thoughts on the topic, and I invite you to share them on !simulationtheory@lemmy.world - which might also have details you’ll enjoy in turn.

  • Meow.tar.gz@lemmy.goblackcat.com · 1 year ago

    Truth be told, I don’t know what I would ask. That much said, I don’t foresee AI replacing us right away; it’s got quite a ways to go before that happens. But when it does, I expect a massive disruption in society, because joblessness and homelessness are going to skyrocket when there is simply no assistance in a hypercapitalist world.

    • InternetPirate@lemmy.fmhy.ml (OP) · 1 year ago

      I wouldn’t be surprised if corporations just asked the AI to make as much money as possible at the expense of everything else. But people like living in capitalist countries anyway, even while complaining about the lack of safety nets; otherwise they would move to countries like China, North Korea, or Cuba.

    • InternetPirate@lemmy.fmhy.ml (OP) · 1 year ago

      The kind that uses gas? I honestly wouldn’t have thought anyone would be interested in open-sourcing that. I’d prefer it design an open-source Roomba or, while we’re at it, a robot body so it could perform more tasks. But you’d still have to build it yourself.

          • jerry@lemmy.world · 1 year ago

            I honestly have no idea, but if I had a super intelligent AI I would try. Maybe a more efficient and cheaper solar cell, maybe some sort of catalytic system?

    • CanadaPlus@lemmy.sdf.org · 1 year ago

      You’re assuming a human could do that on a computer, though. It’s kind of hard to improve on that basic and very mature technology.

        • CanadaPlus@lemmy.sdf.org · 1 year ago

          I put more weight on the description text, but yes that was in the title.

          Even if we assume it’s a god, though, I’m not sure there’s a way to improve on most kinds of generators more than incrementally. I don’t expect it would improve on “the wheel” either.

          • jerry@lemmy.world · 1 year ago

            I’m sure there are methods of generating electricity that we haven’t even stumbled on.

              • jerry@lemmy.world · 1 year ago

                I think we’re pretty far from the peak understanding of almost everything. There are so many discoveries still to be made.

                • CanadaPlus@lemmy.sdf.org · 1 year ago

                  Based on what? Sure, I’m guessing we’re just getting started in planetary science and cosmology, but power generation has been explored to death, and we’re still using the same basic alternator design Tesla did.

  • APassenger@lemmy.one · 1 year ago

    I’d have to use it to protect it from all the other privately owned, weaponized, advantage-seeking AGIs.

    They will exist.

  • Barbacamanitu@lemmy.world · 1 year ago

    I’d ask it to find a proof of the Collatz conjecture. And the Riemann hypothesis. And all of Wolfram’s Rule 30 challenges.

    Basically, I’d just see if it could find proofs for unproven stuff.
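
    The Collatz conjecture is a good pick because it’s trivial to state in code yet unproven: nobody can show that the loop below terminates for every positive integer.

```python
def collatz_steps(n):
    """Number of steps for n to reach 1 under the Collatz map:
    halve if even, otherwise 3n + 1. The conjecture is that this
    loop terminates for every positive integer n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 27 has a famously long trajectory: 111 steps
```

    Exhaustive checking (this has been done far past 2^68) can never settle it; only a proof that the loop always halts would.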

  • argv_minus_one@beehaw.org · 1 year ago

    If that exists, it’s curtains for humanity. Not because of the AGI itself killing us all, necessarily, but because that means human labor is forever obsolete and the vast majority of humans, including me, will soon starve to death on the street or be imprisoned for vagrancy.

    So, I wouldn’t ask it anything, except maybe to recommend a suicide method.

    • InternetPirate@lemmy.fmhy.ml (OP) · 1 year ago

      Hopefully there are some people more positive than that, willing to change society so AGI doesn’t make most humans starve to death or be imprisoned.

      • argv_minus_one@beehaw.org · 1 year ago

        Look around you. Look at all the uprisings that haven’t happened as a result of the latest round of extreme price gouging and resulting public impoverishment. Look at all the homeless people everywhere, sitting quietly in their tents and dying of starvation, instead of standing up and marching and demanding the employment and housing opportunities that they’ve thus far been denied.

        No, society will not change to coexist harmoniously with AGI. The events of the last few years have made this abundantly clear. The whole point of creating AGI is to replace human labor and dispose of the vast majority of humans, and those humans are going to let it happen.

  • elevenant@discuss.tchncs.de · 1 year ago

    I think a lot of the things proposed here could not be done by an AGI on a computer, no matter how intelligent it is. Consider this alternative scenario: you have an exceptionally intelligent young human adult with a computer, locked in a room. They have no specialized education or anything; they are just extremely intelligent. What could you achieve through such a person?

    Discovery of new physics is out of the question. That would need experiments.

    • InternetPirate@lemmy.fmhy.ml (OP) · 1 year ago

      Locked in a room with an Internet connection? A lot. Without any contact with the outside world? Not nearly as much. With an Internet connection it could have other people run experiments for it; without one, it couldn’t.

      Anyway, asking whether the AGI can interact with the real world sidesteps the explicit premise of my question: I specifically said it only operates as a human on a computer. I didn’t say it could acquire a physical body, so let’s assume it can’t, and that it can’t use other people to do physical labor either.

  • AlexWIWA@lemmy.ml · 1 year ago

    Mass Effect 3 with Mass Effect 1’s art and music style. And with a better ending.
