• 0 Posts
  • 47 Comments
Joined 2 years ago
Cake day: June 12th, 2023


  • The first game is a bit different from the rest, and its greatest strength is the world-building of the universe; in that respect it is the strongest title of the trilogy. People mostly like the Mass Effect series for the companions, though, and they are at their best in the second game.

    The first game suffers a bit from being an awkward hybrid between an Infinity Engine game and a more action-oriented game. It was a rough time for RPGs in general in that respect. ME2 and 3 lean more into the action gameplay, for better and worse (mostly better).

    Unless you are in a hurry to get on to the next game, I’d encourage you to do some of the optional and very easy-to-miss side quests that you can get from exploring planets. It’s worth checking the wiki for these if you don’t feel like doing enough exploring to stumble across them organically. In particular, I can recommend the Cerberus quest chain and Tali’s geth quest chain.



  • It can be hard to bootstrap yourself up from zero followers. I’d recommend posting something so that people checking out your profile have an idea of the kind of thing they can expect if they follow you. But you probably won’t get much engagement from your own posts at first, so it will probably be more fun to just reply to other accounts.

    Bluesky has a feature where you can set up customized feeds that filter for any kind of content you want. For example, the person who saw your post might have seen it in the “newskies” feed, which contains the first post from every account. So one way to get engagement is to write posts that show up in a feed that people follow: some feeds are built around certain topics and usually trigger when your post contains certain keywords. Most people just use the Following feed, though, I think.
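To make the keyword-triggered feeds a bit more concrete, here is a minimal sketch of how such a feed might decide whether a post belongs in it. The function name, the keyword set, and the matching logic are all my own assumptions for illustration, not Bluesky’s actual feed generator API:

```python
# Hypothetical sketch of a keyword-triggered topic feed.
# Real Bluesky feed generators are services speaking the AT Protocol;
# this only illustrates the "post contains a trigger keyword" idea.

def matches_topic_feed(post_text: str, keywords: set[str]) -> bool:
    """Return True if the post contains any of the feed's trigger keywords."""
    words = {w.strip(".,!?").lower() for w in post_text.split()}
    return bool(words & keywords)

# Made-up keyword list for an imaginary game-development feed.
gamedev_feed = {"gamedev", "unity", "godot", "shader"}

print(matches_topic_feed("Finally fixed my shader bug!", gamedev_feed))
print(matches_topic_feed("Nice weather today", gamedev_feed))
```

So a post mentioning one of the trigger words would surface in that feed even for people who don’t follow you yet, which is the bootstrapping trick.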








  • There are a couple of reasons why that might not work:

    • Maybe we’ll asymptotically approach a ceiling that is lower than human-level cognitive capability.
    • Gradual improvements are susceptible to getting stuck in a local maximum. This is a problem in evolution as well: a lot of animals could in principle evolve, say, human-level intelligence, but to reach that point they’d have to go through a bunch of intermediate steps that lower their fitness. Gradual scientific progress is a bit like evolution in this way.
    • We also lose knowledge over time. Something as dramatic as a nuclear war would significantly set back progress toward AGI, but something less dramatic might also lead to us forgetting things that we’ve already learned.
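The local-maximum point above is easy to demonstrate with a toy greedy optimizer: it climbs to the nearest small peak and stops, even though a much higher peak exists further away. The fitness function and step size here are made up purely for illustration:

```python
# Toy hill-climbing sketch of the local-maximum problem.
# Greedy one-step improvement gets stuck on the small peak near x=1
# even though a peak three times as high exists near x=4.

def fitness(x: float) -> float:
    # Two peaks: a small one around x=1 (height 1), a tall one around x=4 (height 3),
    # separated by a flat valley where fitness is zero.
    return max(0.0, 1 - (x - 1) ** 2) + max(0.0, 3 * (1 - (x - 4) ** 2))

def hill_climb(x: float, step: float = 0.1, iters: int = 100) -> float:
    for _ in range(iters):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:  # no neighbor is better: stuck on a (possibly local) maximum
            break
        x = best
    return x

x = hill_climb(0.0)
print(round(x, 1), round(fitness(x), 2))  # stuck near x=1; the peak at x=4 is unreached
```

Crossing the valley would require accepting worse fitness for several steps, which a purely greedy process never does, just like the intermediate evolutionary steps described above.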

    To be clear, most of the arguments I’m making aren’t really about AGI specifically but about humanity’s capability to develop arbitrary, in-principle-feasible technologies in general.



  • A breakthrough in quantum computing wouldn’t necessarily help. QC isn’t faster than classical computing in the general case; it just happens to be for a few specific algorithms (e.g. factoring numbers). It’s not impossible that a QC breakthrough might speed up training AI models (although to my knowledge we don’t have any reason to believe that it would), and maybe that’s what you’re referring to, but there’s a widespread misconception that quantum computers are essentially non-deterministic Turing machines that “evaluate all possible states at the same time”, which isn’t the case.
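The factoring example is where the gap is starkest. A rough back-of-the-envelope sketch (my own, not a rigorous benchmark) compares the asymptotic cost of the best known classical factoring algorithm, the general number field sieve, with Shor’s algorithm for an n-bit number; constants are ignored, so only the growth rates matter:

```python
# Asymptotic cost sketch: classical GNFS vs Shor's algorithm for
# factoring an n-bit number. Constant factors are deliberately ignored.
import math

def gnfs_cost(bits: int) -> float:
    # GNFS heuristic complexity: exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3))
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def shor_cost(bits: int) -> float:
    # Shor's algorithm takes roughly O((log N)^3) quantum gate operations.
    return float(bits ** 3)

for bits in (512, 1024, 2048):
    print(bits, f"{gnfs_cost(bits):.2e}", f"{shor_cost(bits):.2e}")
```

The classical cost grows sub-exponentially but far faster than any polynomial, while Shor’s stays cubic, which is exactly the kind of narrow, problem-specific speedup meant above; it says nothing about QC being faster in general.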