• 1 Post
  • 173 Comments
Joined 2 years ago
Cake day: June 9th, 2023


  • For recommendations and discovery (which was a large part of what kept me with Spotify), I’m a big fan of https://listenbrainz.org/. In the time I’ve been using it, the recommendations have gotten way better, and I appreciate their efforts towards transparency (yay for open source).

    You can import listen data from music streaming services, so if anyone is curious, I’d recommend setting it up and seeing how it goes; I only recently got round to cancelling my Spotify, but before then, I had it set up so my Spotify listens would show up on ListenBrainz.

    You’re quite right though that there aren’t any straightforward replacements for Spotify. Personally, I’m returning to the seven seas, which is why I’m so appreciative of ListenBrainz — that discovery stuff really was the last big thing chaining me to Spotify.


  • It’s also that even time off can be difficult to get, because of a lack of acknowledgement of the variability in menstruation. I have seen way too many situations where a manager (or whoever is responsible for okaying time off) underestimates how bad it can be for some people, possibly because no-one close to them has bad periods, so they think that everyone who struggles is playing it up for time off work.

    Something that really icks me out is that there have been a few times where I have been used as a comparator to shame colleagues; I have always been blessed with light and pain-free periods, and when I was on hormonal contraception, they actually stopped entirely. This meant I never needed time off for menstrual reasons, and this was used to sort of say “well, Ann presumably menstruates and just gets on with things, so why can’t you?”.

    Many of us have had the experience of asshole managers who micromanage employee sickness and are exhausting to deal with, but there’s a subset of those who are extra assholish around menstruation-related sickness. Something I saw once was a manager who seemed to be tracking her employees’ periods, and would call you up to query the timing if you’d taken period-related time off that didn’t fit her predictions. I can only assume that she was fortunate enough to have super regular periods, but many people who do suffer enough to need time off work can’t predict their periods to that degree of accuracy.

    But as others have said, it’s not just about time off; sometimes it’s small stuff like taking additional or longer bathroom breaks. Or, when someone has come back from a menstruation-related sick day, jokes to the entire office like “you feeling better? Great, just make sure you don’t bleed on the chair, haha”. Obviously that’s inappropriate and the kind of thing you’d report to HR, but it’d be less prevalent if people were less weird about menstruation in general.

    In a way, I appreciate your being confused by this, because if more managers thought about this like you do, this wouldn’t be nearly as big of an issue. But way too many people make it weird.


  • The British didn’t create the caste system from scratch, but they had a huge role in shaping what became the modern caste system. I’m sleepy, so I’m going to quote directly from this BBC article (though it’s a good amount of article, if you have the time; it does a good job as a summary, imo)

    “[Britain’s reshaping of Indian society] was done initially in the early 19th Century by elevating selected and convenient Brahman-Sanskrit texts like the Manusmriti to canonical status”

    .

    " [The caste] categories were institutionalised in the mid to late 19th Century through the census. These were acts of convenience and simplification."

    .

    “The colonisers established the acceptable list of indigenous religions in India - Hinduism, Sikhism, Jainism - and their boundaries and laws through “reading” what they claimed were India’s definitive texts.”

    .

    “There is little doubt that the religion categories in India could have been defined very differently by reinterpreting those same or other texts.”

    .

    “In fact, it is doubtful that caste had much significance or virulence in society before the British made it India’s defining social feature.”

    .

    "The colonisers invented or constructed Indian social identities using categories of convenience during a period that covered roughly the 19th Century.

    “This was done to serve the British Indian government’s own interests - primarily to create a single society with a common law that could be easily governed.”

    “A very large, complex and regionally diverse system of faiths and social identities was simplified to a degree that probably has no parallel in world history, entirely new categories and hierarchies were created, incompatible or mismatched parts were stuffed together, new boundaries were created, and flexible boundaries hardened.”

    “The resulting categorical system became rigid during the next century and quarter, as the made-up categories came to be associated with real rights. Religion-based electorates in British India and caste-based reservations in independent India made amorphous categories concrete. There came to be real and material consequences of belonging to one category (like Jain or Scheduled Caste) instead of another.”

    Apologies for just quoting at length at you. I fear that presenting info this way will give the sense that I am lecturing you, but that is not my intention; a large part of why I share this info is because I learned of this relatively recently and I was astounded by how significant Britain’s role was.




  • I think one of the really neat things about games as a medium is that “the experience” is inherently a super malleable concept. Gaming blows my mind when I think about how adaptive you need to be to run a tabletop roleplaying game, like Dungeons and Dragons — no matter how elaborate your plans are, players will always find a way to throw a spanner in the works. Video games have the same unpredictability of how players engage with the world you’ve made, but a much smaller ability to respond and adapt to ensure that they’re getting the correct “intended experience”.

    In some respects, I agree with you, because when I play games, I care a lot about the intended experience. However, the reality is that I bring too much of myself to any game I play to be able to think of my experience in that way, and I think that’s probably one of my favourite aspects of games as a medium — a dialogue between gamer and game developers. Especially because sometimes, the intended experience of a game isn’t well executed; there are plenty of times I have gotten lost or confused in games because the game didn’t sufficiently communicate to me (or other players with similar experiences) what it expected us to do. Part of the game designer’s role is to guide the players so they get something resembling the intended experience.

    Honestly, part of why I am on the pro-accessibility side of this issue is because I’m a bit of a snob — I think that being able to adapt a message or experience to a diverse audience shows a singularity of vision that’s more powerful than experiences that target a much smaller audience.

    For example, let’s say that the subjective difficulty of a game (the “experience”) equals its objective difficulty (the difficulty setting) minus the player’s skill level. For the sake of this example, let’s imagine that 10 arbitrary units is the ideal subjective difficulty, and above/below that, the experience is degraded; also, let’s say that player skill ranges from 1-10, with most people clustering in the 4-6 range. In that world, if a game could only have one difficulty mode, 15-ish would probably be best, because 15 (objective difficulty) - 5 (average player skill) = 10 (intended subjective difficulty). I don’t begrudge game devs for targeting limited audiences if that’s what they feel capable of, but I do massively respect the craftsmanship of being able to build a game that can serve a subjective 10 to a wide range of people, by having a range of difficulty settings.
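    The toy model above is simple enough to sketch in code; here’s a minimal illustration (the function names and all the numbers are just my made-up example in arbitrary units, not anything from a real game):

```python
# Toy model: subjective difficulty = objective difficulty - player skill.
TARGET_SUBJECTIVE = 10  # the "correct" felt difficulty, in arbitrary units

def subjective_difficulty(objective: int, skill: int) -> int:
    """How hard the game feels to a given player."""
    return objective - skill

def ideal_setting(skill: int) -> int:
    """The difficulty setting that delivers the target experience."""
    return TARGET_SUBJECTIVE + skill

# A single fixed setting of 15 only suits the average player (skill 5):
print(subjective_difficulty(15, 5))  # 10: just right
print(subjective_difficulty(15, 2))  # 13: too hard for a novice
print(subjective_difficulty(15, 9))  # 6: too easy for a veteran

# A range of settings can serve a subjective 10 to everyone:
for skill in (2, 5, 9):
    print(f"skill {skill} -> setting {ideal_setting(skill)}")
```

    (The point being that one fixed setting hits the target for exactly one skill level, while a range of settings hits it for all of them.)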


  • I have an experience relating to game difficulty and accessibility that you would probably appreciate.

    I was playing Rimworld for the first time, and because I was aware that huge disasters which wipe out most of your work (and that you can sometimes build back from) are a part of the game, I felt bad about playing the game on the mode that allows you to load earlier saves; I would find losing progress in this way more stressful than fun, so I wanted the ability to reverse poor fortune or choices, even if it felt like I was “dishonouring the intended experience”.

    However, a friend (who was the reason I had bought Rimworld in the first place, and who enjoyed the chaos of no-save mode) pointed out that whilst the no-save mode may be presented as the default, the mode with saves enabled is presented as a perfectly valid way to enjoy the game. This made me feel immensely better about it, and I was able to dispel the silly guilt I was feeling. It highlighted to me the power of how we label difficulty settings and other accessibility settings.

    Games are a funny medium.


  • I enjoyed using phyphox while on a plane recently. I found it fun to track the pressure and to see how it loosely corresponded to my own subjective experience of ascending vs descending.

    I can’t recall any “useful” things I’ve used the app for, but I really enjoy having it — it makes me feel powerful. Like, it’s nice to think that if I did have some ideas of experiments to run, I could. It feels fitting to be able to access the sensors, because there are many ways in which our electronic devices nowadays aren’t (or don’t feel like) our own, so this feels like a small amount of clawing back power, even if I’m not using it for much.





  • To some extent, I don’t.

    Which is to say that in and around my field (biochemistry), I’m pretty good at a sort of “vibe check”. In practice, this is just a subconscious version of checking that a paper is published in a legit journal, and having a sense for what kinds of topics and language are common. This isn’t useful advice though, because I acquired this skill gradually over many years.

    I find it tricky in fields where I am out of my element, because I am the kind of person who likes to vet information. Your question about how to identify work as peer reviewed seems simple, but is deceptively complex. The trick is in the word “peer” — who counts as a peer is where the nuance comes in. Going to reputable journals can help, but even prestigious journals aren’t exempt from publishing bullshit (and there are so many junk journals that keeping up even within one field can be hard). There are multiple levels of “peer”, and each is context dependent. For example, the bullshit detector that I’ve developed as a biochemist is most accurate and efficient within my own field, somewhat useful within science more generally, and only slightly useful in completely unrelated academic fields. I find the trick is in situating myself relative to the thing I’m evaluating, so I can gauge how effective my bullshit detector will be. That’s probably more about reflecting on what I know (and think I know) than it is about the piece of material I’m evaluating.

    In most scenarios though, I’m not within a field where my background gives me much help, so that’s where I get lazy and have to rely on things like people’s credentials. One litmus test is to check whether the person actually has a background in what they’re talking about; e.g. if a physicist is chatting shit about biology, or a bioinformatician is criticising anthropology, consider what they’re saying with extra caution. That doesn’t mean discounting anyone who isn’t staying in their lane, just that it might be worthwhile looking into the topic further (and seeing who else is saying what they are, and what experts from the field are saying too).

    As I get deeper into my academic career, I’ve found I’m increasingly checking a person’s credentials to get a vibe check. Like, if they’re at a university, what department are they under? Because a biochemist who is under a physics department is going to have a different angle than one from the medical research side, for example. Seeing where they have worked helps a lot.

    But honestly a big part of it is that I have built up loose networks of trust. For example, I’m no statistician, but someone I respect irl referenced a blog of Andrew Gelman’s, which I now consider myself a fan of (https://statmodeling.stat.columbia.edu/). Then from that blog, I ended up becoming a fan of this blog, which tends to be about sociology. Trusting these places doesn’t mean I take them at face value for anything they say, but having that baseline of trust there acts as a sort of first-pass filter in areas I’m less familiar with, a place to start if I want to learn about a perspective that I know the rough origin of.

    In the context of news, I might start to see a news outlet as trustworthy if I read something good of theirs, like this piece on 3M by ProPublica, which makes me trust other stuff they publish more.

    Ultimately though, all of these are just heuristics — imperfect shortcuts for a world that’s too complex for straightforward rules. I’m acutely aware of how little spare brain space I have to check most things, so I have to get lazy and rely on shortcuts like this. In some areas, I’m lucky to have friends I can ask for their opinion, but for most things, I have to accept that I can’t fact-check things thoroughly enough to feel comfortable, which means trying to hold a lot of information at arm’s length and not taking it as fact. That, too, takes effort.

    However, I got a hell of a lot smarter when I allowed myself to be more uncertain about things, which means sometimes saying “I don’t know what to make of that”, or “I think [thing] might be the case, but I don’t remember where I heard that, so I’m unsure”, or just straight up “I don’t know”. Be wary of simple and neat answers, and get used to sitting with uncertainty (especially in modern science research).


  • You’re right, and thanks for checking me on that. On reflection, I said it was trite because I think I felt uncomfortable with the level of vulnerability I was feeling when writing that comment, so I tacked that onto the end. The vulnerability came from a place of “who am I to give advice when the advice I’m giving myself hardly feels sufficient, because my inner monologue is basically a screaming possum most of the time”. Lots of people are feeling similarly, which is why I made my original comment in the first place.

    I think a lot of us are struggling under the pressure of not knowing how to cope with this dreadful situation, and for me, that meant feeling like I needed to come up with the perfect words that would be useful for everyone who is struggling. It is sufficient for me to go “for me, this is a useful way to think (and it may be for other people too)”. It’s silly for me to dismiss myself as trite just because I feel like I am only valid if I have a Solution. As you highlight, this is a collaborative process, so muddling along together is how this goes.






  • I have to believe in a future where people look back on this from a world with less hatred in it than it currently has. I want to give the perpetrators of hate as little plausible deniability as possible.

    I have to believe that even though looking back on history didn’t seem to help us avoid this situation, there will be people in the future who are wiser and empowered to make better choices for themselves and their communities.

    It’s a fantasy, and I honestly don’t care if it’s unrealistic. It’s what I need to believe to keep going. I need to believe there can be something better after this, regardless of whether I’ll get to experience it.