• 6 Posts
  • 42 Comments
Joined 4 months ago
Cake day: November 25th, 2024

  • I don’t think unbiased media exist, but some are at least less biased. And you want some bias towards scientific reasoning, honesty, and meritocracy; otherwise you introduce too much noise (which is one reason why being absolutist about free speech leads to less free speech, and also why electronic warfare is prioritized by politically and/or militarily weak state actors). Less noise usually correlates with what people perceive as a left-leaning or liberal bias (in the Western political landscape of 2025), which might be very related to this. Also, I think biased media are OK as long as they are open and explicit about it.

    In Sweden I use Omni, a commercial news aggregator that I find relatively unbiased and balanced. Public service is pretty good as well.

    For American news, I usually go to NPR first. I don’t know if they are completely unbiased, but at least they are not full-on crazy.

    I’ve tried Ground News, but I feel it’s a bit too focused on the politics of the English-speaking sphere.

  • Is that still true, though? My impression is that AMD works just fine for inference with ROCm and llama.cpp nowadays, and you get much more VRAM per dollar, which means you can fit a bigger model. You might get fewer tokens per second than on a comparable Nvidia card, but that shouldn’t really be a problem for a home assistant, I believe. Even an Arc A770 should work with IPEX-LLM. Buy two Arc or Radeon cards with 16 GB of VRAM each and you can fit a Llama 3.2 11B or a Pixtral 12B without any quantization. Just make sure that ROCm supports that specific Radeon card if you go for team red.
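
    For what it’s worth, here is a minimal sketch of what local inference on an AMD card can look like through llama.cpp’s Python bindings (llama-cpp-python), assuming the package was built with the ROCm/HIP backend enabled. The model path and the prompt are placeholders, not anything specific from this thread:

    ```python
    from llama_cpp import Llama

    # Hypothetical GGUF model path; any llama.cpp-compatible model works here.
    llm = Llama(
        model_path="models/example-model.gguf",
        n_gpu_layers=-1,  # offload all layers to the GPU (ROCm/HIP build)
        n_ctx=4096,       # context window; raise it if VRAM allows
    )

    # One-shot chat completion, e.g. as a home-assistant backend.
    result = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Turn off the kitchen lights."}],
        max_tokens=64,
    )
    print(result["choices"][0]["message"]["content"])
    ```

    Splitting a model across two cards is handled by llama.cpp itself, so the bindings mostly just need a GPU-enabled build; the main thing to verify beforehand is that ROCm actually lists your specific Radeon card as supported.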