Ah, I love to see the “No True Scotsman” fallacy in the wild.
🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍
This is why I love bullpup. The best of both worlds.
Legislation is so far behind these issues, I expect AP to be replaced by whatever comes next before legal considerations have any impact. And what’s Joe Smallserver going to do? Sue Google?
I agree with your theory; but while in theory, theory is the same as practice, in practice, it isn’t.
Your Mastodon and Lemmy (and all other ActivityPub-talkin’ platforms) posts certainly are. I’m not sure it’s even technically possible to have federation without being open to AI ETLs. A centralized platform, maybe, but I expect this is the price we pay for decentralization.
And that, kids, is a great use of RAID: underneath some other form of data redundancy.
Great story!
RAID 1 is mirroring. If you accidentally delete a file, or it becomes corrupt (for reasons other than drive failure), RAID 1 will faithfully replicate that delete/corruption to both drives. RAID 1 only protects you from drive failure.
Implement backups before RAID. If you have an extra drive, use it for backups first.
There is only one case when it’s smart to use RAID on a machine with no backups, and that’s RAID 0 on a read-only server where the data is being replicated in from somewhere else. All other RAID levels only protect against drive failure, and not against the far more common causes of data loss: user- or application-caused data corruption.
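To make the “backups before RAID” point concrete, here’s a minimal snapshot-style backup sketch. The paths and names are made up for illustration; each run copies the source tree into a new timestamped directory on the backup drive, so a file you deleted or corrupted yesterday is still recoverable today — exactly the case RAID 1 faithfully propagates to both mirrors.

```shell
# Sketch only: swap in your own paths and a real tool (rsync with
# --link-dest is the usual upgrade) once the idea is clear.
snapshot() {
    src="$1"     # e.g. /home/me/data
    dest="$2"    # e.g. /mnt/backupdrive
    stamp=$(date +%Y-%m-%d_%H%M%S)
    mkdir -p "$dest/$stamp"
    cp -a "$src/." "$dest/$stamp/"
    echo "$dest/$stamp"
}

# Usage (hypothetical paths):
#   snapshot /home/me/data /mnt/backupdrive
```

Run it from cron or a systemd timer and you have something RAID can then sit underneath.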
Are compatibility issues common with ZigBee? I went down the Z-Wave path years ago, somewhat arbitrarily. I’ve never checked devices for compatibility, nor encountered any that didn’t work. As of today, I have 58 Z-Wave devices connected to HA.
My controller supports ZigBee, and the previous owner left some devices in the house (window sensors, mostly), and I’ve tried unsuccessfully to pair them; I haven’t yet really spent any time trying to troubleshoot, but I’ve been contemplating adding ZigBee to the mix because they’re sometimes cheaper. I really don’t want to have to struggle with compatibility, though. It’s just one more thing to have to fuss with.
And that breaks the processor and you have to reboot your listener and it’s such a paaaaaiin.
Until January. Then that will all stop.
Already done.
I mean, you have to use it to get software, and if you’re submitting patches to other people’s software; and I have inherited maintenance of a popular project that would just confuse a ton of people, including several distros, if I moved it. But I never create projects on GitHub anymore. Sourcehut has been great.
I haven’t tried it yet, and I haven’t had a reason to look into it. My experience with Fi was that you pay $10 per GB - it didn’t come out of your normal data bank - plus per-minute charges. When I was traveling, I used my company phone, or if on vacation, purely data with heavy up-front caching - as much as I could at the hotel. I really don’t like surprise bill sizes.
But to be honest, I haven’t tried Mint internationally, so I can’t say.
Not so bad. I use gmail as a backup for some accounts in case something happens to my VPS or domain, and my Amazon account is still linked to it out of laziness, but otherwise I never use it.
Oh. Except that I have an Android phone, and that’s linked to my gmail, although I don’t use any Google apps or services beyond Play. So I suppose my phone would stop working. Everything’s backed up, though, so maybe it’d be a good thing; maybe it’d motivate me to pull the trigger on a Light Phone. I kinda want a Minimal Phone because my F&F uses Jami, but that’d still be an Android phone, so it wouldn’t work either.
Fi isn’t that great. We were on Fi for years; I switched to Mint, my wife stayed on Fi until I was sure it was going to work. So far, I pay less for more, no gotchas.
It was amazing when it first came out; now it has a lot of competition that beats it.
Yeah, I use systemd for the self-host stuff, but you should be able to use docker-compose files with podman-compose with no, or only minor, changes. Theoretically. If you’re comfortable with compose, you may have more luck. I didn’t have a lot of experience with docker-compose, so when there are hiccups I tend to just give up and do it manually, because it works just fine that way, too, and it’s easier (for me).
I started with rootless podman when I set up All My Things, and I have never had an issue with either maintaining or running it. Most Docker instructions are transposable, except that podman doesn’t assume everything lives on dockerhub, so you always have to specify the registry host. I’ve run into a couple of edge cases where arguments are not 1:1 and I’ve had to dig to figure out what the argument is on podman. I don’t know if I’m actually more secure, but I feel more secure, and I really like not having the docker service running as root in the background. All in all, I think my experience with rootless podman has been better than my experience with docker, but at this point, I’ve had far more experience with podman.
Podman-compose gives me indigestion, but docker-compose didn’t exist or wasn’t yet common back when I used docker; and by the time I was setting up a homelab, I’d already settled on podman. So I just don’t use it most of the time, and wire things up by hand when necessary. Again, I don’t know whether that’s just me, or if podman-compose is more flaky than docker-compose. Podman-compose is certainly much younger and less battle-tested. So is podman but, as I said, I’ve been happy with it.
I really like running containers as separate users without that daemon - I can’t even remember what about the daemon was causing me grief; I think it may have been the fact that it was always running and consuming resources, even when I wasn’t running a container, which isn’t a consideration for a homelab. However, I’d rather deeply know one tool than kind of know two that do the same thing, and since I run containers in several different situations, using podman everywhere allows me to exploit the intimacy I wouldn’t have if I were using docker in some places and podman in others.
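For anyone curious what the systemd wiring can look like: newer podman (4.4+) ships Quadlet, which turns a small unit-like file into a rootless container service, no daemon involved. A sketch, with a made-up unit name and an example image — adjust everything to your setup:

```ini
# ~/.config/containers/systemd/myapp.container  (hypothetical name)
[Unit]
Description=Example rootless container

[Container]
# Note the fully qualified image name - podman doesn't assume dockerhub.
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, Quadlet generates `myapp.service`, which you start and enable like any other user unit.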
2¢
Location services in Android are in-phone, and they’re definitely accurate and reporting to Google. I only clarified that your cell provider probably can’t locate you using triangulation via your cell signal. Turn data off, and you’re fine; otherwise, Google is tracking you - and from what I’ve read, even if you have location services turned off.
They can’t, tho. There are two reasons for this.
Geolocating with cell towers requires trilateration, and needs special hardware on the cell towers. Companies used to install this hardware for emergency services, but stopped doing so as soon as they legally could, as it’s very expensive. Cell towers can’t do triangulation by themselves, as it requires even more expensive hardware to measure angles; and trilateration doesn’t work without special equipment, because wave propagation delays between the cellular antenna and the computers recording the signal are big enough to utterly throw off any estimate.
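To put a rough number on how badly those delays hurt (my arithmetic, not from the original comment): radio propagates at the speed of light, about 300 m per microsecond, so every microsecond of unaccounted delay between antenna and recording equipment shifts the distance estimate by about 300 m.

```python
# Back-of-the-envelope: distance error caused by a timing error,
# at radio propagation speed (speed of light).
C = 299_792_458  # m/s

def range_error_m(timing_error_s: float) -> float:
    """Position error (meters) introduced by a given timing error."""
    return C * timing_error_s

# One microsecond of cable/processing delay is ~300 m of error.
print(round(range_error_m(1e-6)))
```

Which is why the timing chain needs dedicated, calibrated hardware rather than whatever the tower happens to have.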
An additional factor making trilateration (or even triangulation, in rural cases where they did sometimes install triangulation antenna arrays on the towers) harder is that, since the UMTS standard, cell chips work really hard to minimize their radio signal strength. They find the closest antenna and then reduce their power until they can just barely talk to the tower; and except in certain cases they only talk to one tower at a time. This means that, at any given point, only one tower is responsible for handling traffic for the phone, and for triangulation you need three. In addition to saving battery power, this saves the cell companies money, because of traffic congestion: a single tower can only handle so much traffic, and they have to put in more antennas and computers if the mobile density gets too high.
Phones can use cellular signal to improve accuracy because each phone can do its own triangulation, although it’s still not great and can be impossible because of power attenuation (being able to see only one tower - or maybe two - at a time). This is why Google and Apple use WiFi signals to improve accuracy, and why in-phone triangulation isn’t good enough: in any sufficiently dense urban or suburban environment, the combined information from all the WiFi routers the phone can see, and the cell towers it can hear, can be enough to give a good, accurate position without having to turn on the GPS chip, obtain a satellite fix (which may be impossible indoors), and suck down power. But this is all done inside and from the phone - this isn’t something cell carriers can do themselves most of the time. Your phone has to send its location out somewhere.
TL;DR: Cell carriers usually can’t locate you with any real accuracy without the help of your phone actively reporting its calculated location. This is largely because it’s very expensive for carriers to install the hardware needed to get accuracy better than hundreds of meters; they are loath to spend that money, and legislation requiring them to do so no longer exists, or is no longer enforced.
Source: me. I worked for several years in a company that made all of the expensive equipment - hardware and software - and sold it to The Big Three carriers in the US. We also paid lobbyists to ensure that there were laws requiring cell providers to be able to locate phones for emergency services. We sent a bunch of our people and equipment to NYC on 9/11 and helped locate phones. I have no doubt law enforcement also used the capability, but that was between the cops and the cell providers. I know companies stopped doing this because we owned all of the patents on the technology and ruthlessly and successfully prosecuted the only one or two competitors in the market, and yet we still were going out of business at the end as, one by one, cell companies found ways to argue out of buying, installing, and maintaining all of this equipment. In the end, the competitors we couldn’t beat were Google and Apple, and the cell phones themselves.
For my CLI homies, there’s syncedlyrics.
Be advised: several Subsonic servers (including gonic and Navidrome) do not support lyric files unless they’re embedded, and syncedlyrics will only put the lyrics in .lrc files. So getting lyrics in clients can be a two-step process: download the .lrc’s, then run a script to embed them in the song files. I’ve seen a script to do the latter, but I haven’t tried it. I’ll send a patch to gonic to read lrc files, during the Christmas holiday most likely.
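For anyone scripting that two-step process: an .lrc file is mostly just timestamped lines like `[01:23.45] some words`. A minimal parser sketch (my own illustration — the real format also has metadata tags like `[ar:Artist]`, which this deliberately skips) converts each timestamp to seconds, which is the form an embedding or playback script would want:

```python
import re

# Matches "[mm:ss.xx] lyric text"; metadata tags like [ar:...] won't
# match because their first field isn't numeric.
LRC_LINE = re.compile(r"\[(\d+):(\d+(?:\.\d+)?)\](.*)")

def parse_lrc_line(line: str):
    """Return (seconds, text) for a timestamped lyric line, else None."""
    m = LRC_LINE.match(line.strip())
    if not m:
        return None
    minutes, seconds, text = m.groups()
    return int(minutes) * 60 + float(seconds), text.strip()

print(parse_lrc_line("[01:23.45] hello world"))
```

From there, embedding is a matter of writing the collected lines into the tag your format uses (USLT for ID3, LYRICS for Vorbis comments).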
Don’t worry; a bailout is coming in January.
Ukrainians are some ingenious mofos.