

Because I cook, and need that stuff back. I don’t have all day, I gotta cook again in a few hours.
What are you trying to guard against with backups? It sounds like your greatest concern is data loss from hardware failure.
The 3-2-1 approach exists because it addresses the different concerns about data loss: hardware failures, accidental deletion, physical disaster.
That drive in your safe isn’t a good backup - drives fail just as often offline as online (I suspect they fail more often when powered off, but I don’t have data to support that). That safe isn’t waterproof, and its fire resistance is designed to protect paper, not hard drives.
If this data is important enough to back up, then it’s worth having an off-site copy of your backup. Backblaze is one way, but there are a number of cloud-based storage providers that will work (Hetzner, etc).
As to your Windows/Linux concern, just have a consistent data storage location, treat that location as authoritative, and perform backups from there. For example - I have a server, a NAS, and an always-on external drive as part of my data duplication. The server is authoritative, laptops and phones continuously sync to it via Syncthing or Resilio Sync, and it duplicates to the NAS and external drives on a schedule. I never touch the NAS or external drives. The server also has a cloud backup.
From a cooling standpoint, you probably don’t want to go any smaller than a Small Form Factor desktop. These are large enough for a proper heatsink and fan on the CPU, have the space and motherboard connections for a dedicated video card, a large enough power supply, and can support a case fan.
Mini desktops have minimal cooling capacity, definitely no case fan.
For example, I run a Dell SFF (OptiPlex 7050) as a server for virtual machines, Jellyfin host, file server, and media converter. It’s an older machine with an 80 watt power supply (barely enough for my use case), no case fan, and the stock cooler/fan is fortunately well designed.
That stock cooler also evacuates the case, but it can’t move enough air to keep the large drive I installed at reasonable temps. Adding a case fan (centrifugal, which can handle restrictions) dropped the drive temps by more than 20 °F.
Without the sizeable CPU cooler and its fan, there’s no way to keep the CPU cool when doing anything more than basic desktop tasks. A mini PC would quickly overheat unless it had a good fan.
Hahaha.
I just replaced a 20 year old dishwasher with its newer equivalent: it has a grand total of 3 cycle options.
Screw this surveillance nonsense. Why does a dishwasher need connectivity? It’s a box that sprays water.
A friend has one whose fastest cycle is 1.5 hours, and another cycle is four hours… Wtf?
Holy shit, that’s insane… 1992? Back then, setting up a drive meant configuring the interleave and other low-level settings.
Wow, that says a lot for Bandcamp
There’s an endless supply of guides for ripping.
On Windows just use Exact Audio Copy - it can pull all the track info from multiple sources. I forget what I used on Linux.
A type of document?
Now I’m really confused.
Air will stagnate in a confined space - even with the PC fans, as they’re designed to move air, not generate pressure.
I find it really annoying that PC makers defaulted to axial fans instead of centrifugal blowers (compressor wheels). Technically an axial fan is quieter for the same CFM, but in my experience most PCs need far more fan than that to achieve the needed airflow, because axial fans lack static pressure - so in practice a blower can move the same air with less noise.
Nice! I would’ve done it high on a side/back wall, just to not interfere with desktop space.
In fact, I put a compressor fan in the top back of a desk cabinet for the same reason, and wired it to a USB plug so it ran from the power on the back of the PC.
I wouldn’t expect a transformer anywhere, I was just shooting in the dark. That would make no sense to me, but I’ve seen crazy stuff in appliances, and I certainly don’t know it all…
A better test may be to forcibly energize the relay so it closes. If it closes and the motor starts, the problem is in the thermostat. If it closes but the motor doesn’t start, the relay is the problem.
Is there a step-down transformer anywhere? That relay isn’t big enough to have such a transformer or switching power supply (I don’t think).
It would be strange (to me) to build everything for 220, including the t-stat, but not the start winding. That would then require either a transformer or switching power supply for that one thing.
The thermostat should be easy enough to test - you know where the supply is, and which wires energize this relay. Test voltage at the relay when the thermostat closes.
If something is dropping 220 V to 20 V, surely it’s got to be heating up?
Edit: That start relay is designed to control current, not voltage. It initially allows full current and then drops the current available to the start winding. It seems more likely this relay has failed than the t-stat, as the t-stat also controls the defrost cycle and any fans.
It’s not required, it just seems required to non-technical people (I know, potato/potato, it’s effectively required).
Linux is servers.
Hell, VMware migrated to a Linux base a while back, and with their new exorbitant pricing, large environments are switching to things like Proxmox.
Within the next ten years, VMware will be second-string virtualization, even in data centers.
I’m not sure what’s going to happen, but there was a “BIOS war” in the ’80s, when IBM wouldn’t release their BIOS code, so other devs reverse-engineered it. No reason that couldn’t happen again.
Sync is not backup.
Let’s repeat that - sync is not backup.
If your sync job syncs an unintentional deletion, the file is deleted, everywhere.
Backup stores versions of files based on the definitions you provide. A common backup schedule for a home system may be monthly full, daily incremental. That way you have multiple versions of any file that’s changed.
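With GNU tar, that schedule comes down to one snapshot file: delete it to force the monthly full, keep it for the daily incrementals. Rough sketch (temp dirs stand in for real data and backup locations so it runs anywhere):

```shell
#!/bin/sh
# Sketch of a monthly-full / daily-incremental scheme with GNU tar.
# Real paths would be something like /srv/data and /srv/backups;
# temp dirs are used here so the sketch runs as-is.
DATA=$(mktemp -d)
BACKUPS=$(mktemp -d)
SNAR="$BACKUPS/state.snar"    # tar's incremental snapshot file
echo "report" > "$DATA/report.txt"

# Monthly full: remove the state file so tar archives everything.
rm -f "$SNAR"
tar --listed-incremental="$SNAR" -czf "$BACKUPS/full.tar.gz" -C "$DATA" .

# Daily incremental: with the state file present, only changed
# files end up in the new archive.
echo "new" > "$DATA/new.txt"
tar --listed-incremental="$SNAR" -czf "$BACKUPS/incr.tar.gz" -C "$DATA" .
tar -tzf "$BACKUPS/incr.tar.gz"
```

Restoring a given day means extracting the last full plus each incremental up to that day, in order - which is exactly the multiple-versions property sync can’t give you.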
With sync you only have replicas of one file, and those can all be lost through the sync.
Now, you could run backup software against a given location, and have that location synchronized to remote systems. Syncthing could do this, with the additional safety of “send only” configured, so if a remote destination gets corrupted, it won’t sync back to the source.
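In Syncthing, “send only” is the folder type; in config.xml the entry looks roughly like this (the folder id, path, and device ID here are made up):

```xml
<!-- Hypothetical send-only folder in Syncthing's config.xml.
     type="sendonly" means changes made on remote devices are
     never pulled back into this copy. -->
<folder id="backups" label="Backups" path="/srv/backups" type="sendonly">
    <device id="REMOTE-DEVICE-ID"></device>
</folder>
```

The same setting is available in the web GUI under the folder’s Advanced tab, so you don’t have to edit the XML by hand.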
Edit: as for a Pi NAS, I’ve found Small Form Factor desktops to be a better value. They don’t have much physical space for drives, but I’ve been able to fit two 3.5" drives or four 2.5" drives in one. My current one idles at <15 W.
Or a mini PC with one drive. Since you’re replicating this data to multiple locations, local redundancy (e.g. mirroring) isn’t really necessary.
Of course this assumes your net backup requirements are under about 12 TB (or whatever the largest single-drive capacity is these days).
Sure I can.
You’re complaining about needing 4 GB of RAM on a virtualized platform in 2025, when 4 GB of RAM was common on a laptop (which is heavily space-constrained) thirteen years ago.
It’s a fair comparison.
When I spin up a Linux VM, it’s 4 GB - that’s the practical minimum today, because the virtualization platform will over-commit RAM as it knows how best to utilize it.
I can run a Linux box in 2 GB, but as soon as I start doing anything with it, more RAM is required.
And?
On a VPS it’s trivial to have the RAM you need. My laptop had 2 memory slots. A VPS has how many? Oh, yeah, it’s virtualized. 🤦🏼
Er, phones have had 4 GB for years.
2 GB for a system… My 2012 laptop has 4 GB (yes, 2012 - 13 years old).
Yes.