**beep** bop.
I had exactly the same use case and ended up with a 40G DAC link for it. That turned out cheaper than converting the whole LAN to 10G.
That said, used 10G equipment seems easier to come by than 2.5G for now, and if you have a 2G fiber uplink but only 1G past the router, it’s a waste.
Garage is trivial to get up and running, and it’s more lightweight than MinIO nowadays.
I would not recommend UniFi for a mature solution. It works nicely as a single pane of glass, but it gets limiting once you want to hack around with your network. Their APs are solid, though; it’s just the USG/Dream Machine that I wouldn’t recommend.
MikroTik software is very capable and hackable, and you can run it in a VM if you feel like bringing your own hardware.
Apparently Traefik might be a better fit if you run Docker Compose and the like, since it auto-discovers services, which reduces the amount of manual configuration required.
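A rough sketch of what that auto-discovery looks like with Compose labels (the service names and hostname here are made up, not from this thread):

```yaml
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      # Traefik watches the Docker socket and builds routes from labels
      - /var/run/docker.sock:/var/run/docker.sock:ro

  whoami:
    image: traefik/whoami
    labels:
      # no static config for this service anywhere; the route comes from here
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.home`)
      - traefik.http.routers.whoami.entrypoints=web
```

Adding another container is just another set of labels; Traefik picks it up at runtime.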
Clearly I mean Garage here when I write “S3.” It is significantly easier and faster to run hugo deploy
and let it talk to Garage than to figure out where on a remote node the nginx k8s pod has its data PV mounted and scp files into it. Yes, I could automate that. Yes, I could pin the blog’s pod to a single node. Yes, I could use a stable host path for that and rsync, and I could skip the whole Kubernetes insanity for a static HTML blog.
But I somewhat enjoy poking at the tech, and yes, using Garage makes deploys faster and gives me a stable, well-known API endpoint for both data transfers and serving the content, with very little maintenance required to keep it working.
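For context, the deploy side of that is tiny. A sketch of a hugo deployment target pointed at Garage (the bucket name and endpoint are made up; the query parameters are the usual Go CDK knobs for non-AWS S3 endpoints):

```yaml
# hugo.yaml (sketch)
deployment:
  targets:
    - name: garage
      # path-style addressing and an explicit endpoint, since it's not AWS
      URL: "s3://blog?endpoint=https://garage.example.home&region=garage&s3ForcePathStyle=true"
```

After that, hugo deploy --target garage compares local and remote objects and only uploads what changed.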
S3 storage is simpler than running scp -r to a remote node, because you can copy files to S3 in a massively parallel way, while scp is generally sequential. It’s very easy to protect the API too, as it’s just HTTP (and, while we’re at it, it’s also significantly faster than WebDAV).
Of course it does AI now!
But seriously, the easiest MinIO setup guide meant using their operator. The Garage guide was: spin up this single deployment and it works from there.
I remember when MinIO had just started and it was small and easy to run. Nowadays, though, it’s a full-blown enterprise product, full of features you’ll never care about in a homelab, eating into your CPU and RAM.
Garage is small and easy to run. I’ve been toying with it for several months and I’m more than happy with its simple API and tiny footprint. I even run my (static HTML) blog off it because it’s just easier to deploy to an S3-compatible API.
Specifically, use home.arpa if you must use a private domain.
FWIW, that Java app isn’t that memory-hungry, and it’s not CPU-intensive at all. There are no issues with running Java apps if you spend 5 minutes figuring out the basic flags for setting memory limits, or run them in a memory-limited cgroup via a container runtime.
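A sketch of both approaches in compose form, with a made-up service and image (the flags themselves are stock JVM options):

```yaml
services:
  someapp:
    image: example/some-java-app   # placeholder image
    environment:
      # Option 1: cap the heap explicitly
      JAVA_TOOL_OPTIONS: "-Xms256m -Xmx512m"
      # Option 2: let the JVM size the heap off the cgroup limit instead
      # JAVA_TOOL_OPTIONS: "-XX:MaxRAMPercentage=75"
    # cgroup memory limit enforced by the container runtime
    mem_limit: 768m
```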
I run k3s in my homelab as a single-node cluster. I’m very familiar with Kubernetes in general, so it’s just easier for me to reason in terms of a control plane.
Some of the benefits I find useful:
k3s is, of course, a memory hog; I’d estimate it and Cilium (my CNI of choice) eat up about 2 GB of RAM and a bit under one core. That’s something you can tune to some extent, though. But then, I can easily do pod routing via VPN and create services that automatically get a public IP from my endless IPv6 pool and get that address assigned a DNS name in like 10 lines of YAML (see the sketch below).
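Roughly what those 10 lines look like, assuming Cilium’s LB-IPAM hands out the IPv6 address and something like external-dns publishes the record (names and hostname are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: blog
  annotations:
    # picked up by external-dns (or similar) to create the DNS record
    external-dns.alpha.kubernetes.io/hostname: blog.example.net
spec:
  type: LoadBalancer        # Cilium LB-IPAM assigns an address from the pool
  ipFamilies: [IPv6]
  selector:
    app: blog
  ports:
    - port: 80
      targetPort: 8080
```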
IIRC they demonstrated an interaction with Siri where it asks the user for consent before enriching the data through ChatGPT. So yeah, that seems to mean your data is sent out (if you consent).
If you drop the projector, then AirPods paired with the Watch already do it better. There’s no point in such a device at all, then.
Is there anything interesting at all reported in /proc/spl/kstat/zfs/dbgmsg?
I did run out of PCIe, yeah :-( The network peaks at about 26 Gbit/s, which is about the most you can squeeze out of PCIe 3.0 x4 (roughly 31.5 Gbit/s raw for four lanes, minus protocol overhead). I could move the NVMes off the PCIe 4.0 x16 (I have two M.2 slots on the motherboard itself), but I planned to expand the NVMe storage to 4x SSDs, and I’m out of PCIe lanes on the other end of the fiber either way (that box has all x16 going to the GPU).
I run a 3900X with a 40 Gbit fiber link, packed with HDDs and NVMes. The box fluctuates around 90-110 W of draw.
When you said that Nextcloud might not meet your needs, was your concern specifically the server-side data format?
I’d prefer them as plain files. Technically it doesn’t matter much to me if it’s a database, if I have to spin up an S3-compatible API, or if I need to slice up a zvol for it, but I just prefer the files because then I can do ZFS snapshots (which I trust) and back up with restic (which I trust).
That gives me hope, thanks. I’ll try it, then.
ECC is slightly more important for ZFS because its ARC is generally more aggressive than the usual Linux caching subsystem. That said, it’s not a hard requirement. My current NAS was converted from my old Windows box (which apparently worked for years with bad RAM). ZFS uncovered the problem in the first 2 days by reporting (recoverable) data corruption in the pool. When I fixed the RAM issue and hash-checked against the old backup, all the data was good. So, effectively, ZFS uncovered memory corruption and remained resilient against it.