• 1 Post
  • 11 Comments
Joined 1 year ago
Cake day: June 20th, 2023




  • Earthbound is eternally on my list of games I play through every couple of years. It's such a great game. Some aspects of it are a tad clunky by modern sensibilities (inventory management, going through the menus for a lot of things, etc.), but overall it holds up really well. Also, if you liked Earthbound, Mother 3 is 100% worth playing. Mother 1 (or Beginnings, or whatever you want to call it) is hard to recommend to anyone but the most diehard fans, though.

    I like Earthbound the most of all of 'em, but that's purely for nostalgia reasons. From a critical perspective, I think Mother 3 is the superior game.




  • Running *arr services on a Proxmox cluster to download to a device on the same network. I don’t think there would be any problems, but I wanted to see what changes would need to be made.

    I’m essentially doing this with my setup. I have a box running Proxmox and a separate networked NAS device. There aren’t really any changes, per se, other than pointing the *arr installs at the correct mounts. One thing to note: I would make sure that your download, processing, and final locations are all within the same mount point, so that you can take advantage of atomic moves (a quick sketch of why is below).
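    A minimal Python sketch of why that matters (the function name and call pattern here are hypothetical illustrations, not part of the *arr stack itself): a rename within one filesystem is atomic and effectively instant, while a move across mount points degrades into a copy plus delete.

        import errno
        import os
        import shutil

        def move_completed(src: str, dst: str) -> None:
            """Move a finished download into its final library location."""
            try:
                # Atomic and effectively instant, but only when src and dst
                # live on the same filesystem/mount point.
                os.rename(src, dst)
            except OSError as exc:
                if exc.errno != errno.EXDEV:
                    raise
                # Different mount points: falls back to a full copy + delete,
                # which is slow and can leave partial files if interrupted.
                shutil.move(src, dst)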




  • I have Mediacom as well, but in a larger Midwestern city. They have data caps here too, and I was paying about $100 for this exact same plan up until a couple of years ago. They started upgrading our speeds/caps because a new fiber company (Metronet) is building in the area. Now I’m on 1 Gbps down with a 4 TB cap. I still plan to switch to Metronet when they finally light up my area, as it’s cheaper for the same speeds (plus no data caps).


  • Even more frustrating when you realize, and feel free to correct me if I’m wrong, that these new “AI” programs and LLMs aren’t really novel in terms of theoretical approach: the real revolution is the amount of computing power and data available to throw at them.

    This is 100% true. LLMs, neural networks, Markov chains, gradient descent, and so on down the line are nothing particularly new; they’ve collectively been studied academically for 30+ years. It’s only recently that we’ve been able to throw enough data, computing capacity, and model-tuning time at them to achieve results that were unthinkable 10-ish years ago.

    There have been efficiencies, breakthroughs, tweaks, and changes over this time too, but that’s to be expected. Largely, though, it’s the sheer raw size/scale that has only recently become achievable.
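    As a toy illustration of how old the underlying ideas are, here is a first-order Markov chain text generator in a few lines of Python, the same basic statistical trick that has been around for decades (the corpus and names are made up for the example):

        import random
        from collections import defaultdict

        def build_chain(text):
            """Count which word follows which (a first-order Markov chain)."""
            chain = defaultdict(list)
            words = text.split()
            for cur, nxt in zip(words, words[1:]):
                chain[cur].append(nxt)
            return chain

        def generate(chain, start, length=10):
            """Walk the chain, sampling each next word from observed successors."""
            out = [start]
            for _ in range(length - 1):
                successors = chain.get(out[-1])
                if not successors:
                    break
                out.append(random.choice(successors))
            return " ".join(out)

        corpus = "the cat sat on the mat and the dog sat on the rug"
        print(generate(build_chain(corpus), "the"))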