It’s a good thing that real open source models are getting good enough to compete with or exceed OpenAI.
Lan-mouse looks great but keep in mind that there’s no network encryption right now. There is a GitHub ticket open and the developer seems eager to add encryption. It’s just worth understanding that all your keystrokes are going across the network unencrypted.
Rather than distro hopping, maybe try out a Zen kernel, or compile the kernel yourself and change the kernel config and scheduler, or just try a newer version of the stock kernel?
I’m not super current on what’s in each kernel, but I’d expect the latest mainline to handle newer processors better than the older stable kernels shipped by the more mainstream, slower-releasing distros.
Ran Asahi for several months and tried it out again recently. It’s fine, I just don’t love Fedora.
There’s some funkiness with the more complicated install, the AI acceleration doesn’t work, and there’s no Thunderbolt / docking station support.
MacBooks are great hardware, but I don’t think they’re the best option for Linux right now. If you’re never going to boot into macOS, then I’d look at the X13, the new Qualcomm machines; isn’t there a Framework arm64 option now, or was that a RISC-V module?
I’m also assuming you’re not looking to do any gaming? Because gaming on ARM is not really a thing right now and doesn’t feel like it will be for a long while.
Taking ollama for instance, either the whole model runs in VRAM and compute is done on the GPU, or it runs in system RAM and compute is done on the CPU. Running models on the CPU is horribly slow; you won’t want to do it for large models.
LM Studio and others allow you to run part of the model on the GPU and part on the CPU, splitting the memory requirements, but it's still pretty slow.
Even the smaller 7B-parameter models run pretty slowly on the CPU, and the huge models are orders of magnitude slower.
So technically more system RAM will let you run some larger models, but you will quickly figure out you just don’t want to do it.
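If you want to see the GPU/CPU split for yourself, here’s a minimal sketch using the official ollama Python client (assuming you have the ollama server running, a model pulled, and `pip install ollama` done); the `num_gpu` option sets how many model layers get offloaded to the GPU, with 0 forcing everything onto the CPU:

```python
import ollama

# Ask the same question twice: once offloading as much as possible to the GPU,
# once forced entirely onto the CPU. Expect the CPU run to be much slower.
# Assumes you've already pulled the model, e.g.: ollama pull llama3:8b
for layers in (99, 0):  # 99 ~ "offload every layer that fits", 0 = CPU only
    response = ollama.generate(
        model="llama3:8b",
        prompt="Explain quantization in one sentence.",
        options={"num_gpu": layers},
    )
    print(f"num_gpu={layers}: {response['response'][:80]}...")
```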
Respect, but…
FWIW they didn’t merge it, they closed the PR without merging (link to the line that still exists on master).
The recent comments are from the announcement of the Ladybird browser project, which is forked from some browser code from SerenityOS; I guess people are digging into who wrote the code.
Not arguing that the new comments on the PR are good/bad or anything, just a bit of context.
I’ve been tempted to try installing Plasma Mobile on a tablet.
Why no Arch install?
Been 100% Linux for like 6-9 months now; these stories make me thankful for finally making the switch.
I’ve tried to make the switch 3-4 times in the past and was stopped by 2 main things:
The experience was so much better this time and I really have no regrets. I don’t imagine I’ll ever run Windows again outside of a VM.
Elon “Nick Cannon” Musk
Rip up the Reddit contract and don’t use that data to train the model. It’s the definition of a garbage-in, garbage-out problem.
Asahi only partially supports the M3, and I guess now the M4 is out (though only in the iPad)?
I like that. If there was a site that did something like The Razzies for movies, but for technology enshittification, I would definitely watch, and probably follow the blog if it was done well.
Just a note: the Orange Pi drivers are not in great shape. It’s getting better, but I have a cluster of Raspberry Pis for development, bought an Orange Pi without first checking out much about them, and it’s rough. Rockchip CPUs are great, and the driver / firmware situation is improving, but it's something I’d read up on before buying one.
I’d still look at the N100; it’s about 2.5x the performance of a Raspberry Pi 5, and being x86 you have more options than ARM.
There are a lot of tiny PCs these days that can output 4K video and audio. Look for something with an N100 or N200 CPU if you want to go as cheap as possible; they tend to be super cheap and perform well. I’ve got one of the GMKtec boxes and this wireless keyboard+mouse, works really well from the couch.
There are cheaper/other options but to get you started: https://www.amazon.com/GMKtec-Windows-Computer-Business-G3-dp-B0CQ4XQ2WG/dp/B0CQ4XQ2WG https://morefine.com/collections/pc-box (specifically the M9)
I’m far from an expert in init systems, but there are real benefits to a declarative approach to configuration; it’s one of the main reasons YAML and TOML are as popular as they are. The short version: declarative configuration tends to be less verbose, and the declarative contract defines what state you want things to be in, not how to get there. That makes things easier both for the person writing the unit file and for the implementers of systemd, since there’s a smaller surface area to test.
Generally, declarative configs end up shorter and easier to verify; for a concrete feel, see the sketch below.
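Here’s a minimal, hypothetical unit file (my own toy example, not anything from the thread); note how every line states a fact about the desired end state rather than a step to execute:

```ini
# /etc/systemd/system/myapp.service -- hypothetical service, for illustration
[Unit]
Description=My example app
After=network-online.target

[Service]
# Desired state: "keep this running" -- no hand-rolled pidfile or retry loop
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

The sysvinit equivalent would be tens of lines of start/stop/status shell with its own pidfile and restart handling, and every one of those scripts ends up a little different; systemd implements (and tests) that logic once.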
Got Hyprland running on the MacBook, have tested it out on the desktop. Not quite the daily driver; Plasma 6 on X is still the norm there, but I think as soon as Synergy works on Wayland I’ll make the switch everywhere.
Dumbest fucking timeline. A subscription for a feature that requires no infrastructure and is part of the physical thing you just paid $40k for.
First, a caveat/warning: you’ll need a beefy GPU to run larger models, though there are some smaller models that perform pretty well.
Adding a medium amount of extra information for you or anyone else that might want to get into running models locally.
Tools
Models
If you look at https://ollama.com/library?sort=featured you can browse the available models.
Model size is measured by parameter count. Generally, higher-parameter models are better (more “smart”, more accurate), but it’s very challenging/slow to run anything over 25B parameters on consumer GPUs. I tend to find 8-13B parameter models are a sort of sweet spot; the 1-4B parameter models are meant more for really low-power devices, and they’ll give you OK results for simple requests and summarizing, but they’re not going to wow you.
If you look at the ‘tags’ for the models listed below, you’ll see things like 8b-instruct-q8_0 or 8b-instruct-q4_0. The q part refers to quantization, i.e. shrinking/compressing a model, and the number after it is roughly how aggressively it was compressed. Note the size of each tag and how it shrinks as the quantization gets more aggressive (smaller numbers); you can roughly read that size as “how much video RAM do I need to run this model”.
For me, I try to aim for q8 models, or fp16 if they can fit in my GPU. I wouldn’t try to use anything below q4 quantization; there seems to be a lot of quality loss below q4. Models can run partially or even fully on a CPU, but that’s much slower. Ollama doesn’t yet support the new NPUs found in recent laptops/processors, but work is happening there.
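As a rough rule of thumb (my own back-of-envelope math, not from ollama’s docs): memory needed is roughly parameter count times bytes per parameter, plus some overhead for the context/KV cache.

```python
def estimate_vram_gb(params_billion: float, quant_bits: int, overhead: float = 1.2) -> float:
    """Back-of-envelope VRAM estimate: params * bytes-per-param, plus ~20%
    overhead for context/KV cache. A rough guide, not a guarantee."""
    bytes_per_param = quant_bits / 8
    return params_billion * bytes_per_param * overhead

# An 8B model at different quantization levels:
for label, bits in [("fp16", 16), ("q8_0", 8), ("q4_0", 4)]:
    print(f"8b {label}: ~{estimate_vram_gb(8, bits):.1f} GB")
# 8b fp16: ~19.2 GB  -> wants a 24 GB card
# 8b q8_0: ~9.6 GB   -> fits a 12 GB card
# 8b q4_0: ~4.8 GB   -> fits an 8 GB card
```

Those estimates line up reasonably well with the file sizes you’ll see next to each tag on the ollama library page.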