• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: November 19th, 2023

  • I ran out of CRTCs, but I wanted another monitor. I widened a virtual display and drew the left portion of it on one monitor, as usual. Then I had a cron job that would copy chunks of the rest into the frame buffer of a USB-to-DVI-D adapter. It could do 5 fps redrawing the whole screen, so I chose to put things there where that wouldn’t matter too much. The only painful part was arranging the windows on that monitor, with the mouse updating very infrequently and routinely being drawn in 2 or more places in the frame buffer.
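
    Something like the following is a minimal sketch of that copy step, assuming the display is :0, the hidden region is 1920x1080 starting at x=1920, and the adapter shows up as /dev/fb1 (all of those names are my assumptions, not details from the original setup):

        # grab the off-screen region of display :0 and stream it
        # into the USB adapter's framebuffer at ~5 fps
        # (-pix_fmt may need to match the framebuffer's actual format)
        ffmpeg -f x11grab -framerate 5 -video_size 1920x1080 \
               -i :0.0+1920,0 -pix_fmt bgra -f fbdev /dev/fb1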




  • Modern operating systems have made it take very little knowledge to connect to WiFi and browse the internet. If you want to use your computer for more than that, it can still mean a longer learning process. I download 3D models for printing, and wanted a preview image for each model so I could find things more easily. In Linux, I can make those images with only about a hundred characters in the terminal. In Windows, I would either need to learn PowerShell, or make an image for each file by hand.
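
    As a sketch of how short that can be, here’s a one-liner assuming the models are STLs and the stl-thumb renderer is installed (my choice of tool; the comment doesn’t name one, and any model-to-image renderer works the same way):

        # render a PNG preview next to every model in the folder
        for f in *.stl; do stl-thumb "$f" "${f%.stl}.png"; done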

    The way I understand “learning Linux” these days is reimagining what a computer can do for you, so that it includes the rich powers of open source software: when you have a problem that computers are very good at, you recognize that there’s an obvious solution on Linux that Windows doesn’t have.






  • What we have done is invent massive, automatic, no-holds-barred pattern recognition machines. LLMs use detected patterns in text to respond to questions. Image recognition is pattern recognition, with some of those patterns given names (like “cat” or “book”). Image generation is a little different, but it basically flips image recognition on its head, editing images to look more like the patterns it was taught to recognize.

    This can all do some cool stuff. There are some very helpful outcomes. It’s also (automatically, ruthlessly, and unknowingly) internalizing biases, preferences, attitudes and behaviors from the billion plus humans on the internet, and perpetuating them in all sorts of ways, some of which we don’t even know to look for.

    This makes its potential applications in medicine rather terrifying. Do thousands of doctors all think women are lying about their symptoms? Well, now your AI does too. Do thousands of doctors suggest more expensive treatments for some groups, and less expensive for others? AI can find that pattern.

    This is also true in law (I know there’s supposed to be no systemic bias in our court systems, but AI can find those patterns, too), engineering (any guesses how human engineers change their safety practices based on the area a bridge or dam will be installed in? AI will find out for us), etc, etc.

    The thing that makes AI bad for some use cases is that it never knows which patterns it is supposed to find and which ones it isn’t. Until we have better tools to tell it not to notice some of these things, and to scrub away a lot of the randomness that’s left behind inside popular models, there are severe constraints on what it should be doing.







  • The major strategy with CWR (continuous welded rail) is pretensioning, but there are also multiple kinds of expansion joints used in different circumstances. I’m not saying it’s impossible to do the same with a vacuum chamber, but I am saying there’s no simple, reliable answer, and certainly no answer so obvious and bulletproof that it doesn’t even require testing before you could start construction.

    Elon Musk either didn’t know or didn’t care that his company wasn’t doing the required engineering and testing to make a real functioning hyperloop.


  • If any part of the hundreds of miles of tube suddenly stops being a vacuum chamber, every train all along the tube is going to be hit by air rushing in at the speed of sound, with all the turbulence that implies, while it’s already moving at full speed. For scale, one atmosphere of pressure difference across a capsule with a ~4 m² cross-section is roughly 400 kN, about 40 tonnes of force. It might be possible to engineer a capsule that will keep the people inside alive when that happens, but it is not at all the same as e.g. rail, where “stop moving forwards” depletes essentially all the energy in the system.




  • Ok, so I think the timeline is: he signed up for an unlimited storage plan. Over several years, he uploaded 233TB of video to Google’s storage. They discontinued the unlimited storage plan he was using, and that plan ended May 11th. They gave him a “60 day grace period” ending on July 10th, after which his account was converted to a read-only mode.

    He figured the data was safe, and kept using storage he was no longer really paying for from July 10th until December 12th. On December 12th, Google told him they’re going to delete his account in a week, which isn’t enough time to retrieve his data… because he didn’t do anything before his plan ended, didn’t do anything during the grace period, and hasn’t done anything since the grace period ended.

    I get that they should have given him more than a week of warning before moving to delete, but I’m not exactly sure what he was expecting. Storing files is an ongoing expense, and he’s not paying that cost anymore.


  • My favorite ML result was (details may be inaccurate, I’m trying to recall from memory) a model that analyzed MRI scans and reported far more confidence in the problems it was detecting if the image was taken on a machine with an old manufacture date. The training data had very few negative results from older machines, so assuming that any image taken on an old machine showed the issue fit the data.

    There was speculation about why that would happen in the training data, but the pattern noticing machine sure noticed the pattern.