

Well, if it’s good enough for quartz mining…
I’m also on Mastodon as https://hachyderm.io/@BoydStephenSmithJr .


Good, the children yearn for the lithium mines. /s
Maybe I should re-train from computer programmer to lithium miner?


Authoritarian means will not generate the anarchist ends.


Putting aside the problems in the current system, let’s not call Thiel’s system a justice system until we can see some results and verify they are just, 'k?


That’s basically the start of the Shadowrun dystopia. A lot of other things “went wrong”, but when the government removed liability from the private security that had been protecting a hazardous-materials transport from workers who attacked it believing it contained foodstuffs, it legitimized the “megacorp”: a corporation powerful enough to impose its own legal system on its private real estate.
EDIT: In previous editions’ histories, the “Seretech Decision” was dated 1999-10-26. Sources: 1 2 3 It looks like “6th edition” retconned the fictional history to start on 2001-09-11 (Never Forget), so it’s unclear what the equivalent event is, and when. Source: 4


The TL;DR is that training AI on copyrighted works falls under the Fair Use exemptions in copyright law.
This judgement was contradicted by the next federal judge to review AI training, in the Meta case.
It is far from legally settled whether training is fair use or not.


I thought that was just the Cybertruck, which yes, I wouldn’t drive even if someone gave me one. I’d flip it and buy something else.
I think both the sedan and the roadster are okay electric cars, and I think they have enough range that I could use one to reduce the amount of gas I burn in my Volt on longer trips.
But, I haven’t really been paying attention to Tesla recently, and Elmu has certainly been looking horrible to me.


Once companies started suing people trying to practice “responsible disclosure”, I stopped attacking people who choose maximum disclosure.
Responsible disclosure has always been a bit of a hedge. It’s rare to be able to show you are actually the first person/organization to discover a vulnerability.


we are going to need to develop a different model of learning, using, and processing information that considers the provenance of where the information came from and how it got there
They used to teach this in schools under “critical thinking skills”. Following the chain of sources back to the primary sources was a task I had to do (at least in part) more than once in secondary school.
Authoritarians don’t like that tho.


I just bail on any site that requires age verification. It sucks, but there are still some that work. I do hear that using a VPN can often help.


Yeah, there was some phonics in my primary school education, and I continue to approach new words that way sometimes. But, they said “phonetically”.


Cave 1.0 scored 1000000% but also force-fed the proxy lemons, so it was treated as a failure.


Wait, I thought phonetically (example: papa hotel oscar november echo tango india charlie alfa lima lima yankee) meant using a phonetic alphabet, not using word(s) with the same Soundex encoding.
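For contrast with the spelling alphabet above, here is a simplified sketch of the classic Soundex algorithm (the `soundex` helper is mine, not from the thread; strict Soundex also has special handling for ‘h’/‘w’ that this version skips):

```python
def soundex(word: str) -> str:
    """Simplified Soundex: first letter + up to 3 digits from consonant classes."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    first = word[0].upper()
    # map every letter to its class digit; vowels and unlisted letters map to ""
    digits = [codes.get(c, "") for c in word]
    out = []
    prev = digits[0]  # the first letter's own code is kept only as the letter itself
    for d in digits[1:]:
        if d and d != prev:  # skip vowels and collapse adjacent duplicates
            out.append(d)
        prev = d
    return (first + "".join(out) + "000")[:4]
```

So “Robert” and “Rupert” both encode as R163 — they share a Soundex code because they sound alike, which is a different notion from spelling a word out letter-by-letter with a phonetic alphabet.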


If you look at the list of tasks, you can see how the 4 frontier models did. Some of them did complete one or two levels of one or two tasks. None of them completed a whole task. Some of the reasoning logs are funny in the replays.


Here’s another reply where the model mistakes running out of time/moves for making progress.


Yeah, for a fixed ruleset that can be provided up front, the AlphaZero approach seems to work great.
These tasks strike me as a bit different. I’m sure the ruleset is fixed somewhere, but it’s not disclosed to the participants. In the task I walked myself through, there was a new wrinkle in each part – a new interactable, a (more) hidden goal, or an information limit. And, of course, part of the task is “discovering” all that from the bitmap frame(s) provided.
I’m unconvinced of the hype around “AI”, but this does seem like a legitimate research target that might stymie the Alpha{Go,Zero,Fold} series at least a bit.


The founder of ARC worked at Google until 2024 and wrote 2.5+ books on deep learning. So, I expect some of these benchmarks are based on limitations seen at DeepMind.
That said, it would be interesting to see how well DeepMind does at these tasks. My understanding is that the private tasks would still be dynamic enough to require “on the job training”, so an AlphaGo / AlphaZero / AlphaFold approach is unlikely to do well on ARC-AGI-3.
Still, I think commentary around models (including, but not limited to something from Deepmind) attempting these tasks would be much more interesting than most of the discourse around generative AI, whether text, image, video, or code generation.


https://arcprize.org/arc-agi/1
https://arcprize.org/arc-agi/2
(They were more static, but yes, eventually frontier models got good at them.)
We don’t even have standards that strong in programming languages or even fucking machine code (ISAs) anymore.
I think I would like to return to that ideal time (if it ever existed), but… I feel like I’m in a vanishingly small minority.
I think it comes down to incentive structure, and the clearest incentives push away from strong standards. The big advantage of a strong standard is interoperability, but that’s something end users have to demand, because it’s anathema to rent-seeking behavior (a central facet of surveillance capitalism, choke-point capitalism, enshittification, and technofeudalism). But even there, natural incentives fail us, since most users get more utility from “innovative” features than from low switching costs – or at least they think they do, until they actually try to exit a platform/service.