• 0 Posts
  • 205 Comments
Joined 2 years ago
Cake day: July 14th, 2023


  • Citation Needed (by Molly White) also frequently bashes AI.

    I like her stuff because, no matter how you feel about crypto, AI, or other big tech, you can never fault her reporting. She steers clear of any subjective accusations or prognostication.

    It’s all “ABC person claimed XYZ thing on such and such date, and then 24 hours later submitted a report to the FTC claiming the exact opposite. They later bought $5 million worth of Trumpcoin, and two weeks later the FTC announced they were dropping the lawsuit.”



  • kibiz0r@midwest.social to Linux@lemmy.ml · Share a script/alias you use a lot · 9 days ago

    I often want to know the status code of a curl request, but I don’t want that extra information to mess with the response body that it prints to stdout.

    What to do?

    Render an image instead, of course!

    curlcat takes the same params as curl, but it uses iTerm2’s imgcat tool to draw an “HTTP Cat” of the status code.

    It even sends the image to stderr instead of stdout, so you can still pipe curlcat to jq or something.

    #!/usr/bin/env zsh
    
    # Capture the response body, with the HTTP status code appended on its own final line.
    stdoutfile=$( mktemp )
    curl -sw "\n%{http_code}" "$@" > "$stdoutfile"
    exitcode=$?
    
    if [[ $exitcode == 0 ]]; then
      # The status code is the last line curl wrote.
      statuscode=$( tail -1 "$stdoutfile" )
    
      # Cache the http.cat image for this status code on first use.
      if [[ ! -f "$HOME/.httpcat$statuscode" ]]; then
        curl -so "$HOME/.httpcat$statuscode" "https://http.cat/$statuscode"
      fi
    
      # Draw the cat on stderr so stdout stays pipeable (e.g. into jq).
      imgcat "$HOME/.httpcat$statuscode" 1>&2
    fi
    
    # Print everything except the status-code line (i.e. the response body) to stdout.
    ghead -n -1 "$stdoutfile"
    rm -f "$stdoutfile"
    
    exit $exitcode
    

    Note: This is macOS-specific, as written, but as long as your terminal supports images, you should be able to adapt it just fine.
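
    For instance, on Linux with the kitty terminal (just one possibility; any image-capable terminal has an equivalent viewer), the two macOS-only calls might become something like:

    # hypothetical Linux/kitty swap for the two macOS-only pieces above
    kitty +kitten icat "$HOME/.httpcat$statuscode" 1>&2   # kitty's image protocol instead of iTerm2's imgcat
    
    head -n -1 "$stdoutfile"   # head is GNU head on Linux, so Homebrew's ghead isn't needed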





  • I’d say that scraping as a verb implies an element of intent. It’s about compiling information about a body of work, not simply making a copy, and therefore if you can accurately call it “scraping” then it’s always fair use. (Accuse me of “No True Scotsman” if you would like.)

    But since it involves making a copy (even if only a temporary one) of licensed material, there’s the potential that you’re doing one thing with that copy which is fair use, and another thing with the copy that isn’t fair use.

    Take archive.org for example:

    It doesn’t only contain information about the work, but also a copy (or copies, plural) of the work itself. You could argue (and many have) that archive.org only claims to be about preserving an accurate history of a piece of content, but functionally mostly serves as a way to distribute unlicensed copies of that content.

    I don’t personally think that’s a justified accusation, because I think they do everything in their power to be as fair as possible, and there’s a massive public benefit to having a service like this. But it does illustrate how you could easily have a scenario where the stated purpose is fair use but the actual implementation is not, even though the infringing material was “scraped” in the first place.

    But in the case of gen AI, I think it’s pretty clear that the residual data from the source content is much closer to a linguistic analysis than to an internet archive. So it’s firmly in the fair use category, in my opinion.

    Edit: And to be clear, when I say it’s fair use, I only mean in the strict sense of following copyright law. I don’t mean that it is (or should be) clear of all other legal considerations.


  • I say this as a massive AI critic: Disney does not have a legitimate grievance here.

    Gathering AI training data is scraping. Scraping is — and must continue to be — fair use. As Cory Doctorow (fellow AI critic) says: Scraping against the wishes of the scraped is good, actually.

    I want generative AI firms to get taken down. But I want them to be taken down for the right reasons.

    Their products are toxic to communication and collaboration.

    They are the embodiment of a pathology that sees humanity — what they might call inefficiency, disagreement, incoherence, emotionality, bias, chaos, disobedience — as a problem, and technology as the answer.

    Dismantle them on the basis of what their poison does to public discourse, shared knowledge, connection to each other, mental well-being, fair competition, privacy, labor dignity, and personal identity.

    Not because they didn’t pay the fucking Mickey Mouse toll.




  • kibiz0r@midwest.social to Technology@lemmy.world · The Copilot Delusion · 1 month ago

    So if library users stop communicating with each other and with the library authors, how are library authors gonna know what to do next? Unless you want them to talk to AIs instead of people, too.

    At some point, when we’ve disconnected every human from each other, will we wonder why? Or will we be content with the answer “efficiency”?





  • I don’t believe the common refrain that AI is only a problem because of capitalism. People already disinform, make mistakes, take irresponsible shortcuts, and spam even when there is no monetary incentive to do so.

    I also don’t believe that AI is “just a tool”, fundamentally neutral and void of any political predisposition. This has been discussed at length academically. But it’s also something we know well in our idiom: “When you have a hammer, everything looks like a nail.” When you have AI, genuine communication looks like raw material. And the ability to place generated output alongside the original… looks like a goal.

    Culture — the ability to have a very long-term ongoing conversation that continues across many generations, about how we ought to live — is by far the defining feature of our species. It’s not only the source of our abilities, but also the source of our morality.

    Despite a very long series of authors warning us, we have allowed a pocket of our society to adopt the belief that ability is morality. “The fact that we can, means we should.”

    We’re witnessing the early stages of the information equivalent of Kessler Syndrome. It’s not that some bad actors who were always present will be using a new tool. It’s that any public conversation broad enough to be culturally significant will be so full of AI debris that it will be almost impossible for humans to find each other.

    The worst part is that this will be (or is) largely invisible. We won’t know that we’re wasting hours of our lives reading and replying to bots, tugging on a steering wheel, trying to guide humanity’s future, not realizing the autopilot is discarding our inputs. It’s not a dead internet that worries me, but an undead internet. A shambling corpse that moves in vain, unaware of its own demise.



  • It’s the #1 thing that drives me crazy about Linux.

    It seems obvious. You’ve got a Windows/Apple/Super key and a Control key. So you’d think Control would be for control characters and Windows/Apple/Super would be for application things.

    I can understand Windows fucking this up, cuz the terminal experience is such a low priority. But Linux?

    There are projects like Kinto and Toshy that try to fix it, but neither works on NixOS quite yet.



  • The author seems to have fallen for two tricks at once: The MPAA/RIAA playbook of seeing all engagement with content through the lens of licensing, and the AI hype machine telling everyone that someday they will love AI slop.

    He mentions people complaining that stock photo sites, book portals, and music streaming services are all degrading in quality because of AI slop, but his conclusion is that people will start seeking out AI content because it’s not copyrighted.

    Regardless… The position of those in power has not changed. They never believed in copyright as a guiding concept, only as a means to an end. That end being: We, the powerful, will control culture, and we will use it to benefit ourselves.

    Before generative AI, the approach was to keep the cultural landscape well-groomed – something you’d wanna pay to experience. Mindfully grown and pruned, with clear walking paths, toll booths at each entrance, and harsh penalties for littering or stepping on the grass. You were allowed to have your own toll-free parks outside the secure perimeter that continued the walking paths in mutually beneficial ways, as long as visitors didn’t track mud in as a result.

    But now? The landscape is no longer about creating a well-manicured amusement park worth the price of admission. There’s oil under the surface. And it’s time to frack the hell out of it. It’s too bad about the toxic slurry that will accumulate up top, making the walled and unwalled parks alike into an intolerable biohazard. There are resources to extract. Externalities are an end-user problem.

    Yeah, turning culture into an expensive amusement park was a horrible mistake. But I wouldn’t get too eager to gloat over seeing the tide of sludge pour over their walls. We’ll still be on the outside, drowning in it.