• 0 Posts
  • 34 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • I have ADHD and didn’t get diagnosed or medicated until after I was out of school.

    I basically had two options: pay attention in class or attempt to take notes.

    I had so many teachers in grade school complain that I didn’t take notes (or do homework, but that was a different complaint). The issue was that when I took notes I would miss chunks of information while writing, and my handwriting was basically illegible because I was trying to get it down fast. If I slowed down to make it neat I would miss even more. So any notes I took were next to useless and I wouldn’t remember anything. And that’s before even figuring out what I needed to write down.

    Grade school was also slow paced and repetitive enough that most of the time I could sit and watch or doodle while listening and retain the information. Basically the only thing I struggled with was spelling, because it was just rote memorization.

    College was a bit harder in some cases beyond general ed, but for the classes I needed to study for I was able to re-watch the recorded lectures and take the time to write things out, since I could rewind and pause.


  • Yeah, I think the more accurate title would be “mass marketing” or something. There are certainly marketing campaigns that work, but they are more catered to the audience.

    Valve markets to nerds all the time, but they have enough goodwill with their target audience that it’s assumed to be “good faith” marketing: they don’t misrepresent what they’re trying to sell.

    Look at the Steam Deck. They made announcements and then worked with creators in the PC gaming space on interviews and reviews, and it felt much more organic than reading some dry ad or sitting through annoying banners and interruptions. It was a marketing campaign of sorts that engaged with the audience and made them want to seek it out.

    Whereas I don’t know many people who are receptive to buzzword salads that are mass-blasted over everything and just interrupt everything.



  • Realistically, no one should love how easy it is for anyone of any age to go to any search engine and search for “boobs” and just get a million images of boobs.

    First, let’s not pretend the idea of a kid seeing “boobs” is in any way, shape, or form actually harmful. Pushing that taboo is why there is any issue in the first place.

    Second, this is always a slippery slope. Even if we gave the benefit of the doubt that these things are done with honest intentions, someone will abuse the system eventually. At least in the US, the fascists have already laid out their intention to classify LGBTQ people as “porn” in an effort to both silence us online and ban us in public. And what of the countless queer kids in abusive homes?

    And even without someone explicitly exploiting it, there have already been instances where kids who were being actively sexually abused by the adults in their lives were blocked from resources that could have gotten them help because of content blocking like this.

    Third, people can take responsibility for their crotch spawn and be a fucking parent.







  • The problem for organizations is that it’s harder to leave, because that’s where the people they want to reach are. That’s the only reason any org or company is on social media in the first place. If they leave too soon, they risk too many people not seeing the things they send out to the community.

    It’s more of an individual thing: so many people just have social inertia and haven’t left, since everyone they know is already there. The first to leave have to decide whether to juggle another platform to keep their connections or cut them off by abandoning the established one.


  • If you blindly ask it questions without grounding resources, you’re going to get nonsense eventually, unless they’re really simple questions.

    They aren’t infinite knowledge repositories. The training method is lossy when it comes to memory, just like our own memory.

    Give it documentation or some other context and ask it questions, and it can summarize pretty well and even link things across documents or other sources.
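    To make that concrete, here’s a minimal sketch of what “give it documentation as context” can look like before the text ever reaches a model. The function name and prompt template are my own illustration, not any particular library’s API:

    ```python
    def build_grounded_prompt(documents: list[str], question: str) -> str:
        """Assemble a prompt that grounds the model in supplied documents
        instead of relying on its lossy trained-in memory."""
        context = "\n\n".join(
            f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(documents)
        )
        return (
            "Answer using only the documents below. "
            "If the answer is not in them, say so.\n\n"
            f"{context}\n\nQuestion: {question}"
        )

    # Hypothetical documentation snippet as grounding material:
    docs = ["The --verbose flag enables debug logging."]
    prompt = build_grounded_prompt(docs, "How do I enable debug logging?")
    ```

    The resulting string would then be sent to whatever local or hosted model you’re using; the point is just that the model answers from the supplied text rather than from memory.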

    The problem is that people are misusing the technology, not that the tech has no use or merit, even if it’s just from an academic perspective.


  • There’s something to be said for the fact that bitcoin and other crypto like it have no intrinsic value but can represent value we assign to them and be used as a decentralized form of currency not controlled by any one entity. That’s not how it’s actually used, but the argument is there.

    NFTs were a shitty cash grab because holding a token saying you “own” a thing, regardless of what the thing is, only matters if there is some kind of enforcement. It had nothing to do with property rights, and anyone could copy your crappy generated image as many times as they wanted. You can’t do that with bitcoin.


  • Been playing around with local LLMs lately, and even with its issues, Deepseek certainly seems to just generally work better than the other models I’ve tried. It’s similarly hit or miss when not given any context beyond the prompt, but with context it seems to both outperform larger models and organize information better. And watching the r1 model work is impressive.

    Honestly, regardless of what someone might think of China and various issues there, I think this is showing how much the approach to AI in the west has been hamstrung by people looking for a quick buck.

    In the US, it’s a bunch of assholes basically only wanting to replace workers with AI they don’t have to pay, regardless of whether the work suits it. They are shoehorning LLMs into everything, even where it doesn’t make sense. It’s all done strictly as a for-profit enterprise built on exploiting user data, and they bootstrapped it by training on creative works they had no rights to.

    I can only imagine how demoralizing that can be for the actual researchers and other people capable of developing this technology. It’s not being created to make anyone’s lives better; it’s being created specifically to line the pockets of obscenely wealthy people. Because of this, people passionate about the tech might decide not to go into the field, limiting the ability to innovate.

    And then there’s the “want results now” mentality: rather than taking the time to find better ways to build and train these models, they just throw processing power at it. “Needs more CUDA” has been the mindset, and in the western AI community you’re basically laughed at if you can’t or don’t want to use Nvidia for anything neural-net related.
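    As a rough back-of-envelope illustration of why efficiency work matters as much as raw compute (my own numbers, not from any source), here’s how weight precision alone changes the memory a model needs:

    ```python
    def approx_weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
        """Rough memory needed just to hold the weights.
        Ignores KV cache, activations, and framework overhead."""
        bytes_total = params_billions * 1e9 * bits_per_weight / 8
        return bytes_total / 1e9

    # A hypothetical 7B-parameter model:
    fp16_gb = approx_weight_memory_gb(7, 16)  # 14.0 GB at 16-bit precision
    q4_gb = approx_weight_memory_gb(7, 4)     # 3.5 GB at 4-bit quantization
    ```

    Quartering the bits per weight quarters the footprint, which is the difference between needing a data-center GPU and running on a consumer card; smarter training and quantization buy you what “more CUDA” otherwise would.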

    Then you have Deepseek, which seems to be developed by a group of passionate researchers who actually want to discover what is possible and find more efficient ways of doing things. That was compounded by sanctions preventing them from using CUDA; restricted resources have always been a major cause of technical innovation. There may be a bit of “own the west” in there, sure, but that isn’t at odds with the research.

    LLMs are just another tool for people to use, and I don’t fault a hammer that is used incorrectly or to harm someone else. This tech isn’t going away, but there is certainly a bubble in the west as companies put blind trust in LLMs with no real oversight. There needs to be regulation on how these things are used for profit and what they are trained on from a privacy and ownership perspective.






  • Even using LLMs isn’t an issue; it’s just another tool. I’ve been messing around with local stuff, and while you certainly have to use it knowing its limitations, it can help with certain things, even if it’s just parsing data or rephrasing things.

    The issue with neural nets is that while they theoretically can do “anything”, they can’t actually do everything.

    And it’s the same with a lot of tools like this: people don’t understand the limitations or flaws, and corporations want to use it to replace workers.

    There are also the tech bros who feel that creative works can be generated completely by AI because, like AI, they don’t understand art or storytelling.

    But we also have others who don’t understand what AI is and how broad a field it is, thinking it’s only LLMs and other neural nets used to produce garbage.