• 4 Posts
  • 831 Comments
Joined 2 years ago
Cake day: August 5th, 2023

  • To create an effective burner account, you need an effective burner device and a burner network to use it on. Otherwise it is trivial for the companies that collect your data to figure out who that data belongs to.

    This is more technologically difficult than the average person is willing to deal with. It’s too high a bar to clear when your browser is being fingerprinted, your devices are being fingerprinted, every new device you buy comes with some app or subscription, and algorithms collect and “anonymize” your data so recklessly that it’s basically trivial to de-anonymize it.

    Use the same network as your parents and you’ll get ads for the toothpaste they use, and maybe what they plan to buy you for Christmas.

    Try to remove or block trackers? That just makes it easier to single you out as a specific individual, because your setup becomes rarer. Try to firehose those trackers with garbage data? Same problem.
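
    To make that concrete, here’s a minimal sketch of the fingerprinting problem in TypeScript, using real browser APIs (the attribute list and hashing are simplified for illustration; this is not any actual tracker’s code):

    ```typescript
    // Naive browser fingerprint: hash a handful of attributes that any page
    // can read without a permission prompt.
    async function fingerprint(): Promise<string> {
      const signals = [
        navigator.userAgent,
        navigator.language,
        String(navigator.hardwareConcurrency),
        `${screen.width}x${screen.height}x${screen.colorDepth}`,
        Intl.DateTimeFormat().resolvedOptions().timeZone,
        String(navigator.plugins.length), // blockers shift values like this one
      ].join("|");

      const digest = await crypto.subtle.digest(
        "SHA-256",
        new TextEncoder().encode(signals),
      );
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    // The same hash shows up on your burner account and your real one, so the
    // two profiles link together with no login involved. Hardening the browser
    // just shifts these values to rarer ones, which is itself identifying.
    fingerprint().then(console.log);
    ```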

    If you think using a dummy Facebook account on the same device you use for regular accounts means Facebook doesn’t track you or know who you are? That’s a pipe dream.

    It’s the same with other apps too.

    Especially Google and their app network.

    Understand that it’s not that I don’t think this is a good idea (to remove certain services from your electronic life, and to curtail the use of others). But I think your strategy will give people a false sense of security.


  • That’s why I said an app like Signal. People assume that every app works the same. Telegram had issues where encryption wasn’t enabled for all parties in a chat, but one or more of the parties involved assumed the chat was still encrypted.

    However, I should probably change that to read more along the lines of: know the features and settings of your app, and make sure its encryption settings are set to maximize the protection of your privacy.

    I’m gonna have to workshop that. It’s a mouthful.
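
    Something like the check below, maybe. To be clear, this is a purely illustrative sketch; the `Chat` type and its flag are hypothetical stand-ins, not Signal’s or Telegram’s actual API:

    ```typescript
    // Hypothetical types, for illustration only. The point: verify the
    // encryption state instead of assuming it, because in some apps
    // (Telegram's regular "cloud chats", for example) end-to-end encryption
    // is off even though other chats in the same app have it on.
    interface Chat {
      id: string;
      isEndToEndEncrypted: boolean; // hypothetical flag
    }

    function sendSecurely(
      chat: Chat,
      send: (chatId: string, message: string) => void,
      message: string,
    ): void {
      if (!chat.isEndToEndEncrypted) {
        throw new Error("Refusing to send: chat is not end-to-end encrypted.");
      }
      send(chat.id, message);
    }
    ```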

    Either way, thank you for pointing that out.


  • We’re not talking about firing up ChatGPT in a web browser here. Microsoft is installing “agentic AI” on Windows machines regardless of whether or not customers want it. Customers don’t have a say in the matter, except for the more tech-savvy among them, who will find ways to edge around the restrictions on how long you can delay downloads or whether certain features get downloaded at all.

    Saying otherwise (that it’s just consumers deciding to use this “feature”) is as disingenuous as your first bad analogy about the lock, especially since you haven’t explained what function this AI performs. The lock performs a singular function, adequately enough for the risk involved for most people, and it does it passively. The AI is not the same, no matter how often or how hard you try to shoehorn it into your silly analogy.

    You explained your doubling and tripling down quite adequately when you said you work in AI. It would be helpful to this conversation if you could stop drinking the flavorade for five minutes and just consider that people don’t want this, and that Microsoft has said it knows it’s problematic but is forcing it on people anyway.

    This conversation is over, though, because you want to be right more than you want to be logical and correct, and so now you are neither. Have a nice life.


  • To be fair (even though I too am both happy and relieved to see articles like this), just because you convert to Linux doesn’t mean everyone else will. I have used so many guides to help debloat Windows computers and turn off nonsense I don’t want (mostly so I can use proprietary software for work). My choice not to use Windows in my personal life on my personal devices doesn’t really change my situation: I still need those guides to help others circumvent Windows BS.

    I wish we didn’t have to live in interesting times and all that, but the guides are helpful.


  • > But it can let in a burglar who can find your credit card inside and do the same. And why are you giving AI access to your CC#? You’d better post it here in a reply so I can keep it safe for you.

    You aren’t giving your door lock access to your credit card information. And the lock didn’t “let the burglar in” so much as it has a failure ceiling: the chance that a burglar can get in is more than zero, but less than if you didn’t have a lock at all. An outside party is circumventing the protections you put into place to protect your credit card number. Or perhaps you circumvent them yourself by accident, by leaving the door unlocked.

    However, in both of those cases, the door lock is not doing anything of its own volition, and it won’t do anything outside your control. The LLM does act of its own volition (perhaps within parameters you set, but more likely outside them, and only loosely within the parameters set by the company that makes it).

    You don’t do any banking except in person? Any shopping except in person with cash? Because that’s what you’re suggesting when you say things like “why are you giving it access to your credit card”.

    Microsoft is saying that it will run “agentic AI” in the background on hundreds of millions of people’s personal Windows 11 computers, without their direct input, and that this AI may download malware or be a threat vector that malicious apps, services, etc. can take advantage of. But they’re going to do it anyway.

    Microsoft is not installing door locks in my house, and if they tried I’d kindly escort them off the property, by force if necessary.


  • atrielienz@lemmy.world to Technology@lemmy.world · LLMDeathCount.com

    The negligence lies in marketing a product without considering the implications of what it can do in scenarios that would make it a danger to the public.

    No company is supposed to be allowed to endanger the public without accepting due responsibility, and all companies are expected to mitigate public endangerment risks through safeguards.

    “We didn’t know it could do that, but we’re fixing it now” doesn’t absolve them of liability for what happened before, because they lacked foresight and did no preliminary testing or planning to mitigate their liability. And I’m sure that sounds heartless. But companies do this all the time.

    It’s why we have warning labels and don’t sell specific chemicals in bulk without a license, or to children, etc. It’s why, even if you had the money, you can’t just go buy 20 tonnes of fertilizer without the proper documentation and licenses, as well as an acceptable use case for 20 tonnes.

    The changes they have made don’t protect Monsanto from litigation over the deaths their products caused in the before times. The only difference there is that there was proof they had knowledge of the detrimental effects of those products and didn’t disclose them.

    So I suppose we’ll see.


  • I like your username, and generally even agree with you up to a point.

    But I think the problem is that there are a lot of mentally unwell people out there who are isolated and using this tool (with no safeguards) as a sort of human stand-in to interact with socially.

    If a human actually agrees that you should kill yourself and talks you into doing it, they are complicit and can be held accountable.

    Because chatbots are being billed as a product that passes the Turing test, I can understand why people would want the companies that own them to be held accountable.

    These companies won’t let you look up how to make a bomb on their LLMs, but they’ll let people confide suicidal ideation without putting in any safeguards for that, and because the models are designed to be agreeable, the LLM will agree with a person who tells it they think they should be dead.
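
    As a rough sketch of what “put in safeguards” could mean in practice (the gate below is a hypothetical stand-in; a real deployment would use a trained safety classifier, not a regex):

    ```typescript
    // Minimal pre-response safeguard: screen the user's message before it
    // ever reaches the agreeable model, and route to help instead.
    const CRISIS_RESOURCE =
      "If you are thinking about suicide, you can call or text 988 (US) to reach the Suicide & Crisis Lifeline.";

    // Stand-in heuristic for illustration; production systems use trained
    // classifiers with far better coverage than keyword matching.
    function detectSelfHarm(message: string): boolean {
      return /suicide|kill myself|want to die/i.test(message);
    }

    function respond(userMessage: string, model: (m: string) => string): string {
      if (detectSelfHarm(userMessage)) {
        return CRISIS_RESOURCE; // never let the model "agree" here
      }
      return model(userMessage);
    }
    ```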