  • There is so much wrong with this…

    AI is a range of technologies. So yes, you can build surveillance with it, just as you can write a computer program that is a virus. But obviously not all computer programs are viruses, nor do they all exist for surveillance. What a weird generalization. AI is used extensively in medical research, so your life might literally be saved by it one day.

    You’re most likely talking about “Chat Control”, a controversial EU proposal to scan for dangerous and illegal content like CSAM, either on people’s devices or on providers’ ends. This is obviously a dystopian way to achieve that goal, as it sacrifices literally everyone’s privacy, and there is plenty to be said about that without randomly dragging AI into it. You can do this scanning without AI as well, and that doesn’t change anything about how dystopian it would be.

    You should be using end-to-end encryption regardless, and a VPN is a good investment for making your traffic harder to discern, but if Chat Control passes with device-level scanning, you are kind of boned without circumventing that software, which could itself be outlawed or made very difficult. Chat Control is clearly a bad thing on its own; you don’t need some kind of conspiracy theory about ‘the true purpose of AI’ to see that.


  • I have a similar hesitancy, but unfortunately that’s exactly why we can’t really trust ourselves either. The statistics we can put to paper already paint such a different image of society than the one we experience. So even though it feels like these people are everywhere and such a mindset is growing, there are many signs that this is not the case. But I get it; at times that also feels like puffing some hopium. I’m fortunate to have met enough stubborn people who did end up changing their minds about their own personal irrationality, and as I grew older I caught myself doing the same a couple of times as well. That does give me hope.

    And well, look at history and the kind of shit people believed: miasma, bloodletting, superstition, to name a few. As time has moved on, the majority of people have grown. Even a century where not a lot changes in that regard (as long as it doesn’t regress) is just a speed bump on the way to the mindset of the future.


  • While I share this sentiment, I think/hope the eventual conclusion will be a better relationship between more people and the truth. Maybe not for everyone, but for more people than before. Truth is always more like 99.99% certainty than absolute truth, and it’s the collection of evidence that should inform ‘truth’. The closest thing we have to achieving that is the court system (in theory).

    You don’t see the electric wiring in your home, yet you ‘know’ flipping the switch will cause electricity to create light. You ‘know’ there is not some other mechanism in your walls that just happens to produce the exact same result. But unless you check, you technically don’t know for sure. Someone could have swapped it out while you weren’t looking, even if you built it yourself. (And even if you check, your eyes might deceive you.)

    With Harris’ airport crowd, if you weren’t there, you have to trust second-hand accounts. So how do you do that? One video might not say a lot, and honestly, if I saw the alleged image in a vacuum I might have been suspicious of AI as well.

    But here comes the context. There are many eyewitness perspectives whose details can be verified and corroborated. The organizer isn’t a habitual liar. It happened at a time that wasn’t impossible (a sort of ‘counter-alibi’). It happened in a place that isn’t improbable (she’s on the campaign trail). If it were fake, pulling it off would require a conspiracy level of secrecy. And I could list so many more things.

    Anything that could be disproven with ‘it might have been AI’ probably would not have stuck in court anyway. It’s why you take testimony: even though that proves nothing on its own, corroborated with other information it can make one situation more or less probable.


  • Depends on what kind of AI enhancement. If it’s just another thing nobody needs that solves no problem, rejecting it is a no-brainer. But take computer graphics: DLSS is a feature people do appreciate, because it makes sense to apply AI there. Who doesn’t want faster and perhaps better graphics from AI rather than brute force, which also saves on electricity costs?

    But that isn’t the kind of thing most people on a survey would even think of, since the benefit is readily apparent and doesn’t even need to be explicitly sold as “AI”. They’re most likely thinking of the kind of product where the manufacturer put an “AI powered” sticker on it because their stakeholders told them it would increase sales, or because it allowed them to overstate the product’s value.

    Of course people are going to reject white-collar scams if they think that’s what “AI enhanced” means. If legitimate use cases with clear advantages are produced, they will speak for themselves, and I don’t think people would be opposed. But obviously, there are a lot more companies that want to ride the AI wave than there are legitimate use cases, so there will be quite a bit of snake oil being sold.




  • Yes, it would be much better at mitigating it and would beat all humans at truth accuracy in general. And truths which can be easily and individually proven, and/or remain unchanged forever, can basically be right 100% of the time. But not all truths are that straightforward.

    What I mentioned can’t really be unlinked from the issue if you want to solve it completely. Have you ever found out later on that something you told someone else as fact turned out not to be so? Essentially, you ‘hallucinated’ a truth that never existed, but you were confident enough that it was correct to share and spread it. It’s how we get myths, popular belief, and folklore.

    For those other truths, we simply take as true whatever has reached a likelihood we consider certain. But the ideas and concepts in our minds constantly float around on that scale. And since we cannot really avoid talking to other people (or intelligent agents) to ascertain certain truths, misinterpretations and lies can sneak in and cause us to treat as truth that which is not. Avoiding that would mean having to be pretty much everywhere at once, to personally interpret information straight from the source. But then things like how fast you can process all of it come into play. Without making guesses about what’s going to happen, you basically can’t function in reality.


  • Yes, a theoretical future AI that could self-correct would eventually become more powerful than humans, especially if you could give it ways to run orders of magnitude more self-correcting mechanisms at the same time. But it would still be making ever-so-small assumptions wherever there is a gap in the information it has.

    It could be humble enough to admit it doesn’t know, but it can still be mistaken and think it has the right answer when it doesn’t. It would feel nigh omniscient, but it would never truly be.

    A round trip around the globe on glass fibre takes hundreds of milliseconds, so even if it holds the truth on some matter, there’s no guarantee that truth didn’t change in the milliseconds it takes to become aware of the change. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) propagates at most at the speed of light.
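
    As a quick back-of-the-envelope check (the figures below are rough textbook values, not measurements):

    ```python
    # Rough propagation delay for a signal circling the globe in optical fibre.
    # Assumes Earth's circumference of ~40,075 km and light travelling through
    # glass at roughly two thirds of c (~200,000 km/s).
    C_EARTH_KM = 40_075
    FIBRE_SPEED_KM_S = 200_000

    one_way_ms = C_EARTH_KM / FIBRE_SPEED_KM_S * 1000
    print(f"one way: ~{one_way_ms:.0f} ms, round trip: ~{2 * one_way_ms:.0f} ms")
    # one way: ~200 ms, round trip: ~401 ms
    ```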

    A big mistake you are making here is stating that it must be fed information that it knows to be true; this is not inherently true. You can train a model on all of the wrong things to do; as long as it has the capability to understand this, it shouldn’t be a problem.

    The dataset that encodes all wrong things would be infinite in size and constantly changing. It can theoretically exist, but realistically it never will. And if it is incomplete, the model has to make assumptions at some point based on the incomplete data it has, which opens it up to being wrong, which we would call a hallucination.


  • I’m not sure where you think I’m giving it too much credit, because as far as I can tell we already totally agree lol. You’re right, methods exist to diminish the effect of hallucinations; that’s what the scientific method is. Current AI has no physical body and can’t run experiments to verify objective reality. It can’t fact-check itself other than by being told by the humans training it what is correct (and humans are fallible), and even then, if it has gaps in what it knows it will fill them with something probable - which is likely going to be bullshit.

    My point was just that to truly fix it, you would basically have to create an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions - just like we do.


  • Hallucinations in AI are fairly well understood as far as I’m aware; they’re explained at a high level on the Wikipedia page for the topic. And I’m honestly not making any objective assessment of the technology itself; I’m making a deduction based on the laws of nature and biological facts about real-life neural networks. (I do say AI is driven by the data it’s given, but that’s something even a layman might know.)

    How to mitigate hallucinations is definitely something the experts are actively discussing and have had limited success with (and I certainly don’t have an answer there either), but a true fix should be impossible.

    I can’t exactly say why I’m passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decisions about the place AI takes in our society. But I’m also passionate about human psychology and creativity, and what we can learn about ourselves from the quirks we see in these technologies.



  • It will never be solved. Even the greatest hypothetical superintelligence is limited by what it can observe and process. Omniscience doesn’t exist in the physical world. Humans hallucinate too - all the time. It’s just that our approximations are usually correct, so we don’t call them hallucinations. For example, the signals coming from our feet take longer to arrive than those from our eyes, so our brain has to predict information to stitch together a coherent experience. It’s also why we don’t notice our blinks, or the blind spot each of our eyes has.

    AI, being a more primitive version of our brains, will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.

    Hallucinations shouldn’t be treated like a bug. They are a feature - just not one the big tech companies wanted.

    When humans hallucinate on purpose (and not due to illness), we get imagination and dreams: fuel for fiction, but not for reality.


  • It’s funny how something like this gets posted every few days and people keep falling for it like it’s somehow going to end AI. The people that make these models are acutely aware of how to avoid model collapse.

    It’s totally fine for AI models to train on AI-generated content that is of high enough quality. Part of the work of training models is building datasets with a text description matching the content, and filtering out content that is not organic enough (or even specifically including it as a ‘bad’ example for the AI to avoid). AI can produce material indistinguishable from human work, and it produces material that wasn’t originally in the training data. There’s no reason that can’t be good training data itself.
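
    As a minimal sketch of what that kind of filtering step could look like (the `quality_score` function here is a made-up placeholder for whatever learned classifier or heuristic a real pipeline would use; none of this is any lab’s actual pipeline):

    ```python
    # Illustrative curation step: keep synthetic samples above a quality
    # threshold, and keep the rejects as explicit negative examples.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        content: str
        caption: str

    def quality_score(sample: Sample) -> float:
        """Stand-in for a learned quality classifier (reward model,
        perplexity filter, human-preference model, ...)."""
        return min(len(sample.content) / 500, 1.0)  # toy heuristic

    def curate(candidates: list[Sample], threshold: float = 0.8):
        accepted, negatives = [], []
        for s in candidates:
            (accepted if quality_score(s) >= threshold else negatives).append(s)
        # 'negatives' can still be used, labelled as examples to avoid
        return accepted, negatives
    ```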



  • That’s a pretty sloppy reason. A nuanced topic is not well suited to be explained in anything but descriptive language. Especially if you care about people’s livelihoods and passion. I care about my artist friends, colleagues, and acquaintances. Hence I will support them in securing their endeavors in this changing landscape.

    Artists are largely not computer experts and artists using AI are buying Microsoft or Adobe or using freebies and pondering paid upgrades. They are also renting rather than buying because everything’s a subscription service now.

    I really don’t like this characterization of artists. They are not dumb, nor incapable of learning. Technical artists exist too. Installing open source AI is relatively easy, pretty much down to pressing a button, and because it’s open source, it’s free. Using it to its fullest effect is where the skill comes in, and the artists I know are more than happy to develop their skills.
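
    For a sense of what “pressing a button” amounts to in practice, running an open model locally with Hugging Face’s diffusers library is a handful of lines (the model name and settings below are just one common choice, shown purely as an illustration):

    ```python
    # Minimal local text-to-image generation with openly available weights.
    # Setup (one time): pip install torch diffusers transformers accelerate
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # open model weights
        torch_dtype=torch.float16,
    ).to("cuda")                           # assumes an NVIDIA GPU

    image = pipe("a watercolor lighthouse at dusk").images[0]
    image.save("lighthouse.png")
    ```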

    A far bigger market for AI is for non-artists and scammers to fill up Amazon’s bookstore and the broader Internet full of more trash than it already was.

    The existence of bad usage of AI does not invalidate good usage of AI. The internet was already full of bad content before AI. The good stuff is what floats to the top. No sane person is going to pay to read some no-name AI-generated trash. But people will read a highly regarded book that just happened to be AI-assisted.

    But the whole premise is silly. Did we demonize cars because bank robbers started using them to escape the police? Did we demonize cameras because people could take exact photographic copies of someone else’s work? No. We demonized those who misused the tool. AI is no different.

    A scammer can generate thousands of worthless garbage images and texts in the time it takes an artist assisted by AI to make a single work. Just like a burglar can make money more easily by breaking into someone’s house and stealing everything than by working a day job for a month. There’s a reason these things are illegal and/or unethical. But those are reflections of the people doing them, not of the tools they use.


  • I mean, you ignored the entire rest of my comment to respond only to a hyperbole I used to illustrate that something is a bad argument. I’m sure they are making money off it, but small creators and artists can make relatively more money off it. You claim that is not ‘actually happening’, but that is your opinion, how you view things. I talk with artists daily, and they use AI when it’s convenient to them, when it saves them work or allows them to focus on the work they actually like. Just like they use any other tool at their disposal.

    I know there are some very big-name artists on social media who are making a fuss about this stuff, but I highly question their motives with my point of view in mind. Of course it makes sense for someone with a big social media following to rally up their supporters so they can get a payday. I regularly see them tell complete lies to their followers, and of course it works. When you actually talk to artists in real life, you’ll get a far more nuanced response.