• 0 Posts
  • 34 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • I agree that it’s a case of “hate the game, not the player”. The issue is how much influence he could have to steer the market to favor his product over the competition. It has happened so many times in history: the better product fails because its maker can’t play the game like the inferior company can.

    To quote “Pirates of Silicon Valley”:

    Steve Jobs: We’re better than you are! We have better stuff.

    Bill Gates: You don’t get it, Steve. That doesn’t matter!

    So is it fair to consumers that big companies can influence the game itself rather than just play within the same rules? I’d say no.



  • There are two dangers in the current race to AGI and in developing the inevitable ANI products along the way. One is that advancement and profit are the goals, while concern for AI safety and alignment in case of success has taken a back seat (if it’s even considered anymore).

    Then there is number two - we don’t even have to succeed at AGI for there to be disastrous consequences. Look at the damage early LLM usage has already done, and it’s still not good enough to fool anyone who looks closely. Imagine a non-reasoning LLM able to manipulate any media well enough to be believable even against other AI detection tools. We’re just getting to that point - the latest AI Explained video discussed Gemini and Sora, and one of them (I think Sora) fooled some text-generation evaluators into thinking its stories were 100% human-created.

    In short, we don’t need full general AI to end up with catastrophe; we’ll easily manage that ourselves with the “lesser” ones. Which will really fuel the fire if AGI comes along and sees what we’ve done.



  • Nothing that high level. Different systems run independently, and some may be redundant to each other in case one fails. But run something long enough, especially in extreme conditions, and things can drift from their baselines. If a regular power-off and power-on prevents that, it’s a lot easier than trying to chase down gremlins that could be different each time they pop up, for different reasons.

    Even NASA, I believe, has done such resets from time to time, from Apollo through the unmanned probes. As for Windows, the newest versions don’t really do this baseline reset if you just shut them down, even if you disable the hibernate/sleep modes, while a restart does.
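
    To make that Windows point concrete: this is the “Fast Startup” feature, where a shutdown hibernates the kernel session instead of fully resetting it, while a restart does a clean boot. Here’s a minimal sketch of how to check it, assuming a stock Windows 10/11 install - the registry path is the documented one, but treat the script as illustrative:

    ```python
    # Check whether Windows "Fast Startup" (hybrid shutdown) is enabled.
    # When it is, "Shut down" hibernates the kernel session instead of
    # doing the full baseline reset that "Restart" performs.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Power"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "HiberbootEnabled")

    print("Fast Startup:", "ON - shutdown is not a full reset" if value else "OFF")
    ```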



  • The selling point of the paper is a new fuel storage medium. The positive part is that creating fuel from existing carbon sources means (hopefully) less petroleum pumped out of the ground to add more carbon. The negative is that it leans more toward that than toward permanent sequestration, and I can’t seem to find a net energy figure anywhere, but basic physics tells us the process as a whole must consume more energy than the fuel stores (see the back-of-envelope sketch below), even if most of the output goes into large-scale storage. I doubt that happens, because removing carbon, versus putting it into a new form to be used, is like burying money.

    Which leads to something I’ve noticed pop up only in the past month or so… a new term: “carbon capture, utilization, and storage”. CCS outfits have already been very heavily into producing carbon products to support their efforts - after all, they have to make a profit, right? The only real storage done is a product injected into the ground to help retrieve more oil. Again, they aren’t going to just bury the money; that’s foolhardy for a business.

    Sorry for more negativity in the thread. Just calling a spade a spade. Those who don’t like the feeling that gives can just ignore it and focus on the new science that will save us.
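
    The back-of-envelope sketch of that “basic physics” point, with round textbook figures of my own rather than numbers from the paper: turning captured CO2 back into a fuel such as methanol has to put in at least the energy that burning the fuel later releases.

    ```python
    # Thermodynamic lower bound for CO2-to-fuel: reversing combustion
    # requires at least the combustion enthalpy back. Round illustrative
    # figures, not numbers from the paper.
    HHV_METHANOL_KJ_PER_MOL = 726.0  # ~ enthalpy of methanol combustion
    PROCESS_EFFICIENCY = 0.5         # assumed overall process efficiency

    energy_in = HHV_METHANOL_KJ_PER_MOL / PROCESS_EFFICIENCY
    print(f"Energy to synthesize 1 mol methanol from CO2: >= {energy_in:.0f} kJ")
    print(f"Energy recovered by burning that methanol:       {HHV_METHANOL_KJ_PER_MOL:.0f} kJ")
    ```

    So even a generous 50%-efficient process consumes twice the energy the fuel stores, and the carbon only stays out of the air until the fuel is burned.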





  • It’s not AGI that’s terrifying, but how willing people are to let anything take over their decisions. LLMs are “just” predictive text generation (see the toy sketch below) with a lot of extras that make the output come out really convincing sometimes, and yet so many individuals and companies have basically handed over the keys without even second-guessing the answers.

    These past few years have shown that if (and it’s a big if) AGI/ASI comes along, we are so screwed, because we can’t even handle the dumber tools well. LLMs in the hands of willing idiots can be a disaster in themselves, and it’s possible we’re already there.
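
    The toy sketch of that “just predictive text” point - not how a real LLM is implemented, since real models replace the counting table with a huge neural network, but the outer loop is the same: predict the next token, append it, repeat.

    ```python
    # Toy next-token predictor: the same generation loop an LLM runs,
    # minus the neural network. Count bigrams in a tiny corpus, then
    # repeatedly emit the most likely next word.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # "Training": count which word follows which.
    following = defaultdict(Counter)
    for cur, nxt in zip(corpus, corpus[1:]):
        following[cur][nxt] += 1

    # "Inference": predict the next word, append it, repeat.
    word, out = "the", ["the"]
    for _ in range(5):
        if not following[word]:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)

    print(" ".join(out))  # -> "the cat sat on the cat"
    ```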






  • There is existing, and there is being effective at the advertised job. Carbon capture certainly exists in different forms and makes sense as an add-on to an existing emitter. But it’s hyped as far more than what it does, even used to excuse further emissions growth, and that’s the snake oil being talked about. In the end, the only true “solution” is to reduce the actual production of emissions, something the world overall is not willing to do. And I put solution in quotes because we’re decades behind on action that would have been meaningful, having exponentially increased the pollution since then. We’d have to do far more than just stop emissions to fix anything.
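
    For a sense of the scale mismatch, a rough sketch using round public figures I’m recalling from memory (order-of-magnitude only, not from the article): worldwide direct-air-capture capacity is on the order of 0.01 Mt of CO2 per year, against roughly 37 Gt of annual emissions.

    ```python
    # Rough scale check: current direct-air capture vs. annual emissions.
    # Round order-of-magnitude figures, assumed from public estimates.
    GLOBAL_EMISSIONS_T_PER_YR = 37e9  # ~37 Gt CO2/yr
    DAC_CAPACITY_T_PER_YR = 1e4       # ~0.01 Mt CO2/yr, all plants combined

    ratio = GLOBAL_EMISSIONS_T_PER_YR / DAC_CAPACITY_T_PER_YR
    print(f"Emissions outpace direct-air capture by ~{ratio:,.0f} to 1")
    ```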





  • I’m in the same school of thought: don’t dismiss things that would actually change the many problematic practices that have each contributed to the predicament we find ourselves in. If it’s not just more greenwashing to profit off climate-change awareness, then let’s do it.

    But it is a predicament. While I think we should do everything constructive to stop damaging the environment, I don’t hold that view under the illusion that there are solutions - just the conviction that we should do the right things regardless of their effectiveness.