I was shocked as I went through the source struggling to find any modules that had C. Craziness.
There are shitty people on YouTube too, so why hate on a platform just because shitty people use it? Beats giving money to YouTube; we have to start somewhere to decentralize more.
https://odysee.com/ – this one is also worth checking out, Louis Rossmann even posts there.
What about Biden recently signing the new spying bill that expands wiretapping of US citizens?
I agree, wish this was the actual goal but it’s going to be hard to pry those rights out of their hands.
It’s weird seeing comments that outline the actual problem getting downvoted here more than the superfluous comments that do not address the real problem at all. Bizarroworld.
Would you rather a hostile foreign entity do it instead, one with a vested interest in sowing destructive chaos? That’s the alternative.
I’m still on the google prompt bandwagon of typing this query:
`stuff i am searching for before:2023`
… or ideally even before COVID-19, if you want more valuable, less tainted results. It’s only going to get worse from here; 2024 is the year the web gets saturated with garbage data (yes, I know it was already bad before, but now AI is pumping this shit out at an industrial scale).
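If you want to script the same trick, it’s just a query-string parameter; here’s a minimal sketch (the query text is obviously a placeholder):

```python
# Minimal sketch: build a Google search URL with the `before:` operator
# baked into the query. The query text is a placeholder.
from urllib.parse import urlencode

query = "stuff i am searching for before:2023"
url = "https://www.google.com/search?" + urlencode({"q": query})
print(url)  # open in a browser; results are restricted to pages dated before 2023
```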
Try phind.com, it’s got an insanely advanced model trained on a ton of their own proprietary code, and it’s free too (or paid, with more features, more prompts per day, etc.)
I think it comes down to the tens of millions of dollars the reddit executives sold out for. It’s easy not to care when someone is throwing $100 million at you. Also: fuck spez.
There’s probably even a ‘sentiment’ tracking system to automatically remove negative comments at this point.
I’ve been doing this for over a year now, starting with GPT in 2022, and there have been massive leaps in quality and effectiveness. (Versions are sneaky: even GPT-4 has quietly evolved many times over without people really knowing what’s happening behind the scenes.) The problem that remains is the “context window.” Claude.ai is >100k tokens now, I think, but the context still caps how much code a single ‘session’ can produce within that window. I’m still trying to push every model to its limits, but another big problem in the industry right now is effectiveness, measured via “perplexity,” as context length grows.
https://pbs.twimg.com/media/GHOz6ohXoAEJOom?format=png&name=small
This plot shows that as the window fills up (proportional to the number of tokens in the code you insert, plus every token it generates along the way), everything the model produces becomes less accurate and perplexity climbs overall.
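If you want a feel for how quickly code eats a window, you can count tokens yourself; a minimal sketch, assuming the tiktoken library (the encoding and the 100k window size are placeholders, not tied to any one model):

```python
# Rough sketch: estimate how much of a context window one source file
# consumes. Assumes OpenAI's tiktoken library; cl100k_base and the
# 100k window size are placeholder choices, not tied to any one model.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

with open("my_module.py") as f:  # hypothetical file to measure
    tokens = enc.encode(f.read())

context_window = 100_000
print(f"{len(tokens)} tokens ≈ {len(tokens) / context_window:.1%} of the window")
```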
But you’re right overall: these things will continue to improve, but you still need an engineer to actually make the code function in a particular environment. I just don’t get the feeling we’ll see that within the next few years, but if it does happen, every IT worker on earth is effectively obsolete, along with every desk job known to man, since an LLM could then reason about how to automate any task in any language.
You just described all of my use cases. I need to get comfortable with copilot- and codeium-style services again; I enjoyed them 6 months ago to some extent. Unfortunately my current employer has to be federally compliant with government security protocols, and I’m not allowed to ship any code in or out of some dev machines. Because of that, I still run LLMs on another machine acting, like you mentioned, as sort of my Stack Overflow replacement. I can describe anything or ask anything I want and immediately get extremely specific custom code examples.
I really need to get codeium or copilot working again just to see if anything has changed in the models (I’m sure they have.)
I use AI to write code for work every day. Many different models and services, including https://ollama.ai on my own hardware. It’s useful when a developer can take the code and refactor it to fit into large codebases (after fixing its inevitable broken bits here and there), but it is by no means anywhere close to successfully writing code all on its own. Eventually, maybe, but nowhere near anytime soon.
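For anyone curious, hitting a local Ollama server is about this simple; a minimal sketch using its plain REST endpoint (the model name is just whatever you’ve pulled locally):

```python
# Minimal sketch: ask a local Ollama server for a code example over its
# REST API. "codellama" is an assumption; substitute any model you've
# pulled with `ollama pull`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default port/endpoint
    json={
        "model": "codellama",
        "prompt": "Write a Python function that retries an HTTP GET with exponential backoff.",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=120,
)
print(resp.json()["response"])
```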
While that is true, a lot of death and suffering was required for us to reach this point as a species. Machines don’t need the wars and natural selection required to achieve the same feats, and don’t have our same limitations.
https://github.com/jdhao/nvim-config#features
Highly recommend this.
> A modern Neovim configuration with full battery for Python, Lua, C++, Markdown, LaTeX, and more…
This is enough to get the intellisense and linters up and running. It only takes ~5 minutes to configure once you install the prerequisites, and it’s worth it.
You mean the whole licensing ordeal? The retroactive-type crap? I know a few developers personally who dumped it entirely because of that. I heard they backpedaled a bit on that part because of the backlash, but the damage is done; trust is gone.
FF has way too much groundwork laid and way too much mindshare currently (especially given the Rust language and all…). If, for some reason, thousands of devs just gave up on Mozilla, others would most likely continue the path and fork it.
Sweet, do you have any links on how to set that up? My next goal is to stand up my own lemmy.<mydomain> instance so I can pull various things into my own aggregation. Last time I tried, I hit errors after the Rust compilation steps; need to try it again.