LLMs can reason about information. It’s fine to call them intelligent systems.
It’s reasonable to refer to unsupervised learning as “learning on its own”.
An LLM trained exclusively on Facebook would be hilarious. It’d be like the Monty Python argument skit.
Installs, not downloads.
That’s my understanding as well. You could have a game on Steam that you haven’t even updated in years, and then you suddenly have to start paying for new installs from existing owners.
Actually, it’s potentially even worse. You could have a game that you released and then later removed from every storefront, but if people keep installing it, Unity will demand payment.
How does your engine compare to MonoGame?
Still sucks if you’ve got a team that’s really good at Unity, but yeah
Microsoft has Unity games. I can’t imagine they’re happy.
Same guy: https://kotaku.com/unity-john-riccitiello-monetization-mobile-ironsource-1849179898
Hindsight is 20/20, but maybe devs should have seen this coming 😑
My hypothesis is that wealth causes brain damage.
It’s an obvious overreach.
An AI-generated image is essentially the solution to a math problem. Suppose such images are, or become, illegal. Is it then also illegal to possess the inputs to that equation? After all, the inputs can be used to perfectly reproduce the illegal image. And what if I change one word in the prompt so that the subject of the generated image becomes clothed? Is that suddenly legal?
I understand the concern, but it’s just incredibly messy to legislate what amounts to thought crimes.
Maybe we could do something to discourage distribution, but the law would have to be very carefully worded to prevent abuse.
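To make the "solution to a math problem" point concrete, here's a minimal sketch using a hash function as a toy stand-in for an image generator (the function name and sizes are mine, not any real API): with a fixed model, sampler, and seed, real generators are deterministic in exactly the same way, so the (prompt, seed) pair fully determines the output bytes.

```python
import hashlib

def generate_image(prompt: str, seed: int, size: int = 16) -> bytes:
    """Toy stand-in for an image generator: a deterministic function
    mapping (prompt, seed) to output bytes. Illustrative only."""
    data = f"{prompt}|{seed}".encode()
    out = b""
    while len(out) < size:
        data = hashlib.sha256(data).digest()  # deterministic "generation" step
        out += data
    return out[:size]

# Identical inputs reproduce the identical "image"...
assert generate_image("a cat", seed=42) == generate_image("a cat", seed=42)
# ...while changing a single word in the prompt yields a different output.
assert generate_image("a dog", seed=42) != generate_image("a cat", seed=42)
```

So outlawing the output while leaving the inputs legal is incoherent: possessing the inputs is, in effect, possessing the image.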
Not so. There are plenty of use cases that already have better solutions.
https://en.m.wikipedia.org/wiki/Dodge_v._Ford_Motor_Co.
Among non-experts, conventional wisdom holds that corporate law requires boards of directors to maximize shareholder wealth. This common but mistaken belief is almost invariably supported by reference to the Michigan Supreme Court’s 1919 opinion in Dodge v. Ford Motor Co.
Lol
Because DRM usually gets cracked within weeks, if not days?
Humanity is not a hive mind. We can’t just inform everyone instantly.
I can tell GPT to do a specific thing in a given context and it will do so intelligently. I can then provide additional context that implicitly changes the requirements and GPT will pick up on that and make the specific changes needed.
It can do this even if I’m trying to solve a novel problem.
GPT can write and edit code that works. It simply can't be that it's only matching language patterns with no semantic understanding.
To fix your analogy: the Spanish speaker will happily sing along. They may notice the occasional odd turn of phrase, but the song as a whole is perfectly understandable.
Edit: GPT can literally write songs that make sense. Even in Spanish. A metaphor meant to illustrate a deficiency probably shouldn't use as its example a task the system is actually quite good at.
https://en.m.wikipedia.org/wiki/Supernormal_stimulus