At Least Be Honest About AI


I like AI. Even though I have worries about it, I've been looking forward to it for decades now. I love sci-fi after all.

Plus, what's it gonna do, learn from humans too well? Do an even worse job than we have? Cause our extinction? Like we aren't close to doing that ourselves, and haven't been numerous times in the past for all sorts of reasons.

Most of the problems people claim AI has boil down to one of three things: it's not intelligent enough, it uses too much energy, or the large companies behind it aren't respecting copyright.

Because I'm sure you respect copyright so much and never illegally download anything. I'm sure you wouldn't download a car.

AI Isn't Intelligent Enough

Well yeah, because we only just recently got to the point where we can even do chatbots to a somewhat convincing degree. People keep forgetting that language-model chatbots have no actual thinking process, then keep making gotchas about how they can't do math or hallucinate answers.

The entire text is a "hallucination" based on data fed into it. It's just a pattern regurgitation machine. They train it to pump out certain responses to certain patterns. That's it.

The fact that it's this close to legible, with how basic it is, is astounding. People get fooled into thinking these things are actually thinking, all the time.

Many of these early chatbots seemed to have very limited or no database behind them. That seems to have changed at some point. Now a few of them appear to reference some sort of database in the background, and also seem to perform searches and use references.

Considering how fast they're evolving, it's astonishing. We might have another breakthrough soon. Or we might just have talking toasters. No clue to be honest. We are dealing with corporations pushing much of this development, after all.

It Uses Far Too Much Energy!

Does it? These corporations are using a hell of a lot of energy, sure, but is that the fault of the AI or of corporate choices? They're throwing a fuckton of data at these AIs to train them, but not all of that data is necessary for most of the responses the AI has to give. It's a choice, and that choice determines how much training it needs.

The energy used to train the AIs then gets folded into bloated figures about the energy cost of actually using the AI. In essence, it's a lie.

If the AI used as much energy as some claim, and cost as much to run, it would be completely unaffordable, and no one would be offering it for free on their websites or with cheap subscriptions. You can even run many models locally on your own computer without blowing up the transformer on your house.
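To be concrete about "run it locally": here's a minimal sketch using the Hugging Face transformers library. The model name is just an example of a small, openly available model; swap in whatever actually fits on your hardware.

```python
# Minimal local text generation sketch (assumes transformers + torch installed).
# "distilgpt2" is only an example of a small open model; pick any model
# your machine can handle.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator("AI running on my own machine:", max_new_tokens=40)
print(result[0]["generated_text"])
```

A model this size runs fine on an ordinary CPU; nothing about it requires a data center.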

It reminds me of many of the lies told about crypto, making it out like it uses more energy than the sun and costs the moon. The figures are untrue, of course, but they rile people up. In actuality, crypto uses less energy than the banking industry, far less. I will concede that the idea of number crunching on GPUs may have been a mistake, however. It did lead to those GPUs being available for AI, though.

We can't do anything about corporations having used far too much energy and money to train AIs. They're already trained; that energy is already spent. The only thing we can do is improve the AI so that it doesn't require as much data or computation to train. Maybe we can improve its replies and how much energy those use as well, even though that's a fraction of what's required for training.

AI Doesn't Respect Copyright

No, the people that trained it may not have. Many that use it probably don't. Many that use the internet don't.

If you try using many of these AIs, it's clear they haven't even been trained on much of the public domain data that's out there. That's really unfortunate, because they could have given Archive.org some nice funding by paying for a copy of its database, which would have helped with its current troubles after some assholes DDoSed it, claiming it was because they're part of the US government.

If you use the standard model for Stable Diffusion 1.5, though, you might actually recognize some of the Creative Commons images it was trained on for certain subjects! Sure, you won't notice it if you're trying to create a famous actor or something like that that's in the data they specifically sought out, but if you try to create anime with the basic 1.5 model, it occasionally spits out something akin to the very limited set of images available from Creative Commons libraries online. Which I find kind of funny, because there has to be more in the public domain than that.

These are all choices that the companies and their employees and researchers made! They could just as easily have trained the AI on Creative Commons, public domain, and permissively licensed data. It would simply respond differently.
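For illustration, the kind of filtering step that would make that choice is not complicated. This is only a sketch; the field names and license labels here are hypothetical, since real datasets record licenses in all sorts of ways.

```python
# Hypothetical example: keep only records whose license metadata is permissive.
# The "license" field and the label spellings are assumptions for illustration.
ALLOWED_LICENSES = {"cc0", "cc-by", "cc-by-sa", "public-domain", "mit"}

def is_permissive(record: dict) -> bool:
    return record.get("license", "").lower() in ALLOWED_LICENSES

dataset = [
    {"text": "an openly licensed caption", "license": "CC-BY"},
    {"text": "all rights reserved blurb", "license": "proprietary"},
]

filtered = [r for r in dataset if is_permissive(r)]
print(len(filtered), "records kept for training")
```

The hard part isn't the code, it's deciding to throw away the data you don't have the rights to, which is exactly the choice these companies declined to make.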

It isn't that AI is breaking copyright, it's that it's a tool created by amoral corporations. They care far more about censoring the output than they care about the legality of copyright. They're amoral, just like the AI itself. The corporations only care about the potential financial costs, not the morals of downloading a car.

Be Honest and Focus

There are problems with AI. The primary one is that it's run by amoral corporations. They care more about the financial cost versus the potential profit than about any sort of morality or the environmental cost of running it.

That doesn't mean you should bloat the numbers, though. If you're quoting figures, check and double-check them, and make sure there isn't some issue with how they're presented.

If you specifically have a problem with people using art of someone without permission, talk about that. Don't conflate that with AI as a whole.

Much of that copyright violation is being done by individuals, and that likely won't change even if these companies burn. As long as we have computers and video cards, we'll be able to train our own AI.

It's just going to be an endless battle to get the genie back in the bottle from now on. Pandora's box is open.

There are ways to mitigate it, however. There are programs people have created to poison AIs that others try to train on specific art, for example.

It doesn't help when people aren't being honest about the issues and exaggerate things. While many people may be swayed, it just looks ridiculous to anyone that knows better. Of course, propaganda works. Just realize you're fighting a propaganda war, and you'll likely lose, because corporations have bigger budgets and more experience.

There are still things that can likely be done, however. Limits can be put on corporations and on what goes into the models that are distributed publicly, for example. A lot will probably still be done illegally, but that doesn't mean nothing can be done.