
Artificial Intelligence: What is the New Normal?



The following piece is from a Purposeful Prose guest writer publishing under the pen name Chartres Royal. We sincerely thank them for lending us their words and insights on AI. It was a pleasure to work with them, and we look forward to further collaboration.


When I was a kid, artificial intelligence seemed like a pipe dream. Sure, we had computers that could do things if you told them to, but could a machine think like a human, act like a human, or pose as a human? Nah. That’s not real. Is it? Surprisingly, yes. According to Stanford University, OpenAI’s GPT-4.5 model has passed the Turing test. What does this mean for the rest of us? Like it or not, it looks like AI is here to stay. Every website, app, and company, it seems, has been training and building its own model to compete with the others.

 

Is Artificial Intelligence, at its core, a bad thing? Personally, I don’t think so. A computer that can handle tedious or complex tasks for you is something many people could use, not just me. Even now, AI is being integrated into devices to make everyday tasks easier. Can you imagine a world where all you have to do is open your phone and tell your lawn mower to go out and cut the grass for you? That’s now. There are numerous devices on the market using AI to do exactly that.

 

Sounds good, yeah? Well, the issue with a lot of this is the same issue we have with so many automotive models. Sure, pretty much every car on the road runs on the same principles and performs the same functions. However, not all of their parts are interchangeable. You can’t just take pieces from one truck and put them in another. Why not? Because the company that owns that model wants you to use their parts only. Now, expand that into AI. You can’t just use any AI; you must use a certain one, and because it’s watching and learning all the time, it will know if you use something else. Suddenly, that “push button, go” mower got a lot more expensive and complicated to operate and maintain.

 

AI can be fun, though, even if unintentionally. Some time ago, my friends and I would have nights where we’d have an AI model make a picture or write a story, and we’d laugh at how absurd and silly the results were. Art of people with impossible proportions or extra limbs; stories that would start with a hedgehog and his friend going to the store and, somehow, end with John Wayne. These things were funny because the systems were so flawed. That really isn’t the case anymore, though, is it? Oh, sure, there are still models making silly things, but they’re getting better. Closer to real, closer to human. That’s when it stops being funny and starts getting concerning.

 

PART ONE: They Paved Paradise

 

So many discussions have already been had about this, but I’d be a terrible writer if I didn’t at least start here: AI isn’t good for the environment. As it stands right now, the amount of water and electricity these models consume in order to operate is intense. How intense, you ask? That’s a great question. I wish I could tell you. Most of the companies that operate these models aren’t required to report those numbers, so, of course, they don’t.

 

            According to the United States Government Accountability Office: “Generative AI uses significant energy and water resources, but companies are generally not reporting details of these uses. Most estimates of environmental effects of generative AI technologies have focused on quantifying the energy consumed, and carbon emissions associated with generating that energy, required to train the generative AI model. Estimates of water consumption by generative AI are limited. Generative AI is expected to be a driving force for data center demand, but what portion of data center electricity consumption is related to generative AI is unclear. According to the International Energy Agency, U.S. data center electricity consumption was approximately 4 percent of U.S. electricity demand in 2022 and could be 6 percent of demand in 2026.”

 

Up to 6% of total US electricity demand by 2026. That may not sound like much, but a small percentage of an enormous total is still enormous: just 1% of the US population is approximately 3,467,909 people, according to usafacts.org.

That’s a pretty big number, and that’s just people. Now apply the same logic to the entire country’s electricity supply.
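
To put rough numbers on it, here’s a back-of-the-envelope sketch. The 4% and 6% shares come from the IEA estimate quoted above; the total-demand and per-home figures are my own round-number assumptions, not from the GAO report.

```python
# Back-of-the-envelope scale check (all ASSUMED values are labeled).
US_DEMAND_TWH = 4_000   # ASSUMED: total annual US electricity demand, in TWh
KWH_PER_HOME = 10_500   # ASSUMED: average annual household electricity use, kWh

for year, share in [(2022, 0.04), (2026, 0.06)]:
    twh = US_DEMAND_TWH * share          # data centers' slice of demand
    homes = twh * 1e9 / KWH_PER_HOME     # 1 TWh = 1 billion kWh
    print(f"{year}: {share:.0%} of demand ≈ {twh:,.0f} TWh, "
          f"roughly the annual usage of {homes:,.0f} homes")
```

On those assumptions, the 2026 figure works out to the electricity for more than twenty million homes.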

 

Now, I have a small background in computer repair and IT. I’ve worked on ordinary data centers and servers before. I know they draw a lot of power and put out a lot of heat just doing their basic work; the cooling systems and facilities that house that kind of tech run hard all day long. AI demands more of a system than your average server or data center. That’s more energy, more cooling, more work in general. Usually, that just means we need to innovate: make the computers use less power, run cooler, expel heat better. But that innovation isn’t coming fast enough for the tech companies and businesses that need their AI model to work NOW. So, they resort to keeping the environmental groups off their backs with “carbon offsets.”

 

            According to Chapter Zero: “Carbon offsetting is a process that involves a reduction in, or removal of, carbon dioxide or other greenhouse gas emissions from the atmosphere in order to compensate for emissions made elsewhere. Carbon offsetting generally involves companies paying other entities to reduce carbon emissions that they cannot currently reduce themselves. The company may then count the emissions reductions they have paid for towards their own climate targets.”

 

That all sounds pretty good, right? An AI company compensates for the resources it’s using by buying some carbon credits, and then somebody plants some trees or something. No worries. Except... maybe some worries.

 

From The Australia Institute: “Our research found that ‘avoided deforestation’ projects, which make up 1 in 5 of all carbon credits in Australia, do not represent genuine abatement. In most cases, credits were issued for protecting areas that were never going to be cleared. That’s like reducing tobacco use by paying non-smokers not to smoke.”

 

Now, that seems more like the corporate mindset we all know, doesn’t it? “Throw money at it until it stops making noise.” Of course, not all AI companies are like this. There must be a few doing things right: working with sustainable energy and offsetting whatever energy they do use to stay as close to neutral as possible. I have no reason to doubt that some are, but it certainly isn’t all of them, and we may never know for certain one way or the other because, again, this isn’t something they’re compelled to report. Another mystery for the pile.
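
To see why that matters, here’s the accounting problem in miniature. It’s a toy sketch with invented numbers; the 20% “genuine” figure below is purely illustrative, not a claim about any real program.

```python
# Toy illustration of offset accounting (all numbers invented).
# "Reported" emissions subtract every credit purchased; "actual" emissions
# only subtract the credits that represent genuine abatement.

def net_emissions(gross_tonnes: float, credits_tonnes: float,
                  genuine_fraction: float) -> tuple[float, float]:
    """Return (reported, actual) net CO2 in tonnes."""
    reported = gross_tonnes - credits_tonnes
    actual = gross_tonnes - credits_tonnes * genuine_fraction
    return reported, actual

# A data-center operator emits 100,000 t and buys 100,000 t of credits,
# but suppose only 20% of those credits reflect real, additional abatement.
reported, actual = net_emissions(100_000, 100_000, genuine_fraction=0.2)
print(f"reported net: {reported:,.0f} t   actual net: {actual:,.0f} t")
# reported net: 0 t   actual net: 80,000 t
```

On paper, the operator reports net zero; in the atmosphere, 80,000 tonnes are still there.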

 

PART TWO: Wait, You’re Not Richard Bachman!


How does AI learn? It’s a learning model, yes? So, how does it learn? That’s simple, really. It learns much the way we all do: when exposed to certain things, it explores them, takes them in, learns them. As children, we learned colors by being exposed to them, by being told their names while looking at them. Slowly, we make the connection that this big, bright color is “red,” and we have a concept of it. From there, we build on what “red” can represent and mean. Red can mean “hot,” it can mean “danger,” it can mean “ripe.” AI learns in a similar fashion: once exposed to enough information, it can start “learning” it and using the pieces of what it was “taught” to make whatever it’s asked to.
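
To make “learning by exposure” concrete, here’s a deliberately tiny sketch: a few lines of Python that learn which word tends to follow which in a scrap of text, then generate by replaying those associations. Real models are incomparably larger and more sophisticated, but the exposure-then-reuse loop is the same basic idea.

```python
# A toy "language model": it learns word-to-word associations purely from
# the text it's exposed to, then generates by replaying those associations.
import random
from collections import defaultdict

corpus = ("red means hot . red means danger . red means ripe . "
          "the hedgehog and his friend went to the store .")

# "Training": count which word has been seen following which.
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Generation": walk the learned associations, one word at a time.
random.seed(1)
word, output = "red", ["red"]
for _ in range(8):
    word = random.choice(follows[word])  # pick a word seen to follow this one
    output.append(word)
print(" ".join(output))
```

Everything the toy “knows” came from the text it was shown; show it different text and it will talk differently. That’s the whole trick, scaled down.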


So, does that mean AI goes to a school of some kind? Of some kind, yes. Often, the AI learns from information shown to it or, in the case of language models, from people talking to it. This helps the program understand more of what it’s expected to do and build on that. Sometimes, however, the data it’s learning from isn’t obtained entirely legally.


From The Authors Guild: “The Authors Guild and 17 authors filed a class-action suit against OpenAI in the Southern District of New York for copyright infringement of their works of fiction on behalf of a class of fiction writers whose works have been used to train GPT.”

 

Yeah, sometimes this happens. Have you ever been on a social media site that’s really excited for you to use its new AI software? You might want to check your settings. That machine might be learning from everything you’ve ever posted or messaged to anyone you know. It’s called “data scraping,” and it’s extremely common. A lot of sites cover themselves by simply putting it in the terms-of-service agreement you had to accept when you signed up. All you wanted to do was share silly cat pictures with your friends, and suddenly everything you ever said is being used to train a machine to talk like you.
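
Mechanically, scraping isn’t sophisticated. Here’s a minimal sketch using only Python’s standard library: fetch a page, throw away the markup, keep the visible text. The URL is a placeholder; real pipelines do this across millions of pages, and whether a given site permits it is a separate question entirely.

```python
# Minimal sketch of what "data scraping" means mechanically: fetch a page,
# strip the HTML markup, keep the visible text.
from html.parser import HTMLParser
from urllib.request import urlopen

class TextScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Called for every run of visible text between tags.
        if data.strip():
            self.chunks.append(data.strip())

html = urlopen("https://example.com").read().decode("utf-8")
scraper = TextScraper()
scraper.feed(html)
print(" ".join(scraper.chunks))  # the harvested text, ready to become training data
```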

 

It’s not limited to text posts, either. The pictures you share, the messages you send, even your selfies: all of it can be used to train an AI to imitate people better. A better machine to write your books, make your art, even talk to you. But remember who is at the helm of these machines. This isn’t the benevolent future of humans lounging in space while WALL-E cleans up for us. These AI creations, designed by humans to make things easier, will not be immune from advertisements and paywalls.

 

But why stop there? Why not use it to replace every “squeaky wheel” the system’s owners dislike? Who needs writers for movies anymore? We have AI! Scoring the movie? AI! Acting in the movie? A. I. Does that seem farfetched to you? Somehow too unbelievable? The Screen Actors Guild doesn’t think it’s that insane. They’ve been protesting and raising awareness about this problem for a long time now. Sure, when AI first started, it seemed silly, but it’s getting more real with every passing year. New models, new learning, one step closer to replacing human involvement, because AI doesn’t need to get paid, now, does it?

 

PART THREE: That Is Not My Beautiful Wife!


            Deepfakes.


Some of you just reacted to that; others are lost. Let’s put it out in the open. The definition of “deepfake,” according to Merriam-Webster, is: “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”

 

This is similar to what we talked about earlier with AI replacing actors. However, this technology can be used for more nefarious purposes as well. Deepfaking doesn’t just affect famous people. With enough information, you could deepfake pretty much anyone. Gathering the data needed may be difficult, but the fact that it isn’t impossible should be concerning enough.

 

Already, deepfake technology has been used in various scams. AI-generated likenesses of celebrities have been used to win the trust of unsuspecting shoppers and convince them to purchase products. The likenesses of international political figures have been used to peddle cryptocurrency.

 

Even if a platform has policies against it, without active monitoring and enforcement, those policies don’t mean anything. Many of these scammers get banned from apps like TikTok only to make a new account and keep going. For them, being removed is a minor setback; they’ll be back the next day with another deepfake ad and another account to scam from. Simply telling them they’re bad isn’t enough.

 

PART FOUR: Well, What Now?


That is the question, isn’t it? AI isn’t going away no matter what we say or do about it. The genie is out of the bottle, and there’s no going back. We need to work toward a future that makes living with this new part of our society safer, cleaner, and more efficient. That’s not a one-person job, though. It will take everyone doing what they can, together. Not just a call for things to be made better, but a demand.


If you see a protest about this stuff, support it. Sure, the companies behind this technology keep saying it’s safer, easier, and more accessible, but if it’s taking jobs and livelihoods from people, is it really making things better? How is that a better future? Push back against the idea that people can simply be replaced with AI programs. Human interaction is what makes human society work, so let’s work.

 
 
 