(Image courtesy of Google Deepmind on Pexels)
The current trajectory of technology is genuinely frightening, and we should be more concerned about it.
Artificial Intelligence (AI)
It seems like you blink, and the next thing you know, a new technological fad has sprung up completely out of nowhere. That was the case with software assistants in the early 2000s (think Microsoft Office's Clippy), with NFTs in late 2020, and now it's artificial intelligence.
In the current internet zeitgeist, AI is almost a buzzword for technology that cuts corners in work. Take Google's AI, Gemini, as a recent example: instead of looking through different sources yourself, the AI summarizes the information for you.
Another major example is generative AI, with the likes of ChatGPT or Midjourney: services that can generate text or images in a span of a few seconds.
This is, however, not without controversy. When most people talk about AI, they think of the generative kind: the tools that have threatened the livelihoods of animators, artists, writers, and creative people in general.
It's one of the major reasons people can't stand AI: these models are trained on other people's hard work, typically without their consent, and then horrifically mangle that work in their output.
This also isn’t to mention the environmental impacts of just using the software in the first place.
Using OpenAI's ChatGPT as a prime example, entering a single question into the prompt box has been estimated to use the equivalent of an average store-bought water bottle: about 16.9 ounces.
On its own, that doesn't sound so bad… until you realize these aren't just websites. The models run in data centers that draw copious amounts of electricity and throw off copious amounts of heat, so water has to be used to cool those data centers down.
If millions of people are using ChatGPT over a 24-hour period (as they are), and each query costs roughly one 17-ounce bottle, the water spent cooling the data centers runs into the millions of ounces, hundreds of thousands of gallons, every single day. That is a lot of water just to cool a system down after answering one question from each device accessing the site.
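To make that concrete, here is a back-of-envelope calculation using the figures above. Both inputs are assumptions (the article's per-query estimate and a placeholder of one million queries per day), not measured data from any provider:

```python
# Back-of-envelope estimate of daily cooling-water use.
# Both figures below are assumptions for illustration only.
OUNCES_PER_QUERY = 16.9      # ~one store-bought water bottle per prompt
QUERIES_PER_DAY = 1_000_000  # hypothetical daily traffic figure

total_ounces = OUNCES_PER_QUERY * QUERIES_PER_DAY
total_gallons = total_ounces / 128  # 128 fluid ounces per US gallon

print(f"{total_ounces:,.0f} ounces ≈ {total_gallons:,.0f} gallons per day")
# → 16,900,000 ounces ≈ 132,031 gallons per day
```

Even at just one million queries a day, the assumed per-query cost compounds into six figures of gallons; real traffic is reported to be far higher.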
So, AI actively takes the work and data of other people without their explicit permission or consent, AND it's terrible for the environment. But at least it's functioning software, right?
It’s not, half the time.
More often than not, you'll find these AI chatbots or search functions giving out blatantly wrong information. It's almost become a contest to see how messed up the information the AI spits back at you can get.
Ironically enough, Google’s very own AI overviews are one of the most prominent examples of incorrect search results.
This is because Google's AI, Gemini, doesn't actually evaluate sources across the web. All it can really do is read them. It can't pick up on cues in the source itself, like whether the author is joking or whether the site is credible.
Typically, these AI-powered search functions rely on something called "retrieval-augmented generation," or RAG: the system retrieves existing text from the web and uses it to compose an answer.
Think of it this way: say you look something up, and the top result for your question happens to be a Reddit post. Gemini will hand you the answers from the Reddit post rather than actually answering your question, because RAG matches words and retrieves text; it doesn't judge whether the source is serious or correct.
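A deliberately simplified sketch of that failure mode is below. This is not how Gemini actually works (real systems use neural embeddings and a language model rather than word overlap), but the core problem is the same: the answer is built from whatever text scores highest, with no judgment of whether that text is credible.

```python
# Toy sketch of retrieval-augmented generation (RAG).
# Retrieval here is naive word overlap; "generation" just parrots
# the retrieved source, with no check on whether it's a joke.

def retrieve(query, documents):
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def generate_answer(query, documents):
    best = retrieve(query, documents)
    return f"Based on a source I found: {best}"

docs = [
    "Geologists say you should eat at least one small rock per day.",  # joke post
    "A balanced diet includes fruits, vegetables, and whole grains.",
]
print(generate_answer("how many rocks should I eat per day", docs))
# → Based on a source I found: Geologists say you should eat at least one small rock per day.
```

The joke post wins simply because it shares more words with the question; nothing in the pipeline asks whether the retrieved text is true.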
So, how does AI function, then? How does the software that's now built into essentially every single app even work?
Simply put, by data mining. Or data scraping. Or data harvesting. There are many ways to describe these methods, but they all mean relatively the same thing:
They gather data from various users, whether they use the AI in their program or not. Sometimes, companies will use AI in order to obtain this data from users. This is typically then used for targeted advertising, or to train various AI services and technologies.
And guess what? Most of the time, the companies that do this don't properly disclose that the data is even being harvested. Or they use the tried-and-true method of burying it in the privacy policy and terms of service. Yes, they technically have to disclose that user data is being used to train AI, but that doesn't mean it's disclosed properly.
A prominent example of this is Meta, the corporation behind Facebook and Instagram. Meta openly admits that it not only uses AI for marketing purposes, but also scrapes "public" data posted on its platforms to fuel its own AI software, like image generation through Meta AI.
Being public does not equal consent to take other people's data. That seems like a point Meta doesn't understand…
Except they do understand it. They just make the opt-out process as difficult to navigate as possible, so that most users never manage to keep their data out of AI training.
Anti-Intellectualism
Along with the rise of the improper use of AI in spaces like work and school, there have been questions about how AI is affecting intelligence.
Several studies have been conducted about how AI could possibly be making people, put in the most blunt way possible, stupid.
One study, conducted by Michael Gerlich and published in the MDPI journal Societies, examined how AI usage impacts cognitive and critical thinking skills across different age groups.
“The analysis revealed a highly significant effect (p < 0.001), indicating that increased reliance on AI tools is associated with reduced critical thinking abilities. These findings align with theories of cognitive offloading, where the automation of analytical tasks reduces the need for independent reasoning. The residual variance suggests the influence of additional factors, such as educational background and cognitive engagement, on critical thinking. This underscores the need for strategies that balance the benefits of AI integration with the development of independent analytical skills, particularly in educational and organisational settings.”
“Participants with advanced education levels and those in managerial roles exhibited higher levels of deep thinking, likely due to greater exposure to cognitively demanding tasks. Conversely, gender did not significantly influence deep thinking activities, suggesting that other factors may play a more prominent role. These findings underscore the interplay between demographic variables and cognitive engagement, offering actionable insights for educational and occupational strategies aimed at fostering critical thinking.”
Education level and age seem to play a major role in AI usage, with younger users utilizing AI more than older participants in the study. However, that usage comes at the expense of people's ability to think critically about, well, anything.
Another study reported on how AI almost impacts one’s own self-confidence in their critical thinking skills.
"Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving." That came from the researchers themselves. And where was it reported from?
Microsoft.
Yes, one of the big corporations behind the rapid AI advancements has had its own researchers report on how AI affects critical thinking skills. It's quite telling how detrimental AI can be to one's own brain.
But… a weakened mind isn't useless to everyone. After all, minds are not only easy to deteriorate, but also easy to manipulate. And what political ideology thrives off of the uneducated?
Fascism
“Trying to define ‘fascism’ is like trying to nail jelly to the wall.”
Those are the words of Ian Kershaw in his book To Hell and Back. There is truth in them: fascism varies between leaders in how its beliefs are executed. However, certain methods are common to most, if not all, fascist dictators. These methods are categorized as "centralized, totalitarian governance, strong regimentation of the economy and society, and repression of criticism or opposition."
So, what does this have to do with the current state of technology?
Online safety has been a concern throughout internet history. It's basic internet etiquette: never share your personal information online. Yet now, apps, corporations, and even governments are requiring your personal information just for access. Not to mention, they're dictating how certain media on the internet can be described.
This is in line with the concerning rise of conservatism, especially in the modern day internet, where everything that isn’t pure or Christian is “pornographic” and “a threat to children everywhere.”
We've seen this with the likes of the U.K.'s Online Safety Act, Collective Shout's efforts in Australia, and even the U.S.'s very own KOSA (the Kids Online Safety Act).
This extremist way of policing and controlling the internet is what people have been labeling "technofascism," or "techno-populism" if you're wary of attaching fascism to technology that way.
The Wikipedia page on techno-populism describes it as "either a populism in favor of technocracy or a populism concerning certain technology – usually information technology – or any populist ideology conversed using digital media."
Technofascism isn't even a new concept. The term has been around since the 1990s, when the modern internet was very much in its infancy.
The term was coined in response to a magazine article in Upside. Michael Malone, one of the writers behind that article, later walked the original statement back, though he warned that the tech industry (especially Silicon Valley) could easily "fall into technofascism."
So, what is "technofascism," exactly?
Again, fascism is hard to define. But technofascism has been described as "a form of fascism that uses technology to meet its ends."
The Connection And Why It Matters
As mentioned before, the common tactics of fascism boil down to oppressive and abusive uses of power. Those tactics have definitely leaked into the modern day.
Ever since the inception of the internet, everything from social media to the news has been censored by various parties, whether hackers or outside organizations.
Of course, different countries have different laws regarding internet usage. Places like Canada and Iceland have little to no censorship, while China has what's known as the "Great Firewall," which largely prevents Chinese citizens from accessing the internet beyond state-approved content.
However, this is about more than internet censorship; it's also about constant surveillance.
Do you know how many websites and apps require location to be on while you use them? Or how about the permissions that get quietly turned on anyway?
One example of this is Instagram. One of its latest updates added a map where people can see their own and others' locations.
This sparked pushback from Instagram users, as they felt this could open up possibilities of users getting stalked or even harassed in real life. All it would take is for one person to follow someone just to see where they lived.
Not to mention, loads of apps require location services, some of which aren't even necessary for the app to function.
Another thing of note: data is sacred these days. It's how most of these websites make their revenue. They take your data, whether it's search history, activity on the app, or even purchases, and sell it to advertisers.
On top of that, we live in the age of AI, which depends on the data of others, whether those people want their data used for it or not. It doesn't matter. We already have corporate giants built solely to scrape the internet for AI training data.
Which is why we need to be more mindful and wary of how our personal data is being used. Opt out of data collection meant for AI training, stop using ChatGPT, find ways to hide your identity online, practice basic internet safety: do anything you can to stay safe online and avoid falling victim to this big wave of technofascism.
Because this is all just a numbers game, and a very bad one at that. Don't play it.