TLDR: AI is being used to steal from us and make our lives worse, all while charging us and spying on us

I have an idea why AI is the next big thing in tech. It is, sadly, dystopian.

If I remember correctly, the last 'big' thing in tech was blockchains, which in short are cynically terrible databases. There was a lot of hype around them because some people got very rich (the bitcoin price still hovers around 60K), and they allowed a bunch of rich people to avoid taxes and aided all kinds of money laundering schemes. All pretty terrible, but at least the harm was generally restricted to the people who bought into the scheme.

Now, in 2024, AI is seemingly everywhere, with the evangelists thinking it will replace all our jobs and the preppers thinking it will come to kill us all. Like many people, I have played around with these freely available tools, and I must say that at first glance they seem to be impressive: they let me do the thinking part while the computer does most of the repetitive typing part. I quickly found out, however, that for anything more than simple problems they lack the specificity that my work as a programmer requires. They also tend to hallucinate when asked to generate longer sequences, which means I can't really trust the output. So for my particular use case, they are a nice toy that can be put to good use in certain circumstances, where well established patterns simply need to be repeated for a specific context.

I don't think that this is the real reason AI is being pushed into the mainstream, however. I think, as with most things humans do, it's all about wealth and power. AI is, again, a somewhat veiled wealth transfer and a means to centralize power and control (the only things people actually seem to care about and have an uncannily good understanding of). Let me elaborate...

Power and AI:

Despite AI only recently moving into public attention, the underlying ideas and methods were invented in the 1950s. Due to the very limited computational resources available anywhere on earth during that period, these inventions, while intellectually interesting, were constrained to showing that such systems could indeed statistically optimize a given target metric when given enough data to 'learn' from. Then there were about 60 years of slow progress, mostly inside university laboratories. Now that computers have grown exponentially more capable, the technology finally exists to put the ideas from the 50s into practice. Many useful systems have been built for a variety of applications; they do helpful things that would have been very hard to write 'traditional' programs for and make feasible things that would previously have been prohibitively expensive. And all of these systems could be trained by renting some high performance servers for a few days and then run on anything from a high end desktop to a mobile computer.

However... these are not the systems being proposed to the public; those are of an entirely different character. LLMs and friends are truly gargantuan. They have billions of parameters (size is measured in the number of parameters that define the final trained network), and precisely because they are so large they need absolutely insane amounts of carefully curated training data to be useful. This means that these systems need hardware that costs on the order of millions or tens of millions of euros to build and consumes staggering amounts of electricity. Obviously this is something only a few can afford, and this is where the power dynamics come into play. Sell a system to a customer once and you get one sale. Convince a customer that they need this terribly overengineered solution, which you must rent out to them, and you charge them for a lifetime: yet another instance of the current neocapitalist way of making money by collecting rent on things. The other thing is that as the conglomerates operate these LLMs, they get free rein over all the information that people 'provide' by interacting with the system, thereby collecting even more data that can be sold or otherwise exploited.
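To make the "only a few can afford this" point concrete, here is a rough back-of-the-envelope sketch. All the numbers in it (70 billion parameters, 2 bytes per parameter, 80 GB of memory per accelerator, 30,000 € per card) are illustrative assumptions I picked for the sake of the arithmetic, not figures from any vendor, but they show why merely holding one of these models in memory already calls for specialist hardware, before a single training run has even started.

```python
# Back-of-the-envelope estimate: memory and hardware needed just to *serve*
# a large language model. Every number below is an illustrative assumption.

params = 70e9              # assumed model size: 70 billion parameters
bytes_per_param = 2        # 16-bit weights
gpu_memory_gb = 80         # assumed memory of one high-end accelerator
gpu_price_eur = 30_000     # assumed price per accelerator

model_gb = params * bytes_per_param / 1e9      # ~140 GB of weights alone
gpus_needed = -(-model_gb // gpu_memory_gb)    # ceiling division
hardware_eur = gpus_needed * gpu_price_eur

print(f"Weights alone: {model_gb:.0f} GB")
print(f"Accelerators needed just to hold them: {gpus_needed:.0f}")
print(f"Hardware cost (inference only): ~{hardware_eur:,.0f} EUR")
# Training is far worse: gradients, optimizer state and activations multiply
# the memory footprint several times, and thousands of such cards run for weeks.
```

Even under these charitable assumptions, just running the model takes tens of thousands of euros of hardware; training one from scratch is what pushes the bill into the millions.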

Wealth and AI:

But wait, there is more... To top it all off, there is one more brilliant scheme at the core of all this, and that is the way in which they totally circumvent copyright protections. See, copyright was used by asocial rich people to profit from the work of others, as long as they could prove that they had the 'original' idea (a very tricky topic indeed, and essentially impossible). This worked reasonably well in a world where publishing something meant printing presses and lots of ink and paper, but in the digital age, where publishing can cost fractions of a cent per copy, the corporations suddenly stumble over the very laws they created to be able to exploit other people's ideas. Because those laws were written to benefit the corporations, they were hard to circumvent. Until now, that is.

Generative AI also needs to be trained, and for that new, good source material is needed. That source material is, of course, under copyright protection. But because only the wealthy can afford to build LLMs, and it is near impossible to prove that an LLM was trained on your particular blog post, all the hours of work that went into creating that source material are used without consent or licence. A big wealth transfer to the wealthy from, well, everyone who can't afford to track down the transgression or pay for the obviously very expensive legal proceedings.

Sadly, the reason AI is so popular is that it allows the wealthy to surveil us and sell our data while repeating our own words back to us, all while enabling scammers and grifters to make the internet a much worse place for the rest of us. And then they sell us an AI to try and wade through the mountains of garbage for us: a solution that generates its own problems to solve. A capitalist dream. All while polluting our pale blue dot...

Capitalism makes everything worse.