LLMs and their long-term effects
Many predictions have been made about how AI will change our future. The revolution came, of course, with the introduction of transformer-based LLMs, but what is actually behind an LLM? As you read on, please bear in mind that I’m not an AI expert, just a seasoned programmer and science hobbyist.
That being said, I prefer to call the current rollout of AI tools by their more specific name, LLMs, because the level of intelligence in these tools is very limited. I will explain why, and hopefully convince you that LLMs are amazing assistants for a lot of human activities. I also hope to convince you that we are changing our business environments too hastily, expecting too much from LLMs, and that we’ll soon face a severe skill gap in our population because of these decisions.
Contents:
- The LLM, an analogy;
- A warning: we expect too much from AI;
- Programming is still a good hobby or career choice;
- Effects: AI is an excellent assistant for many jobs and activities;
- Effects: we need to rethink education, teamwork, security, privacy and perhaps even money, to better adapt to a world with AI;
- Effects: I propose we start a new form of collaboration, one that is better suited to working not just with people, but with person-AI pairs.
How to explain the LLM in simple terms
TLDR: I’m explaining here how an LLM works, with an analogy rather than code and math. I’m also pointing out that LLMs don’t need to look up answers on the web, the way humans do.
Condensed to its core essence, the Large Language Model is an enormous sieve matrix used to predict the correct response to any input you may give it. This is a very important observation:
The LLM is a prediction machine, not a thinking one.
What is thinking, if not prediction? you may quip. That’s a very deep topic, controversial and ambiguous, so we’ll leave it for another day.
One way to visualize an LLM is to imagine a very complex, multi-layered sieve, like a Galton board, that takes your input, usually a piece of text, and makes each letter fall through a series of complex domino-like mechanisms that influence each other, producing in the end a new string of letters. That’s the LLM’s response to your input.

There are a few more details to this picture. One is that sometimes the letters/balls fall down through some path and then get fed back into the upper layers, repeating the process through various paths until a certain condition is met. Another is that it isn’t really each letter that rolls down into the matrix: sometimes single letters do, but often a whole word or phrase is treated as one unit. That unit is called a token. An LLM collects enough data to create a vocabulary of such tokens, a set of linguistic chunks that it can then use to store and compose any piece of text.
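To make the token idea concrete, here’s a minimal Python sketch of a toy tokenizer that greedily matches the longest known chunk. The vocabulary is invented for illustration; real tokenizers learn theirs from data, with algorithms such as byte-pair encoding.

```python
# Toy tokenizer: greedily match the longest chunk found in a tiny,
# invented vocabulary, falling back to single characters.
# Real LLM tokenizers learn their vocabulary from data (e.g. byte-pair
# encoding); this only illustrates the idea of "linguistic chunks".

TOY_VOCAB = {"program", "ming", "is", "fun", " "}

def tokenize(text: str) -> list[str]:
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible match first, shrinking until one fits;
        # a single character always fits, so the loop always advances.
        for j in range(len(text), i, -1):
            chunk = text[i:j]
            if chunk in TOY_VOCAB or len(chunk) == 1:
                tokens.append(chunk)
                i = j
                break
    return tokens

print(tokenize("programming is fun"))
# ['program', 'ming', ' ', 'is', ' ', 'fun']
```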
And of course, today’s LLMs roll out with larger and larger context windows, which is to say they can follow a pretty long conversation. Each previous input (and its answer) reconfigures the sieve board in a small way, keeping the concepts and concerns “alive” for the next input and answer.
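Mechanically, “keeping the conversation alive” usually means the chat system re-sends the whole history as one long prompt on every turn, until it no longer fits in the context window. A rough sketch of that follows; the role labels and the word-count limit are my own illustrative assumptions, since every provider formats and counts this differently.

```python
# Sketch: the conversation so far is concatenated into one prompt and fed
# back to the model on every turn. Formatting and limits are illustrative.

MAX_CONTEXT_TOKENS = 4096  # assumed window size, for illustration

def build_prompt(history: list[tuple[str, str]], new_message: str) -> str:
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"user: {new_message}")
    prompt = "\n".join(lines)
    # Crude size check: real systems count actual tokens, and drop or
    # summarize the oldest turns once the window overflows.
    if len(prompt.split()) > MAX_CONTEXT_TOKENS:
        raise ValueError("conversation no longer fits the context window")
    return prompt

history = [("user", "What is a Galton board?"),
           ("assistant", "A board of pegs that balls bounce through...")]
print(build_prompt(history, "And how does that relate to LLMs?"))
```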
Another interesting aspect is that, whatever your input, there is no single perfect match, so the LLM will choose one random “close enough” match instead. The LLM doesn’t give you the same answer twice, even if you ask the same question within 10 seconds, using copy-paste. Try it: ask ChatGPT “What is the pair representation in AlphaFold?” several times, either in the same thread or in a new thread every time. You’ll get slightly different responses. This implies that the LLM has a random selector for when several combinations of internal elements fit within a marginal distance. That marginal distance is controlled by a sampling parameter, usually called temperature in LLMs; image generators expose similar knobs, such as Midjourney’s “chaos” setting.
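Here is a minimal sketch of that random selector, assuming the standard softmax-with-temperature sampling; the token scores below are invented for illustration.

```python
import math
import random

# The model assigns a score (logit) to every candidate token; the next
# token is *sampled* from the resulting probabilities instead of always
# taking the single best one. Temperature widens or narrows the "marginal
# distance" within which alternatives remain likely.

def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
    scaled = {tok: score / temperature for tok, score in logits.items()}
    top = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(weights.values())
    probs = [w / total for w in weights.values()]
    return random.choices(list(weights), weights=probs, k=1)[0]

# Invented scores for three plausible continuations:
logits = {"jump": 2.1, "leap": 2.0, "fall": 0.3}
print([sample_next_token(logits) for _ in range(5)])
# e.g. ['leap', 'jump', 'jump', 'leap', 'jump'] -- different on each run
```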
Here’s a truly revolutionary aspect of the LLM:
A trained LLM does not go on the web to look for answers.
It can, and some do, but a trained LLM is perfectly capable of responding to any questions you might have while being completely offline. In other words, it’s not like a human assistant, who needs to go do some research online, read a few books, and come back with a written report.
The LLM has “hidden” in its matrix all the information it needs to give you a new poem, or a summary of all physics literature (try it!), or a piece of Python code for your new app, or converse with you on the risks and benefits of sky-diving.
To do all that, the LLM needs to somehow have stored all the information required, and indeed the largest models reportedly exceed 200 GB in size. However, this information is not “compressed” in the usual sense of the word. None of the text you read in their answers is present “as is” in those 200 GB. The relationships between various concepts are stored in such a way that, for almost any combination of letters in your input text, the balls will roll and roll through the sieve matrix until a correct combination of letters forms in the output, an answer that might as well have originated from your university professor (not really, but that’s one of the goals of LLMs in the next few years).
How does the LLM decide when the “correct combination of letters” is in the output? Well, remember I said it’s a prediction machine, not a thinking one. The matrix is so finely tuned that every question you may ask of it has a predefined answer, or rather a predefined path the balls can roll through, producing a limited set of answers, usually correct ones.
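A minimal sketch of that stopping rule: generation just loops, feeding each predicted token back into the context, until a special end-of-sequence token comes out (or a length limit is hit). The predict_next() function and its canned reply are invented stand-ins for the real trained matrix.

```python
# Generation loop: keep predicting the next token, feeding it back into
# the context, until the model predicts a special end-of-sequence token.
# predict_next() is a canned stand-in here; a real model scores every
# vocabulary token given the whole context and picks one.

END = "<eos>"
CANNED = ["The", "answer", "is", "42", END]  # invented reply
PROMPT_LEN = 5  # the example prompt below has 5 tokens

def predict_next(context: list[str]) -> str:
    return CANNED[min(len(context) - PROMPT_LEN, len(CANNED) - 1)]

def generate(prompt_tokens: list[str], max_new_tokens: int = 50) -> list[str]:
    context = list(prompt_tokens)
    new_tokens: list[str] = []
    while len(new_tokens) < max_new_tokens:
        token = predict_next(context)
        if token == END:          # the learned stop condition
            break
        new_tokens.append(token)
        context.append(token)     # each prediction shapes the next one
    return new_tokens

print(generate(["What", "is", "the", "answer", "?"]))
# ['The', 'answer', 'is', '42']
```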
My own limits
TLDR: I don’t understand all the depths of AI today, but I can see some pretty severe limitations. We’ll need a second AI revolution to arrive at AGI.
This is where my limited understanding stops. There is something else in the LLM architecture and algorithms that allows for very basic logical inference, which is required to assemble code and make certain predictions based on the information you give it in the prompt.
That logic is very basic, however: the LLM can’t really do math, or physics, or programming, not in the same way we do. If a math question has been asked on the internet and answered correctly by a human contributor, then most likely that content made its way into the training set of a modern LLM, and you can get the correct answer too. In actuality, it’s not the LLM that answered your math question; it’s that person’s answer to that very same question, or to one that’s similar enough.
The LLM is not capable of full logical inference, deduction, or 2D and 3D visualization (try asking it to draw two intersecting toruses, or tori), and perhaps lacks other aptitudes that we humans possess. LLMs have now exceeded 150 billion parameters, the tunable weights that loosely play the role of neural connections. A human brain has only 80 to 100 billion neurons, and yet the LLM can’t do much more than summarize a large body of text for you and answer questions that have already been answered. However, those hundreds of billions of digital weights can provide you with answers to a myriad of questions from all subjects and disciplines. No human being is capable of that feat, but we tend to be more adept at using limited knowledge in very creative ways.
Hasty decisions
TLDR: The massive move toward AI adoption, and the pulling of young minds out of fundamental programming, could hurt us in the long run. Plus, a little bit of my own experience as a programmer and why I think this art is so important.
We seem to be moving everyone to this side of the fence, working with LLMs, firing anyone who’s been doing regular, menial programming chores, and going so far as to encourage young people not to learn programming at all. I can only warn you of a terrible risk:
Inside the AI obsession lies the downfall of all tech.
With AI doing everything for us, we’ll stop working on anything fundamental, and focus instead on playing with complexities. This trend can lead to a generation that lacks any of the required skills to maintain and improve our digital infrastructure.
If I were to predict anything, it would be that in the year 2050 our systems will still run on Java 26 or ECMAScript 2030, the last versions to receive open source contributions from human programmers. Private source code will be even more difficult to maintain and evolve because, unlike open source, it lacks the human chatter around every little module and function. If I’m not mistaken, LLMs specialized for coding are trained a little on the code and documentation itself, but mostly on that human chatter around it all.
And yes, in the year 2050, banks will still run on COBOL.
I strongly believe that programming is a rare pearl, one of the very few human disciplines that significantly contributes to our intellectual maturity.
Struggling with programming is a much faster feedback loop than struggling with a human process, such as verbal arguments or paperwork. This leads to faster improvement on many aspects of the mind.
Anecdotal as it may be, I would like to share my own experience with programming and the effects it had on my personality, my mental acuity, and my social skills. Feel free to skip over this chapter if not interested.
When I was young, still in high school, I could only experiment with programming whenever I visited a friend in my neighborhood who had a PC. We were very excited about code, about the machine revolution, dreaming about AI until sunrise. Trying to code in Assembly, and later in C and Delphi, we often found ourselves stuck with “weird errors” at compilation, and my tendency was to think: “I’m sure this is correct, the compiler must be faulty!”
It took me a long time to smooth out that “I’m right! You’re broken.” smug confidence and accept that I frequently make a dozen mistakes on a single line of code before it finally compiles. I may exaggerate a little bit, but it took me years of fighting with the machine before my mind molded itself onto a new truth: “You know nothing, Jon Snow!”
I became a lot more open to different opinions, a lot more tolerant of people’s mistakes, and vastly more experienced and insightful, in my work and in my view of the world too. And yes, some people have the opposite reaction and become closed and fierce in their online arguments and code-style wars. There are other factors that shape a human being besides fighting with the machine, but for me programming was the best teacher in how to be a better human. I’ve learned more languages since, both machine and human kinds, I’ve learned ever more abstract concepts and widened my net to more than just programming, and all of it because I had an excellent start: one with no frameworks, no libraries, no AI coding assistants, no internet. It was just me and the machine.
And today we advise our children to forgo this art, this mental discipline, and stick with our age-old, loose verbal arguments in which the winner is the most persistent, not the most rational…
There is a ray of hope though: the LLM could one day be used as a mediator in the communication between two human beings.
Human creativity
The AI revolution, as most revolutions go, has now entered a relatively stable period, where the concepts and mechanisms are refined at smaller and smaller scales, in a fractal-like pattern, much like what happened in the auto industry. A car has an engine, a chassis, and four wheels; the rest is incremental technological improvement. We’ll be seeing more and more such incremental improvements to LLMs, things like better fine-tuning, prefix tuning, adapters, low-rank adaptations (LoRA), and more. However, the foundation has already been set. And this foundation is limited to producing combinations of existing art.
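To make one of those refinements concrete, here’s a minimal numpy sketch of the low-rank adaptation idea, with toy sizes of my own choosing: the big pretrained matrix stays frozen, and only two small matrices are trained.

```python
import numpy as np

# Low-rank adaptation (LoRA), sketched: instead of retraining a huge
# frozen weight matrix W, train two small matrices A and B whose product
# is added to W's output. Sizes below are toy values for illustration.

d, r = 1024, 8                           # model width, adaptation rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # pretrained weights: frozen
A = rng.standard_normal((r, d)) * 0.01   # trainable, small
B = np.zeros((d, r))                     # trainable; starting at zero means
                                         # the adapted model begins identical to W

def forward(x: np.ndarray) -> np.ndarray:
    return W @ x + B @ (A @ x)           # original path + low-rank correction

x = rng.standard_normal(d)
print(forward(x).shape)                  # (1024,)
# W holds d*d (~1M) values; A and B together only 2*r*d (~16K).
# Fine-tuning touches just that small fraction -- the whole point of LoRA.
```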
We are waiting for the next revolution that will permit the AI to evolve on its own beyond the human limitations. In the meantime, human creativity is still very much needed. And because human creativity is often enhanced by communication with another human, I think we’ll need to keep working on our collaboration tools and integrate LLMs in those tools.
For example, teams in various companies are using Slack and video conferences to coordinate work, but the actual individual tasks are done in isolation by each person. This creates an environment where the teamwork is split so much that we don’t really have active collaboration anymore. Instead of a nice canvas painted carefully, we have puzzle pieces that we repeatedly try to fit together. Some pieces are stronger than others, and some pieces keep falling off.
What’s changed? It used to be we could look at each other’s work and notice things, and help each other out. The results came out a lot smoother, and of higher quality. And we also grew more experienced, with a lot more depth in our understanding and our approach to life and work.
I propose that we use the LLMs to enhance our collaboration. In creating these AI models we’re learning many interesting things about communication and collaboration between agents. These lessons can be applied, perhaps in some adapted form, to human collaboration as well. As humans, our superpower is the ability to come together and work towards a common goal (even though it’s really difficult when we’re all comfortable and full of dreams).
Among the predictions about AI you’ll find one that says we’ll be able to generate ideas using AI and use them as a starting point for creative work. While I can see certain underground trends forming, I think these will create their own space in our social structures, completely separate from traditional creative works. Transformer-based generative AI can only combine existing objects to generate something, whereas human beings are creative in unexpected ways. You probably won’t be able to get an LLM to create a crocheted doggie all on its own, even if you prompt it with a very specific need, like “Please give me a creative idea for a crocheting club brochure”. You’ll have to specifically ask for a dog that looks crocheted, and even then it may not come up with the result you’re expecting. Still, if you have an original idea, you can probably obtain a close-enough visualization of it. You see, it works the other way around: to be a truly original creator, you have to be the originator of new ideas, and use the LLM to provide a quick materialization of that idea for validation, after which it’s still your job to refine it, or better yet, to create the masterpiece from scratch.
In light of all this, and thinking of how we could improve our own internal capabilities as humans, I started working on a few collaboration tools, integrating the LLM advantage, paving the way for enhanced teamwork between human beings, and bringing more joy into the workplace.
Stay tuned for my next post detailing how I envision the future of work.