Ay!, A.I.
There has been a lot of noise around Artificial Intelligence since around November / December 2022, especially revolving around the new Generative Artificial Intelligence (abbreviated GenAI) technology that seems to have a mind of its own: it can chat almost like a human, generate images from a short text description, redraw photos as comics and even compose and generate music. All of these capabilities are powered by something called Large Language Models (LLMs), so it's no surprise these algorithms are also able to produce code from a short description of what we want the program to do, because that's what code is: a language.
A lot of people are scared A.I. is going to take their jobs, especially people in the knowledge industry (designers, writers, programmers...). If there's a machine that can produce a whole article based on a description of what we want to say and the writing style we are aiming for, where's the need for a writer? If the same machine can be trained to produce images based on a given description, where's the need for graphic designers? If the same machine can produce code and build an application based on a given prompt, why do we need software engineers? That's why I titled this post Ay!, A.I. See, ay is a Spanish expression used to say many things; depending on the context it can mean surprise, disappointment, pain, sadness or many other things. In this case, Latinos might read it as "oh sh*t... A.I. is coming". Non-Latinos, well, now you know what I was aiming for; even ChatGPT could understand it.
GenAI will take my job
Most of the concern around GenAI is that it will take a lot of people's jobs. If part of the staff can be replaced by a machine that doesn't charge a freelancer fee or a salary, then why not? Let's let AI take over and operate the business, right?
Well, not really. It's very naive to think this technology is, or will ever be, at a stage where it can replace a human at their job. Of course, with the Industrial Revolution and the advancements in robotics, a lot of the manual work was automated and made more efficient. Yes, people lost jobs to robots, but there was also demand for new roles. All machines wear out and, in the end, are built and programmed by humans, so there is also margin for errors, recalibration, supervision and quality control. While robots (and machines in general) can be extremely efficient, fast and accurate, they don't have a sense for quality: if a screw is too tight or too loose, the machine won't know. So there was a need for people to monitor the production lines and recalibrate the machines as needed. Humans did not disappear entirely; we just shifted to different roles.
Understanding this, the rest is really up to us. GenAI, even though it seems like magic and appears to have a brain of its own, what it really does, in a nutshell, is find patterns in the data: how entities relate to each other, and the probability that two entities (words, pixels, musical notes) go together, one after another. This means there are still chances for errors (hallucinations, in GenAI lingo), and there's a need for human eyes to spot those errors, correct them and recalibrate the model.
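To make that "probability of what comes next" idea concrete, here's a toy sketch in Python, using a made-up ten-word corpus. It's only the simplest possible version (a bigram count); real LLMs do something vastly more sophisticated over billions of tokens, but the core intuition of learning which entity tends to follow which is the same:

```python
from collections import Counter, defaultdict

# A made-up toy corpus; real models train on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another one (bigram counts).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word_probabilities(word):
    """Turn counts into probabilities: P(next word | current word)."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "cat" follows "the" twice, "mat" and "fish" once each,
# so "cat" gets the highest probability after "the".
print(next_word_probabilities("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

A model like this has no understanding of cats or mats; it only knows which word is statistically likely to come next, which is also why it can confidently produce something that is statistically plausible but factually wrong.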
It's a brave new world
The same thing that happened in the Industrial Revolution will happen now with the so-called GenAI Revolution. Those who refuse to adapt and learn new skills to fill the new roles, roles that, in some way, make sure the machines are doing a good job and recalibrate them when they're not, will be left behind by those who embrace the change and leverage these new technologies (machines or algorithms) as tools to perform better at their jobs.
Quality control, prompt engineering, proofreading and fact-checking are some of the skills that come to mind, not to mention more technical ones like data engineering, LLM training and data science. In my opinion, these are skills that will be in high demand to support and produce end-user applications based on GenAI.
Not everything will be about Artificial Intelligence; there are also other types of systems that don't require so much sophistication, but whose code-writing process can be supported by GenAI. Tools like Copilot and even ChatGPT can help you as a Software Engineer produce better code, learn new tools or kickstart a test suite faster and more efficiently. However, as an Engineer, you need to keep an eye on the output and adapt it as needed. LLMs can produce code, but it's up to us as Engineers to understand how to make it efficient and integrate it into the rest of the codebase so it doesn't look like a Frankenstein of multiple LLM outputs. Here, prompt engineering can come in handy, so we know how and what to ask the LLM to do.
What does this mean for Software Engineers?
If you believe GenAI will make you obsolete and take your job just because it can produce code, you're either very naive or a terrible Engineer. Coding is the easiest part of a Software Engineering role; we are more than code-writing machines, because Software Engineering is more than just coding. There are multiple dimensions we need to think about when we are working on a software solution.
Of course, entry-level roles could be affected by GenAI, in the sense that a model would generate the initial code and then it's more a matter of tuning it. More senior roles will still be needed to steer projects in the right technical direction; GenAI could be used to gather information, but the ultimate decision would have to be made by a human.
To me, what the whole GenAI revolution means is that, as Engineers, we need to:
- Learn how to leverage LLMs to accelerate our work and add value on top of that.
- Learn how to run and maintain GenAI platforms.
I would recommend doing both...
Most of the roles related to GenAI have to do with building software systems that rely on prompting LLMs to get answers to questions based on content fed into the models. Going that way would ensure we stay current and always have opportunities coming our way; on the other hand, there will be a saturation of people who specialize in exactly that: building LLM-backed applications.
I see an underserved sector in all this: there need to be people capable of setting up, maintaining and scaling the infrastructure where these models run. Yes, there are third-party services like OpenAI, Anthropic and Azure that offer endpoints to consume language models, but there are governments and other types of organizations that handle sensitive data and cannot upload it to a third-party cloud service. Hence, they must have their platforms on-premises, and this is where these roles will be useful and, I believe, well paid. Everyone is focusing on building apps, and that side of the market will be saturated; there's little knowledge on how to run and manage LLM platforms, and the scarcity of talent in this area will make it a highly valuable skill.
I might be wrong, but I'm focusing more on the second option. Even though it's not as fun as building apps, it's been a super interesting journey.