I think about AI a lot. Probably because I am embedded deeply in the online AI discourse. I was listening to a podcast over the weekend discussing Microsoft’s Gaming Division and their new application of an LLM trained on game play data. Their stated goal is to help game developers iterate through new game play ideas more quickly. I think most people can see where LLM tools will be a boon to some in the gaming industry, helping realize large open-world games more quickly by automating some of the more tedious tasks. But using LLMs to create fresh game play concepts and new intellectual property? If you know anything about how an LLM is trained and how it produces responses to prompts, then you know it’s essentially giving you an averaged-out answer: the continuation that was statistically most common in its training data. It’s constrained by rules, and while it might occasionally produce an unexpected or novel response, its outputs are largely geared toward regressing to the mean.
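To make that concrete, here is a minimal, made-up sketch of what “regressing to the mean” looks like at the token level. The tokens and scores below are invented purely for illustration; real models work over enormous vocabularies, but the decoding math is the same: softmax sampling at low temperature collapses onto whatever was most common in the training data.

```python
import numpy as np

# Hypothetical next-token scores an LLM might assign when asked for a game
# setting. Tokens and logits are made up for illustration only.
tokens = ["castle", "dungeon", "spaceship", "sentient teapot"]
logits = np.array([4.0, 3.5, 2.0, 0.5])  # higher = more frequent in training data

def sample_token(logits, temperature, rng):
    """Softmax sampling: lower temperature piles probability onto the likeliest token."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng(42)
for temp in (0.2, 1.0, 2.0):
    draws = [tokens[sample_token(logits, temp, rng)] for _ in range(1000)]
    print(f"temperature={temp}:", {t: draws.count(t) for t in tokens})
```

At temperature 0.2 the sampler picks “castle” nearly every time, and the genuinely odd idea almost never surfaces. Raising the temperature buys variety, but random variety, not the deliberate, goal-directed novelty a human designer brings.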
Last year I interviewed three candidate companies to perform our annual penetration test. Two of the three pitched offerings that relied heavily on automation, and their sales pitches made a very big deal of the AI and machine learning their products used. As I pressed for clarification, I found a real lack of transparency about what the AI was actually doing. In the end I determined that the products they were selling, as described, were more like extremely advanced vulnerability scans. Vulnerability scans can be extremely useful, but they aren’t the same thing as a pen test. So I opted for the third company, which provided a more traditional service: an experienced and certified pen tester applying their expertise to identify gaps in our security controls.
During the course of the pen test, the tester communicated openly with our personnel, which led to some great collaboration that wouldn’t have been possible with automated tooling. We also got a nice peek under the hood as they explained some of the pivots and course corrections they made while simulating an adversary who hit a wall. I’ve taken some courses on pen testing, so some of this was familiar to me, but I was impressed with the ingenuity and outside-the-box thinking on display. I found myself reminded of a quote from my favorite movie, The Matrix. Midway through the film, Morpheus explains the nature of The Matrix to Neo, including the programs that guard it, The Agents. He makes a point about the nature of The Agents and their inflexibility: “I've seen an agent punch through a concrete wall; men have emptied entire clips at them and hit nothing but air; yet, their strength, and their speed, are still based in a world that is built on rules. Because of that, they will never be as strong, or as fast, as *you* can be.” The Agents only seem invulnerable because they have been programmed to be. But the rules of that programming are a constraint, a constraint that human beings don’t have. It’s a pivotal realization for the film’s protagonist, and I think it applies to AI and LLMs pretty seamlessly.
AI and LLM tools can certainly be additive. They can, like a lot of technological tools, augment our capabilities. A recent study from the MIT Sloan School of Management* demonstrated that when AI tools were given to highly skilled workers, in this case software programmers, the effect was a net positive increase in productivity. Software development seems like a pretty natural fit for these types of tools. I’ve certainly used them on many occasions to create basic scripts in Bash, Python, and PowerShell. But these are highly skilled workers who already know the right prompts to use to get the results they want, and who can tell when the output is bad.
Some people on social media have suggested that AI tools can be handed to anyone of any education level and will instantly make them capable of performing almost any highly skilled technical role. Quite the opposite, actually. Two recent studies, one from Microsoft**, found that heavier reliance on AI tools correlated strongly with eroded critical thinking skills and reduced cognitive effort***. These tools may wind up doing long-term damage to our pool of skilled workers. I am not convinced we’ll ever develop a sufficiently advanced model that can actually replace workers in many fields. So where this could end up is a situation where children and students become so dependent on these tools that we face a shortage of skilled and highly skilled workers over the next decade.
Hopefully a decade from now I can still find a good pen tester who can think outside the box. Because that’s something even the very best “AI” powered vulnerability scanner will never be capable of. “You have to let it all go, Neo.”
*https://mitsloan.mit.edu/ideas-made-to-matter/how-generative-ai-affects-highly-skilled-workers
***https://phys.org/news/2025-01-ai-linked-eroding-critical-skills.html