Cyber security tools vs people is the debate most organisations avoid. Instead, we buy reassurance through spectacle: wall-sized screens, live maps, and endless dashboards glowing blue in darkened rooms. The implicit promise is simple: if we can see everything, we must be in control. Today, that assumption is dangerously wrong.
I was reminded of this when I read an article by Glenn S. Gerstell, describing the National Security Operations Center at the heart of the NSA. A vast, windowless room staffed around the clock. Senior officers surrounded by real-time intelligence feeds. Missile launches tracked. Aircraft movements monitored. Threats mapped and escalated with military precision.
It sounds like the ultimate command centre. And yet, even here, there are threats it cannot stop.
A cyber attack on a power grid is detected only once it is already under way. A hypersonic missile may be visible seconds before impact, if at all. Awareness does not equal prevention. Visibility does not equal control.
That same mistake is repeated daily inside corporate security programmes.
The comfort of the silver bullet
At its core, cyber security tools vs people is not a technology problem, but a leadership one.
Technology is often treated as a cure rather than a component. Buy the right platform, deploy the right tool, and risk is assumed to dissolve. I have heard this framed as a "single pane of glass", a "one stop shop", or the latest must-have capability that will finally close the gaps.
These are comforting stories. They are also misleading.
If technology alone solved the problem, breaches would now be rare. Instead, they are routine. Predictable. Almost expected. Not because organisations are careless, but because the problem has been framed incorrectly.
Cyber crime evolves faster than procurement cycles, faster than training programmes, and often faster than organisational understanding. Tools lag behind behaviour. Attackers adapt more quickly than platforms.
This is why the cyber security tools vs people argument keeps resurfacing whenever confidence starts to erode.
We are not short of innovation. We land machines on Mars. We build autonomous systems. We push artificial intelligence forward at breathtaking speed. The issue is not capability. It is misplaced confidence. When judgement is missing, even the most advanced tools create noise rather than control.
How security became noisier, not safer
Information security went through its own gold rush, much like Y2K. Start-ups multiplied. Enterprise vendors expanded. Each new product promised to be best in class. Each new platform claimed to simplify everything that came before it.
The result is familiar to anyone working inside a modern security team.
- Tool sprawl.
- Alert fatigue.
- Millions of events per hour.
- More platforms deployed to make sense of the output of the ones before them.
- Layers of complexity added in the name of control.
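The scale problem behind alert fatigue can be made concrete with a toy calculation. The figures below are illustrative assumptions, not measurements from any real environment:

```python
# Toy illustration of alert overload, using made-up but plausible numbers.
events_per_hour = 1_000_000   # raw events ingested by the platform
alert_rate = 0.001            # assume 0.1% of events raise an alert
true_threat_rate = 0.000001   # assume 1 in a million events is a real threat

alerts = events_per_hour * alert_rate               # alerts analysts must triage
real_threats = events_per_hour * true_threat_rate   # signals that actually matter

print(f"Alerts per hour: {alerts:.0f}")
print(f"Real threats per hour: {real_threats:.0f}")
print(f"Alerts per real threat: {alerts / real_threats:.0f}")
```

Even with these generous assumptions, roughly a thousand alerts compete for attention against a single genuine threat every hour. Adding another platform changes the volume, not the ratio; only judgement about what matters changes the ratio.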
Regulation and best practice reinforced the trend. Buy this product. Implement that control. Prove it through tooling. Hire people just to keep the machinery running. The intent was sound. The execution often was not.
Security programmes became louder, heavier, and more expensive, without becoming proportionally safer.
What was lost was judgement.
Tools don't fail. Context does.
Most security platforms are not bad. Many are genuinely useful in the right environment. The problem is how they are positioned and how they are relied upon.
- Tools do not understand your business.
- They do not understand trade-offs.
- They do not understand which risks matter now and which can wait.
And they certainly do not understand people.
Attackers do. That is why social engineering remains so effective, despite what looks like adequate training. Phishing emails, urgent payment requests, authority-based manipulation - the list is endless. These attacks bypass even the most sophisticated defences by exploiting the one variable technology cannot predict: human behaviour. As much as I loathe the "human firewall" phrase, the idea behind it is sound.
This is also why the persistent myth equating hackers with criminals is so damaging. There are fundamental differences.
Hackers expose assumptions. They show you where your mental model breaks. They think like adversaries so you do not have to learn the hard way. Criminals exploit for gain. Hackers expose for improvement. Conflating the two has left many organisations blind to their own weaknesses.
If you remove the red team mindset, all you are left with is hope and dashboards.
Senior judgement over flashing lights
Many modern platforms suffer from the same failure mode. They are complex to configure, overwhelming to operate, and prone to alert overload. Teams drown in noise until the one signal that matters is lost.
You do not need another flashing light to tell you something might be wrong.
You need to understand how an adversary would approach your environment, where your real exposure sits, and which weaknesses are worth addressing now versus later. A vulnerability cannot be exploited if it does not exist, and better still if it is never introduced in the first place.
That kind of clarity does not come from tooling alone. It comes from experience. From pattern recognition. From having seen the consequences of decisions made years earlier and understanding how today's choices will age under pressure.
Organisations stuck in the cyber security tools vs people dilemma rarely suffer from a lack of data; they suffer from a lack of clarity.
Final thoughts
This is where Phenomlab fits.
Not as another platform. Not as another layer of noise. But as a source of calm, senior-level judgement when technology and security decisions start to carry weight.
Phenomlab works with leadership teams who already have tools, teams, and data, but lack clear answers. Where visibility exists, yet confidence does not. Where the question is no longer "what should we buy?" but "what actually matters now?"
- Flashing lights look impressive.
- Clear thinking is what reduces risk.