NetSec Breaking Apps Better than AppSec?

July 7, 2011

First let me define “NetSec” as a professional, full-scope network penetration tester hell-bent on punching holes in your organization’s network. I’ve come to an interesting conclusion recently after working with NetSec folks and discussing web application exploitation with them: often they are simply doing it better.

(Note: As part of a campaign to bring forward some of our older posts that we feel still benefit the community, we’ve added this article to our Best Of category that will periodically get tweeted out. Please mention it to me on Twitter or contact us if there are any other posts you feel we should include in this category. This post was previously categorized under Infosec Blogs/Podcasts. -@grecs)

Would you consider including a finding in a pen-test report titled something like “Cookie is missing the HttpOnly flag”? If you answered yes, you probably work in modern-day application security.
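For readers outside AppSec, that finding refers to a session cookie set without the HttpOnly attribute, which leaves the cookie readable by any script running in the page. A minimal sketch of the check a tester (or scanner) performs; the header values here are invented for illustration:

```python
# Hypothetical Set-Cookie headers as they might appear in a server response.
headers = [
    "SESSIONID=abc123; Path=/; Secure",            # missing HttpOnly -> finding
    "SESSIONID=abc123; Path=/; Secure; HttpOnly",  # flag present -> no finding
]

def missing_httponly(set_cookie: str) -> bool:
    """Return True if a Set-Cookie header lacks the HttpOnly attribute."""
    attrs = [part.strip().lower() for part in set_cookie.split(";")]
    return "httponly" not in attrs

for h in headers:
    print(h, "->", "FINDING" if missing_httponly(h) else "ok")
```

That is the whole finding: one boolean on one response header, which is exactly why it reads as noise to a tester whose only goal is getting in.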

It is almost a thing of beauty to watch the NetSec guys I know attack an application. Their mindset is different from ours in AppSec. They aren’t looking at all the components of the web application the way we AppSec guys are. To them it is just another potential opportunity to punch a hole in the network. They get tunnel vision, and it works.

Take the following scenario for example. During a penetration test, we find a weakness where the application misbehaves and allows us to access another user’s information. When we do, we find it holds data classified as Personally Identifiable Information (PII), but in reality it is data that two seconds on Google or some quality time with Maltego would uncover just the same.
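The weakness described above is what we would now file as an insecure direct object reference: the app trusts a client-supplied record identifier. As a rough, hypothetical sketch (the URL and parameter name are invented), a tester might simply enumerate neighboring IDs and see whether another user’s record comes back:

```python
# Hypothetical: the app serves records at /account?id=<n> and only checks
# that *a* user is logged in, not that the id belongs to that user.
BASE = "https://app.example.com/account"

def candidate_urls(own_id: int, window: int = 3) -> list[str]:
    """Build neighboring-id URLs a tester might request to probe for IDOR."""
    return [
        f"{BASE}?id={i}"
        for i in range(own_id - window, own_id + window + 1)
        if i != own_id  # skip our own record
    ]

print(candidate_urls(1042))
```

Each URL would then be requested with the tester’s own session; any response containing someone else’s data confirms the flaw.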

While that scenario could be considered slightly dangerous, the database is still out of our reach and client-side systems are still safe from us. Yet we would probably rate this finding high-risk and report it as such (PII and all).

Now consider the mentality of a network penetration tester going up against a web application. These folks are not concerned with the application’s design flaws unless it directly leads to something they can use to further their goal. The application is nothing more than a stepping-stone.

In the words of a full-scope penetration tester, [it is about] “finding something in the app that allows access to the network or a chink in the armor, information leakage, a credential to use, a directory listing or directory traversal, XSS that can be used for a client-side attack, admin bypass to get access to the app to get DB strings, and of course SQLI to extract tasty data, etc.”

My interpretation of that statement: “if it doesn’t allow me to get in, take away sensitive data, or exploit client-side systems, I’m not interested.”

I find it fascinating that, generally speaking, if I ask a fellow AppSec colleague about a new SQLi exploitation tool or some script written to help with some form of web exploitation, I get a shrug. If I fire up an IM session with a NetSec friend, they seem to know immediately what I’m talking about.

I propose that these folks are paying more attention to a smaller number of flaws, but only flaws that are critical in severity. NetSec seems to spend much more time focusing on various techniques for exploiting this subset of vulnerabilities.

Whether an assessment approach is better than a penetration test approach is not at all what this article is about. Argue amongst yourselves about the value of varying methodologies.

Bottom line: if you are an application security consultant stuck while trying to compromise an application, ask your network pen-test friends for help. The difference in perspective helps us break out of a narrowed field of vision and might lead to some serious 0wnage.


35 Responses to NetSec Breaking Apps Better than AppSec?

  1. novainfosec (@novainfosec) on July 7, 2011 at 10:48 am

    #NOVABLOGGER: NetSec Breaking Apps Better than AppSec? http://bit.ly/oqzQZU http://j.mp/nispblog

  2. grecs (@grecs) on July 7, 2011 at 10:48 am

    #NOVABLOGGER: NetSec Breaking Apps Better than AppSec? http://bit.ly/oqzQZU http://j.mp/nispblog

  3. grecs (@grecs) on July 7, 2011 at 11:26 am

    BLOGGED: NetSec Breaking Apps Better than AppSec? http://bit.ly/oqzQZU

  4. Ken Johnson (@cktricky) on July 7, 2011 at 10:02 pm

    “@novainfosec: #NOVABLOGGER: NetSec Breaking Apps Better than AppSec? http://t.co/7ChxUD6 http://t.co/xpA1avR”

  5. Rob Fuller (@mubix) on July 8, 2011 at 12:55 am

    Time to weigh in appsec and netsec guys (great article by @cktricky) http://bit.ly/pEZQqp

  6. Andrew Wilson on July 8, 2011 at 1:19 am

    While I certainly appreciate Chris’s perspective– I’d like to offer that this is merely a choice of the individual and not a NetSec vs. AppSec issue. To his points on how we write up missing security flags vs. using the app as the stepping stone– I’d like to offer that not having flags IS a chink in the armor. Coupled with XSS, it’s the means that allows me to retrieve a sessionId and potentially hijack another account. It might not be my first concern in how an app is built, but surely it’s a concern in how far I can reach due to another exploit. I have coupled informational-rated issues with medium-rated ones to fully compromise a box before. I’ve never done NetSec– I just want to break in.

    The converse relationship is also true. For a skillful NetSec guy, using the app as a stepping stone might entail little more than finding a single vulnerability. While that might be all they need to move on– that doesn’t mean it represents the full scope of risk to the application. I think of it like this: if I hire you to evaluate the security of my home and all you do is walk in the unlocked front door– that demonstrates a single weakness which has major ramifications. If the intent was to prove you can steal my TV– you win. But that does NOT mean an unlocked front door is the only way in, or even representative of the total risk to my home. If you stop there, and I believe myself safe due to now locking that door– I will be left with a false sense of confidence in my home.

    I do not mean that to say that is all a NetSec person does– only that there are trade-offs and dangers in the various approaches. FWIW– I think it’s sad when a tester cannot see past “HTTPOnly” flags being missing– and is not focused on compromising the database/box/what-have you. They are doing a disservice to their client and, in my opinion, themselves.

    -A

  7. cktricky on July 8, 2011 at 1:35 am

    Just to be clear, I completely agree that a total assessment of the application is more valuable than a black-box pen-test (with rare exceptions). But see, that is my point. I work in AppSec; of course I think that. The way I come at an application is from a totally different perspective than a NetSec person. Not only do I look at the total sum, like chinks in the armor such as HttpOnly/Secure flags or what have you, but I also have to think about code or framework recommendations and cost-effective/extensible solutions. It can get pretty complex.

    On the other hand, I’ve noticed a different approach from some seriously talented network pen-testers. Like I said in the article, very different approaches. Every time I go out with those guys I learn about something new and cool, but really only critical in nature. After years of discussions it is clear that they are really paring down on critical vulns and getting really good at exploiting them.

    The article wasn’t really meant to evaluate whether or not a total assessment versus a pen-test is better or worse or even how AppSec defines a pen-test versus NetSec. I just think we can learn a thing or two.

    I absolutely agree that it is a complete and utter failure on the part of AppSec consultants when they don’t attempt to go after the DB/system or whatever. Let’s be honest: there are a lot of consultants out there simply trying to fill a report while double- or triple-booked. If they can have a handful of medium and low findings, then their customers go along happy and oblivious to the fact that the consultant half-assed it.

  8. Andrew Wilson on July 8, 2011 at 2:19 am

    Ken (the other Chris),

    I think the difference you are observing is honestly just one of mindset, not skillset.

    I think that the offensive mindset is correct–any way you slice it. But where I see AppSec (ideally) holding far greater ground is in sophistication and reach of attacks. When the low hanging fruit isn’t available and you can’t move on– that should be my domain. I should know about all the crazy low level things going on in an app– or at least be able to learn them quickly to adapt an attack against them. I should know ins and outs, ups and downs, lefts and rights better than any NetSec guy.

    In my mind, a NetSec guy/gal might be able to exploit vulnerabilities– but I should be able to cause sites to be vulnerable when they aren’t normally. That is what I shoot for (and have been fairly successful with to date).

  9. cktricky on July 8, 2011 at 2:33 am

    lol @ “the other Chris”

    Andrew – You make great points and I agree, it is all about mindset. We can’t just worry about a small handful of categories of vulnerabilities, we have to be aware of the whole picture. Although, having a broader view of application security doesn’t *always* help you break in. Sometimes you need that other group’s perspective.

  10. Alexos (@alexandrosilva) on July 8, 2011 at 7:14 am

    Time to weigh in appsec and netsec guys – http://t.co/p6nhUdc (via @mubix)

  11. Win Security (@winsec) on July 8, 2011 at 7:23 am

    NetSec Breaking Apps Better than AppSec?: [nova#infosecportal.com] First let me define “NetSec” as a professional,… http://winsec.tk/ZQRmw

  12. Michal Vavrik on July 8, 2011 at 9:43 am

    Any AppSec testers who consider an application pentest to be a vulnerability assessment of an application may want to reconsider the purpose of an application pentest. AppSec’s primary driver is, in my opinion, leveraging application vulnerabilities to support the larger goal of gaining a foothold into the OS, database, auth server or anything else that is firmly OUTSIDE the scope of the application assessment. Again… OUTSIDE the scope of your application pen test. It’s not about violating scope, which you don’t do, but it is about showing entry points and how they’re leveraged against the rest of the infrastructure or critical business components, even if you don’t ever touch those things (don’t do it).

    Sure, we’ll write up findings around missing cookie flags all day, because it adds value and because it presents the full picture. But doing an application VA and nothing else gives you half of the test. Now you find yourself with a long list of findings of varying risk levels. But how do they impact the business if you couldn’t step out of the context of the application? You’re immediately at a disadvantage when it comes time to explain how all these findings impact the business.

    Let’s step to the other side. You’re AppSec taking a NetSec perspective and focusing on critical vulns to attack other parts of the network. Great, you got access to the OS, sifted through some logs and scripts and found something you can leverage against the network. This also gives you about half of the test and you’re not authorized to touch the rest of the network because of that whole “scope” thing. What about that ridiculous user broadcasting feature that let you push javascript to ALL of the application users’ web browsers? What if the only thing between your stored XSS and lots of session cookies was that annoying HTTPOnly flag? Suddenly you see the value in protecting cookies and reporting on when that flag is not being set.

    A hybrid approach, using both perspectives, will likely get you a great list of findings that you’re able to explain within the larger context of the infrastructure or organization. Absolutely ask the NetSec guys for their perspective. But do it with the goal of making that perspective your own.

  13. cktricky on July 8, 2011 at 10:35 am

    I don’t disagree that a hybrid approach is certainly more effective. Scope and context are always a biggie. This is where I find AppSec to be the most useful: taking certain vulnerabilities and providing them in a report with context.

    Alternatively, if we keep focusing on the same old stuff we get into a routine. We should constantly try out new tools and techniques. I happen to lean on NetSec guys a lot when I want to discuss the value they’ve seen in various scripts or Blind SQLi tools (especially BSQLi) or what have you, mainly because they seem to try out the new stuff more often. I think it is probably out of necessity, but then again… necessity is the mother of invention.

  14. Cliff Barbier on July 8, 2011 at 10:35 am

    I think it’s a matter of focus. Let me synthesize some of the viewpoints already mentioned…

    AppSec testing is about knowing how all of the aspects of an application work together to cause insecurity and unauthorized access. Having that picture kept firmly in your mind, and finding the chinks in the armor. There should be peripheral knowledge of how this interacts with OS & networking.

    NetSec testing is about knowing how all of the aspects of an entire network and its OS’s work together to cause insecurity and unauthorized access. Having that picture kept firmly in mind, and finding the chinks in the armor. There should be peripheral knowledge of how the applications interact with it.

    When restated in those fashions, I think it’s obvious that the overall goals and methodologies are the same, just the foci and details are different. The difference you mention about testing and reporting may come from the audience for these reports and/or the tester’s motivations.

    I find that a lot of NetSec guys are very gung-ho about “let’s break in and get it!” My experience with AppSec guys is limited, but they seem to be more measured and nuanced. It seems this difference is a cultural one–network/OS techs have a different culture than programmers. I’m guessing here, but I imagine that most AppSec guys are programmer types, and are a part of that culture as a result. From what I’ve seen of the programmer culture, they avoid “you did it wrong! Oooohhhh, in your face!”, as a whole.

    Also, a personal observation: While NetSec reports tend to go to non-technical management, AppSec reports seem to be distributed only within technical departments and management. Do you agree? And if so, might that contribute to the difference you noticed?

  15. Andre Gironda on July 8, 2011 at 11:16 am

    “they are really paring down on critical vulns and getting really good at exploiting them”

    App pen-testers also need to do this. They just largely haven’t yet. I agree with Andrew Wilson that this has to do with the focus of the skill-set instead of a lack of capability in it.

    A bit of history for you:

    Early Cigital started releasing exploitation books, usually under the Gary McGraw and John Viega names circa 2002 when Foundstone and AtStake were releasing lower quality Hacking Exposed and Virus Research books.

    A few other people, namely Chris Anley, Jack Koziol, David Litchfield, Neel Mehta, and Dave Aitel were releasing medium-quality books on bug-finding and exploitation, but with less of a focus on web applications. This was around 2004.

    2004 also brought some of the first practical books on appsec, cleverly named ‘Network Security Tools’ and similar. This is where things started to blur. Around that same time, web application security scanners hit the market.

    With this unnatural blend of bad karma, a few strange things happened. First of all, AtStake and Foundstone got eaten. Cigital started going in a quality-testing direction where bugs didn’t have to be fully fleshed out in order to go into a report. It was rare to see web applications fully exploited — only NGS Software was doing it.

    Then came the acquisition of ISS by IBM. Ambiron Trustwave was formed out of the PCI DSS camp and became a monster. Neohapsis talent fled to both, and the Ernst and Young talent got dispersed all over the place.

    A lot of these people were raised on the Cigital model of minimal bug discovery. You find one XSS and stop: the whole app clearly needs to be re-architected. You see a SQL error and go to the lead developer’s desk and ask him what is going on. This wasn’t mediocrity — it was about getting the job done.

    A few consultancies in recent times have realized that the Brian Chess promise that “penetration-testing is dead” is a farce. So we’ve put a few more scalpels into our black bags and we’re going to do a lot more surgery and a lot less preventative care.

    Is this a good thing? Is full exploitation always a good idea? I don’t know yet, but I’ve bought into it.

    I’ve bought into it for some of the same many reasons that Ken mentions here. The network-focused pen-testers have some keen ideas that work well, especially in combination with apps. Focusing on the business process (i.e. hdm and valsmith’s Tactical Exploitation) is incredibly useful for exploitation today. App pen-testers and appsec people in general will need to make a shift to support, at the very least, these important concepts.

  16. Dan Kennedy on July 8, 2011 at 3:19 pm

    Reading the article, and then the comments, there are a number of themes emerging, but I’m not sure meaningful conclusions are possible as to the relative effectiveness of people who normally test entire networks/environments versus those who “security test applications,” which usually takes the form of an application-type person (someone who can code and understands security; overly simplistic, I know) doing what is actually a vulnerability assessment on a custom application.

    In this scenario I know I’m eliminating what is a huge percentage of people who are performing activities under both of these banners who are in fact just running scans and formatting results or other related half-hearted half-assed approaches.

    Anyway…some possible themes being hit on:

    - Difference between those with a programming background (comp sci bent) and those with an IT infrastructure (people who started as SA’s, running networks, etc.) background.
    - Difference between penetration testing and vulnerability assessment (where penetration is demonstrating a successful compromise of a system [ie: got shell]).
    - The continually smaller concentric circles of scope involved with people who can and choose to focus on: compromising a company any way they can (think tiger team; physical and soc-eng options, IT, everything on the table), people who perform penetration testing strictly limited to the IT systems, and people assessing (looking for any and all significant vulnerabilities in an application). In this case “smaller” describes scope only (and not even scope of time, just “systems involved”), not complexity, as project goals and matching the right security services offering to those goals is what’s important, not running the “widest” type of test. People who can do everything are just as likely to be jack-of-all-trades, master-of-none type folks, and there are extremely advanced people who narrowly focus on the types of vulnerabilities created in application programming.
    - AppSec is a newer discipline, as Andre indirectly points out by walking through the literature that’s out there.
    - AppSec activities are mostly done on custom applications, netsec on entire environments of custom apps, common (between companies) vendor boxes/apps/OS/etc..
    - Penetration testing may classically have a very different goal (prove we’re not bullet proof to senior management, and drive whatever behaviors we’re looking for to get better) versus the “tell me all my vulnerabilities and rate them” product of a vulnerability assessment which is likely of interest to product managers and those that support the application (programmers, etc.).

    What am I driving at by identifying all the possible different discussion themes going on? Simply that this is a hell of a lot more complicated than “appsec people should be more like netsec people” on any plane. I’m not sure a different mindset does exist from a strictly testing perspective (I know the two or more backgrounds people come to these jobs from certainly affects mindset), it’s more the type of test or exercise you are running in conjunction with the goals of the project.

  17. Jack Mannino (@jack_mannino) on July 8, 2011 at 6:06 pm

    Post by @cktricky on network testing vs. app testing http://bit.ly/pvOlvi

  18. cktricky on July 8, 2011 at 11:25 pm

    @Cliff Barbier

    -{*}- I think it’s a matter of focus.

    Absolutely I would agree. Not skill-set, just focus.

    -{*}- AppSec testing is about knowing how all of the aspects of an application work together

    Agreed

    -{*}- I find that a lot of NetSec guys are very gung-ho about “let’s break in and get it!” My experience with AppSec guys is limited, but they seem to be more measured and nuanced. It seems this difference is a cultural one–network/OS techs have a different culture than programmers. I’m guessing here, but I imagine that most AppSec guys are programmer types, and are a part of that culture as a result. From what I’ve seen of the programmer culture, they avoid “you did it wrong! Oooohhhh, in your face!”, as a whole.

    So when attending an OWASP conference versus something like Shmoocon or Defcon, that contrast in culture is visually apparent. I have no opinion, nor am I alluding to one being better than the other. I have had fantastic discussions at each conference with all types of folks. I like ’em all, but there certainly is a difference.

    -{*}- Also, a personal observation: While NetSec reports tend to go to non-technical management, AppSec reports seem to be distributed only within technical departments and management. Do you agree? And if so, might that contribute to the difference you noticed?

    I write reports for both groups and I know that is typically standard practice within the consulting companies I’m familiar with. I’m not sure if anyone else on this thread has a comment on that but I cannot speak with any authority on the subject.

    Thanks!

  19. cktricky on July 8, 2011 at 11:40 pm

    @atdre – Firstly, thank you for the background information. Actually very interesting to see that progression when you lay it all out like that.

    Secondly, I really think you got my point here. It is a difficult point to make in 500-600 words but you nailed it. It isn’t about NetSec being better overall than AppSec at Application Security. That would be nonsense on my part to spout. The suggestion I’m making is that we still have a lot to learn from those tried and true processes that can convert over into the AppSec realm. I think that while we are burdened with a more comprehensive approach and emerging threats in addition to preventive care, integration into the SDLC, etc. there is a more purist outlook out there which is focusing on only the most (and I’m taking a page from Andrew Wilson’s “Intrusion Theory”) direct breach type of vulnerabilities.

    Not to take anything away from it, but just look at the discussions about this article on the webappsec mailing list. It feeds directly into my point without realizing it. The conversation has turned into picking apart the details of flags and cookies and how automated tools handle this, so on and so forth. That is my point. We HAVE to think about these things. We have endless discussions and debates about them. All the while, a full-scope NetSec guy is probably highlighting some content in the “SQL Injection Attacks and Defense” book and picking online material apart to find a way into some DB… somewhere. There is no debate and no discussion. There is data exfiltration and shell.

  20. grecs on July 9, 2011 at 9:27 pm

    Some additional thoughts by @jack_mannino I came across on Twitter…

    “Personally, I see little value in a web app guy spending 10 hours trying to pull DB tables via SQLi. If blackbox, confirm issue and move on.”

    “You’ve proven that there’s an issue. Educate the developers instead of spending time proving how ‘leet’ you are.”

    “Additionally, if you are paying someone $ to tell you cookies aren’t marked secure, #youaredoingitwrong”

    “Let automated tools do what they do best, and let your high dollar consultants focus on answering tough questions, and help you move forward”

    “Done ranting…now it’s time for a beer. Happy Friday, everyone.”

  21. grecs (@grecs) on July 9, 2011 at 10:12 pm

    @mubix Tx for the RT on the @cktricky article! http://bit.ly/oqzQZU

  22. Chris Gates (@carnal0wnage) on July 9, 2011 at 11:23 pm

    good discussion on @cktricky ‘s article http://t.co/STblOof and on websec mail list http://t.co/ImBZcHn and like usual nothing on PT list…

  23. Andrew Wilson on July 10, 2011 at 12:43 am

    Isn’t proving it to them educating them? It’s one thing to get a lecture on SQLi and why it’s nasty– it’s another thing to have someone send you your database. And trust me– I’ve been in plenty of “oh that can’t happen for xyz reason” type conversations with developers. Sometimes even after I’ve already proven it can.

    Second– if you pay someone for their expertise that you have little in, that isn’t a bad way to spend money. Everyone comes from a different place in this– and you have to help them appropriately.

    Last– automated tools do little to nothing “best.” At “best,” anything a scanner finds needs to be verified– but presumably if you need help w/ understanding cookie flags you won’t be able to tell the difference easily.

  24. Jack Mannino on July 11, 2011 at 12:25 pm

    Proving something isn’t the same as educating, although it’s a damn good way to open some eyes and get some attention. But this is why most developers can’t stand pentesters and security people. Going above and beyond to prove how smart they are and how stupid the developers are happens too often. Not to say everyone does it, but we all know this happens. Why not invest that same time and energy into giving them a demo (onsite, Webex, etc) of exactly how you discovered the issue, and discussing mitigation strategies and ways to minimize regression?

    If someone came over to your house and kicked your dog, laughed, then sat in your favorite chair, wouldn’t that piss you off? Apply that same concept to a developer who might have invested thousands of hours of their life into building an application. You come in, make a mockery of it, rip it apart in every conceivable way, then boast about it on a status call. How quickly do you think that developer is going to stop caring about what you have to say, and start debating the validity of everything you say?

    You should know that I’m not a fan of automating your way towards secure. I think understanding where the tools stop and the brainpower starts is essential. I think too many companies waste money on bringing consultants in and asking them to look for default content, cookie stuff, etc. Even worse, they bring in MULTIPLE consulting groups on a [monthly | quarterly | semi-annual] basis where they duplicate efforts. This is a complete waste of time, money, and effort. Why not have different groups of people attempting to solve different problems?

    Back in my worker-bee days, I worked for a company that was consistently rated against other “consultancies” in side-by-side testing by a few clients. Hence, it was a MUST to find every instance of XSS, every instance of CSRF, etc., etc. If we missed one and someone else found it, we looked like crap. This resulted in a time-consuming reporting process as well as lots of time spent looking for things that you’d already proved were an epidemic of an issue 30 findings ago. Could the time and money have been better spent? Absolutely. If you’ve already found XSS in, say, 40% of tested JSPs and you are 1/3 through the test, shouldn’t you assume that the client needs to go back to the drawing board?

    I’m against how the typical vuln assessment, pentest, whatever you wanna call it, is performed. It’s often an aimless, pissing-into-the-wind mishmash of looking for important and not-so-important stuff. You could argue that the post-mortem mess is up to the organization to solve, but the other scenario that generally unfolds is that some orgs will blow off fixing certain things because 1) the consultant failed to articulate a clear-cut methodology or approach to mitigate the issue, or 2) they deemed the level of difficulty too high for the average bear to also find.

    I guess I want to see the general approach evolve. We haven’t pentested our way toward being secure, nor have we code-reviewed our way to glory. But until appsec sweatshops that offer to assess a 10-role application with like 400 JSPs in 40 hours go away (as well as their generic and useless template findings), it’ll be more of the same.

  25. cktricky on July 11, 2011 at 1:28 pm

    Well I guess I did say “Argue amongst yourselves about the value of varying methodologies.”, lol.

    Good stuff.

  26. Ken Johnson (@cktricky) on July 11, 2011 at 1:57 pm

    Nice comments by @jack_mannino on why appsec pentesting should be dead http://t.co/KuBEN1N

  27. Andrew Wilson on July 12, 2011 at 7:50 pm

    I am sorry, but providing proof is education. You can talk about theoretical constructs as much as you want– but even in science, evidence is not proof. Showing a demo of how I identified a possible issue and didn’t exploit it isn’t proof of the severity of the problem– or that it’s even exploitable. Google, for instance, recognizes this through its bug bounty by not paying for “self-only” XSS. Just because I can provide evidence of a vulnerability’s existence doesn’t mean it is viable for exploitation.

    I am also unclear on why you’ve concluded testing code is akin to kicking someone’s dog. As an ex-developer with a few years under my belt– as much as I enjoyed what I did, I never felt as though my code was to be placed in the same comparative charts as a living animal. Now– the developer has feelings and I get that. But it is indeed my job to find security issues to the best of my ability. If a developer holds ego against me doing my job, that is ultimately out of my control. The nature of my job doesn’t equate to abuse.

    Take security out of the question and re-ask it. If I were a QA tester and found a handful of critical non-security bugs in the app, should a developer take that less personally? Am I as a tester to feel sorry that the code is not up to a measured par? QA’s job is to validate the code itself against a set of functionality (or security standards in my case). If the developer is made aware of that expectation and doesn’t hold him/herself to those standards, that is not my fault for exposing them as falling short. Now, that doesn’t always mean the expectation of security is fair. Management might be required to measure security, but the team is not given time & incentives to produce it. That’d almost be like me QA’ing code for features not requested. But again, I’d argue that is entirely a weakness of the development process, not me nor my job.

    As far as money and how people invest it– that ultimately isn’t my concern. Do companies always spend money on the right things all the time? Likely not. But that is their choice and right as a company to do so. I am not their judge or jury. If you own a company, you are free to make decisions as you choose. But I refuse to entertain hypothetical examples intended to disprove the validity of the work I do. For as many of those things you have claimed are wastes, I could provide an equal number of hypotheticals that say they aren’t.

    Finally, and as a general note, if this conversation is going to continue I’d ask you to reconsider your tone. You’ve passively accused pen-testers of kicking dogs, sitting in other people’s favorite chairs, and being abusive to developers. An occupation of which I am a member. You’ve also less passively claimed my profession is useless, a waste of money, and “pissing in the wind.” I don’t believe any of that is warranted, nor fair. Respect is a two way street.

  28. Ken Johnson (@cktricky) on July 12, 2011 at 7:57 pm

    @natronkeltner Good response by @awilsong to Jack’s comment. http://t.co/KuBEN1N

  29. novainfosec (@novainfosec) on March 26, 2012 at 12:52 pm

    Best Of: NetSec Breaking Apps Better than AppSec? http://t.co/64dFrBRM

  30. novainfosec (@novainfosec) on April 12, 2012 at 5:17 pm

    Best Of: NetSec Breaking Apps Better than AppSec? http://t.co/yG3ux11c

  31. novainfosec (@novainfosec) on June 5, 2012 at 5:40 pm

    Best Of: NetSec Breaking Apps Better than AppSec? http://t.co/64dAU1IC

  32. novainfosec (@novainfosec) on November 13, 2012 at 9:17 pm

    Best Of: NetSec Breaking Apps Better than AppSec? http://t.co/NAvpWKn7

  33. novainfosec (@novainfosec) on March 20, 2013 at 11:21 am

    Best Of: NetSec Breaking Apps Better than AppSec? http://t.co/hINaYFwndv

  34. EliteParakeet on June 5, 2013 at 9:12 am

    This is just a downright ignorant post. There is nothing about NetSec folks that makes them more efficient than AppSec folks. Just because you know of some bad apples in AppSec who don’t go above and beyond toward the goal of code execution doesn’t mean all of us should be lumped into the same lazy group.

    The same could be said for NetSec folks that run nothing but automated scanners and call it done. If you trust an automated Blind SQLi tool to tell you your app is secure, you’re doing it wrong.

    So I’m on a web app test, I find an OS command injection vuln in the app, and I’m able to dump the application’s .war and begin doing a code review on it locally. Do you think that’s lazy? My test is scoped to one app, and I’m trying to identify as many vulns as possible.

    This whole “ask your netsec friend for help” attitude in this article is crap. Yes, both sides could learn from each other, but this pompous attitude of NetSec being more efficient is garbage. Both sides have good and bad apples in their ranks. You just obviously haven’t been smart enough to break out of your shell (literally: get out of your Metasploit shell, ID your own vuln, and write your own shellcode) and learn from quality AppSec folks.

    I’m quite surprised @grecs would allow this to be posted here when it’s clear that the OP has no intention other than rabble-rousing AppSec folks.

    And about what you said “Respect is a two way street”, that street turned into a dead end the second you posted this crap.

  35. novainfosec (@novainfosec) on March 6, 2014 at 4:49 am

    Best Of: NetSec Breaking Apps Better than AppSec? http://t.co/hINaYFfkbv



About Us

Founded in 2008, NoVA Infosec is dedicated to the community of Metro DC-based security professionals and whitehat hackers involved in the government and other regulated verticals. Find out more on our About Us page.