A few months ago I gave a talk about securing microservices at the Boston Cloud Native Computing Meetup. After the presentation, a young developer (a recent college grad) came up to me and said, “Nice talk — I didn’t learn any of that at school.” I asked which parts were new to him — I had covered a lot of material, some of which (like service mesh technology) is pretty new, and it didn’t surprise me that it wouldn’t all have been covered in a CS program. “Well, we weren’t really taught anything about security,” he admitted. As we got to chatting, I realized that he wasn’t exaggerating. He’d taken one network security class and some graduate-level courses on cryptography, but none of the ordinary classes incorporated security as a normal part of good software development. It was another demonstration to me that for all our talk in the industry about DevSecOps and “building security in,” the reality remains that most developers are woefully under-prepared with application security skills.
I suggested that the developer at the meetup start with the basics: How do you find risk in your applications, and what can you do to reduce it? But there are dozens of tools and techniques that appsec professionals mention — almost all with unhelpful acronyms! So I put together a quick list for my new friend, and want to share it here as well. This is a list of ten terms involving application security discovery that every developer should know. Of course there are many more than ten, but this is a good start. These should help to identify and, in some cases, protect against the risks listed in the more famous top ten list — the OWASP Top Ten — the ten Most Critical Web Application Security Risks.
- Issues (including Vulnerabilities, Bugs, Flaws, False Positives and Negatives)
- Threat Modeling
- SAST — Static Application Security Testing
- DAST — Dynamic Application Security Testing
- IAST — Instrumented (or Interactive) Application Security Testing
- SCA — Software Composition Analysis
- Pen Testing
- WAF — Web Application Firewall
- RASP — Runtime Application Self-Protection
- GRC — Governance, Risk, and Compliance (including PCI, SOC 2, HIPAA, etc.)
1. Issues (Including Vulnerabilities, Bugs, Flaws, False Positives and Negatives)

When the security of an application is being discussed, a number of terms get tossed around to describe items of concern — the security issues that need to be addressed. Security people often talk about bugs, flaws, or defects, and many get caught up in semantics. I prefer to focus instead on risk: anything in an application that creates security risk is a security issue, just as anything that creates risk of performance degradation is a performance issue. Security issues are best addressed while an app is still in development — they’re easier and cheaper to fix there than in production. Many tools and manual processes exist (several follow below) for identifying security issues in an application, both at development/build time and in production. Unsurprisingly, these tools and processes sometimes make mistakes. They may report a security issue that doesn’t really exist (a false positive) or, conversely, miss an issue that actually does exist (a false negative).
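As a concrete sketch of what a “security issue” looks like in code, here is a classic SQL injection bug and its fix, using Python’s standard `sqlite3` module (the table and data are invented for illustration):

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Security issue: the username is concatenated straight into the SQL
    # string, so input like "x' OR '1'='1" changes the query's meaning
    # (SQL injection).
    cur = conn.execute("SELECT id FROM users WHERE name = '%s'" % username)
    return cur.fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterized query treats the input strictly as data.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -- injection returns every row
print(len(find_user_safe(conn, payload)))    # 0 -- no user is literally named that
```

A tool that flagged `find_user_safe` as injectable would be producing a false positive; one that missed `find_user_unsafe` would be producing a false negative.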
2. Threat Modeling
A few of my colleagues were surprised that I put this term on the list. Threat Modeling is often considered a complex practice that only the largest or most mature security organizations employ. But I disagree. It’s something that can — and should — be done by any team for every project. Threat Modeling is essentially a way to think about where security problems could arise in an application. Ideally it’s done prior to actually developing the application, and it can be as simple as a 15-minute whiteboard conversation. You ask what the application will do, and what an attacker might want it to do instead. From that analysis, you can create security requirements or abuser stories (alongside user stories) and plan other ways to ensure that the application will accomplish what it needs to accomplish securely.
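To make the user story / abuser story pairing concrete, here is an invented example of what might come out of that 15-minute whiteboard conversation:

```text
User story:   As a customer, I want to reset my password by email
              so that I can regain access to my account.

Abuser story: As an attacker, I want to trigger password resets for
              other users' accounts so that I can take them over.

Resulting security requirement: reset tokens must be single-use and
short-lived, sent only to the address on file, and rate-limited.
```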
3. SAST — Static Application Security Testing

Static Application Security Testing (SAST) is one of the original security testing techniques. It identifies potential security risks by examining the source code of an application, not by running it. Think of SAST as an automated code review done by a tool instead of a human expert. Sometimes SAST (which you’ll often hear referred to as “static analysis”) runs as part of an automated test suite in the CI process, or it may be built into your IDE to provide feedback in (near) real time. While static analysis can be very effective at finding some relatively straightforward risks, it does have a reputation for producing a lot of false positives.
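To show the core idea — analyzing source without executing it — here is a deliberately tiny static analyzer built on Python’s `ast` module. It only flags a few hard-coded sink names; real SAST tools track data flow across the whole program:

```python
import ast

# A toy static analyzer: walk the parsed source tree (without ever running
# the code) and flag calls to functions that are common injection sinks.
RISKY_CALLS = {"eval", "exec", "system"}

def scan_source(source):
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handles both plain names (eval) and attributes (os.system).
            name = getattr(func, "id", getattr(func, "attr", None))
            if name in RISKY_CALLS:
                findings.append((node.lineno, name))
    return findings

sample = """
import os
user_input = input()
eval(user_input)          # flagged: eval of untrusted input
os.system(user_input)     # flagged: shell command from untrusted input
print(len(user_input))    # not flagged
"""

for line, name in scan_source(sample):
    print(f"line {line}: call to {name}() may be unsafe")
```

Note the false-positive risk even here: `eval` on a trusted constant would be flagged just the same, because the tool sees code shape, not runtime values.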
4. DAST — Dynamic Application Security Testing

Dynamic Application Security Testing (DAST) tries to identify risk by testing a running application. In this way, DAST (also called dynamic analysis) behaves more like a real attacker. It can’t see the source code; instead it simulates the actions of an attacker by throwing unexpected or malformed inputs at the app. Dynamic analysis can be quite time consuming — tests can run for hours — so it’s not an ideal fit for fast-moving DevOps CI/CD pipelines.
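The black-box idea can be sketched in miniature. Here a local handler function stands in for a live HTTP endpoint (both the handler and its bug are invented), and a tiny fuzzer probes it with hostile inputs and records anything that crashes:

```python
def handle_quantity(raw):
    # Hypothetical app code under test: assumes input is always a number.
    return 100 // int(raw)          # crashes on "0" and on non-numeric input

PAYLOADS = ["3", "0", "abc", "", "-1", "9" * 100, "' OR 1=1 --"]

def fuzz(handler, payloads):
    # Black-box probing: we never look at the handler's source, only at
    # how the running code reacts to unexpected input.
    failures = []
    for p in payloads:
        try:
            handler(p)
        except Exception as exc:     # a crash is a finding worth triaging
            failures.append((p, type(exc).__name__))
    return failures

for payload, error in fuzz(handle_quantity, PAYLOADS):
    print(f"payload {payload!r} caused {error}")
```

Multiply these few payloads by every parameter of every endpoint and you can see why full DAST scans take hours.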
5. IAST — Instrumented (or Interactive) Application Security Testing

Instrumented (or Interactive) Application Security Testing (IAST) is an attempt to get the best of both SAST and DAST. It tests a running application, but does so from the inside out: an agent instruments the application’s code as it executes. This gives it fewer false positives than static analysis while preserving the deep contextual insight into the application that dynamic analysis misses by running outside the app itself. Drawbacks? As an integrated part of the application, the solution must support the programming language (and frameworks) you use. Not every language is supported, so you Haskell fans may be out of luck.
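A toy version of that instrumentation: wrap a sensitive “sink” function so that every call is observed from inside the running app, with full argument context — the visibility an inside-out agent gets that a black-box scanner lacks. The sink, the heuristic, and the findings format are all invented for illustration:

```python
import functools

FINDINGS = []

def monitor_sink(func):
    # Toy IAST agent: observe every call to the wrapped sink at runtime.
    @functools.wraps(func)
    def wrapper(query, *args, **kwargs):
        # Naive heuristic: a quoted literal in the SQL with no separate
        # parameters suggests values were concatenated into the query.
        if "'" in query and not args and not kwargs:
            FINDINGS.append({"sink": func.__name__, "query": query})
        return func(query, *args, **kwargs)
    return wrapper

@monitor_sink
def run_query(query, params=None):
    # Stand-in for a real database call.
    return f"executed: {query}"

run_query("SELECT * FROM users WHERE name = ?", ("alice",))  # parameterized: ok
run_query("SELECT * FROM users WHERE name = 'alice'")        # flagged at runtime
print(FINDINGS)
```

Because the check runs on real calls with real arguments, it only fires on queries the app actually builds unsafely — which is why IAST tends to produce fewer false positives than purely static analysis.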
6. SCA — Software Composition Analysis

Most applications today contain more third-party code than locally developed code. This is a smart practice that allows you to focus on the important features that differentiate your solution without wasting time “reinventing the wheel” solving problems that others have already solved. But risk comes when you don’t know what third-party code is in your app, and whether or not that code is secure. Software Composition Analysis (SCA) tools help reduce this risk by analyzing the open source and third-party components in your application. They identify potential license problems and inform you about any known security vulnerabilities. You can then take steps to reduce the risk by updating to a newer version of the component that fixes the issue, or by switching to a different component. One drawback is that most of these tools will simply tell you which components are in the application; they won’t tell you whether those components are actually being used or how. If you have hundreds of components, it can be difficult to prioritize which ones need attention first.
7. Pen Testing
Sometimes the best way to find where risk is in your app is to pay someone to attack it. That’s the general idea of penetration testing (more commonly referred to as pen testing). You allow your application to be attacked by a professional tester to see what vulnerabilities they’re able to discover and exploit. Tests can be “white box” where you share information about the app with the tester in advance, or “black box” where they have no information and just come at it like a real-world attacker would. While highly effective, pen tests can be very expensive and take a long time to complete. This is not an application security testing technique that you can use with every release in a highly automated CD pipeline.
8. WAF — Web Application Firewall

Every web application on the public internet is likely to be attacked sooner or later. It doesn’t matter whether you think you have no information worth stealing (you probably do) — someone out there will try to attack you. So testing for security during development alone is not enough. A web application firewall (WAF) is a solution (either hardware or software) that sits between your app and the public internet and tries to identify and block those inevitable attacks. The challenge lies in correctly distinguishing attacks from normal traffic. WAFs need to be “trained” to tell good traffic from bad; otherwise they risk blocking legitimate traffic (which won’t make your users happy!). This training can be complex and time consuming and requires a good amount of security expertise, though newer WAF products use machine learning and other techniques to improve both the outcomes and the experience of managing a WAF.
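At its simplest, a WAF rule is a signature matched against incoming requests. This minimal sketch inspects a query string against three hand-written patterns; production WAFs use far richer rule sets plus anomaly scoring:

```python
import re

# Toy WAF rule set: block requests whose query string matches a known
# attack signature. These three patterns are illustrative, not complete.
SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),   # classic SQL injection probe
    re.compile(r"(?i)<script\b"),               # reflected XSS attempt
    re.compile(r"\.\./"),                       # path traversal
]

def inspect(query_string):
    for sig in SIGNATURES:
        if sig.search(query_string):
            return "BLOCK"
    return "ALLOW"

print(inspect("q=cloud+security"))                       # ALLOW
print(inspect("id=1+UNION+SELECT+password+FROM+users"))  # BLOCK
print(inspect("file=../../etc/passwd"))                  # BLOCK
```

The false-positive problem is visible even here: a legitimate search for “how to UNION SELECT in SQL” would be blocked too, which is exactly why WAFs need tuning for each application’s real traffic.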
9. RASP — Runtime Application Self-Protection

An alternative to a WAF for detecting and blocking runtime attacks is runtime application self-protection (RASP). Like IAST, RASP runs inside the application. It can see inbound traffic (like a WAF) as well as what happens to that traffic once it’s inside the application, which means attacks can be detected and blocked with much more accuracy and precision. But RASP, like other tools that rely on instrumented code (such as APM), can add performance overhead to the running app. A few percent is generally not a problem, but some RASP solutions can impose a 10 percent or greater performance hit.
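The “self-protection” part can be sketched as instrumenting a dangerous sink inside the process at startup. Here a wrapper around `os.system` inspects each command before it runs and blocks anything containing shell metacharacters — a deliberately crude stand-in for the checks a real RASP agent performs:

```python
import os

_real_system = os.system

def guarded_system(command):
    # Toy RASP check: refuse commands containing shell metacharacters,
    # a crude proxy for command-injection attempts.
    if any(token in command for token in (";", "|", "&", "`", "$(")):
        raise PermissionError(f"RASP: blocked suspicious command: {command!r}")
    return _real_system(command)

os.system = guarded_system  # protection is installed inside the app itself

os.system("true")  # benign call passes through to the real sink
try:
    os.system("true; rm -rf /tmp/important")  # injection attempt is blocked
except PermissionError as exc:
    print(exc)
```

Unlike a WAF sitting in front of the app, this check fires at the exact moment a tainted value reaches the sink — but every guarded call pays the inspection cost, which is where RASP’s performance overhead comes from.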
10. GRC (Including PCI, SOC 2, HIPAA, etc.)
We started with risks and then listed a number of ways to identify and protect against them. While it’s a good idea for every organization to care about this (remember: good software is secure, and secure software is good software), in many cases an organization may be obligated to employ these techniques for regulatory reasons. The term GRC refers to governance, risk, and compliance. In certain industries, regulations are mandated by government or professional entities, and companies must demonstrate compliance with those regulations through periodic assessments and audits. Almost all of the common regulations involve security. If you’re in commerce, you’ll hear a lot about PCI DSS. In healthcare, it’s HIPAA. And any SaaS vendor will need to consider SOC 2 certification. Developers will likely need to follow certain development processes and practices as part of their company’s regulatory obligations, and there is likely to be an impact on application architecture, data processing, and even which cloud services you can (or can’t) use.
Final Words . . .
Understanding these ten terms and techniques, and incorporating some into the SDLC, is only one step in reducing risk in applications. But every developer — whether they’re part of an organization with a mature application security team, or at a startup with no formal security at all — can, and should, play a role in shipping more secure software. As I told the developer at the meetup, take the time to understand the basics, and incremental progress will come quickly. Every single thing you do to reduce application risk makes a difference, whether you’re an expert or just starting out.
If you’d like to learn how the Threat Stack Cloud Security Platform® — which now includes Threat Stack Application Security Monitoring at no additional cost — can help to address your cloud security and compliance requirements, please sign up for a free demo.