Having worked as a software developer for 3 years, most of the concepts in this chapter aren’t new to me. However, the CompTIA Security+ book tries to demonstrate the security implications of each concept. As usual, the chapter is broad and shallow, so don’t expect anything too in-depth here.
This is a continuation of my blog post series on the CompTIA Security+ exam, where I share my studying and connect it to real-world events.
Development Lifecycle Models
This refers to the process of how software is developed, covering everything from gathering requirements and planning through design, coding, testing, deployment, and support.
The two major models are waterfall and agile.
- Waterfall is based on manufacturing processes. You do one phase, complete it in its entirety, then move on to the next phase. It’s conceptually very simple, but it has issues. Most notably, it doesn’t handle changing conditions or new requirements well, so security concerns have to find their way into a product at the very beginning.
- Agile, on the other hand, is focused on small, rapid increases in functionality. There are two subcategories here: Scrum and XP. Scrum is process-based, whereas XP is focused on “user stories.” Each adds functionality to a product iteratively. New tasks (and possibly security-related items) make their way into the development process via the product backlog.
Secure DevOps
DevOps is another buzzwordy phrase that refers to the combination of development and operations. Blending these two groups (and their related tasks) together helps strengthen the deployment process. When security concerns are integrated into DevOps, it’s called “Secure DevOps” (very creative).
The book then provides a brief rundown of related DevOps tools and concepts.
- DevOps teams have a lot of ground to cover and are often responsible for very highly scaled systems. Automating some of these tasks helps free them up to focus on the most urgent or important items. Routine security processes can be automated as well, as discussed in previous chapters.
- Continuous integration is the practice of continually updating and improving a production code base, typically with automated testing and deployment in the mix. Making routine, incremental changes can help with security: smaller, well-documented changes mean you can quickly trace any issue back to its source.
- Baselining was covered in several previous chapters. You define metrics, measure your system by them, and then take future “snapshots” of system health. DevOps can do this as well, by defining metrics around performance and other variables.
- Immutable systems are those that, once deployed, are never modified. If there’s an issue or some update is needed, an entirely new system replaces the old one. This can make diffs and tracing issues easier.
- Infrastructure as code is the use of code to build systems programmatically rather than configuring them manually. This helps with maintaining settings and configurations, and makes things easier as systems grow larger and more complex (a rough sketch follows below).
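To make the infrastructure-as-code idea a little more concrete, here’s a minimal Python sketch. The `client` object and its methods are hypothetical stand-ins for a real provider SDK; the point is that the desired infrastructure lives in code, and the same function builds it the same way every time.

```python
# Hypothetical sketch: desired infrastructure declared as data, applied by code.
# "client" stands in for a real cloud provider SDK; these method names are made up.

DESIRED_SERVERS = [
    {"name": "web-1", "size": "small", "open_ports": [443]},
    {"name": "web-2", "size": "small", "open_ports": [443]},
]

def apply(client, desired=DESIRED_SERVERS):
    """Create any servers that are missing and enforce their settings."""
    existing = {server["name"] for server in client.list_servers()}
    for spec in desired:
        if spec["name"] not in existing:
            client.create_server(**spec)                          # built from the spec, never by hand
        client.restrict_ports(spec["name"], spec["open_ports"])   # settings come from the spec too
```

Because the configuration is itself code, it can be reviewed, versioned, and re-applied identically, which leads directly into the next topic.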
Version Control and Change Management
Version control is tracking which version of a product is being worked on in a given environment (dev, staging, prod, etc.). Change management is how an organization tracks which versions are currently in use and manages changes as they are released.
It’s important to have detailed internal documentation about what is being fixed in each version number and bug fix, why the fix was needed, and how it was resolved. Again, continuous small changes can be advantageous. DevOps should have a process ensuring that all changes to production are authorized, properly reviewed and tested, and easy to roll back if something goes wrong.
Provisioning and Deprovisioning
The process by which you assign permissions and privileges to a user is called provisioning. The reverse, revoking those permissions, is called deprovisioning. The same idea applies to computer processes and threads, where a program temporarily elevates the privileges of a thread or process and then removes those privileges when they’re no longer needed.
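As a rough illustration at the process level, here’s a small Python sketch for a POSIX system: the process starts with elevated privileges, uses them only for the one step that needs them, and then gives them up. The port and the target account are assumptions, not anything from the book.

```python
import os
import pwd
import socket

def drop_privileges(username="nobody"):
    """Deprovision: permanently give up root for the rest of the process."""
    account = pwd.getpwnam(username)
    os.setgid(account.pw_gid)   # drop the group first, while we still have the right to
    os.setuid(account.pw_uid)   # then the user; irreversible once done

def serve():
    sock = socket.socket()
    sock.bind(("0.0.0.0", 443))   # the only step that actually needs elevated privileges
    drop_privileges()             # everything after this point runs unprivileged
    sock.listen()
```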
Secure Coding Techniques
We’ve jumped back from DevOps into development again. This rehashes some earlier chapters talking about methods of attack and mitigation.
Secure applications are built on a solid foundation of properly handling configurations, errors, exceptions and inputs. See MITRE’s Top 25 list or OWASP’s Top Ten list for a summary of common issues.
Proper Error Handling
An attacker can force errors and shift an application into an exception handling state. These errors should be trapped and handled in the generating routine, and then reported securely to a log file. If these messages are shown to the user, they are also shown to attackers. This might leak valuable information about data structures, file paths, and so on.
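A minimal Python sketch of that idea: trap the error where it happens, log the details for yourself, and show the user only a generic message. The data-access call and log file name are made up.

```python
import logging

logging.basicConfig(filename="app.log", level=logging.ERROR)
logger = logging.getLogger("app")

def load_profile(user_id, db):
    try:
        return db.fetch_profile(user_id)   # hypothetical data-access call
    except Exception:
        # Full details (stack trace, paths, query text) go to the log only.
        logger.exception("failed to load profile for user_id=%r", user_id)
        # The user (and any attacker) sees nothing useful.
        return {"error": "Something went wrong. Please try again later."}
```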
Proper Input Validation
You should never trust user input without validating it. Proper validation of inputs can help mitigate attacks such as buffer overflows, XSS, XSRF, path traversal, and more. You should also consider output validation: does the output being returned make sense for the given scenario? If not, consider that the request might be malicious.
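Here’s a small sketch of allow-list style input validation in Python; the field and its limits are invented for illustration.

```python
import re

# Allow-list: only letters, digits, and underscores, 3 to 32 characters long.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(raw):
    """Reject anything that doesn't match the expected shape,
    rather than trying to strip out known-bad characters."""
    if not isinstance(raw, str) or not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

validate_username("alice_01")           # ok
# validate_username("../etc/passwd")    # raises ValueError
```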
Normalization
This is the process of taking an input and reducing it to its simplest (canonical) form before doing anything else with it.
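For text, that often means Unicode normalization plus trimming and case folding, so that equivalent-looking inputs compare equal before any validation or lookups happen. A minimal Python sketch:

```python
import unicodedata

def normalize(value: str) -> str:
    """Reduce the input to one canonical form before validating or comparing it."""
    return unicodedata.normalize("NFKC", value).strip().casefold()

# Composed vs. decomposed "Å", plus stray whitespace, all reduce to the same string.
assert normalize("Åbc ") == normalize("A\u030abc")
```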
Stored Procedures
These are precompiled methods stored in the database. They help with performance, but they also have security implications: using stored procedures isolates user input from the actual SQL statements being executed.
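SQLite (used below because it ships with Python) doesn’t have true stored procedures, but the security property the book is pointing at is the same one parameterized queries give you: the user’s input is bound as a value, never spliced into the SQL text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_supplied = "1 OR 1=1"   # a classic injection attempt

# The ? placeholder binds the value; it is never interpreted as SQL.
rows = conn.execute("SELECT name FROM users WHERE id = ?", (user_supplied,)).fetchall()
print(rows)   # [] -- the malicious string simply doesn't match any id
```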
Code Signing
This refers to the application of a digital signature to code. This has two purposes. First, it provides a means for the end-user to verify code integrity. It also provides evidence as to the source of the software.
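As a rough sketch of the verification side, here’s what checking a detached RSA signature over a downloaded artifact might look like using the third-party cryptography package. The file names and key are assumptions, and real code-signing schemes usually also involve certificates and trust chains.

```python
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_release(artifact: str, signature: str, publisher_key: str) -> bool:
    """Return True only if the artifact matches the publisher's detached signature."""
    public_key = serialization.load_pem_public_key(Path(publisher_key).read_bytes())
    try:
        public_key.verify(
            Path(signature).read_bytes(),   # the signature shipped alongside the code
            Path(artifact).read_bytes(),    # the code itself
            padding.PKCS1v15(),
            hashes.SHA256(),
        )
        return True    # integrity and source both check out
    except InvalidSignature:
        return False   # tampered artifact or wrong publisher
```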
Code Obfuscation and Camouflage
Security by obscurity is not a legitimate security strategy. Still, you shouldn’t expose more information than you need to. The book provides the example of naming email servers “email1, email2, email3, etc.” By avoiding this, you’ve hidden some information from attackers (how many email servers there are and how to find them) even if it’s not a complete security strategy.
Code Reuse and Dead Code
Code reuse can be a good thing. It saves time and money, and can also provide consistency throughout your code base. The trick is making sure that it’s properly vetted. If you have a vulnerability in a piece of code, and then you reuse it everywhere, that doesn’t help your security stance.
Dead code is code that may be executed, but whose results aren’t used anywhere. You can’t just compiler-option your way out of this, though; you need to examine each instance of dead code yourself to determine whether it should be removed.
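A trivial Python example of the pattern, with invented names:

```python
def order_total(prices, tax_rate):
    subtotal = sum(prices)
    rounded = round(subtotal, 2)   # dead code: computed on every call, never used
    return subtotal * (1 + tax_rate)

print(order_total([19.99, 5.01], 0.08))
```

A linter or compiler can flag `rounded` as unused, but only a person can decide whether the rounding was supposed to be applied to the return value (a bug to fix) or is genuinely safe to delete.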
Server-Side vs. Client-Side Execution and Validation
So, you’re on board with the idea of validating inputs. But where should you validate them? The client can become compromised, so even though it costs a round trip, all validation should be done on the server. You have more control over the server, so all checks for completeness, correctness, and security should happen there.
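A tiny Flask sketch of the pattern (Flask assumed; the endpoint and limits are invented): even if the browser form also checks the quantity, the server re-checks it, and the server’s check is the one that counts.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/order", methods=["POST"])
def order():
    payload = request.get_json(silent=True) or {}
    qty = payload.get("quantity")

    # Server-side validation: never assume the client-side check ran, or was honest.
    if not isinstance(qty, int) or not (1 <= qty <= 100):
        return jsonify(error="quantity must be an integer between 1 and 100"), 400

    return jsonify(status="accepted", quantity=qty), 200
```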
Memory Management
This is the set of actions used to control and coordinate computer memory, which includes freeing memory after you’re done using it. In lower-level languages like C, you must do this manually; some languages provide garbage collection instead. If you don’t manage memory correctly, you end up with a memory leak.
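Garbage collection doesn’t make leaks impossible, either: anything that stays referenced never gets freed. A small Python illustration of that failure mode, with an intentionally unbounded cache:

```python
_cache = {}   # module-level, so nothing added here is ever garbage collected

def lookup(key):
    """Every distinct key grows the cache forever -- a leak, even with GC."""
    if key not in _cache:
        _cache[key] = str(key) * 1000   # stand-in for an expensive result
    return _cache[key]

# Feeding this attacker-controlled keys (say, request IDs) steadily exhausts memory.
```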
Use of Third-Party Libraries and SDKs
This is similar to code reuse. If you can use vetted libraries, why not? It saves time and money. If they aren’t properly vetted, or regulations require you to review/verify all of your dependencies, maybe this isn’t a good option.
Data Exposure
Data needs to be protected during storage (at rest), during communication (in transit), and at all points in between. Losing control over data is called data exposure, and it might be due to a failure of confidentiality or a failure of integrity.
Code Quality and Testing
Code should be reviewed before it makes its way into production. You want to find weaknesses and bugs before your users (or attackers) do.
Code analysis is the process by which code is inspected. This can be done statically (without executing the code) or dynamically (while the code is being executed).
Static code analysis is usually done via automated tools. According to CompTIA, nearly everything is a good candidate for static testing: at the unit level, subsystem level, system level, and the complete application.
Dynamic analysis is done by executing code on the target system or on an emulator. Depending on the system under test, this requires some specialized tools. Fuzzing can be used as a brute-force testing method for input validation issues, and it works well in white, black, and gray box testing.
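A toy illustration of the fuzzing idea in Python; the function under test is invented, and real fuzzers are far smarter about generating inputs. The loop just hammers the input path with random strings and records anything that fails in an unexpected way.

```python
import random
import string

def parse_age(text):
    """Function under test (invented); it mishandles some inputs."""
    digits = "".join(ch for ch in text if ch.isdigit())
    return 100 // int(digits)   # blows up on "" (ValueError) and on all-zero digits

def fuzz(runs=10_000):
    crashes = []
    for _ in range(runs):
        candidate = "".join(random.choices(string.printable, k=random.randint(0, 20)))
        try:
            parse_age(candidate)
        except ValueError:
            pass                      # expected rejection of bad input
        except Exception as exc:      # anything else is a finding worth investigating
            crashes.append((candidate, exc))
    return crashes

print(f"unexpected failures: {len(fuzz())}")
```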
Load testing involves “running the system under a controlled speed environment.” Stress testing is where you test the system under conditions that exceed expected values. In other words, it’s overload testing. These types of tests help determine bottlenecks and performance issues.
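Here’s a rough stdlib-only sketch of the difference; the request handler is a stand-in for a real system, and in practice you’d point a load-testing tool at an actual deployment. The idea is simply to measure throughput at the expected level of concurrency, then again well beyond it, and compare.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    return sum(i * i for i in range(50_000))   # stand-in for CPU-bound request work

def throughput(concurrency, requests=200):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(handle_request, range(requests)))
    return requests / (time.perf_counter() - start)

print(f"load test   (10 concurrent):  {throughput(10):.0f} req/s")
print(f"stress test (500 concurrent): {throughput(500):.0f} req/s")
```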
You can also test using sandboxes. This is an isolated environment that lets you execute untrusted or unverified code.
Lastly, validation vs. verification. Code testing is the verification that code meets the functional requirements. More precisely, validation checks whether the program specification captures the customer’s requirements, while verification checks whether the software meets that specification.
Compiled vs. Run-time Code
Compiled code is written in one language and then transformed via a compiler into executable code, which means it can be optimized and runs faster at run time. Interpreters, conversely, create run-time code. This can be slower because the interpreter manages the transformation on the fly, but it’s also more flexible when changes are needed.