The last chapter of the first section of the CompTIA Security+ study guide is about vulnerabilities. Since previous chapters covered malware, threat actors, and attacks, most of this chapter is not new. Instead, the book takes this opportunity to talk about them in slightly more depth. But, as I’ve stated many times before, this book (and seemingly the certification in general) focuses on breadth, not depth.
This is a continuation of my blog post series on the CompTIA Security+ exam, where I share my studying and connect it to real-world events.
Vulnerabilities and Impacts
Once again, the structure/order in this chapter seems… weird. Most of the vulnerabilities are based in software. If you’ve written software before, you’ve probably run into most of them.
Race conditions are bad in themselves, and often very difficult to debug as well. What are they? Race conditions are errors that occur when “the output of a function is dependent on the sequence or timing of the inputs.” If the inputs don’t happen in the expected order or timing, bugs occur. There are ways around this (locks, semaphores, atomic operations, etc.). Race conditions can be exploited to crash a program or system.
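As a rough sketch, here’s the classic lost-update race in Python: several threads do a read-modify-write on a shared counter, and a lock is what makes each update atomic. (The function names and the counter are my own invention for illustration.)

```python
import threading

def unsafe_increment(counter, n):
    # read-modify-write with no lock: two threads can read the same value
    # and both write back value + 1, silently losing an update (a race)
    for _ in range(n):
        counter["value"] = counter["value"] + 1

def safe_increment(counter, n, lock):
    # the lock makes each read-modify-write atomic, so no updates are lost
    for _ in range(n):
        with lock:
            counter["value"] = counter["value"] + 1

counter = {"value": 0}
lock = threading.Lock()
threads = [threading.Thread(target=safe_increment, args=(counter, 100_000, lock))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["value"])  # always 400000 with the lock
```

With `unsafe_increment` instead, the final count can come up short, and whether it does depends on thread scheduling, which is exactly why these bugs are so hard to reproduce and debug.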
This part of the chapter covers some general vulnerabilities. Once again, I’m confused by the organization.
End-of-life refers to a system that is no longer supported or maintained. This could be because the original vendor doesn’t support it anymore (Windows XP), the vendor went out of business, etc.
The vendor may have discontinued support… that doesn’t mean that people aren’t still using those systems. They’re just using them without vendor help or security updates. More vulnerabilities will be found after end-of-life, which means there will be known issues that never receive a security patch.
Embedded systems are, broadly speaking, systems that exist within other systems. Think “internet of things” or other close coupling of PCBs and embedded software. Embedded systems are thrown into this category because they can be cut off from normal security update functionality. For example, an embedded system running Linux might not get as many (or any) security updates compared to Linux running on a desktop.
Lack of Vendor Support
Lack of vendor support was kind of mentioned in the end-of-life section, but it gets a few more paragraphs here. If an organization decides to use a product after it’s been end-of-life’d, the organization assumes all responsibility to cover existing and newly-found risks.
Additionally, if someone is using software in a way that isn’t covered or sanctioned by the vendor, or they can’t get third-party vendor support, then they too are responsible for all risks, even if it hasn’t been end-of-life’d.
Improper Input Handling
As covered in several of the attack sections, improper input handling creates a large number of vulnerabilities. It’s generally understood in computer science that you should never blindly trust user input. And yet, it remains an issue.
Improper input handling can lead to issues with overflows, XSS, XSRF, path traversal, injection, etc. Consider all user input to be dangerous until you’ve sanitized it.
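As a small example of input sanitization, here’s a sketch of a path traversal check: resolve whatever filename the user supplied and refuse to serve anything outside the expected directory. (The `ALLOWED_DIR` base directory and the function name are hypothetical.)

```python
import os

# hypothetical directory the application is allowed to serve files from
ALLOWED_DIR = os.path.realpath("/var/app/uploads")

def safe_path(user_filename):
    # resolve the requested path, then confirm it is still inside
    # ALLOWED_DIR; this rejects traversal input like "../../etc/passwd"
    candidate = os.path.realpath(os.path.join(ALLOWED_DIR, user_filename))
    if not candidate.startswith(ALLOWED_DIR + os.sep):
        raise ValueError("path traversal attempt rejected")
    return candidate

print(safe_path("report.txt"))
```

The key design point is to validate the *resolved* path, not the raw string: naive checks like stripping `..` from the input are easy to bypass.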
Improper Error Handling
As a developer, you want to make sure that all error cases are handled. Additionally, you want to have useful debugging information so you can trace the cause of the error. What you don’t want to do is share this useful debugging information with the outside world.
This can lead to the disclosure of database schemas, filenames, paths, and more. Capture your error messages in a secured log file, instead of the console.
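A minimal sketch of that split, using Python’s standard `logging` module (the `fetch_record` function and its fake failure are made up for illustration):

```python
import logging

# errors go to a server-side log file, not to the user's screen
logging.basicConfig(filename="app_errors.log", level=logging.ERROR)

def fetch_record(record_id, table):
    try:
        raise KeyError(record_id)  # stand-in for a real database failure
    except Exception:
        # the full traceback (table names, paths, etc.) stays in the log...
        logging.exception("lookup failed in table %s", table)
        # ...while the outside world only sees a generic message
        return "An internal error occurred. Please try again later."

print(fetch_record(42, "users"))
```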
Misconfiguration or Weak Configuration
Another broad category. This refers to any kind of configuration that weakens the security posture of an organization or its systems. This might be leaving default credentials as-is. It might be a backup or other regular task that leaves the server unresponsive. It might be support for backwards compatibility and legacy protocols. You get the idea.
Default configuration is “the configuration that a system enters upon start, upon recovering from an error, and at times when operating.” Some operating systems try to be “secure by default” (ex: Microsoft). This hopefully prevents situations like leaving default credentials in use by forcing the user to create their own credentials, etc.
While people don’t typically think about software programs as requiring resources, they do (I’m defining “people” as “non-developers”… don’t read into it too much). If a program runs out of memory, or needs more bandwidth, the program might run into errors or crash.
Untrained users are people who haven’t received enough training to be able to fully use a computer system. This might result in them working more slowly (if they don’t know shortcuts, etc.). It also might result in them bypassing controls or other unsafe behavior.
Improperly Configured Accounts
I don’t know why this section isn’t up with the other configuration ones. But an improperly configured account can cause security risks. Someone might accidentally be given access to data or permissions they shouldn’t have. Another “bad smell” is having an admin-by-default account (vs having to confirm that you want to perform an admin action).
Vulnerable Business Processes
This falls more in the social engineering realm. Some businesses, for example, do not verify invoices with purchase orders before sending out a check. Others don’t verify with HR before they provide IT services. In any of these cases, a vulnerable business process can be exploited by attackers.
Weak Cipher Suites and Implementations
A number of the issues in the cryptographic attacks section are caused by weak cipher suites and implementations. One well-known issue is when people try to “roll their own” auth implementation. Here, they’re relying on security by obscurity, but that doesn’t work very well. Another vulnerability is a poor implementation of a known cryptography algorithm. Lastly, a “weak cipher suite” refers to a cryptographic method that has been deprecated due to significant vulnerabilities being discovered.
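On the “don’t roll your own” point, the practical advice is to lean on a vetted library and refuse deprecated protocols. A quick sketch with Python’s standard `ssl` module: `create_default_context()` already disables SSLv2/SSLv3, and setting a minimum version rejects the older TLS 1.0/1.1 suites as well.

```python
import ssl

# start from the library's hardened defaults rather than hand-rolling
ctx = ssl.create_default_context()

# refuse deprecated protocol versions (TLS 1.0 and 1.1)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version)
```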
Continuing to hop around, we’re back to buffer overflows and user input. If you ask for user input but do not verify or limit the length of the input, it could result in a buffer overflow. This means that other areas in memory may be overwritten.
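The fix is simply to check length before writing. Here’s a sketch using a fixed-size `bytearray` to stand in for a C-style buffer (the `checked_copy` helper is my own illustration):

```python
BUF_SIZE = 16  # fixed-size buffer, like a C char array

def checked_copy(buf, data):
    # refuse input longer than the buffer instead of writing past its end;
    # in C, an unchecked strcpy here would overwrite adjacent memory
    if len(data) > len(buf):
        raise ValueError("input longer than buffer; rejected")
    buf[:len(data)] = data

buf = bytearray(BUF_SIZE)
checked_copy(buf, b"hello")
print(bytes(buf[:5]))  # b'hello'
```

Python itself bounds-checks writes for you, which is part of why memory-safe languages take this whole vulnerability class off the table; the check above is what unsafe languages force you to do by hand.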
Memory leaks are another category of software issues where the computer program doesn’t handle memory usage correctly. When a program is running, it can acquire more memory resources… it should also release resources that are no longer needed. If this doesn’t happen properly, it can exhaust system resources.
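A toy illustration of “acquire but never release” (the `Connection` class is hypothetical; the counter stands in for real resource usage):

```python
class Connection:
    """Hypothetical resource that must be explicitly released."""
    open_count = 0  # how many connections are currently held

    def __init__(self):
        Connection.open_count += 1

    def close(self):
        Connection.open_count -= 1

def leaky(n):
    # acquires resources and never releases them: usage only grows
    for _ in range(n):
        Connection()

def tidy(n):
    # releases each resource as soon as it is no longer needed
    for _ in range(n):
        conn = Connection()
        try:
            pass  # ... use the connection ...
        finally:
            conn.close()

leaky(3)
print(Connection.open_count)  # 3: the leaked connections are never closed
tidy(100)
print(Connection.open_count)  # still 3: tidy cleaned up after itself
```

A long-running server calling `leaky` on every request is how “it works fine in testing, then falls over after a week in production” happens.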
Integer overflow refers to an integer “rolling over” once it has reached the maximum value. Depending on the type of integer, it can “roll over” to either 0 or a negative number (assuming we’re incrementing). This can result in logic errors within the program. One tragic example of this is the Therac-25.
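Python’s own integers never overflow, but we can watch the fixed-width machine behavior through the standard `ctypes` module: a signed type wraps to its minimum, and an unsigned type wraps to zero.

```python
import ctypes

# incrementing past the maximum of a signed 16-bit integer
max_i16 = 32767                       # largest value a c_int16 can hold
rolled = ctypes.c_int16(max_i16 + 1)  # one past the maximum
print(rolled.value)                   # -32768: wrapped to the minimum

# incrementing past the maximum of an unsigned 8-bit integer
counter = ctypes.c_uint8(255)
counter.value += 1
print(counter.value)                  # 0: wrapped around to zero
```

That second case, an 8-bit counter silently wrapping from 255 to 0, is the same shape of failure implicated in the Therac-25: a safety check keyed on the counter being nonzero could be skipped.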
This book loves buffer overflows. Once again, this is an input validation attack that takes advantage of programs that do not validate the length of inputs. This is both the result of poor programming practice and weaknesses within programming languages.
Pointers are variables that hold the memory address of other data. Dereferencing a pointer means accessing the contents of that address, rather than the address itself. If, for some reason, you decided to let the user choose what to dereference, bad things could happen. The program might crash, you might reveal data that you don’t want to reveal, etc.
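Python doesn’t expose raw pointers, but `ctypes` can illustrate both a normal dereference and the classic failure mode, dereferencing NULL:

```python
import ctypes

x = ctypes.c_int(7)
ptr = ctypes.pointer(x)        # ptr holds the memory address of x
print(ptr.contents.value)      # dereferencing follows the address to the data: 7

null_ptr = ctypes.POINTER(ctypes.c_int)()  # a pointer to nothing (NULL)
try:
    null_ptr.contents          # dereferencing NULL fails
except ValueError:
    print("NULL pointer dereference rejected")
```

Here Python raises an exception; in C, the same dereference is undefined behavior and typically crashes the process, which is exactly what an attacker aiming for denial of service wants.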
DLL injection refers to adding a DLL to a program at runtime. The injected DLL is malicious and/or has a vulnerability to exploit.
System Sprawl and Undocumented Assets
Many systems, especially legacy systems, can grow very large in size. Additionally, many people can be involved over the course of a project. This results in a lot of hardware, software and data that isn’t fully understood by the organization. The book notes that “the foundation of a comprehensive security program is understanding all of your assets and how they are connected.”
If you don’t know what software you have, it’s unlikely that it will be updated. It’s also likely that configuration issues won’t be noticed by your team.
System sprawl refers to the expansion of systems over time where the growth exceeds the documentation and understanding. Undocumented assets are parts of the system that aren’t documented or otherwise known by the whole team.
Architecture and Design Weaknesses
Not a whole lot to say here other than the book’s definition.
Architecture and design weaknesses are issues that result in vulnerabilities and increased risk in a systemic manner.
One such example might be flat (non-segmented) networks that allow attackers (once inside the system) to easily traverse the network.
New Threats/Zero Day
Lastly, we’re going to talk about zero days… again. A zero day is a vulnerability that is new and not yet covered by a patch. If the Security+ exam asks a question where a system is compromised despite having completely up-to-date security patches, the answer is probably “zero day.”
Improper Certificate and Key Management
I used to have a professor who would say “this test would be easy if you did all the questions right.” Similarly, public key infrastructure can work if you do it right. If you mismanage certificates or keys, then your system might be compromised. Follow established processes.