Before integrating SAST into your SDLC, you want to make sure that your code analysis produces only relevant findings with the best possible performance. In the first part of this guide, we cover the following 5 configuration options and best practices for fine-tuning:
- Set the Language Version
- Exclude Superfluous Files
- Disable Irrelevant Issue Types
- Ignore Noisy Code
- Add Review Labels
In RIPS, your custom configuration settings are stored in analysis profiles. Create a new profile and assign it to a specific application, or create a global profile that can be used for the analysis of all of your applications. If you enable the default flag, your profile is loaded automatically whenever a new scan is performed.
Before you start fine-tuning your analysis, we recommend first scanning your application and manually reviewing some of the reported issues. This way, you can identify relevant and irrelevant findings, which helps you take corrective action in the following configuration steps.
1. Set the Language Version
As a first step, you should set the exact version of the programming language that your application uses at runtime in production. RIPS performs language-specific code analysis that is aware of all the subtleties and features of different programming languages and their versions. RIPS does not execute your code; instead, it precisely simulates the behavior of different language features to determine whether a security issue arises or not. Further, RIPS detects vulnerabilities that stem from a vulnerable language interpreter and could lead to memory corruption issues, depending on the version that you use.
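Version-dependent language semantics can decide whether a finding is exploitable at all. As one concrete illustration (a generic PHP sketch, not RIPS output): PHP 8.0 changed how non-numeric strings compare to integers, which affects loose-comparison checks that were bypassable on PHP 7.

```php
<?php
// PHP's loose comparison semantics changed in PHP 8.0:
// a non-numeric string is no longer coerced to 0 when compared to an integer.
var_dump(0 == "admin");
// PHP 7.x: bool(true)  -- "admin" is cast to 0, so a check like
//                         $token == 0 could be bypassed with an arbitrary string
// PHP 8.x: bool(false) -- the comparison no longer coerces the string
```

Whether such a comparison is a reportable weakness therefore depends on the exact language version you configure.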
2. Exclude Superfluous Files
A low-hanging fruit for boosting performance and preventing irrelevant findings is to exclude certain file paths from your analysis. In RIPS, you can configure a list of ignored locations and specify whether these should be excluded entirely from analysis, or whether they should remain included in the data flow analysis but not trigger any security or quality issue reports.
- Test cases in your repository should be excluded from analysis completely. Typically, this code is not reachable by outside attackers, and duplicate code definitions can confuse static analysis.
- Large libraries can be excluded from analysis if they do not introduce new user input to your code base and do not perform security-sensitive operations (file operations, SQL queries, etc.). For example, a PDF or parser library can be huge and complex to analyze but is often irrelevant when following user input into security-sensitive functions and thus can be put onto your ignore list.
- Frameworks should be handled differently. Typically, there is a lot of data flow happening between your custom code and the framework code that both call each other’s functions and shift data back and forth. Hence, we recommend keeping the framework code included in your analysis. You may, however, want to ignore code quality reports for your framework code since you likely don’t plan to address any of those issues.
Find out more about match and exclude types.
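The three exclusion strategies above could be expressed as an ignore list along the following lines. This is a hypothetical, simplified sketch for illustration only; the actual field names and exclude types depend on the RIPS interface, and the paths are placeholders for your project layout.

```yaml
# Hypothetical ignore-list sketch (not literal RIPS syntax)
ignored_locations:
  - path: "tests/"              # test cases: exclude completely
    type: exclude_all
  - path: "vendor/pdf-library/" # self-contained library without new user input:
    type: exclude_all           # exclude completely
  - path: "framework/"          # framework: keep in data flow analysis,
    type: exclude_issues        # but suppress issue reports for it
```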
3. Disable Irrelevant Issue Types
This brings us to our next fine-tuning option. RIPS can detect code issues in hundreds of different categories and, by default, tries to report only significant bugs. But which issue types are of interest to you depends on your priorities, resources, and compliance requirements. Every rule that you can disable saves hardware resources, analysis time, and review time.
- First, disable code quality types in order to focus on critical security bugs at the beginning. If you are remediating hundreds of security issues, there is no need to let RIPS detect the same hundreds of code quality issues over and over again.
- Set a maximum for reported issues per type in the general settings. When you see, for example, 1,000 Cross-Site Scripting issues but can only address a handful of them at a time, or when you fail your build for every critical issue, it makes sense to let RIPS report only the first 10 or 100 issues per category.
- Later, re-enable code quality types once the most critical security bugs are patched. The code can then be further hardened by addressing the less severe but still important code quality findings.
- Disable specific issue types that turned out not to be relevant for your developer team. For example, you might be aware of leftover debug code or dangerous feature usage but have decided not to act on it.
Find out more about all supported issue types.
4. Ignore Noisy Code
As a next step, we look at more code-specific configuration options. Disabling issue types, as described above, applies to the complete code base. However, you might want to disable findings of an issue type only for specific code parts, not in general. Let’s have a look at an example.
Vulnerable code sample with a debug function
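The following PHP sketch reconstructs the kind of code being discussed; it is hypothetical, the `$link` database handle is assumed, and the annotation in line 15 is illustrative rather than literal RIPS syntax. Line numbers are shown because the surrounding text refers to them.

```php
 1  <?php
 2  function sqlError($query) {
 3      echo "Query failed: " . $query;      // prints the raw query (XSS sink)
 4  }
 5  class User {
 6      public static function getParameter($name) { return $_GET[$name]; }
 7  }
 8
 9  $id = User::getParameter('id');           // treated as user input by default
10  $query = "SELECT name FROM users WHERE id = " . $id;  // dynamic SQL query
11  $result = mysqli_query($link, $query);    // SQL injection via $query
12  if ($result === false) {
13      sqlError($query);                     // would trigger an XSS report
14  }
15  sqlError($query); /* @rips-ignore */      // illustrative annotation on one call
```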
The code defines a debug function sqlError() that prints failed SQL queries in line 3. It is vulnerable to Cross-Site Scripting (XSS) attacks. In line 10, the code also dynamically constructs a SQL query that is susceptible to SQL injection.
Best practices for this code sample:
- Ignore calls of the function sqlError() to prevent an XSS issue from being reported for every SQL query that uses this debug function. If a SQL query contains user input, a SQL injection issue is reported anyway, which is more worrisome. It also seems that this function is only used in debug mode. To prevent the analysis of sqlError(), you can add it to the ignored code list, or you can use a code annotation to ignore only specific calls, as shown in line 15.
- Ignore return values from User::getParameter() if RIPS reports a SQL injection although you are confident that the id parameter of the user object cannot contain user input in line 9. Functions or methods can be added to the ignore list with the type return. Their code definitions are then still included in the security analysis, but all values returned from these functions are ignored during data flow analysis. As a result, no SQL injection is reported, since no user input is assigned to the SQL query.
5. Add Review Labels
Review labels are small tags that you can add to each detected issue to quickly flag its evaluation status and coordinate with others. When you then rescan your code base, RIPS remembers your applied review labels and copies them to newly detected issues that look equal or similar. This works even if your code lines change between two subsequent scans. To ignore one specific issue reported by RIPS, you can add a negative review label. These issues are then hidden in your current and subsequent scan results.
Best practices for using negative review labels:
- Not exploitable is typically used when, code-wise, the issue looks like a valid security problem, but it turns out not to be exploitable by attackers. If you still plan to fix this issue to harden your code, we recommend using the Bad practice label instead.
- Not an issue is typically used for a false positive report. It can also mean it is a valid issue, but not an issue in your specific environment.
- Duplicate is typically used for issues that are similar or identical to another issue that you are already addressing. Hiding duplicate issues helps to maintain a clean list.
In your account settings, you can choose if negatively reviewed issues should be hidden in your results (default), or displayed. You can also assign a review label to multiple issues at once by using the bulk review feature in the Filter Issues list.
In the first part of our guide, we looked at 5 basic options for fine-tuning your static code analysis results with high-level settings. We recommend starting with these more general settings first and later deep-diving into the advanced, code-specific settings introduced in our next part. We hope that this guide helped you get started with fine-tuning; our team is always happy to assist during this process.