DefectDojo Features

Below are the main sections within DefectDojo. Each is designed to allow for ease of use and simple organization of Products and their Tests. The Models page will help you understand the terminology we use below, so we recommend taking a look at that first.

Products

The following attributes describe a Product:

Name
A short name for the product, used for easy identification. This field can hold up to 300 characters.
Description
Used to fully describe the product. This field can hold up to 2000 characters.
Product Manager
Provides the ability to store who manages the product lifecycle. Useful for contacting team members. This field can hold up to 200 characters.
Technical Contact
Provides the ability to store who should be contacted in case of technical questions and/or difficulties. This field can hold up to 200 characters.
Manager
Provides the ability to store who manages the technical resources for the product. This field can hold up to 200 characters.
Date Created
Stores when the Product was first added to DefectDojo.
Date Updated
Stores when the Product was last updated.
Business Criticality
Criticality of the product.
Platform
Type of product: web, API, mobile, etc.
Lifecycle
Stage of product development.
Product Type
Used to group products together.
Authorized Users
List of users who are allowed to view and interact with the product.

Products are listed on the /product page and can be filtered by their attributes as well as sorted by their name and product type.
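
For illustration, the same kind of filtering can be done from the Django shell (./manage.py shell). This is a hedged sketch: the model and field names (dojo.models.Product, Product_Type, prod_type, business_criticality, lifecycle) are assumed from the attribute list above and may differ slightly between versions.

# Hedged sketch: list Products of a given Product Type from ./manage.py shell.
# Model and field names are assumed from the attribute list above.
from dojo.models import Product, Product_Type

web_apps = Product_Type.objects.get(name="Web Application")  # hypothetical Product Type
for product in Product.objects.filter(prod_type=web_apps).order_by("name"):
    print(product.name, product.business_criticality, product.lifecycle)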

Product Listing Page

Visual representation of a product:

View Product Page

Product with metrics:

View Product Page With Metrics Displayed

Engagements

The following attributes describe an Engagement:

Name
Helps distinguish one Engagement from another on the same product. This field can hold up to 300 characters.
Target Start Date
The projected start date for this engagement.
Target End Date
The projected end date for this engagement.
Lead
The DefectDojo user who is considered the lead for this group of tests.
Product
The Product being tested as part of this group of tests.
Active
Denotes if the Engagement is currently active or not.
Test Strategy
The URL of the testing strategy defined for this Engagement.
Threat Model
The document generated by a threat modeling session discussing the risks associated with this product at this moment in time.
Hash Code
A hash over a configurable set of fields that is used for findings deduplication.
Payload
Payload used to attack the service / application and trigger the bug / problem.
Status
Describes the current state of the Engagement. Values include In Progress, On Hold and Completed.

Engagements are listed in the /engagement page and can be filtered by their attributes as well as sorted by the product or product type.
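
As an illustration, an Engagement carrying the attributes above could also be created from the Django shell. This is a sketch only: the model and field names (dojo.models.Engagement, target_start, target_end, lead, status) are assumptions and should be checked against your version.

# Hedged sketch: create an Engagement from ./manage.py shell.
from datetime import date, timedelta
from django.contrib.auth.models import User
from dojo.models import Engagement, Product

product = Product.objects.get(name="Example Product")  # hypothetical Product
lead = User.objects.get(username="jsmith")              # hypothetical user

Engagement.objects.create(
    name="Q3 Penetration Test",
    product=product,
    lead=lead,
    target_start=date.today(),
    target_end=date.today() + timedelta(weeks=2),
    active=True,
    status="In Progress",
)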

Engagement Listing Page

Visual representation of an engagement:

View Engagement Page

Endpoints

Endpoints represent testable systems defined by IP address or Fully Qualified Domain Name.

The following attributes describe an Endpoint:

Protocol
The communication protocol such as ‘http’, ‘https’, ‘ftp’, etc.
Host
The host name or IP address; you can also include the port number. For example ‘127.0.0.1’, ‘127.0.0.1:8080’, ‘localhost’, ‘yourdomain.com’.
Path
The location of the resource; it should start with a ‘/’. For example ‘/endpoint/420/edit’.
Query
The query string; the question mark should be omitted. For example ‘group=4&team=8’.
Fragment
The fragment identifier which follows the hash mark. The hash mark should be omitted. For example ‘section-13’, ‘paragraph-2’.
Product
The Product that this endpoint should be associated with.
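
The mapping from a full URL onto these attributes follows standard URL syntax. The sketch below uses only Python's standard library to show which part of a URL lands in which Endpoint field; it does not touch any DefectDojo code.

# Illustration only: decomposing a URL into the Endpoint attributes above.
from urllib.parse import urlparse

parsed = urlparse("https://yourdomain.com:8080/endpoint/420/edit?group=4&team=8#section-13")

endpoint_fields = {
    "protocol": parsed.scheme,    # 'https'
    "host": parsed.netloc,        # 'yourdomain.com:8080' (host name, optionally with port)
    "path": parsed.path,          # '/endpoint/420/edit'
    "query": parsed.query,        # 'group=4&team=8' (question mark omitted)
    "fragment": parsed.fragment,  # 'section-13' (hash mark omitted)
}
print(endpoint_fields)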

Endpoints are listed in the /endpoints page and can be filtered by their attributes as well as sorted by the product or host.

Endpoint Listing Page

Visual representation of an endpoint:

View Endpoint Page

Visual representation of an endpoint with metrics displayed:

View Endpoint Page with metrics

Findings

Findings represent a flaw within the product being tested. The following attributes help define a Finding:

Title
A short description of the flaw (up to 511 characters).
Description
Longer, more descriptive information about the flaw.
Date
The date the flaw was discovered.
CVE
The Common Vulnerabilities and Exposures (CVE) associated with this flaw.
CVSSV3
Common Vulnerability Scoring System version 3 (CVSSv3) score associated with this flaw.
CWE
The CWE number associated with this flaw.
URL
External reference that provides more information about this flaw.
Severity
The severity level of this flaw (Critical, High, Medium, Low, Informational).
Numerical Severity
The numerical representation of the severity (S0, S1, S2, S3, S4).
Mitigation
Text describing how to best fix the flaw.
Impact
Text describing the impact this flaw has on systems, products, enterprise, etc.
Steps to Reproduce
Text describing the steps that must be followed in order to reproduce the flaw / bug.
Severity Justification
Text describing why a certain severity was associated with this flaw.
Endpoints
The hosts within the product that are susceptible to this flaw.
Endpoint Status
The status of the endpoint associated with this flaw (Vulnerable, Mitigated, …).
References
The external documentation available for this flaw.
Thread ID
Thread ID
Hash Code
A hash over a configurable set of fields that is used for findings deduplication.
Test
The test that is associated with this flaw.
Is Template
Denotes if this finding is a template and can be reused.
Active
Denotes if this flaw is active or not.
Verified
Denotes if this flaw has been manually verified by the tester.
False Positive
Denotes if this flaw has been deemed a false positive by the tester.
Duplicate
Denotes if this flaw is a duplicate of other flaws reported.
Duplicate Finding
Link to the original finding if this finding is a duplicate.
Out Of Scope
Denotes if this flaw falls outside the scope of the test and/or engagement.
Under Review
Denotes if this flaw is currently being reviewed.
Mitigated
Denotes if this flaw has been fixed, by storing the date it was fixed.
Is Mitigated
Denotes if this flaw has been fixed.
Mitigated By
Documents who has deemed this flaw as fixed.
Reporter
Documents who reported the flaw.
Reviewers
Document who reviewed the flaw.
Last Reviewed
Provides the date the flaw was last “touched” by a tester.
Last Reviewed By
Provides the person who last reviewed the flaw.
Component Name
Name of the affected component (library name, part of a system, …).
Component Version
Version of the affected component.
Found By
The name of the scanner that identified the flaw.
SonarQube Issue
The SonarQube issue associated with this finding.
Unique ID from tool
The technical vulnerability id from the source tool. Allows unique vulnerabilities to be tracked.
Defect Review Requested By
Document who requested a defect review for this flaw.
Under Defect Review
Denotes if this finding is under defect review.
Review Requested By
Document who requested a review for this finding.
Static Finding
Flaw has been detected from a Static Application Security Testing tool (SAST).
Dynamic Finding
Flaw has been detected from a Dynamic Application Security Testing tool (DAST).
Jira Creation
The date a Jira issue was created from this finding.
Jira Change
The date the linked Jira issue was last modified.
SLA Days Remaining
The number of days remaining to stay within the SLA.
Finding Meta
Custom metadata (K/V) that can be set on top of findings.
Tags
Add custom tags on top of findings (helpful for searching).
Created
The date the finding was created inside DefectDojo.
Param
Parameter used to trigger the issue (DAST).
Payload
Payload used to attack the service / application and trigger the bug / problem.
Age
The number of days since the finding was created.
Scanner confidence
Confidence level of the vulnerability, as supplied by the scanner.
Number of Occurrences
Number of occurrences in the source tool when several vulnerabilities were found and aggregated by the scanner.
Source File
Name of the source code file in which the flaw is located.
Source File Path
Filepath of the source code file in which the flaw is located.
Notes

Stores information pertinent to the flaw or its mitigation. By default there is no way to categorize notes added to Findings; an administrator can introduce a ‘note-type’ attribute that categorizes notes. To enable note-types, go to System Settings, select Note Types and add new note-types to Dojo.

Note-type

A note-type has five attributes.

  • Name
  • Description
  • is_active - This has to be true to assign the note-type to a note.
  • is_single - If true, only one note of that note-type can exist for a Finding.
  • is_mandatory - If true, a Finding has to have at least one note from the note-type in order to close it.

If note-types are enabled, the user must first select the note-type from the “Note Type” drop-down and then add the contents of the note.
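
For illustration only, a note-type with these attributes could also be created from the Django shell. The dojo.models.Note_Type model name is an assumption; in practice note-types are added through the System Settings page described above.

# Hedged sketch (./manage.py shell); model name assumed.
from dojo.models import Note_Type

Note_Type.objects.create(
    name="Remediation Evidence",
    description="Proof that the flaw was fixed",
    is_active=True,      # must be true before the type can be assigned to a note
    is_single=False,     # allow more than one note of this type per Finding
    is_mandatory=True,   # require at least one such note before the Finding can be closed
)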

Images
Image(s) / Screenshot(s) related to the flaw.

SAST specific

The following attributes apply to SAST findings, when source (start of the attack vector) and sink (end of the attack vector) information is available.

Line
Source line number of the attack vector.
Line Number
Deprecated and will be removed; use Line instead.
File Path
Identified file(s) containing the flaw.
SAST Source Object
Source object (variable, function…) of the attack vector.
SAST Sink Object
Sink object (variable, function…) of the attack vector.
SAST Source line
Source line number of the attack vector.
SAST Source File Path
Source file path of the attack vector.

Images

Finding images can now be uploaded to help with documentation and proof of vulnerability.

If you are upgrading from an older version of DefectDojo, you will have to complete the following and make sure MEDIA_ROOT and MEDIA_URL are properly configured:

Add imagekit to INSTALLED_APPS:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'polymorphic',  # provides admin templates
    'overextends',
    'django.contrib.admin',
    'django.contrib.humanize',
    'gunicorn',
    'tastypie',
    'djangobower',
    'auditlog',
    'dojo',
    'tastypie_swagger',
    'watson',
    'tagging',
    'custom_field',
    'imagekit',
)

Add r’^media/’ to LOGIN_EXEMPT_URLS:

LOGIN_EXEMPT_URLS = (
    r'^static/',
    r'^media/',
    r'^api/v1/',
    r'^ajax/v1/',
    r'^reports/cover$',
    r'^finding/image/(?P<token>[^/]+)$'
)

Then run the following commands (make sure your virtual environment is activated):

pip install django-imagekit
pip install pillow --upgrade
./manage.py makemigrations dojo
./manage.py makemigrations
./manage.py migrate

New installations will already have finding images configured.

Findings are listed on the /finding/open, /finding/closed, /finding/accepted and /finding/all pages. They can be filtered by their attributes as well as sorted by their Name, Date, Reviewed Date, Severity and Product.
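
As a rough equivalent of the /finding/open view, open findings can be queried from the Django shell. This is a sketch: the field names below (active, duplicate, false_p, out_of_scope, numerical_severity) are assumed from the attribute list above.

# Hedged sketch (./manage.py shell): list the ten most severe open findings.
from dojo.models import Finding

open_findings = (
    Finding.objects
    .filter(active=True, duplicate=False, false_p=False, out_of_scope=False)
    .order_by("numerical_severity", "-date")   # S0 (Critical) first, newest first
)
for finding in open_findings[:10]:
    print(finding.numerical_severity, finding.title, finding.test)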

Finding Listing Page

Visual representation of a Finding:

Finding View

Deduplication / Similar findings

Automatically Flag Duplicate Findings

‘De-duplication’ is a feature that, when enabled, compares findings to automatically identify duplicates. To enable de-duplication, go to System Settings and check Deduplicate findings. Dojo deduplicates findings by comparing endpoints, CWE fields, and titles. If two findings share a URL and have the same CWE or title, Dojo marks the less recent finding as a duplicate. When deduplication is enabled, a list of deduplicated findings is added to the engagement view. The following image illustrates the deduplication options at the engagement and product level:

Deduplication on product and engagement level

Similar Findings Visualization:

Similar findings list Similar findings list with a duplicate
Similar Findings
While viewing a finding, similar findings within the same product are listed along with buttons to mark one finding a duplicate of the other. Clicking the “Use as original” button on a similar finding will mark that finding as the original while marking the viewed finding as a duplicate. Clicking the “Mark as duplicate” button on a similar finding will mark that finding as a duplicate of the viewed finding. If a similar finding is already marked as a duplicate, then a “Reset duplicate status” button is shown instead which will remove the duplicate status on that finding along with marking it active again.

Metrics

DefectDojo provides a number of metrics visualizations to help with reporting and awareness, and to quickly communicate a product's or product type's security stance.

The following metric views are provided:

Product Type Metrics

This view provides graphs displaying Open Bug Count by Month, Accepted Bug Count by Month, Open Bug Count by Week, Accepted Bug Count by Week as well as tabular data on Top 10 Products by bug severity, Detail Breakdown of all reported findings, Opened Findings, Accepted Findings, Closed Findings, Trending Open Bug Count, Trending Accepted Bug Count, and Age of Issues.

Product Type Metrics
Product Type Counts

This view provides tabular data of Total Current Security Bug Count, Total Security Bugs Opened In Period, Total Security Bugs Closed In Period, Trending Total Bug Count By Month, Top 10 By Bug Severity, and Open Findings. This view works great for communication with stakeholders as it is a snapshot in time of the product.

Product Type Counts
Simple Metrics

Provides tabular data for all Product Types. The data displayed in this view is the total number of S0, S1, S2, S3, S4, Opened This Month, and Closed This Month.

Simple Metrics
Engineer Metrics

Provides graphs displaying information about a tester’s activity.

Engineer Metrics
Metrics Dashboard

Provides a full screen, auto scroll view with many metrics in graph format. This view is great for large displays or “Dashboards.”

Metrics Dashboard

Users

DefectDojo users inherit from django.contrib.auth.models.User.

A username, first name, last name, and email address can be associated with each user. Additionally, the following attributes describe the type of user they are (a brief sketch of creating a user follows the list):

Active
Designates whether this user should be treated as active. Unselect this instead of deleting accounts.
Staff status
Designates whether the user can log into this site.
Superuser status
Designates that this user has all permissions without explicitly assigning them.
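
Because DefectDojo users inherit from Django's User model, they can be created with the standard Django API, for example from ./manage.py shell. The values below are purely illustrative.

# Hedged sketch: create a user with the flags described above.
from django.contrib.auth.models import User

user = User.objects.create_user(
    username="jsmith",                 # hypothetical account
    email="jsmith@example.com",
    password="change-me",
    first_name="Jane",
    last_name="Smith",
)
user.is_active = True      # treated as active
user.is_staff = True       # can log into this site
user.is_superuser = False  # no implicit permissions
user.save()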

Calendar

The calendar view provides a look at all the engagements occurring during the month displayed. Each entry is a direct link to the Engagement view page.

Port Scans

DefectDojo has the ability to run a port scan using nmap. Scans can be configured for TCP or UDP ports as well as for a Weekly, Monthly or Quarterly frequency.

Port Scan Form

In order for the scans to kick off, dojo.management.commands.run_scan.py must run. It is easy to set up a cron job to kick these off at the appropriate frequency. Below is an example cron entry:

0 0 * * 0 /root/.virtualenvs/dojo/bin/python /root/defect-dojo/manage.py run_scan Weekly
0 0 1 * * /root/.virtualenvs/dojo/bin/python /root/defect-dojo/manage.py run_scan Monthly
0 0 1 3,6,9,12 * /root/.virtualenvs/dojo/bin/python /root/defect-dojo/manage.py run_scan Quarterly
Port Scan Form

The scan process will email the configured recipients with the results.

These scans can also be kicked off on demand by selecting the Launch Scan Now option in the view scan screen.

Port Scan Form

Notifications

Notification settings

DefectDojo can inform you of different events in a variety of ways. You can be notified about things like an upcoming engagement, when someone mentions you in a comment, a scheduled report has finished generating, and more.

The following notification methods currently exist:

  • Email
  • Slack
  • Microsoft Teams
  • Alerts within DefectDojo

You can set these notifications on a global scope (if you have administrator rights) or on a personal scope. For instance, an administrator might want notifications of all upcoming engagements sent to a certain Slack channel, whereas an individual user wants email notifications to be sent to the user’s specified email address when a report has finished generating.

In order to identify and notify you about things like upcoming engagements, DefectDojo runs scheduled tasks. These tasks are scheduled and run using Celery beat, which needs to be running for those notifications to work. Instructions on how to run Celery beat are available in the Reports section.

Benchmarks

OWASP ASVS Benchmarks

DefectDojo utilizes the OWASP ASVS Benchmarks to benchmark a product to ensure the product meets your application technical security controls. Benchmarks can be defined per the organization's policy for secure development, and multiple benchmarks can be applied to a product.

Benchmarks are available from the Product view. To view the configured benchmarks, open the right-hand drop-down menu; you will find the selection near the bottom of the menu, entitled ‘OWASP ASVS v.3.1’.

OWASP ASVS Benchmarks Menu

In the Benchmarks view for each product, the default level is ASVS Level 1. On the top right hand side the drop down can be changed to the desired ASVS level (Level 1, Level 2 or Level 3). The publish checkbox will display the ASVS score on the product page and in the future this will be applied to reporting.

OWASP ASVS Score

On the left hand side the ASVS score is displayed, along with the desired score, the % of benchmarks passed to achieve the score, and the total enabled benchmarks for that ASVS level.

Additional benchmarks can be added/updated in the Django admin site. In a future release this will be brought out to the UI.

Reports

Report Listing

DefectDojo’s reports can be generated in AsciiDoc and PDF. AsciiDoc is recommended for reports with a large number of findings.

The PDF report is generated using wkhtmltopdf via Celery and sane defaults are included in the settings.py file. This allows report generation to be asynchronous and improves the user experience.

If you are updating from an older version of DefectDojo, you will need to install wkhtmltopdf on your own. Please follow the directions for your specific OS in the wkhtmltopdf documentation.

Some operating systems are capable of installing wkhtmltopdf from their package managers:

Note

To get report email notifications, make sure you have a working email configuration in the system settings, and enable notifications for generated reports in the notification settings.

Mac:

brew install Caskroom/cask/wkhtmltopdf

Debian/Ubuntu:

sudo apt-get install wkhtmltopdf

Fedora/CentOS:

sudo yum install wkhtmltopdf

Warning

The version in the Debian/Ubuntu repositories has reduced functionality (because it is compiled without the wkhtmltopdf QT patches), such as adding outlines, headers, footers, a TOC, etc. To use these options you should install a static binary from the wkhtmltopdf site.

Additionally, DefectDojo takes advantage of python-PDFKit to interact with the wkhtmltopdf commandline interface. It is easily installed by running:

pip install pdfkit

It will also be necessary to add the path of wkhtmltopdf to your settings.py file. By default, the following entry ships with DefectDojo:

WKHTMLTOPDF_PATH = '/usr/local/bin/wkhtmltopdf'

However, you may have to update that entry to suit your installation.

Celery is included with DefectDojo and needs to be kicked off in order for reports to generate/work. In development you can run the celery process like:

celery -A dojo worker -l info --concurrency 3

In production it is recommended that the celery process be daemonized. Supervisor is also included with DefectDojo and can be set up by following the Celery documentation. A sample celeryd.conf is also available.

Celery beat should also be running; this enables DefectDojo to perform periodic checks for things like upcoming and stale engagements, and allows Celery to clean up after itself and keep your task database from getting too large. In development you can run the process like:

celery beat -A dojo -l info

In production it is recommended that the celery beat process also be daemonized. A sample celerybeatd.conf can be found here.

If you are upgrading from an older version of DefectDojo, you will have to install Celery on your own. To do this you can run:

pip install celery

If you are using virtual environments make sure your environment is activated. You can also follow the installation instructions from the Celery documentation.

Reports can be generated for:

  1. Groups of Products
  2. Individual Products
  3. Endpoints
  4. Product Types
  5. Custom Reports

Report Generation

Filtering is available on all Report Generation views to aid in focusing the report for the appropriate need.

Custom reports allow you to select specific components to be added to the report. These include:

  1. Cover Page
  2. Table of Contents
  3. WYSIWYG Content
  4. Findings List
  5. Endpoint List
  6. Page Breaks

The custom report workflow takes advantage of the same asynchronous process described above.

Slack integration

Scopes

The following scopes have to be granted.

Slack OAuth scopes

Token

The bot token has to be chosen and put in your System Settings.

Slack token

JIRA Integration

DefectDojo’s JIRA integration is bidirectional. You may push findings to JIRA and share comments. If an issue is closed in JIRA it will automatically be closed in Dojo.

NOTE: These steps will configure the necessary webhook in JIRA and add JIRA integration into DefectDojo. This isn’t sufficient by itself; you will need to configure products and findings to push to JIRA. On a product’s settings page you will need to define a:
  • Project Key (and this project must exist in JIRA)
  • JIRA Configuration (select the JIRA configuration that you create in the steps below)
  • Component (can be left blank)

Then elect (via tickbox) whether you want to ‘Push all issues’, ‘Enable engagement epic mapping’ and/or ‘Push notes’. Then click on ‘Submit’.

If creating a Finding, ensure to tick ‘Push to jira’ if desired.

Preparing Jira, Enabling the Webhook
  1. Visit https://<YOUR JIRA URL>/plugins/servlet/webhooks
  2. Click ‘Create a Webhook’
  3. For the field labeled ‘URL’ enter: https://<YOUR DOJO DOMAIN>/webhook/
  4. Under ‘Comments’ enable ‘Created’. Under Issue enable ‘Updated’.
Configurations in Dojo
  1. Navigate to the System Settings from the menu on the left side or by directly visiting <your url>/system_settings.
  2. Enable ‘Enable JIRA integration’ and click submit.
Adding JIRA to Dojo
  1. Click ‘JIRA’ from the left hand menu.
  2. Select ‘Add Configuration’ from the drop-down.
  3. If you use Jira Cloud, you will need to generate an API token for Jira to use as the password
  4. To obtain the ‘open status key’ and ‘closed status key’ visit https://<YOUR JIRA URL>/rest/api/latest/issue/<ANY VALID ISSUE KEY>/transitions?expand=transitions.fields
  5. The ‘id’ for ‘Todo’ should be filled in as the ‘open status key’
  6. The ‘id’ for ‘Done’ should be filled in as the ‘closed status key’

To obtain ‘epic name id’: If you have admin access to JIRA:

  1. visit: https://<YOUR JIRA URL>/secure/admin/ViewCustomFields.jspa
  2. Click on the cog next to ‘Epic Name’ and select view.
  3. The numeric value for ‘epic name id’ will be displayed in the URL
  4. Note: dojojira uses the same celery functionality as reports. Make sure the celery runner is set up correctly as described: http://defectdojo.readthedocs.io/en/latest/features.html#reports

Or

  1. login to JIRA
  2. visit https://yourjiraurl/rest/api/2/field and use Ctrl+F or grep to search for ‘Epic Name’; it should look something like this:

{"id":"customfield_122","key":"customfield_122","name":"Epic Name","custom":true,"orderable":true,"navigable":true,"searchable":true,"clauseNames":["cf[122]","Epic Name"],"schema":{"type":"string","custom":"com.pyxis.greenhopper.jira:gh-epic-label","customId":122}},

In the above example, 122 is the number needed.

Troubleshooting JIRA integration

JIRA actions are typically performed in the celery background process. Errors are logged as alerts/notifications, visible in the top right of the DefectDojo UI.

Issue Consolidation

DefectDojo allows users to automatically consolidate issues from multiple scanners to remove duplicates.

To enable this feature, hover over the configuration tab on the left menu and click on system settings. In system settings, click ‘Deduplicate findings’. Click ‘Submit’ at the bottom of the page.

When deduplication is enabled, Dojo will compare CWE, title, and endpoint details for all findings in a given product. If an issue is added with either the CWE or title being the same while the endpoint is also the same, Dojo marks the old issue as a duplicate.

False Positive Removal

DefectDojo allows users to tune out false positives by enabling False Positive History. This will track what engineers have labeled as false positive for a specific product and for a specific scanner. While enabled, when a tool reports the same issue that has been flagged as a false positive previously, it will automatically mark the finding as a false positive, helping to tune overly verbose security tools.

Deduplication

Deduplication is a process that allows DefectDojo to find out that a finding has already been imported.

Upon saving a finding, DefectDojo will look at the other findings in the product or the engagement (depending on the configuration) to find duplicates.

When a duplicate is found:

  • The newly imported finding takes status: inactive, duplicate
  • An “Original” link is displayed after the finding status, leading to the original finding

There are two ways to use the deduplication:

  • Deduplicate vulnerabilities in the same build/release. The vulnerabilities may be found by the same scanner (same scanner deduplication) or by different scanners (cross-scanner deduplication).
    • this helps analysis and assessment of the technical debt, especially if using many different scanners; although detecting duplicates across scanners is not trivial as it requires a certain standardization.
  • Track unique vulnerabilities across builds/releases so that DefectDojo knows, when it finds a vulnerability, whether it has seen it before.
    • this allows you to keep information attached to a given finding in a unique place: all further duplicate findings will point to the original one.

Deduplication Configuration

Global configuration

The deduplication can be activated in “System Settings” by ticking “Deduplicate findings”.

An option to delete duplicates can be found in the same menu, and the maximum number of duplicates to keep for the same finding can be configured.

Engagement configuration

When creating an engagement or later by editing the engagement, the “Deduplication within engagement only” checkbox can be ticked.

  • If activated: Findings are only deduplicated within the same engagement. Findings present in different engagements cannot be duplicates
  • Else: Findings are deduplicated across the whole product

Note that deduplication can never occur across different products.

Deduplication algorithms

The behavior of the deduplication can be configured for each parser in settings.dist.py (or settings.py after install) by configuring the DEDUPLICATION_ALGORITHM_PER_PARSER variable.

The available algorithms are:

  • DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL
    • the deduplication occurs based on finding.unique_id_from_tool which is a unique technical id existing in the source tool. Few scanners populate this field currently. If you want to use this algorithm, you may need to update the scanner code beforehand
    • The tools that populate the unique_id_from_tool field are:
      • Checkmarx Scan detailed
      • SonarQube Scan detailed
    • Advantages:
      • If your source tool has a reliable means of tracking a unique vulnerability across scans, this configuration will allow DefectDojo to use this ability
    • Drawbacks:
      • Using this algorithm will not allow cross-scanner deduplication as other tools will have a different technical id.
      • When the tool evolves, it may change the way the unique id is generated. In that case you won’t be able to recognise that findings found in previous scans are actually the same as the new findings.
  • DEDUPE_ALGO_HASH_CODE
    • the deduplication occurs based on finding.hash_code. The hash_code itself is configurable for each scanner in parameter HASHCODE_FIELDS_PER_SCANNER
  • DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL_OR_HASH_CODE
    • a finding is a duplicate of another if they have the same unique_id_from_tool OR the same hash_code
    • Allows using both
      • a technical deduplication (based on unique_id_from_tool) for a reliable same-parser deduplication
      • and a functional one (based on hash_code configured on CWE+severity+file_path for example) for cross-parser deduplication
  • DEDUPE_ALGO_LEGACY
    • This is the algorithm that was in place before per-parser configuration was possible, and it remains the default for backward compatibility reasons.
    • Legacy algorithm basically deduplicates based on:
      • For static scanner: [‘title’, ‘cwe’, ‘line’, ‘file_path’, ‘description’]
      • For dynamic scanner: [‘title’, ‘cwe’, ‘line’, ‘file_path’, ‘description’, ‘endpoints’]
    • Note that there are some subtleties that may give unexpected results. Switch dojo.specific-loggers.deduplication to debug in settings.py to get more info in case of trouble.
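
For illustration, a hedged sketch of what entries in DEDUPLICATION_ALGORITHM_PER_PARSER might look like. The DEDUPE_ALGO_* constants are defined in the same settings file; scanner names other than the two listed above are hypothetical examples.

DEDUPLICATION_ALGORITHM_PER_PARSER = {
    'Checkmarx Scan detailed': DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL,
    'SonarQube Scan detailed': DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL,
    # hypothetical entries for other parsers:
    'ZAP Scan': DEDUPE_ALGO_HASH_CODE,
    'Some SAST Scan': DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL_OR_HASH_CODE,
}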

Hash_code computation configuration

The hash_code computation can be configured for each parser using the parameter HASHCODE_FIELDS_PER_SCANNER in settings.dist.py.

The parameter HASHCODE_ALLOWED_FIELDS lists the fields from the finding table that have been tested and are known to work when used in a hash_code. Don’t hesitate to enrich this list when required (the code is generic and allows adding new fields by configuration only).

Note that endpoints isn’t a field from the finding table but rather a meta value that triggers a computation based on all the endpoints.

When populating HASHCODE_FIELDS_PER_SCANNER, please respect the order of declaration of the fields: use the same order as in HASHCODE_ALLOWED_FIELDS. That allows cross-scanner deduplication to function, because the hash_code is computed as a SHA-256 of the concatenated values of the configured fields.
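
A hedged sketch of what such a configuration might look like in settings.dist.py; the parser names and field choices here are illustrative, and the fields must be kept in the same order as in HASHCODE_ALLOWED_FIELDS.

HASHCODE_FIELDS_PER_SCANNER = {
    # fields listed in the same order as in HASHCODE_ALLOWED_FIELDS
    'ZAP Scan': ['title', 'cwe', 'severity'],
    'Checkmarx Scan': ['cwe', 'severity', 'file_path'],
}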

Tips:

  • It’s advised to use standardized fields for reliable deduplication, especially if aiming at cross-scanner deduplication. For example, title and description tend to change as tools evolve and don’t allow cross-scanner deduplication
  • Good candidates are
    • cwe or cve
    • Adding the severity will make sure the deduplication won’t be too aggressive (there are several families of XSS and SQL injection, for example, with various severities but the same cwe).
    • Adding the file_path or endpoints is advised too.
  • The parameter HASHCODE_ALLOWS_NULL_CWE will allow switching to legacy algorithm when a null cwe is found for a given finding: this is to avoid getting many duplicates when the tool fails to give a cwe while we are expecting it.

Debugging deduplication

There is a specific logger that can be activated in order to have details about the deduplication process: switch dojo.specific-loggers.deduplication to debug in settings.py.
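
A hedged sketch of that change inside the standard Django LOGGING dict in settings.py; the handler name is an assumption, so reuse whatever handlers your settings already define.

# Raise the dedicated deduplication logger to DEBUG (handler name assumed).
LOGGING['loggers']['dojo.specific-loggers.deduplication'] = {
    'handlers': ['console'],
    'level': 'DEBUG',
}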

Deduplication - APIv2 parameters

  • skip_duplicates: if true, duplicates are not inserted at all
  • close_old_findings: if true, findings that are not duplicates, that were in the previous scan of the same type (for example ZAP) for the same product (or engagement in case of “Deduplication on engagement”), and that are not present in the new scan are closed (Inactive, Verified, Mitigated)
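
A hedged example of passing these parameters when importing a scan through APIv2 with the Python requests library; the /api/v2/import-scan/ endpoint, the token authentication header and the field names are assumed to match your DefectDojo version.

# Hedged sketch: import a scan with the two APIv2 parameters described above.
import requests

with open("zap-report.xml", "rb") as report:
    response = requests.post(
        "https://defectdojo.example.com/api/v2/import-scan/",  # hypothetical host
        headers={"Authorization": "Token <YOUR_API_KEY>"},
        data={
            "engagement": 42,              # hypothetical engagement id
            "scan_type": "ZAP Scan",
            "skip_duplicates": True,       # do not insert duplicates at all
            "close_old_findings": True,    # close findings absent from this scan
        },
        files={"file": report},
    )
response.raise_for_status()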

Google Sheets Sync

With the Google Sheets sync feature, DefectDojo allows users to export all the finding details of each test into a separate Google Spreadsheet. Users can review and edit finding details via Google Sheets, and can also add new notes to findings and edit existing notes. After reviewing and updating the finding details in the Google Spreadsheet, the user can import (sync) all the changes made via the Google Spreadsheet into the DefectDojo database.

Configuration

Creating a project and a Service Account
  1. Go to the Service Accounts page.
  2. Create a new project for DefectDojo and select it.
  3. Click +CREATE SERVICE ACCOUNT, enter a name and description for the service account. You can use the default service account ID, or choose a different, unique one. When done click Create.
  4. The Service account permissions (optional) section that follows is not required. Click Continue.
  5. On the Grant users access to this service account screen, scroll down to the Create key section. Click +Create key.
  6. In the side panel that appears, select the format for your key as JSON
  7. Click Create. Your new public/private key pair is generated and downloaded to your machine.
Enabling the required APIs
  1. Go to the Google API Console.
  2. From the projects list, select the project created for DefectDojo.
  3. If the APIs & services page isn’t already open, open the console left side menu and select APIs & services, and then select Library.
  4. Google Sheets API and Google Drive API should be enabled. Click the API you want to enable. If you need help finding the API, use the search field.
  5. Click ENABLE.
Configurations in DefectDojo
  1. Click ‘Configuration’ from the left hand menu.

  2. Click ‘Google Sheets Sync’.

  3. Fill the form.

    Google Sheets Sync Configuration Page
    1. Upload the downloaded json file into the Upload Credentials file field.

    2. Drive Folder Id

      1. Create a folder inside the Google drive of the same gmail account used to create the service account.

      2. Get the client_email from the downloaded json file and share the created drive folder with client_email giving edit access.

      3. Extract the folder id from the URL and insert it as the Drive Folder Id.

        Extracting Drive Folder ID
    3. Tick the Enable Service check box. (Optional, as this has no impact on the configuration, but you must set it to true in order to use the feature. The service can be enabled or disabled at any point after the configuration using this check box)

    4. For each field in the finding table there are two related entries in the form.

      1. In the drop down, select Hide if the column needs to be hidden in the Google Sheet, else select any other option based on the length of the entry that goes under the column.
      2. If the column needs to be protected in the Google Sheet, tick the check box. Otherwise leave it unchecked.
  4. Click ‘Submit’.

The admin has the privilege to revoke DefectDojo’s access to Google Sheets and Google Drive data by simply clicking the Revoke Access button.

Using Google Sheets Sync Feature

Before a user can export a test to a Google Spreadsheet, an admin must configure Google Sheets Sync and enable the sync feature. Depending on whether a Google Spreadsheet already exists for the test, the user interface displayed will be different.

If a Google Spreadsheet does not exist for the Test:

Create Google Sheet Button

If a Google Spreadsheet is already created for the Test:

Sync Google Sheet Button

After creating a Google Spreadsheet, users can review and edit Finding details using the Google Sheet. If any change is made in the Google Sheet, users can click the Sync Google Sheet button to bring those changes into DefectDojo.

Service Level Agreement (SLA)

DefectDojo allows you to maintain your security SLAs and automatically remind teams whenever an SLA is about to be breached, or has been breached.

Simply indicate in the System Settings, for each severity, how many days teams have to remediate a finding.

SLA configuration screen

SLA notification configuration

There are 5 variables in the settings.py file that you can configure to act on the global behavior. By default, any findings across the instance that are in an Active, Verified state will be considered for notifications.

SLA_NOTIFY_ACTIVE = False
SLA_NOTIFY_ACTIVE_VERIFIED_ONLY = True
SLA_NOTIFY_WITH_JIRA_ONLY = False
SLA_NOTIFY_PRE_BREACH = 3
SLA_NOTIFY_POST_BREACH = 7

Setting both SLA_NOTIFY_ACTIVE and SLA_NOTIFY_ACTIVE_VERIFIED_ONLY to False will effectively disable SLA notifications.

You can choose to only consider findings that have a JIRA issue linked to them. If so, please set SLA_NOTIFY_WITH_JIRA_ONLY to True.

SLA_NOTIFY_PRE_BREACH is expressed in days. Whenever a finding’s “SLA countdown” (time to remediate) drops to this number, a notification is sent every day, as scheduled by the crontab in settings.py, until the day it breaches.

SLA_NOTIFY_POST_BREACH lets you define, in days, how long you want to keep being notified about findings that have breached the SLA. Past that number, notifications will cease.
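
As a worked illustration (plain Python, not DefectDojo code): with the default values above, daily notifications start when 3 days remain and stop 7 days after the breach.

SLA_NOTIFY_PRE_BREACH = 3
SLA_NOTIFY_POST_BREACH = 7

def in_notification_window(sla_days_remaining):
    # negative values mean the SLA has already been breached
    return -SLA_NOTIFY_POST_BREACH <= sla_days_remaining <= SLA_NOTIFY_PRE_BREACH

print([d for d in range(5, -10, -1) if in_notification_window(d)])
# -> [3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7]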

Warning

Be mindful of performance if you choose to have SLA notifications on non-verified findings, especially if you import a lot of findings through CI in ‘active’ state.

What notification channels for SLA notifications?

The same as usual. You will notice that an extra SLA breach option is now present on the Notification page and also in the Product view.

SLA notification checkbox

SLA notification with JIRA

You can choose to also send SLA notification as JIRA comments, if your product is configured with JIRA. You can enable it at the JIRA configuration level or at the Product level.

The Product level JIRA notification configuration takes precedence over the global JIRA notification configuration.

When is the SLA notification job run?

The default setup will trigger the SLA notification code at 7:30am on a daily basis, as defined in the settings.py file. You can of course modify this schedule to suit your context.

'compute-sla-age-and-notify': {
    'task': 'dojo.tasks.async_sla_compute_and_notify',
    'schedule': crontab(hour=7, minute=30),
}

Note

The celery containers are the ones concerned with this configuration. If you suspect things are not working as expected, make sure they have the latest version of your settings.py file.

You can of course change this default by modifying that stanza.

Launching from the CLI

You can also invoke the SLA notification function from the CLI. For example, if run from docker-compose:

$ docker-compose exec uwsgi /bin/bash -c 'python manage.py sla_notifications'