This page is a technical reference for how DMARCwise processes and normalizes DMARC reports. It may help explain discrepancies you see in the dashboard compared to the original reports.

Once received as an attachment to an email message, a DMARC report is parsed and passed through a validator.
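
For illustration only, here is a rough sketch of what that extraction and parsing step could look like. The function name and handling below are assumptions, not DMARCwise's actual implementation; aggregate reports typically arrive as plain XML, gzip-compressed XML, or ZIP attachments.

```python
# Hypothetical sketch of extracting a DMARC report from an email attachment.
import email
import gzip
import io
import zipfile
from email import policy
from xml.etree import ElementTree


def extract_report_xml(raw_message: bytes) -> ElementTree.Element:
    """Return the parsed XML root of the first DMARC report attachment found."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    for part in msg.walk():
        filename = (part.get_filename() or "").lower()
        payload = part.get_payload(decode=True)
        if not payload:
            continue
        if filename.endswith(".gz"):
            payload = gzip.decompress(payload)
        elif filename.endswith(".zip"):
            with zipfile.ZipFile(io.BytesIO(payload)) as zf:
                payload = zf.read(zf.namelist()[0])
        elif not filename.endswith(".xml"):
            continue
        return ElementTree.fromstring(payload)
    raise ValueError("no DMARC report attachment found")
```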

The validator is lax and forgives many issues that would be unacceptable under the DMARC specification. However, some fields in the XML reports are validated more strictly; otherwise the report would be very hard to use in practice. One example is the policy fields, which must contain a valid policy name.
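
As a hypothetical illustration of that stricter check (the names below are assumptions, not DMARCwise's actual code), the published policy tags could be validated against the set of policy names defined by DMARC:

```python
# Illustrative strict validation of the policy fields in policy_published.
VALID_POLICIES = {"none", "quarantine", "reject"}


def validate_policy_published(policy_published: dict) -> None:
    """Reject reports whose p/sp tags don't contain a valid policy name."""
    for tag in ("p", "sp"):
        value = policy_published.get(tag)
        if value is not None and value not in VALID_POLICIES:
            raise ValueError(f"invalid policy value for <{tag}>: {value!r}")
```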

Blatantly invalid reports are rare; when one is received, it is skipped and flagged for manual review.

To prevent reports from being discarded for minor reasons, we apply a set of normalization operations to each report. These normalizations are based on malformed reports we’ve seen in the past. When you browse a report in DMARCwise, you see the normalized data, but you can always inspect the original data in its XML form.

The following normalizations are applied (a rough code sketch follows the list):

  • Semicolons are trimmed from the policy_published policy strings. For example, reject; is treated as reject.
  • In the policy_published section, some fields sometimes contain the invalid value unknown. In these cases, the value is treated as missing (i.e. as if the tag was not present in the DMARC record).
  • If the header_from for a report record is empty, it is set to the envelope_from.
  • For DKIM auth results:
    • DKIM auth results with a result of none are discarded: a result of none means that the message was not signed, so it makes little sense to present it as a DKIM signature.
    • Similarly, DKIM auth results with an empty result or empty domain name are discarded.
    • The DKIM domain and selector are lower-cased.
  • For SPF auth results:
    • SPF auth results without a domain name are kept but the empty domain is replaced with the string N/A.
    • The SPF result is lower-cased.
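
The normalizations above can be sketched roughly as follows. This uses hypothetical dict-based structures purely for illustration; DMARCwise's real data model and code will differ.

```python
# Illustrative sketch of the normalizations listed above.
def normalize_report(report: dict) -> dict:
    pp = report.get("policy_published", {})
    for tag, value in list(pp.items()):
        if isinstance(value, str):
            value = value.rstrip(";")   # "reject;" -> "reject"
        if value == "unknown":          # invalid value -> treat as missing
            pp.pop(tag)
        else:
            pp[tag] = value

    for record in report.get("records", []):
        # An empty header_from falls back to the envelope_from.
        if not record.get("header_from"):
            record["header_from"] = record.get("envelope_from")

        # Drop DKIM results with a "none"/empty result or an empty domain;
        # lower-case the domain and selector of the ones that remain.
        record["dkim"] = [
            {
                **d,
                "domain": d["domain"].lower(),
                "selector": (d.get("selector") or "").lower(),
            }
            for d in record.get("dkim", [])
            if d.get("result") not in (None, "", "none") and d.get("domain")
        ]

        # Keep SPF results without a domain, but make the gap explicit,
        # and lower-case the SPF result.
        for s in record.get("spf", []):
            s["domain"] = s.get("domain") or "N/A"
            s["result"] = (s.get("result") or "").lower()

    return report
```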

The parsing, normalization and validation implementations are thoroughly covered by automated tests.