
Conversation

sei-vsarvepalli (Contributor):

Added RFDs to document the changes and capture an example of the SSVC 2.0.0 record, as requested by the QWG.

Note: this proposal was previously approved in a QWG chaired by Jay Jacobs and Chris Coffin around December 2024 and initially merged on January 17, 2025, but it has continued to evolve alongside SSVC itself.

@sei-vsarvepalli (Contributor Author):

Feedback from the meeting on October 2, 2025:

  1. Provide a minimalist example even in the advanced record, to make clear what minimal information a consumer needs to parse even in the "advanced record". @jayjacobs updated the CVE example.
  2. Add information on operational usage of SSVC by community users and their feedback (added to the RFD document).



## Problem Statement
Reviewer:

Can you expand on the problem that SSVC itself tries to address?

sei-vsarvepalli (Contributor Author):

SSVC is a framework for vulnerability metrics. Perhaps review https://certcc.github.io/SSVC/tutorials/ssvc_overview/ - some of these questions may belong in the SSVC GitHub? The assumption here is that you are aware of SSVC basics, I suppose.

Reviewer:

> The assumption here is that you are aware of SSVC basics, I suppose.

This feels like a bad assumption to make. I would assume most CVE consumers are not familiar with SSVC, so a conversation about its merits and how it works seems like a value-add to the CVE community. I have read the intro doc, but my question here is more CVE-specific. Could you expand on the problem facing CVE consumers and how SSVC could be used to address it?

sei-vsarvepalli (Contributor Author):

I don't think it is realistic to explain SSVC to the level at which all justification of its current usage would need to be expanded. The RFD is not a full historical document, as I read it.

Reviewer:

Could you comment on who's using it and why, maybe? I'm not trying to get a super exhaustive explanation, just something that could be helpful to the uninitiated.

sei-vsarvepalli (Contributor Author):

CISA, Bitsight, VulnCheck, and Rapid7 are current consumers of SSVC data. All of them use it for analysis and reporting of vulnerabilities.

Reviewer:

Ok, so they're using it by providing supplier decision points?

> All of them use it for analysis and reporting of vulnerabilities.

Sorry but that kinda reads like they're using it to do a thing to me and I can't parse much from it.

sei-vsarvepalli (Contributor Author):

Most of the usage that is currently public is in Coordinator and Deployer decision trees. It is being used by Coordinators like CISA to provide information on a vulnerability, and by Vulnerability Managed Services to prioritize patch management. Currently, Supplier decision trees are used privately by suppliers to decide their scheduling of patch-release priority; they could publish them, depending on their PSIRT and transparency expectations around patch creation/adoption.

The RFD will be considered successful if:
* At least one ADP (e.g., CISA, VulnCheck, CERT/CC) adopts the new structured ssvc block within one year.

* Major consumer tools (CVE Services, vuln enrichment pipelines, dashboards) can automatically parse SSVC data without special parsing logic.
Reviewer:

How is it possible to parse SSVC data without special/specific parsing logic? This is a new data structure so far as I can tell.

sei-vsarvepalli (Contributor Author):

The idea is one consistent parsing logic; if you think the sentence is unclear, please update it with a suggestion or a PR in GH. No custom logic. For example, the three metric records in https://cveawg.mitre.org/api/cve/CVE-2024-52270 need three different parsers, each with "special" parsing logic. That is what I mean by special.

Reviewer:

Can we document that parsing logic in this RFD? 👀

sei-vsarvepalli (Contributor Author):

Here is the pseudocode - we can add it, but it is likely you will have further comments, so I leave it here for now.

import json

def parse_ssvc(path):
    # 1. Load JSON
    with open(path) as f:
        data = json.load(f)

    # 2. Sort SSVC metrics by timestamp (latest first)
    metrics = sorted(data["metrics"], key=lambda m: m["timestamp"], reverse=True)
    latest_metric = metrics[0]

    # 3. Extract each decision point from the latest metric
    results = []
    for dp in latest_metric["selections"]:
        results.append({
            "namespace": dp["namespace"],
            "version": dp["version"],
            "key": dp["key"],
            "values": [v["key"] for v in dp["values"]],
            # 4. If human-friendly resources exist, collect them
            "friendly_info": dp.get("decision_point_resources"),
            # 5. Collect the outcome if present
            "outcome": dp.get("outcome", "Unspecified"),
        })

    # 6. Return the collected info for every decision point
    return results
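
A minimal usage sketch of the helper above, assuming the SSVC block has been saved locally as ssvc_data.json (the file name is illustrative):

# Hypothetical invocation of the parse_ssvc helper above
for dp_info in parse_ssvc("ssvc_data.json"):
    print(dp_info["namespace"], dp_info["key"], dp_info["values"], dp_info["outcome"])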

Reviewer:

Many thanks and indeed I do have more comments. This is great to have as a reference, but with respect to the decision points it looks like it handles arbitrary values. I know there was some discussion of decision point flexibility, but maybe it makes sense to restrict decision point values for the benefit of the CVE reader? What do you think?

Reviewer:

This all looks quite complex. Again, what sort of restrictions on trees/decision models/points are you open to? The import tree option on this calculator opens a file-upload dialog, which I would guess allows for arbitrary logic. Without documentation, arbitrary logic isn't helpful to the reader.
[image: the calculator's import-tree file-upload dialog]

sei-vsarvepalli (Contributor Author):

Jon,

For tightly constraining the CVE Records SSVC, we could restrict the namespace to the registered ones - ssvc, nist - see https://certcc.github.io/SSVC/reference/code/namespaces/#base-namespace for the full list. This means companies (e.g., Yahoo) working on their own decision trees cannot publish any customization they develop in this schema format - CERT/CC would have to adopt and control it.

For creating your own decision tree (I don't think you care, but): you can go to the SSVC Explorer to create your own tree to import, or just upload a CSV file. Our PyPI site also has examples for a "weather" decision tree that you can follow to create your own.

The Calculator spits out a lot of information for user-friendly reading and understanding. However, if you want to use Python, here is an example.

from datetime import datetime, timezone

from ssvc import selection
from ssvc.decision_tables.cisa.cisa_coordinate_dt import LATEST as decision_table

namespace = "ssvc"
decision_points = ["Exploitation"]
values = [["Public PoC"]]
timestamp = datetime.now(timezone.utc)  # UTC, matching the "Z" suffix in the serialized output
selections = []

# Walk the decision table and keep only the requested decision points and values
for dp in decision_table.decision_points.values():
    if dp.namespace == namespace and dp.name in decision_points:
        dp_index = decision_points.index(dp.name)
        selected = selection.Selection.from_decision_point(dp)
        selected.values = tuple(
            selection.MinimalDecisionPointValue(key=val.key, name=val.name)
            for val in dp.values
            if val.name in values[dp_index]
        )
        selections.append(selected)

out = selection.SelectionList(selections=selections, timestamp=timestamp)
print(out.model_dump_json(exclude_none=True, indent=4))

Output:

{
    "timestamp": "2025-10-07T19:27:28Z",
    "schemaVersion": "2.0.0",
    "selections": [
        {
            "namespace": "ssvc",
            "key": "E",
            "version": "1.1.0",
            "name": "Exploitation",
            "values": [
                {
                    "name": "Public PoC",
                    "key": "P"
                }
            ]
        }
    ]
}

Reviewer:

> For tightly constraining the CVE Records SSVC, we could restrict the namespace to the registered ones - ssvc, nist - see https://certcc.github.io/SSVC/reference/code/namespaces/#base-namespace for the full list. This means companies (e.g., Yahoo) working on their own decision trees cannot publish any customization they develop in this schema format - CERT/CC would have to adopt and control it.

Oh, that sounds great. Let's do it! I think we can add a control on the namespace value with the JSON Schema oneOf keyword 👍
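
A minimal sketch of what that could look like, assuming Python's jsonschema package (the namespace list below is illustrative; the real list would come from the SSVC namespace registry):

from jsonschema import ValidationError, validate

# Hypothetical fragment of the CVE record schema: the namespace value is
# constrained to registered namespaces via oneOf, as suggested above.
NAMESPACE_SCHEMA = {
    "oneOf": [
        {"const": "ssvc"},
        {"const": "nist"},
        # ...remaining registered namespaces from the SSVC registry
    ]
}

validate(instance="ssvc", schema=NAMESPACE_SCHEMA)  # passes: a registered namespace

try:
    validate(instance="yahoo", schema=NAMESPACE_SCHEMA)
except ValidationError as err:
    # "yahoo" is not in the registered list, so validation fails
    print("rejected unregistered namespace:", err.message)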

sei-vsarvepalli (Contributor Author):

Yes, we should chat briefly in the QWG, so that others are aware before I make an update to the schema with that big change.

Reviewer:

Cool. That change would help align with the goal of preferring structured data
#423 (comment)

@darakian commented Oct 3, 2025:

I left a number of specific comments and I know I'm coming from a point of ignorance, but I'm not sure I understand the goal(s) of SSVC. I read through this RFD and the actual PR (which got merged in record time 🎉) and there's a lot of language explaining what is changing but very little that explains the why.

Could you expand on the value to an uninitiated consumer of these records? I can see the value of knowing that an exploit module exists out in some open source tool or whatever, but could this information be encoded in such a way that it complements information we already store? The technical impact decision point in particular seems like it could be inferred from the CVSS CIA triad (all H => total).

Thoughts?

@sei-vsarvepalli (Contributor Author):

> technical impact

Can you give an example of the "information we already store" in CVE records that duplicates SSVC?

It is also clear you have some input for SSVC itself unrelated to CVE record questions, so could you please create an issue in the SSVC GitHub, perhaps with a suggestion or question.

@darakian commented Oct 6, 2025:

> Can you give an example of the "information we already store" in CVE records that duplicates SSVC?

The one that comes to mind is an advisory with a CVSS score where the CIA values are all H, e.g., total loss of confidentiality, integrity, and availability. This would also imply a technical impact rating of total, correct?

@sei-vsarvepalli (Contributor Author):

> Can you give an example of the "information we already store" in CVE records that duplicates SSVC?

> The one that comes to mind is an advisory with a CVSS score where the CIA values are all H, e.g., total loss of confidentiality, integrity, and availability. This would also imply a technical impact rating of total, correct?

I understand why it might seem that certain CVSS vector elements, like all CIA values being set to H, could imply an SSVC outcome such as "Technical Impact = Total." However, "implied" does not mean "assessed." In SSVC, each decision point (such as Technical Impact, Exploitation, or Mission Impact) is explicitly and independently evaluated based on the evidence and the organization's (CNA or ADP) policy framework. The assessment is not necessarily derived or inferred from other scoring systems like CVSS. Technical Impact is defined at https://certcc.github.io/SSVC/reference/decision_points/technical_impact/, which addresses this very specific example.

@darakian commented Oct 6, 2025:

Could it ever be the case that one would come to the conclusion that an SSVC technical impact should be total and simultaneously come to the conclusion that a CVSS impact would not be all H?

@sei-vsarvepalli (Contributor Author):

> Could it ever be the case that one would come to the conclusion that an SSVC technical impact should be total and simultaneously come to the conclusion that a CVSS impact would not be all H?

Please read https://www.bitsight.com/blog/do-we-need-yet-another-vulnerability-scoring-system-ssvc-thats-yass - they talk about exactly this question. However, even if, as you say, the score is implied, the information is still not duplicative. There are also other Technical Impact values - it seems nonsensical to walk through every possible combination of CIA values against Technical Impact, as they are not the same. Hopefully that part is clear.

> For technical impact the vast majority of the vulnerabilities with "total" impact have "high" impact on Confidentiality, Integrity and Availability (88%), though it's not universally true.

You can also read the recent paper, which may help you understand current SSVC metrics: https://arxiv.org/pdf/2508.13644v1

@darakian commented Oct 6, 2025:

I feel like I'm more confused now. Your first link states

> For technical impact the vast majority of the vulnerabilities with "total" impact have "high" impact on Confidentiality, Integrity and Availability (88%), though it's not universally true.

But then it doesn't get into why this may be the case. Perhaps the remaining 12% were scored in error? There does seem to be a very high correlation, so maybe these are measuring the same thing, and a future CVE spec could encode the underlying concept more directly.

I quite liked the paper, and it does seem to indicate that there's a difference in observable scores, but it also seems like it's a real downer on the metrics, period. The conclusion:

> 8 Conclusions
>
> This paper presents the first large-scale, empirical evaluation of four prominent vulnerability scoring systems—CVSS, EPSS, SSVC, and the Exploitability Index—using a real-world dataset of 600 vulnerabilities from Microsoft's Patch Tuesday disclosures. Our study was designed to fill a critical gap left by prior work, which has been largely qualitative, by providing quantitative evidence of how these systems perform in an operational context. The findings demonstrate considerable and systemic disagreement among the systems, which exhibit little to no correlation or categorical agreement when scoring the same vulnerabilities. We found that all four systems produce overly broad priority groups that complicate triage efforts and that predictive systems like EPSS often fail to flag known exploited vulnerabilities ahead of time, with fewer than 20% of CISA KEV CVEs receiving a high-confidence score before exploitation was public.
>
> The central implication of this research is that these widely used scoring systems are not interchangeable and their conflicting guidance reveals a deeper, systemic issue: a lack of a shared conceptual model of risk across the vulnerability management ecosystem. The observed divergence is a direct result of each system's unique design goals—measuring inherent severity versus predicting threat likelihood versus recommending a specific action. Given these findings, we caution practitioners against relying on any one system as the sole basis for prioritization; scores should be treated as advisory inputs to a broader, context-aware process. Ultimately, our study highlights an urgent need for the research community to develop more transparent, interpretable, and task-specific frameworks that are empirically grounded and better aligned with the practical realities of cybersecurity operations.

I agree with it, but I feel like it's maybe not making the case for SSVC. Fun note: I went to grad school with one of the authors, so I'll reach out and see if I can get more of their perspective directly.
