Updated with RFD as requested by @ccoffin #460
Conversation
Feedback from meeting on October 2 2025:
Note: this proposal was previously approved in a QWG chaired by Jay Jacobs and Chris Coffin around December 2024 and initially merged on January 17, 2025, but it has continued to evolve alongside SSVC itself.

## Problem Statement
Can you expand on the problem that SSVC itself tries to address?
SSVC is a framework for vulnerability metrics. Perhaps review https://certcc.github.io/SSVC/tutorials/ssvc_overview/; some of these questions may be better raised in the SSVC GitHub. The assumption here is that you are aware of SSVC basics, I suppose.
> The assumption here is you are aware of SSVC basics I suppose.
This feels like a bad assumption to make. I would assume most CVE consumers are not familiar with SSVC, so a conversation about its merits and how it works seems like a value add to the CVE community. I have read the intro doc, but my question here is more CVE specific. Could you expand on the problem facing CVE consumers and how SSVC could be used to address it?
I don't think it is realistic to explain SSVC to the level at which every justification of its current usage would need to be expanded. As I read it, the RFD is not meant to be a full historical document.
Could you comment on who is using it and why, maybe? I'm not trying to get an exhaustive explanation, just something that could be helpful to the uninitiated.
CISA, Bitsight, VulnCheck, and Rapid7 are current consumers of SSVC data. All of them use it for analysis and reporting of vulnerabilities.
Ok, so they're using it by providing supplier decision points?

> All these are using it for analysis and reporting of vulnerabilities.

Sorry, but to me that reads like "they're using it to do a thing" and I can't parse much from it.
Most of the usage that is currently public is in Coordinator and Deployer decision trees. It is used by Coordinators like CISA to provide information on a vulnerability, and by managed vulnerability services to prioritize patch management. Currently, Supplier decision trees are used privately by suppliers to decide the scheduling priority of patch releases; they could publish them, depending on their PSIRT and their transparency expectations around patch creation/adoption.
RFD will be considered successful if:

* At least one ADP (e.g., CISA, VulnCheck, CERT/CC) adopts the new structured ssvc block within one year.
* Major consumer tools (CVE Services, vuln enrichment pipelines, dashboards) can automatically parse SSVC data without special parsing logic.
How is it possible to parse SSVC data without special/specific parsing logic? This is a new data structure so far as I can tell.
The idea is one consistent parsing logic, with no custom logic per record. If you think the sentence is unclear, please suggest an update or open a PR in GitHub. For example, the three metric records in https://cveawg.mitre.org/api/cve/CVE-2024-52270 each need a different parser with its own "special" parsing logic. That is what I mean by special.
Can we document that parsing logic in this RFD? 👀
Here is the pseudocode; we can add it to the RFD, but it is likely you will have further comments, so I leave it here for now.

```python
import json

def parse_ssvc(path):
    # 1. Load JSON
    with open(path) as f:
        data = json.load(f)

    # 2. Sort SSVC metrics by timestamp (latest first)
    metrics = sorted(data["metrics"], key=lambda m: m["timestamp"], reverse=True)
    latest_metric = metrics[0]

    # 3. Extract each decision point selection from the latest metric
    results = []
    for dp in latest_metric["selections"]:
        results.append({
            "namespace": dp["namespace"],
            "version": dp["version"],
            "key": dp["key"],
            "values": [v["key"] for v in dp["values"]],
            # 4. Human-friendly resources, if present
            "friendly_info": dp.get("decision_point_resources"),
            # 5. Outcome, if present
            "outcome": dp.get("outcome", "Unspecified"),
        })

    # 6. Return the collected info
    return results
```
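For concreteness, a minimal input shaped the way the parsing logic above expects might look like this (the field values are illustrative, borrowed from the Exploitation example later in this thread, not from a real record):

```json
{
  "metrics": [
    {
      "timestamp": "2025-10-07T19:27:28Z",
      "selections": [
        {
          "namespace": "ssvc",
          "version": "1.1.0",
          "key": "E",
          "values": [{ "key": "P", "name": "Public PoC" }]
        }
      ]
    }
  ]
}
```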
Many thanks and indeed I do have more comments. This is great to have as a reference, but with respect to the decision points it looks like it handles arbitrary values. I know there was some discussion of decision point flexibility, but maybe it makes sense to restrict decision point values for the benefit of the CVE reader? What do you think?
Jon,

For tightly constraining the SSVC in CVE Records, we could restrict the namespace to the registered ones (ssvc, nist; see https://certcc.github.io/SSVC/reference/code/namespaces/#base-namespace for the full list). This means companies (e.g., Yahoo) working on their own decision trees could not publish any customization they develop in this schema format; CERT/CC would have to adopt and control it.

For creating your own decision tree (I don't think you care, but): you can go to the SSVC Explorer to create your own tree to import, or just upload a CSV file. Our PyPI package also has examples for a "weather" decision tree that you can follow to create your own.

The Calculator emits a lot of information for user-friendly reading and understanding. However, if you want to use Python, here is an example.
```python
from datetime import datetime, timezone

from ssvc.decision_tables.cisa.cisa_coordinate_dt import LATEST as decision_table
from ssvc import selection

namespace = "ssvc"
decision_points = ["Exploitation"]
values = [["Public PoC"]]
timestamp = datetime.now(timezone.utc)

selections = []
for dp in decision_table.decision_points.values():
    if dp.namespace == namespace and dp.name in decision_points:
        dp_index = decision_points.index(dp.name)
        selected = selection.Selection.from_decision_point(dp)
        selected.values = tuple(
            selection.MinimalDecisionPointValue(key=val.key, name=val.name)
            for val in dp.values
            if val.name in values[dp_index]
        )
        selections.append(selected)

out = selection.SelectionList(selections=selections, timestamp=timestamp)
print(out.model_dump_json(exclude_none=True, indent=4))
```
Output:

```json
{
    "timestamp": "2025-10-07T19:27:28Z",
    "schemaVersion": "2.0.0",
    "selections": [
        {
            "namespace": "ssvc",
            "key": "E",
            "version": "1.1.0",
            "name": "Exploitation",
            "values": [
                {
                    "name": "Public PoC",
                    "key": "P"
                }
            ]
        }
    ]
}
```
> For tightly constraining the CVE Records SSVC we could restrict the namespace to be the registered ones - ssvc, nist - see https://certcc.github.io/SSVC/reference/code/namespaces/#base-namespace for the full list. This means companies (e.g., Yahoo) working on their own decision trees cannot publish any customization they develop in this schema format - CERT/CC will have to adopt and control it.

Oh, that sounds great. Let's do it! I think we add a control on the namespace value with the JSON Schema `oneOf` keyword 👍
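A sketch of what that constraint might look like in the record schema (the property placement and the namespace list are illustrative; the registered-namespace list linked above would be the source of truth):

```json
{
  "properties": {
    "namespace": {
      "oneOf": [
        { "const": "ssvc" },
        { "const": "nist" }
      ]
    }
  }
}
```

A plain `"enum": ["ssvc", "nist"]` would be equivalent and slightly terser; `oneOf` of `const` values leaves room for per-namespace descriptions later.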
Yes, we should chat briefly in the QWG, so that others are aware before I make that big a change to the schema.
Cool. That change would help align with the goal of preferring structured data: #423 (comment)
I left a number of specific comments, and I know I'm coming from a point of ignorance, but I'm not sure I understand the goal(s) of SSVC. I read through this RFD and the actual PR (which got merged in record time 🎉), and there's a lot of language explaining what is changing but very little that explains the why. Could you expand on the value to an uninitiated consumer of these records? I can see the value of knowing that an exploit module exists out in some open source tool or whatever, but could this information be encoded in such a way that it complements information we already store? The Technical Impact decision point in particular seems like it could be inferred from the CVSS CIA triad (all H => total). Thoughts?
Can you give an example of the "information we already store" in CVE records that duplicates SSVC? It is also clear you have some input for SSVC itself unrelated to CVE record questions, so could you please create an issue in the SSVC GitHub with a suggestion or question?
The one that comes to mind is an advisory with a CVSS score where the CIA values are all `High`.
I understand why it might seem that certain CVSS vector elements, like all CIA values being set to `High`, would imply the SSVC Technical Impact, but they are not equivalent.
Could it ever be the case that one would come to the conclusion that an SSVC Technical Impact should be `total` when the CIA values are not all `High`?
Please read https://www.bitsight.com/blog/do-we-need-yet-another-vulnerability-scoring-system-ssvc-thats-yass - they talk about exactly this question. However, even if, as you say, the score is implied, the information is still not duplicative. There are also other Technical Impact values; it seems nonsensical to map every possible combination of CIA values to a Technical Impact, as they are not the same thing. Hopefully that part is clear.

You can also read the recent paper that may help you understand current SSVC metrics: https://arxiv.org/pdf/2508.13644v1
I feel like I'm more confused now. Your first link states

But then doesn't get into why this may be the case. Perhaps the remaining 12% were scored in error? It does seem like there's a very high correlation, so maybe these are measuring the same thing, and a future CVE spec could encode the underlying concept more directly. I quite liked the paper, and it does seem to indicate that there's a difference in observable scores, but it also seems like it's a real downer on the metrics, period. The conclusion

I agree with it, but I feel like it's maybe not making the case for SSVC. Fun note: I went to grad school with one of the authors, so I'll reach out and see if I can get more of their perspective directly.
Added RFDs to document the changes and capture an example of the SSVC 2.0.0 record, as requested by the QWG.