
BIP-360: QuBit - Pay to Quantum Resistant Hash #1670


Open
wants to merge 66 commits into master

Conversation

cryptoquick

This has spent several months gathering feedback from the mailing list and from other advisors, and is hopefully now polished enough to submit upstream.

Let me know if you have any questions or feedback, and of course feel free to submit suggestions.

Thank you for your time.

@cryptoquick marked this pull request as draft September 27, 2024 18:18
Member

@jonatack left a comment


Interesting (the question of resistance to quantum computing may have resurged lately with the publication of https://scottaaronson.blog/?p=8329, see also https://x.com/n1ckler/status/1839215426091249778).

@cryptoquick force-pushed the p2qrh branch 2 times, most recently from b6ed2c3 to d6d15ad on September 28, 2024 18:01
@jonatack
Member

jonatack commented Oct 1, 2024

@cryptoquick Can you begin to write up the sections currently marked as TBD, along with a backwards compatibility section (to describe incompatibilities and their severity, and to suggest mitigations where applicable/relevant)? We've begun to reserve a range of BIP numbers for this topic, pending continued progress here.

@jonatack added the "PR Author action required" label (needs updates, has unaddressed review comments, or is otherwise waiting for PR author) Oct 9, 2024
@jonatack
Member

@cryptoquick ping for an update here. Have you seen https://groups.google.com/g/bitcoindev/c/p8xz08YTvkw / https://github.com/chucrut/bips/blob/master/bip-xxxx.md? It may be interesting to review each other's work and possibly collaborate.

@conduition

Hey @EthanHeilman, I wanted to cross-link this post on delving. Adding a new tapleaf version with dynamically endorsed script leaves may give us a way to start migrating people early, even before ML-DSA and SLH-DSA opcodes are defined, but still allow the use of those opcodes once they're spec'd out and deployed.

That said, if we can package those opcodes together alongside BIP 360, I still think that'd be a better option. It will lead to less complexity and confusion overall.

@EthanHeilman
Contributor

@murchandamus We are putting up the PQ signature BIP soon. Would you rather it be part of this PR or a new PR?

Remove dashes in BIP numbers and change to SegWit version 2
@leviwinks

I have a suggestion, @EthanHeilman:

- Witness program calculated as SHA256 of binary encoding of PI
@murchandamus
Contributor

@murchandamus We are putting up the PQ signature BIP soon. Would you rather it be part of this PR or a new PR?

Hey @EthanHeilman and @cryptoquick, given the number of comments this PR already has, I think it would be clearer to have a separate PR for the companion BIP.

@jonatack
Member

Agree on a new BIP and keeping them focused. A range of BIP numbers was reserved for a series on this topic.

Adding PQ signatures via a tapleaf version increase does not introduce any new opcodes and allows previously written tapscript programs to be used with PQ signatures
by simply using the new tapleaf version. Instead of developers explicitly specifying the intended signature algorithm through an opcode, the algorithm
to use must be indicated within the public key or public key hash<ref>'''Why not have CHECKSIG infer the algorithm based on signature size?''' Each of the three signature algorithms, Schnorr, ML-DSA, and SLH-DSA, have unique signature sizes. The problem with using signature size to infer algorithm is that spender specifies the signature. This would allow a public key which was intended to be verified by Schnorr to be verified using ML-DSA as the spender specified a ML-DSA signature. Signature algorithms are often not secure if you can mix and match public key and signature across algorithms.</ref>.
The disadvantage of this approach is that it requires a new tapleaf version each time we want to add a new signature algorithm.

@conduition Aug 1, 2025


You can add new signature algos in the future as a soft fork without a new tapscript version by coding an "always succeed" path in the new tapscript version's OP_CHECKSIG implementation.

For example, let's say the new multi-algo version of OP_CHECKSIG chooses signature algo based on a version byte header in the pubkey. 0x00 for Schnorr, 0x01 for ML-DSA, 0x02 for SLH-DSA. Define any other public-key version byte as being an "auto-succeed" sigalg type. Adding a new algorithm in the future is as easy as redefining one such "sigalg" version.
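For illustration, here is a minimal sketch of what that dispatch could look like, assuming a hypothetical 1-byte sigalg prefix on the public key. The byte values, helper names, and types are assumptions made for the example, not anything BIP 360 specifies:

```cpp
#include <cstdint>
#include <vector>

using Bytes = std::vector<uint8_t>;

// Hypothetical verifiers; real code would call libsecp256k1 for Schnorr and a
// FIPS 204 / FIPS 205 implementation for ML-DSA / SLH-DSA.
bool VerifySchnorr(const Bytes& pubkey, const Bytes& sig, const Bytes& sighash);
bool VerifyMlDsa(const Bytes& pubkey, const Bytes& sig, const Bytes& sighash);
bool VerifySlhDsa(const Bytes& pubkey, const Bytes& sig, const Bytes& sighash);

// Multi-algorithm OP_CHECKSIG sketch: dispatch on a version byte carried in
// the public key, and auto-succeed on unknown versions so future algorithms
// can be added by soft fork without yet another tapleaf version.
bool CheckSigMultiAlgo(const Bytes& pubkey, const Bytes& sig, const Bytes& sighash)
{
    if (pubkey.empty()) return false;                        // malformed key: fail
    switch (pubkey[0]) {
        case 0x00: return VerifySchnorr(pubkey, sig, sighash);
        case 0x01: return VerifyMlDsa(pubkey, sig, sighash);
        case 0x02: return VerifySlhDsa(pubkey, sig, sighash);
        default:   return true;                              // reserved sigalg version:
                                                             // auto-succeed for upgradability
    }
}
```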

Contributor


You can add new signature algos in the future as a soft fork without a new tapscript version by coding an "always succeed" path in the new tapscript version's OP_CHECKSIG implementation.

This works as long as the signatures are smaller than the max stack element size of 520 bytes. Unfortunately, both SLH-DSA and ML-DSA signatures are well over the max stack element size.

The precedence, at a rough level, works like:

  1. IF Witness version not recognized --> return SUCCESS
  2. IF Witness version == 1 and tapleaf version not recognized --> return SUCCESS
  3. IF tapscript contains OP_SUCCESSx opcode --> return SUCCESS
  4. IF stack item size > MAX_SCRIPT_ELEMENT_SIZE in witness stack --> return FAIL
  5. Execute tapscript on witness stack, if OP_CHECKSIG has pubkey of size != 32 or 0 --> return SUCCESS

We use OP_SUCCESSx for new opcodes, but if we wanted to repurpose OP_CHECKSIG we would need to use a new tapleaf version or a new witness version.
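Restating that ordering as a rough sketch (a paraphrase with pre-digested inputs, not the actual Bitcoin Core control flow, which is spread across several functions in interpreter.cpp); the point is that the size check in step 4 runs before the unknown-pubkey success in step 5, so oversized PQ material cannot ride on the auto-succeed path:

```cpp
#include <cstddef>

constexpr size_t MAX_SCRIPT_ELEMENT_SIZE = 520;

enum class SpendCheck { SUCCESS, FAIL, EXECUTE_TAPSCRIPT };

// Rough paraphrase of the precedence list above, with pre-digested inputs
// instead of real script structures.
SpendCheck ClassifySpend(unsigned witness_version,
                         bool tapleaf_version_known,
                         bool script_has_op_successx,
                         size_t largest_witness_stack_element)
{
    if (witness_version > 1)
        return SpendCheck::SUCCESS;                    // 1. unknown witness version
    if (witness_version == 1 && !tapleaf_version_known)
        return SpendCheck::SUCCESS;                    // 2. unknown tapleaf version
    if (script_has_op_successx)
        return SpendCheck::SUCCESS;                    // 3. OP_SUCCESSx in tapscript
    if (largest_witness_stack_element > MAX_SCRIPT_ELEMENT_SIZE)
        return SpendCheck::FAIL;                       // 4. oversized stack element
    return SpendCheck::EXECUTE_TAPSCRIPT;              // 5. run the script; unknown
                                                       //    pubkey types then succeed
}
```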


Sorry maybe I should clarify. I'm aware of the script size limits and their effects. I'm saying that, once we add a new tapscript version, we get an opportunity to redefine how OP_CHECKSIG works, so we can add an "always succeed" path for pubkeys with an unrecognized format (e.g. sigalg version 0x03 and up).

Then, if/when we want to add new signature algos in the future (such as SQIsign), we don't need a third newer tapscript version.

So the statement "it requires a new tapleaf version each time we want to add a new signature algorithm" is not entirely correct.

Comment on lines +328 to +329
Both approaches must raise the stack element size limit. In the OP_SUCCESSx case, the increased size limit would only be effect for transaction outputs
that use of the new opcodes. Otherwise this stack element size limit increase would be a soft fork. If the tapleaf version is used, then the stack

@conduition Aug 1, 2025


Typo fix:

Suggested change:

- Both approaches must raise the stack element size limit. In the OP_SUCCESSx case, the increased size limit would only be effect for transaction outputs
- that use of the new opcodes. Otherwise this stack element size limit increase would be a soft fork. If the tapleaf version is used, then the stack
+ Both approaches must raise the stack element size limit. In the OP_SUCCESSx case, the increased size limit would only have effect for transaction outputs
+ that use the new opcodes. Otherwise this stack element size limit increase would be a hard fork. If the tapleaf version is used, then the stack

This complexity is one more reason to prefer the new tapscript version approach, IMO.

Contributor


A new tapleaf version or witness version would require maintaining two versions of tapscript.

  1. This would be messier in the Bitcoin Core code base and more likely to introduce an accidental hard fork or other bug. It wouldn't be terrible, but all things being equal, we should choose the simpler option.
  2. It would always require developers to care about tapleaf versions for opcode features. Loss of funds would result if the wrong version is used.

My personal take is that we should only use tapleaf versions for major rewrites of Bitcoin Script. For instance, GSR would be a great fit for a new tapleaf version. It would likely have its own interpreter.cpp file. Developers aren't going to confuse GSR script with tapscript.

Comment on lines +375 to +376
To prevent OP_DUP from creating an 8 MB stack by duplicating stack elements larger than 520 bytes we define OP_DUP to fail on stack
elements larger than 520 bytes. Note this change to OP_DUP is not consensus critical and does not require any sort of fork. This is


This would kill the classic pattern of OP_DUP OP_SHA256 <hash> OP_EQUALVERIFY OP_CHECKSIG for scripts using PQ signatures.

Could we not instead limit the total stack size to 520kb more explicitly?

Contributor


Does anyone do OP_DUP on signatures? This wouldn't break for PQ public keys.

Could we not instead limit the total stack size to 520kb more explicitly?

I personally would prefer that the limitation was expressed this way, but that is likely to be a highly controversial soft fork that requires careful consideration of performance implications.

If you think such a soft fork can get activated, do it and we will use it in BIP 360. I worry that including this change in BIP 360 will reduce the chances of BIP 360 activating to almost zero.


@conduition Aug 1, 2025


Sorry, I posted this comment before I read the section where you talked about compressing ML-DSA pubkeys using a hash, and conjoining the ML-DSA pubkey and signature together. Please ignore.

public keys in excess of 520 bytes. For instance:

* ML-DSA public keys are 1,312 bytes and signatures are 2,420 bytes
* SLH-DSA public keys are 32 bytes and signatures are 7,856 bytes


I'm sure you're aware already, but by tuning SLH-DSA parameters down we can get signatures of 4 kilobytes or less, about on-par with ML-DSA, while still being usable securely for hundreds of millions of signatures, far more than any bitcoin key will ever need to sign. We can condense signatures even more using clever compression by the signer.

I think this would go a long way to making SLH-DSA more practical as an option. ML-DSA's main advantage then would not be its signature size, but its faster signing and verification times.
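For context on where those numbers come from, the SPHINCS+/SLH-DSA signature size is a direct function of the parameter set. A small sketch of the size formula, checked against the standard 128s parameters (a tuned set as discussed above would plug different h, d, a, k values into the same formula):

```cpp
#include <cstddef>
#include <cstdio>

// SPHINCS+/SLH-DSA signature size in bytes: (1 + k*(a+1) + h + d*len) * n,
// where n = hash output size, h = total hypertree height, d = number of
// layers, a = log2 of FORS leaves per tree, k = number of FORS trees, and
// len = number of WOTS+ chains (35 for n = 16, w = 16).
size_t slhdsa_sig_bytes(size_t n, size_t h, size_t d, size_t a, size_t k, size_t len)
{
    return (1 + k * (a + 1) + h + d * len) * n;
}

int main()
{
    // Standard SLH-DSA-128s parameters: n=16, h=63, d=7, a=12, k=14, len=35.
    std::printf("128s signature: %zu bytes\n",
                slhdsa_sig_bytes(16, 63, 7, 12, 14, 35));   // prints 7856
    return 0;
}
```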

Contributor


I've looked at some of the XMSS schemes that provide very tunable numbers of signatures. It is a neat idea.

So far we have not included this in the BIP because of the design rationale of "Use standardized post-quantum signature algorithms." This is so we can benefit from all the other research, hardware rollouts and software support.

hundreds of millions of signatures, far more than any bitcoin key will ever need to sign

Where do you draw the line here? A Lightning Network channel would in theory use millions of signatures for one public key, but it probably should be using ML-DSA. I don't like having a special rule for one signature scheme, although hundreds of millions of signatures is unlikely to ever happen. But why not 1 million signatures, or 10,000 signatures? What's the right number?


Where do you draw the line here

Great question. There are various rationales you could go by, but I would frame it like this:

  • We should pick some duration X, denominated in "years of repeated signing" (YORS).
  • Assume a wallet is somehow tricked into repeatedly signing random messages for an adversary using an SLH-DSA key.
  • Assume only a single benchmarked CPU is used to produce the signatures, and assume zero latency between victim and attacker.
  • After X years of repeated signing, the public key should still maintain at least $2^{128}$ security against the attacker forging any signatures.

I don't know what the magic number X is there. Realistically I don't see any wallet ever signing data continuously for more than a few years, but maybe others would prefer stronger guarantees. Anyway, this is a number we can more easily debate about.
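As a back-of-the-envelope way to turn a chosen X into a raw signature count (the sustained signing rate below is a placeholder assumption, not a benchmark; the candidate values of X are the ones suggested just below):

```cpp
#include <cstdio>

int main()
{
    // Placeholder assumption: one SLH-DSA signature per second, sustained on a
    // single CPU. A real bound should come from benchmarking an actual
    // implementation and parameter set.
    const double sigs_per_second = 1.0;
    const double seconds_per_year = 365.25 * 24.0 * 3600.0;   // ~31.6 million

    const double candidate_years[] = {30.0, 45.0, 70.0};
    for (double years : candidate_years) {
        const double total = sigs_per_second * seconds_per_year * years;
        std::printf("X = %2.0f YORS -> ~%.1e signatures\n", years, total);
    }
    return 0;
}
```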

Some suggestions:

  • Maybe X = 30 YORS to match human reproductive cycles - this is roughly the global average age of first childbearing.
  • Maybe X = 45 YORS to match the length of an average human's working life - Keys last your entire career.
  • Maybe X = 70 YORS to match an average human lifetime. Keys live as long as we do.

Before we pin this down, we should have a working SPHINCS implementation we can benchmark against. Then we can pin down one or more parameter sets to standardize based on its performance.


I was having trouble using the official parameter exploration script, so I made a faster/easier ported version in python, if anyone is curious: https://gist.github.com/conduition/469725009397c08a2d40fb87c8ca7baa


To prevent OP_DUP from creating an 8 MB stack by duplicating stack elements larger than 520 bytes we define OP_DUP to fail on stack
elements larger than 520 bytes. Note this change to OP_DUP is not consensus critical and does not require any sort of fork. This is
because currently there is no way to get a stack element larger than 520 bytes onto the stack so triggering this rule is currently

@conduition Aug 1, 2025


Don't forget about OP_OVER, OP_2OVER, OP_2DUP, OP_3DUP, OP_PICK, OP_IFDUP, and OP_TUCK which all copy stack items.

Contributor


What would be the impact here?


If you want to impose new size limits on stack item duplication, then we should extend the BIP's wording to cover not just OP_DUP but also any opcode which copies stack items. Here's my suggested wording:

To prevent OP_DUP and other opcodes from creating an 8 MB stack by duplicating stack elements larger than 520 bytes we modify all opcodes to fail if copying any stack elements larger than 520 bytes. Note this change is not consensus critical and does not require any sort of fork.
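For illustration, one shape such a shared guard could take in an interpreter. The constant matches the existing 520-byte element limit, but the helper itself is an assumption for the example, not proposed BIP text:

```cpp
#include <cstddef>
#include <vector>

constexpr size_t MAX_SCRIPT_ELEMENT_SIZE = 520;

using StackElem = std::vector<unsigned char>;

// Shared guard for every copying opcode (OP_DUP, OP_2DUP, OP_3DUP, OP_OVER,
// OP_2OVER, OP_IFDUP, OP_TUCK, OP_PICK): refuse to duplicate any element
// larger than the legacy per-element limit, so oversized PQ signatures and
// keys cannot be multiplied into a huge stack.
bool CopyStackElement(std::vector<StackElem>& stack, size_t depth_from_top)
{
    if (depth_from_top >= stack.size()) return false;                // stack underflow
    const StackElem elem = stack[stack.size() - 1 - depth_from_top]; // copy by value
    if (elem.size() > MAX_SCRIPT_ELEMENT_SIZE) return false;         // proposed failure rule
    stack.push_back(elem);
    return true;
}
```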

Contributor


Missed these! Thanks

Comment on lines +680 to +682
Commit-reveal schemes can only be spent from and to outputs that are not vulnerable to long-exposure quantum attacks, such as
P2PKH, P2SK, P2WPKH, etc... To use tapscript outputs with this system either a soft fork could disable the key path spend of P2TR outputs
or P2QRH could be used here as it does not have a key path spend and thus is not vulnerable to long-exposure quantum attacks.


Recently on the mailing list, we've had discussions about recovering coins from exposed pubkeys by using a BIP32 xpriv commit/reveal protocol. So we can rescue coins that are vulnerable to long-exposure attacks. It just requires a soft fork to disable regular EC spending without defined commit/reveal steps.
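Purely as a sketch of the shape such a commit/reveal scheme could take (the phases, fields, and names below are illustrative guesses, not the mailing-list design itself):

```cpp
#include <array>
#include <cstdint>
#include <vector>

using Hash256 = std::array<uint8_t, 32>;

// Phase 1: before any quantum-vulnerable pubkey is revealed, publish only a
// hash commitment binding the intended spend to knowledge of the BIP32 secret.
struct CommitPhase {
    Hash256 commitment;   // e.g. H(outpoint || spending tx || seed-derived proof)
};

// Phase 2: once the commitment is buried under enough blocks, reveal the EC
// pubkey and signature together with the committed data; under the
// hypothetical soft fork, nodes would reject EC spends that lack a matching,
// sufficiently aged commitment.
struct RevealPhase {
    std::vector<uint8_t> ec_pubkey;
    std::vector<uint8_t> ec_signature;
    std::vector<uint8_t> committed_preimage;   // must hash to CommitPhase::commitment
};
```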
