Separate magnitude/normalization/... checking/propagation from implementations #1060
Making the rules part of the generic interface seems like a reasonable simplification, given that we currently only use magnitudes below ~16 (#1028 (comment)), but one could argue that we would then be optimizing for 32-bit code. Most users are (probably) on 64-bit platforms, and the magnitude limit there is higher...
You mean the wrappers check the rules, i.e., they do what is currently done in …? P.S.: Dreaming of dependent types for magnitudes...
Currently the only rules are magnitude rules. Can you elaborate on what kind of rules the "precomputed" dimension adds?
In fact, if we have magnitudes implied statically (#1001) and full test coverage, this is equivalent to enforcing the magnitude in the type system. I think we should do that. Then it will be easy to drop normalize calls (we can just run the tests to obtain a proof that everything is all right). In that situation, moving the magnitude rules to the generic interface won't gain us much. By the way, has anyone ever tried a variant with explicit magnitude tracking? What I mean is that there is a magnitude member in the struct (not only under VERIFY), and the implementation decides dynamically when to normalize, so it can be as lazy as possible.
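The explicit-magnitude-tracking idea above could be sketched roughly as follows. This is a hypothetical illustration, not the library's API: `my_fe`, `MY_FE_MAX_MAG`, and the functions are invented names, the limb arithmetic is omitted, and the magnitude field stands in for what the real code only keeps under VERIFY.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: a field element that carries its magnitude at
 * runtime (not only under VERIFY), so the implementation can decide
 * dynamically when to normalize. All names here are illustrative. */
#define MY_FE_MAX_MAG 8  /* assumed input magnitude limit */

typedef struct {
    uint64_t n[5];   /* limbs, as in a 5x52-style representation */
    int magnitude;   /* tracked dynamically */
} my_fe;

static void my_fe_normalize(my_fe *r) {
    /* (real limb reduction omitted in this sketch) */
    r->magnitude = 1;
}

/* Normalize only when the operation's precondition would otherwise
 * be violated: this is the "as lazy as possible" policy. */
static void my_fe_add(my_fe *r, const my_fe *a) {
    if (r->magnitude + a->magnitude > MY_FE_MAX_MAG) {
        my_fe_normalize(r);  /* lazy: reduce only on demand */
    }
    /* (limb-wise addition omitted) */
    r->magnitude += a->magnitude;
}
```

The trade-off is a runtime branch per operation versus the current scheme, where normalization points are chosen statically by the caller.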
I'm in favour of enforcing consistent limits across the field implementations. Although I don't particularly like adding the extra "layer", perhaps it does make more sense to put those checks in a single place. I would prefer to see effort directed to #1001 first.
This is becoming less and less likely IMO. For one thing, our group methods are improving with respect to the output magnitudes, and #1032 would eventually let us drop several normalize calls at entry anyway. Actually, I think these will soon be normalization-free (ignoring the _normalizes_to_zero checks, which aren't about magnitude reduction). Secondly, the main constraint for normalization is the input magnitudes for _fe_mul/_fe_sqr. We can actually improve that to 16 (from the current 8) for the 5x52 and 10x26 implementations by:
The 5x52 C mul/sqr are already consistent with this. One of my other PRs included that behaviour for the 10x26 C mul/sqr, but it had a (slight) performance impact on some platforms (probably ignorable). The blocker here is updating the asm implementation(s). Even though I think magnitude-16 inputs to mul/sqr are a nice thing to have, the only performance impact I could easily see was eliminating (IIRC) 2 normalizations from _gej_add_ge, and they may already be gone! Therefore I don't expect any gains from saved normalizations by having field-implementation-specific code. Perhaps one sunny day someone will e.g. want to sum a long list of field elements, and the most sensible thing to do then would be to give them a new field method, I think.
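A hypothetical "sum a long list of field elements" method like the one mentioned above might look as follows. Everything here is an illustrative sketch (the `my_fe` type, `MY_FE_MAX_MAG = 16`, and `my_fe_sum` are invented, and the limb arithmetic is elided); it only shows how a dedicated batch method could defer normalization until the accumulated magnitude would exceed the limit.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of a batch-sum field method: add inputs and
 * normalize the accumulator only when the running magnitude would
 * exceed an assumed limit. All names are illustrative. */
#define MY_FE_MAX_MAG 16

typedef struct { int magnitude; /* limbs omitted in this sketch */ } my_fe;

static void my_fe_normalize(my_fe *r) {
    /* (real limb reduction omitted) */
    r->magnitude = 1;
}

static void my_fe_sum(my_fe *r, const my_fe *a, size_t n) {
    size_t i;
    r->magnitude = 0;  /* accumulator starts at zero (magnitude 0) */
    for (i = 0; i < n; i++) {
        if (r->magnitude + a[i].magnitude > MY_FE_MAX_MAG) {
            my_fe_normalize(r);  /* reduce only when forced to */
        }
        /* (limb-wise addition omitted) */
        r->magnitude += a[i].magnitude;
    }
}
```

With a magnitude limit of 16 instead of 8, such a loop would normalize roughly half as often, which is the kind of gain the laxer mul/sqr input bound could enable.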
One point to add to the statically-defined-magnitudes view discussed above: we cannot have all magnitude reasoning fixed at compile time, simply because some of our algorithms have data-dependent branches (only the constant-time ones are guaranteed not to have any, and only for their secret arguments). For example, in …
When control flows join, the effective magnitude (which is a bound) is the worst case over all of them, except in complicated scenarios where past and future branches aren't independent of each other. It might be worth providing a method that makes this an explicit operation (asserting a magnitude limit and actually setting the tracked magnitude to that limit). This parallels the original logic of secp256k1_fe_cmov (which, in passing, hasn't been restored, though I believe there was an issue tracking it). Although any given branch is likely to get code coverage, it's much harder to cover all combinations of branches, so I think this would be valuable. I guess one could contrive a scenario where performance would be slightly affected (implying an extra …
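The explicit join operation proposed above could be as small as the sketch below. This is a hypothetical illustration: `my_fe` with a plain `magnitude` member and `my_fe_assert_magnitude` are invented names (in the real code the magnitude is only tracked under VERIFY).

```c
#include <assert.h>

/* Hypothetical sketch of the explicit join operation: assert that a
 * field element's magnitude is at most m, and pin the tracked
 * magnitude to m, so every joining branch leaves the same bound. */
typedef struct {
    int magnitude;  /* tracked bound (illustrative; VERIFY-only in reality) */
} my_fe;

static void my_fe_assert_magnitude(my_fe *r, int m) {
    assert(r->magnitude <= m);  /* worst case over all joining paths */
    r->magnitude = m;           /* make the join bound explicit */
}
```

For instance, after an if/else where one branch leaves magnitude 2 and the other magnitude 5, calling this with m = 5 records the worst case explicitly, so later reasoning no longer depends on which combination of branches was exercised by the tests.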
Right now, it is not exactly clear where the definition of magnitude/normalization rules belongs:
With #967 things may become even more complicated, as it adds a "precomputed" dimension (reduced to 4 64-bit limbs instead of 5) for which magnitude is not a concern.
So I think we should make these rules part of the generic field interface, rather than part of the implementations. To avoid duplication, there could be a field_debug.h that implements them by "wrapping" the underlying field. This could be done by giving all field types/functions a suffix; if debug is in use, the real field functions forward to those, and if not, we just
#define fn fn_suffix
for all. One reason for not doing this would be if we want to make the group logic depend on which field is used (e.g. a field with laxer magnitude requirements could let the group code emit fewer normalization calls).
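The wrapping/forwarding scheme described above could be sketched as follows. This is a mocked, hypothetical illustration: the `secp256k1_fe` struct here is a stand-in, `VERIFY_CHECK` is mapped to `assert`, and the `_impl` suffix, the magnitude-8 rule, and the file split are assumptions for the sake of the example, not the library's actual layout.

```c
#include <assert.h>

/* Mocked stand-ins for this sketch only. */
typedef struct { int magnitude; } secp256k1_fe;
#define VERIFY_CHECK assert
#define VERIFY 1  /* pretend this is a debug build */

/* The field implementation defines its functions under a suffix
 * (the _impl suffix is an illustrative choice): */
static void secp256k1_fe_mul_impl(secp256k1_fe *r, const secp256k1_fe *a,
                                  const secp256k1_fe *b) {
    (void)a; (void)b;
    /* (real limb multiplication omitted) */
    r->magnitude = 1;  /* mul outputs are fully reduced in magnitude */
}

#ifdef VERIFY
/* "field_debug.h": the real name checks the generic magnitude rule,
 * then forwards to the implementation. */
static void secp256k1_fe_mul(secp256k1_fe *r, const secp256k1_fe *a,
                             const secp256k1_fe *b) {
    VERIFY_CHECK(a->magnitude <= 8 && b->magnitude <= 8);  /* generic rule */
    secp256k1_fe_mul_impl(r, a, b);
}
#else
/* Non-debug build: alias the real name straight to the implementation,
 * so the wrapper layer costs nothing. */
#define secp256k1_fe_mul secp256k1_fe_mul_impl
#endif
```

The appeal of this shape is that the magnitude/normalization rules live in exactly one header, shared by every field implementation, while release builds compile down to direct calls.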