Skeleton for tests for hash.Hash interface. Would love to know if this is in the right direction before I iterate on it.
t.Errorf(
Is it alright to have two different t.Errorf calls in the same test like this, or should they be split into separate tests?
"digest from Sum of Write(\"%s\") != digest from consecutive calls to Write(\"%s\") and Write(\"%s\")", prefix+suffix, prefix, suffix)
Manuel Sabin: Should this include the values of the hash like on line 98?
Done
"Sum of new hash != Sum of hash after writing %v and then calling Reset\n New hash Sum is %v\n Reset hash Sum is %v",
Manuel Sabin: Is this format ok or too verbose? The hashes are quite long, so I broke it up onto new lines.
Done
t.Errorf(
Manuel Sabin: Is it alright to have two different t.Errorf calls in the same test like this, or should they be split into separate tests?
Done
Note that when I mark a comment as "optional", I do not have strong feelings about it, and so you may reasonably mark it as "acknowledged" without making said change.
// Create one random number generator to use for all Hash tests
rng := newRandReader(t)
If I understand correctly, this adds global state that will vary based on the order the tests are executed in, which is best avoided.
https://google.github.io/styleguide/go/best-practices#global-state
If we really need something pseudorandom, I recommend calling newRandReader in each test separately; in practice, I don't think any of these test cases really needs something pseudorandom (a repeated non-zero byte is probably sufficient).
"Sum appends number of bytes != Size; got %v , want %v",
Optional:
"Sum appended %v bytes, expected %v bytes"
writeToHash(t, h, slice)
Optional: Add // panics if Write returns an error.
expectEqual(t, compositeSum, serialSum, "two successive Write calls resulted in a different Sum than a single one")
Optional nit: combinedSum and separateSum
seed := time.Now().UnixNano()
Broadly speaking, we don't want seeds used in tests to depend on the time the test was run at. Picking some arbitrary number would probably be fine.
func TestSHA(t *testing.T) {
cryptotest.TestHash(t, New)
}
I'm not sure if the style guide says what to do here, but for all of the implementation tests, I'd probably put the Test functions you're adding next to the other test cases (e.g. right after TestAllocations here).
func TestSHA(t *testing.T) {
cryptotest.TestHash(t, New)
cryptotest.TestHash(t, New512_224)
cryptotest.TestHash(t, New512_256)
cryptotest.TestHash(t, New384)
}
When a test case fails for one of these implementations, does the failure message indicate which implementation the test was for?
// Create one random number generator to use for all Hash tests
rng := newRandReader(t)
If I understand correctly, this adds global state that will vary based on the order the tests are executed in, which is best avoided.
https://google.github.io/styleguide/go/best-practices#global-state
If we really need something pseudorandom, I recommend calling newRandReader in each test separately; in practice, I don't think any of these test cases really needs something pseudorandom (a repeated non-zero byte is probably sufficient).
I see, I was following a pattern Filippo did in https://go-review.googlesource.com/c/crypto/+/112315/3
I don't *think* we'd want to call newRandReader in each test since it would create a new seed for the rng on each call.
My understanding of the testing pattern for this is that newRandReader will print the seed used for all the tests and then that seed can be used to replicate the error for the whole suite of tests when debugging
I'm not sure I understand what "will vary based on the order the tests are executed in." Do you see a debugging issue here that I'm missing?
I also don't know that random cases help here beyond giving new test cases on each test. I was following Filippo's 112315 CL. Do you think this pattern is overkill here?
"Sum appends number of bytes != Size; got %v , want %v",
Optional:
"Sum appended %v bytes, expected %v bytes"
Acknowledged
Optional: Add // panics if Write returns an error.
Acknowledged
Maybe add an OutOfBoundsRead test that makes sure Write(b []byte) doesn't read beyond the bounds of b into the rest of its backing array.
writeToHash(t, h, []byte(prefix))
Don't need this type conversion
expectEqual(t, compositeSum, serialSum, "two successive Write calls resulted in a different Sum than a single one")
Optional nit: combinedSum and separateSum
Acknowledged
seed := time.Now().UnixNano()
Broadly speaking, we don't want seeds used in tests to depend on the time the test was run at. Picking some arbitrary number would probably be fine.
Replied to in previous comment, but this function was lifted directly from CL 112315 and my understanding of the pattern is that it yields new test cases on each run while also printing the seed to hardcode when debugging. Do you think I should change this pattern for the rest of the tests going forward?
func TestSHA(t *testing.T) {
cryptotest.TestHash(t, New)
}
I'm not sure if the style guide says what to do here, but for all of the implementation tests, I'd probably put the Test functions you're adding next to the other test cases (e.g. right after TestAllocations here).
Sounds good. So the convention would be to call the interface tester *after* all of the implementation-specific tests going forward, yes? (I currently put them at the beginning before any implementation-specific tests begin)
cryptotest.TestHash(t, New)
cryptotest.TestHash(t, New224)
Include these in t.Run calls that name the digest size
cryptotest.TestHash(t, New)
cryptotest.TestHash(t, New512_224)
cryptotest.TestHash(t, New512_256)
cryptotest.TestHash(t, New384)
Include these in t.Run calls that name the digest size for each test
func TestSHA(t *testing.T) {
cryptotest.TestHash(t, New)
cryptotest.TestHash(t, New512_224)
cryptotest.TestHash(t, New512_256)
cryptotest.TestHash(t, New384)
}
When a test case fails for one of these implementations, does the failure message indicate which implementation the test was for?
// Create one random number generator to use for all Hash tests
rng := newRandReader(t)
Manuel Sabin: If I understand correctly, this adds global state that will vary based on the order the tests are executed in, which is best avoided.
https://google.github.io/styleguide/go/best-practices#global-state
If we really need something pseudorandom, I recommend calling newRandReader in each test separately; in practice, I don't think any of these test cases really needs something pseudorandom (a repeated non-zero byte is probably sufficient).
I see, I was following a pattern Filippo did in https://go-review.googlesource.com/c/crypto/+/112315/3
I don't *think* we'd want to call newRandReader in each test since it would create a new seed for the rng on each call.
My understanding of the testing pattern for this is that newRandReader will print the seed used for all the tests and then that seed can be used to replicate the error for the whole suite of tests when debugging.
I'm not sure I understand what "will vary based on the order the tests are executed in." Do you see a debugging issue here that I'm missing?
I also don't know that random cases help here beyond giving new test cases on each test. I was following Filippo's 112315 CL. Do you think this pattern is overkill here?
Re: "the order the tests are executed in": my understanding is that if the pseudorandom number generator is seeded once then shared between tests, the random numbers that each test gets will be affected by how many previous numbers had been retrieved from the pRNG - which is in turn affected by how many of the other tests had run before the current one.
Conversely, if each test gets its own pRNG, the calls Test 1 makes to pRNG A should not affect the result of the calls Test 2 makes to pRNG B.
It's possible that I'm incorrect, and that the pRNG is not *really* shared state between tests.
---
I do think adding (pseudo-)entropy for these tests is overkill, but I'm not passionate on the subject; I'll defer to Filippo/Roland about that.
seed := time.Now().UnixNano()
Manuel Sabin: Broadly speaking, we don't want seeds used in tests to depend on the time the test was run at. Picking some arbitrary number would probably be fine.
Replied to in previous comment, but this function was lifted directly from CL 112315 and my understanding of the pattern is that it yields new test cases on each run while also printing the seed to hardcode when debugging. Do you think I should change this pattern for the rest of the tests going forward?
(see above)
func TestSHA(t *testing.T) {
cryptotest.TestHash(t, New)
}
Manuel Sabin: I'm not sure if the style guide says what to do here, but for all of the implementation tests, I'd probably put the Test functions you're adding next to the other test cases (e.g. right after TestAllocations here).
Sounds good. So the convention would be to call the interface tester *after* all of the implementation-specific tests going forward, yes? (I currently put them at the beginning before any implementation-specific tests begin)
I'm not too worried about whether it's before/after the implementation-specific tests; I just have a mild preference that it be next to the other Test functions rather than between the md5Test struct definition and the golden data definitions (it can get lost in there).
// Create one random number generator to use for all Hash tests
rng := newRandReader(t)
Manuel Sabin: If I understand correctly, this adds global state that will vary based on the order the tests are executed in, which is best avoided.
https://google.github.io/styleguide/go/best-practices#global-state
If we really need something pseudorandom, I recommend calling newRandReader in each test separately; in practice, I don't think any of these test cases really needs something pseudorandom (a repeated non-zero byte is probably sufficient).
Russell Webb: I see, I was following a pattern Filippo did in https://go-review.googlesource.com/c/crypto/+/112315/3
I don't *think* we'd want to call newRandReader in each test since it would create a new seed for the rng on each call.
My understanding of the testing pattern for this is that newRandReader will print the seed used for all the tests and then that seed can be used to replicate the error for the whole suite of tests when debugging.
I'm not sure I understand what "will vary based on the order the tests are executed in." Do you see a debugging issue here that I'm missing?
I also don't know that random cases help here beyond giving new test cases on each test. I was following Filippo's 112315 CL. Do you think this pattern is overkill here?
Re: "the order the tests are executed in": my understanding is that if the pseudorandom number generator is seeded once then shared between tests, the random numbers that each test gets will be affected by how many previous numbers had been retrieved from the pRNG - which is in turn affected by how many of the other tests had run before the current one.
Conversely, if each test gets its own pRNG, the calls Test 1 makes to pRNG A should not affect the result of the calls Test 2 makes to pRNG B.
It's possible that I'm incorrect, and that the pRNG is not *really* shared state between tests.
---
I do think adding (pseudo-)entropy for these tests is overkill, but I'm not passionate on the subject; I'll defer to Filippo/Roland about that.
I see, yes, that's my understanding too: a PRG for a given seed conceptually yields a static tape of fixed pseudorandom bytes, and each call to it advances the pointer on that tape by the number of bytes read. So it is stateful in that way, and the state is shared in that other tests making calls to it will leave the next test at a different starting point on that tape.
Knowing the seed will let you deterministically reproduce the test for all sub-tests but, to your point, if a change is made to one test's calls to the PRG while debugging then that will affect the pseudorandom bytes the next test gets.
The testing readout may get more cluttered with PRG seeds by putting it in each subtest instead of once at the top of the containing test, but that may be worth it for ease of debugging.
I'll defer to Filippo/Roland as well on this
func TestSHA(t *testing.T) {
cryptotest.TestHash(t, New)
}
Manuel Sabin: I'm not sure if the style guide says what to do here, but for all of the implementation tests, I'd probably put the Test functions you're adding next to the other test cases (e.g. right after TestAllocations here).
Russell Webb: Sounds good. So the convention would be to call the interface tester *after* all of the implementation-specific tests going forward, yes? (I currently put them at the beginning before any implementation-specific tests begin)
I'm not too worried about whether it's before/after the implementation-specific tests; I just have a mild preference that it be next to the other Test functions rather than between the md5Test struct definition and the golden data definitions (it can get lost in there).
Sounds good! That placement was an accident, I meant to put it at the very top of the file (right under the imports). I'll place all my interface tester calls right under "import" of each implementation test if that sounds good to you.
func TestSHA(t *testing.T) {
cryptotest.TestHash(t, New)
cryptotest.TestHash(t, New512_224)
cryptotest.TestHash(t, New512_256)
cryptotest.TestHash(t, New384)
}
Manuel Sabin: When a test case fails for one of these implementations, does the failure message indicate which implementation the test was for?
Done
Added labeled subtests for each one
func TestSHA(t *testing.T) {
cryptotest.TestHash(t, New)
}
Manuel Sabin: I'm not sure if the style guide says what to do here, but for all of the implementation tests, I'd probably put the Test functions you're adding next to the other test cases (e.g. right after TestAllocations here).
Russell Webb: Sounds good. So the convention would be to call the interface tester *after* all of the implementation-specific tests going forward, yes? (I currently put them at the beginning before any implementation-specific tests begin)
Manuel Sabin: I'm not too worried about whether it's before/after the implementation-specific tests; I just have a mild preference that it be next to the other Test functions rather than between the md5Test struct definition and the golden data definitions (it can get lost in there).
Sounds good! That placement was an accident, I meant to put it at the very top of the file (right under the imports). I'll place all my interface tester calls right under "import" of each implementation test if that sounds good to you.
If I understand correctly, you're talking about moving it up to line 18 or so? If so, that'd still separate TestSHA from the rest of the tests, which strikes me as undesirable for the reasons I laid out above.
That said, I'll leave that to Filippo.
// Create one random number generator to use for all Hash tests
rng := newRandReader(t)
Manuel Sabin: If I understand correctly, this adds global state that will vary based on the order the tests are executed in, which is best avoided.
https://google.github.io/styleguide/go/best-practices#global-state
If we really need something pseudorandom, I recommend calling newRandReader in each test separately; in practice, I don't think any of these test cases really needs something pseudorandom (a repeated non-zero byte is probably sufficient).
Russell Webb: I see, I was following a pattern Filippo did in https://go-review.googlesource.com/c/crypto/+/112315/3
I don't *think* we'd want to call newRandReader in each test since it would create a new seed for the rng on each call.
My understanding of the testing pattern for this is that newRandReader will print the seed used for all the tests and then that seed can be used to replicate the error for the whole suite of tests when debugging.
I'm not sure I understand what "will vary based on the order the tests are executed in." Do you see a debugging issue here that I'm missing?
I also don't know that random cases help here beyond giving new test cases on each test. I was following Filippo's 112315 CL. Do you think this pattern is overkill here?
Manuel Sabin: Re: "the order the tests are executed in": my understanding is that if the pseudorandom number generator is seeded once then shared between tests, the random numbers that each test gets will be affected by how many previous numbers had been retrieved from the pRNG - which is in turn affected by how many of the other tests had run before the current one.
Conversely, if each test gets its own pRNG, the calls Test 1 makes to pRNG A should not affect the result of the calls Test 2 makes to pRNG B.
It's possible that I'm incorrect, and that the pRNG is not *really* shared state between tests.
---
I do think adding (pseudo-)entropy for these tests is overkill, but I'm not passionate on the subject; I'll defer to Filippo/Roland about that.
I see, yes, that's my understanding too: a PRG for a given seed conceptually yields a static tape of fixed pseudorandom bytes, and each call to it advances the pointer on that tape by the number of bytes read. So it is stateful in that way, and the state is shared in that other tests making calls to it will leave the next test at a different starting point on that tape.
Knowing the seed will let you deterministically reproduce the test for all sub-tests but, to your point, if a change is made to one test's calls to the PRG while debugging then that will affect the pseudorandom bytes the next test gets.
The testing readout may get more cluttered with PRG seeds by putting it in each subtest instead of once at the top of the containing test, but that may be worth it for ease of debugging.
I'll defer to Filippo/Roland as well on this
Done
func TestSHA(t *testing.T) {
cryptotest.TestHash(t, New)
}
Manuel Sabin: I'm not sure if the style guide says what to do here, but for all of the implementation tests, I'd probably put the Test functions you're adding next to the other test cases (e.g. right after TestAllocations here).
Russell Webb: Sounds good. So the convention would be to call the interface tester *after* all of the implementation-specific tests going forward, yes? (I currently put them at the beginning before any implementation-specific tests begin)
Manuel Sabin: I'm not too worried about whether it's before/after the implementation-specific tests; I just have a mild preference that it be next to the other Test functions rather than between the md5Test struct definition and the golden data definitions (it can get lost in there).
Russell Webb: Sounds good! That placement was an accident, I meant to put it at the very top of the file (right under the imports). I'll place all my interface tester calls right under "import" of each implementation test if that sounds good to you.
If I understand correctly, you're talking about moving it up to line 18 or so? If so, that'd still separate TestSHA from the rest of the tests, which strikes me as undesirable for the reasons I laid out above.
That said, I'll leave that to Filippo.
Done
Code-Review | +1 |
Commit-Queue | +1 |
crypto: add common tests for the hash.Hash interface
crypto/internal/cryptotest:
tests to most of the crypto/ hash implementation test suites. It
currently skips the hmac implementation since the keyed hash requires
more setup.
old
ctrlDigest := getSum(t, h, nil) //Record control digest
Space after //
t.Errorf(
"Write returned error; got (%v, %v), want (nil, %v)",
err, n, len(p))
nit: can be one line.
func expectEqual(t *testing.T, got, want []byte, msg string) {
nit: inline this
bytes, hash, io, math/rand, testing, time
CRYPTO-MATH, testing
crypto: add common tests for the hash.Hash interface
Manuel Sabin: crypto/internal/cryptotest:
Done
tests to most of the crypto/ hash implementation test suites. It
currently skips the hmac implementation since the keyed hash requires
more setup.
Manuel Sabin: old
Done
ctrlDigest := getSum(t, h, nil) //Record control digest
Manuel Sabin: Space after //
Done
t.Errorf(
"Write returned error; got (%v, %v), want (nil, %v)",
err, n, len(p))
nit: can be one line.
Done
func expectEqual(t *testing.T, got, want []byte, msg string) {
Manuel Sabin: nit: inline this
Done
bytes, hash, io, math/rand, testing, time
Manuel Sabin: CRYPTO-MATH, testing
Done
Manuel Sabin: spurious
Done