I'm trying to implement RFC 5683, which relies on a hashing function that is described as follows:

H1 = SHA-1(1|1|z) mod 2^128 | SHA-1(1|2|z) mod 2^128 |...| SHA-1(1|9|z) mod 2^128

Where z is the string to hash. The part I have trouble understanding is the following:

in order to create 1152 output bits for H1, nine calls to SHA-1 are made and the 128 least significant bits of each output are used.
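
My reading of that construction, sketched in Python for illustration (hashlib is just a stand-in; my actual library is different, and the encoding of the leading 1 and the counter as single octets is my guess, not something I've confirmed against the RFC):

    import hashlib

    def h1(z: bytes) -> bytes:
        # Nine SHA-1 calls; keep the 128 least significant bits
        # (the last 16 bytes) of each 160-bit digest, then concatenate.
        parts = []
        for i in range(1, 10):
            digest = hashlib.sha1(bytes([1, i]) + z).digest()
            parts.append(digest[-16:])
        return b"".join(parts)  # 9 * 128 = 1152 bits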

Once I get the output from my hash function (I'm using SHA-256 instead of SHA-1), how do I get the 128 "least significant bits" from that hash? The libraries I'm using can output the digest as an array of eight 32-bit integers:

[-1563099236, 1891088516, -531757887, -2069381238, 131899433, -1500579251, 74960544, -956781525]

Or as a 64-character hexadecimal string:

 "a2d4ff9c70b7b884e04e04c184a7bf8a07dca029a68efa4d0477cea0c6f8ac2b"

But I'm at a loss as to how I would recover the least significant bits from these representations.

1 Answer

Given the hexadecimal string:

a2d4ff9c70b7b884e04e04c184a7bf8a 07dca029a68efa4d0477cea0c6f8ac2b
           most significant <-  | -> least significant

64 characters -> 256 bits, so 128 bits is half the string. The least significant bits are at the end of the string, the most significant at the beginning.
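
As a minimal sketch in Python (hashlib stands in here for whatever library produced the digest):

    import hashlib

    z = b"example input"
    digest_hex = hashlib.sha256(z).hexdigest()  # 64 hex chars = 256 bits
    low128_hex = digest_hex[-32:]               # last 32 hex chars = low 128 bits
    low128_int = int(digest_hex, 16) % 2**128   # same value as an integer

The 8 x 32-bit word array in the question lines up with the hex string word for word (the first word, -1563099236, is 0xa2d4ff9c as a signed 32-bit integer), so the same 128 bits are simply the last four words of that array.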

answered 2013-07-08T18:42:12.763