robobully 19 hours ago

A month ago, the team behind SHA-3 published an RFC for TurboSHAKE and KangarooTwelve: secure hash functions that employ the same primitive as SHA-3, but with a reduced number of rounds to make them faster. K12 is basically a tree-based hash over TurboSHAKE.

https://keccak.team/2025/rfc9861.html
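For reference, Python's stdlib already exposes the full-round SHAKE that TurboSHAKE is derived from. A small sketch of my own (hashlib has no TurboSHAKE support as of Python 3.12, so this shows the full-round relative only):

```python
import hashlib

# Full-round SHAKE128 (24 Keccak rounds). TurboSHAKE128 uses the same
# sponge with 12 rounds, so its output is NOT interoperable with this.
out32 = hashlib.shake_128(b"hello").hexdigest(32)  # 32 bytes of XOF output

# XOF property: shorter outputs are prefixes of longer ones.
assert hashlib.shake_128(b"hello").hexdigest(16) == out32[:32]
```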

Octoth0rpe 20 hours ago

Could someone explain the differences between these two points? They seem identical to me.

> [The hash function] also should be second-preimage resistant: For a given message M1, it must be virtually impossible to find a second message M2 where M1 != M2 that produces the same hash hash(M1) == hash(M2).

> These functions should be also collision resistant: it must be virtually impossible to find two messages M1 and M2 where hash(M1) == hash(M2).

  • Paedor 19 hours ago

    The second objective is easier to achieve than the first: it may be easier to find any pair M1 and M2 that collide than to find an M2 that collides with a specific M1.

  • coldpie 19 hours ago

    The difference is "for a given message M1". In the 2nd requirement, you may choose both M1 and M2. For the 1st requirement, you are given M1 and must find M2.

  • amluto 19 hours ago

    Collision resistance implies second-preimage resistance, but second-preimage resistance does not imply collision resistance.

    Some care is needed with the definitions. For any hash function, the adversary can compute a bunch of hashes, and those outputs obviously have known first preimages. And a hash function with a known collision has a known second preimage given one of the colliding inputs.
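To make the gap concrete, here's a toy brute-force sketch (my own illustration, not from any of the articles) against SHA-256 truncated to 16 bits, small enough that both attacks finish instantly:

```python
import hashlib

def toy_hash(msg: bytes) -> bytes:
    """SHA-256 truncated to 16 bits -- toy size so brute force is feasible."""
    return hashlib.sha256(msg).digest()[:2]

# Collision: the attacker picks BOTH messages. The birthday paradox
# makes this succeed after roughly 2^8 tries.
seen = {}
collision = None
for i in range(1 << 17):
    m = b"msg-%d" % i
    d = toy_hash(m)
    if d in seen:
        collision = (seen[d], m)
        break
    seen[d] = m

# Second preimage: M1 is FIXED, so the attacker must hit one specific
# 16-bit value -- roughly 2^16 tries, a quadratically harder search.
m1 = b"a given message"
target = toy_hash(m1)
second = next(b"msg-%d" % i for i in range(1 << 20)
              if toy_hash(b"msg-%d" % i) == target)
```

The same quadratic gap is why an n-bit hash offers ~n/2 bits of collision resistance but ~n bits of second-preimage resistance.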

amluto 19 hours ago

> We will evaluate these functions on 3 points:

I’m disappointed that they didn’t discuss my favorite feature of BLAKE3: it’s a tree hash. If you have a file and the BLAKE3 hash of that file, you can generate a proof that a portion of the file is correct. And you can take a file, split it into pieces (of known length and offset), hash them as you receive them, and then assemble them into the full file and efficiently calculate the full file’s hash. The other options cannot do this, although you could certainly build this on top of them.

Imagine how much better S3 would be if it used BLAKE3 instead of MD5. (Hah, S3 and its competitors don’t even manage to fully support MD5 for multipart uploads, although they could support BLAKE3 very well with multipart uploads!)

  • UltraSane 16 hours ago

    Isn't the hash of a multipart upload just the hash of the concatenated hashes of each part? I have actually replicated the multipart hash locally.

    • amluto 14 hours ago

      Something vaguely along those lines, and not the same thing on Google vs AWS.

      But this isn’t the desired behavior! If you upload the same logical bytes as a single part or as multiple parts, you should get the same thing, and BLAKE3 could do this.
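For illustration, here's the AWS scheme as it's commonly described (MD5 over the concatenated per-part MD5 digests, plus a part-count suffix -- a sketch of observed behavior, not an official spec), which shows why the split matters:

```python
import hashlib

def multipart_etag(parts):
    """AWS-style multipart ETag as commonly described: MD5 of the
    concatenated per-part MD5 digests, with a part-count suffix."""
    digests = b"".join(hashlib.md5(p).digest() for p in parts)
    return hashlib.md5(digests).hexdigest() + "-%d" % len(parts)

data = b"x" * 100
one_part = multipart_etag([data])
two_parts = multipart_etag([data[:50], data[50:]])
# Same logical bytes, different ETags -- the complaint above. A tree
# hash with fixed chunk boundaries would give the same root either way.
assert one_part != two_parts
```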

  • EnPissant 17 hours ago

    > If you have a file and the BLAKE3 hash of that file, you can generate a proof that a portion of the file is correct

    This seems wrong to me? I would expect you could only verify the entire file.

    • oconnor663 14 hours ago

      This is a difference between BLAKE3 and most other hash functions. In the usual arrangement ("Merkle–Damgård"), each block depends on the previous one, so the only way to verify some "slice" of the input is to re-hash the whole thing.

      But when you arrange the input into a tree shape (a "Merkle tree") instead, suddenly the right half of the tree does not depend on the left half until the very last step at the very top. If you give me the input to that last step, I can verify that it matches the root hash that I know; now I have the hashes ("chaining values") I'd need to verify either the left half or the right half without the other.

      Then I do our favorite trick in computer science, which is to recursively apply that same procedure all the way down, until I have an efficient "path" to whatever part of the tree I actually care about.

      For more on this see Section 6.4 of our paper: https://docs.google.com/viewer?url=https://github.com/BLAKE3...

      And the Bao repo: https://github.com/oconnor663/bao
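That recursion can be sketched in a few lines. This is a toy Merkle tree over SHA-256 with made-up domain tags -- not BLAKE3's actual chaining-value scheme -- and it assumes a power-of-two chunk count for brevity:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    nodes = [h(b"leaf:" + c) for c in chunks]
    while len(nodes) > 1:
        nodes = [h(b"node:" + nodes[j] + nodes[j + 1])
                 for j in range(0, len(nodes), 2)]
    return nodes[0]

def prove(chunks, i):
    """Sibling hashes ("chaining values") along chunk i's path to the root."""
    nodes = [h(b"leaf:" + c) for c in chunks]
    path = []
    while len(nodes) > 1:
        path.append(nodes[i ^ 1])  # sibling at this level
        nodes = [h(b"node:" + nodes[j] + nodes[j + 1])
                 for j in range(0, len(nodes), 2)]
        i //= 2
    return path

def verify(chunk, i, path, root):
    """Check one chunk against the root without seeing the rest of the input."""
    node = h(b"leaf:" + chunk)
    for sib in path:
        node = h(b"node:" + sib + node) if i & 1 else h(b"node:" + node + sib)
        i //= 2
    return node == root

chunks = [b"aaaa", b"bbbb", b"cccc", b"dddd"]
root = merkle_root(chunks)
```

The proof is O(log n) hashes per chunk, which is what makes verified streaming and random-access verification cheap.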

      • EnPissant 10 hours ago

        Yeah, I was objecting to this part:

        > If you have a file and the BLAKE3 hash of that file

        To me that means the final hash. If you have the full tree of hashes that is a different story!

        PS. Thanks for making BLAKE3! I use it in several pieces of software.

scatbot 17 hours ago

Honestly, I'm skeptical of the whole Keccak-derived ecosystem. The reduced-round variants like K12 and TurboSHAKE trade a conservative security margin for speed, which kinda feels odd compared to BLAKE3. Meanwhile, BLAKE3 covers everything for real-world use: it's super fast on any input size, fully parallelizable, and has a built-in keyed mode. The only real advantages Keccak-based functions seem to have are standardization and potential hardware acceleration.

If you care about speed, security and simplicity, and you don't care about NIST compliance, BLAKE3 is hard to beat.
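On the built-in keyed mode: BLAKE3's own library exposes it directly, but the same idea is already in Python's stdlib via BLAKE2, shown here as a stand-in (hashlib has no BLAKE3):

```python
import hashlib

# Keyed hashing replaces the two-pass HMAC construction with a single
# keyed call. BLAKE2b shown here; BLAKE3's keyed mode is analogous.
key = b"0" * 32
tag = hashlib.blake2b(b"message", key=key, digest_size=32).hexdigest()
unkeyed = hashlib.blake2b(b"message", digest_size=32).hexdigest()
```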

  • 15155 16 hours ago

    Keccak is substantially simpler and more elegant from a hardware design standpoint because it has no addition operations - there's no comparison. fMax is way, way easier to achieve, and it's way easier to implement and understand.

    On legacy hardware, BLAKE performs well because ALUs perform well.

  • robobully 17 hours ago

    > trading a conservative security margin for speed

    That's precisely what happened to BLAKE with BLAKE2/3, isn't it?

    • scatbot 16 hours ago

      Not really. BLAKE3 isn’t a reduced-round tweak of BLAKE2 the way K12 is for Keccak. It's a different construction that still meets its full security target. K12 and TurboSHAKE, on the other hand, are literally the same permutation with fewer rounds, which actually reduces Keccak's security margin. The situations aren't really comparable.

      • oconnor663 14 hours ago

        BLAKE3 does reduce the round count relative to BLAKE2, and the underlying compression functions are similar enough that it is an apples-to-apples comparison. Our rationale for doing that was described in https://eprint.iacr.org/2019/1492.pdf, which also argued that Keccak could reduce their round count even further than they did.

      • robobully 14 hours ago

        > BLAKE3 isn’t a reduced-round tweak of BLAKE2 <...> It's a different construction

        My initial argument was meant to highlight the difference between BLAKE and its successors. But I have no idea what you're backing your statements with: BLAKE3 in fact _is_ BLAKE2s with reduced rounds plus a tree-based structure on top of it. The authors even mention this directly in the spec.

        > K12 and TurboSHAKE on the other hand are literally the same permutation with fewer rounds

        That's true for TurboSHAKE, but K12 builds a tree-based structure on top of TurboSHAKE by virtue of Sakura encoding (similar to what the Bao encoding is used for in BLAKE3).

        IANAC, so I won't make any claims about the cryptographic strength of these functions.

jswelker 18 hours ago

Easy solution: each year just add +1 MD5 iteration. Problem solved.