I’m confused by this. Maybe I misunderstand the algorithm given in the draft linked above, but that doesn’t look O(n^2) at all. Intuitively it also doesn’t make sense, as quadratic somehow implies that each character depends on all the others (so for each of n chars you’d have to do something with all other n chars -> n^2).
It’s basically the naive algorithm for converting a number in base 256 (each byte is a digit) to base 58. The algorithm is super-linear, worst-case quadratic, due to all the carrying. Maybe average-case quadratic too? Not sure.
Oops, I think my previous comment was inaccurate, and it's too late for me to edit it. It's only decoding that's quadratic, not encoding. Sorry for the confusion.
It's not really obvious (IMO) from the wording of the specification, but step 2 of the decoding algorithm is actually a nested loop.
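To make it concrete, here's roughly what that step looks like when you spell it out. This is my own Python sketch of the usual multiply-and-add decoding method (with the standard Bitcoin alphabet), not the draft's reference code:

```python
# Base58 alphabet as used by Bitcoin (no 0, O, I, l).
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(s: str) -> bytes:
    num = []  # little-endian base-256 digits of the accumulator
    for ch in s:  # outer loop: n input characters
        carry = ALPHABET.index(ch)
        # Inner loop: multiply the entire accumulator by 58 and add the new
        # digit. The accumulator grows linearly with the input consumed so
        # far, so this pass is O(n) -- nested inside the O(n) outer loop.
        for i in range(len(num)):
            carry += num[i] * 58
            num[i] = carry & 0xFF
            carry >>= 8
        while carry:
            num.append(carry & 0xFF)
            carry >>= 8
    # Leading '1' characters encode leading zero bytes.
    pad = len(s) - len(s.lstrip("1"))
    return bytes([0] * pad + num[::-1])
```

The outer loop runs once per character, and each iteration walks the whole accumulator built up so far, which is where the O(n^2) comes from.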
Still not seeing it. As it's an encoding, not a compression, it cannot produce a large amount of output. At most a factor of log2(256)/log2(58) ≈ 1.37, I guess. So encoding will produce a slightly larger output, while decoding will produce a slightly shorter output. Still all linear.
The output size is linear, but since each byte of output requires O(n) work to compute, the total time to decode an encoded string is O(n^2). Or to put it another way, the decoding throughput (in bytes per second) is inversely proportional to the total length of the string.
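One way to see this without trusting anyone's timing numbers: instrument the inner loop and count byte operations for inputs of doubling size. This is my own toy counter (reusing the common multiply-and-add decoding scheme), not code from the draft:

```python
# Count the inner-loop steps taken while decoding a Base58 string, to show
# that doubling the input length roughly quadruples the total work.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def decode_ops(s: str) -> int:
    """Return how many byte-level inner-loop steps decoding s performs."""
    num, ops = [], 0
    for ch in s:
        carry = ALPHABET.index(ch)
        for i in range(len(num)):  # walks the whole accumulator each time
            carry += num[i] * 58
            num[i] = carry & 0xFF
            carry >>= 8
            ops += 1
        while carry:
            num.append(carry & 0xFF)
            carry >>= 8
    return ops

for n in (250, 500, 1000):
    print(n, decode_ops("z" * n))
# Each doubling of n roughly quadruples the op count, i.e. O(n^2) total,
# which is exactly "O(n) work per byte of output".
```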
Odd. Certainly different to how I thought it works (equivalent to base64, just with divmod instead of bit-shifting). Thanks for fixing my intuition. Related: https://cs.stackexchange.com/q/21736