Algorithms in action – iota and shuffle

When I was working on my sorting code for this post, I wanted a vector of randomly ordered, unique integers for testing.

The code is straightforward:

#include <algorithm> // std::shuffle
#include <iterator>  // std::begin, std::end
#include <numeric>   // std::iota
#include <random>    // std::default_random_engine
#include <vector>

std::vector< int >
getRandomVector( int n )
{
    std::vector< int > v_i( n );

    // Fill v_i with 0, 1, 2, ..., n - 1
    std::iota( std::begin( v_i ), std::end( v_i ), 0 );

    // Put the values into a random order
    std::default_random_engine dre;
    std::shuffle( std::begin( v_i ), std::end( v_i ), dre );

    return v_i;
}

C++11 added std::iota – an algorithm that assigns incrementing values to a sequence. In my example the values are integers starting at zero.

I was confused when I first saw iota. Initially I thought someone had misspelt itoa (not a standard function, but sometimes available as an extension); then I thought that, since iota is atoi backwards, there must be a link between them.

I was wrong on all counts. std::iota represents the Greek letter iota ( Ι, ι ) which (depending on your font and which case you are examining) looks similar to a lower case “L”, an upper case “i”, a one “1”, or a vertical bar “|”.

The Greek letter iota is used in the programming language APL to generate a sequence of consecutive integers. APL has a wild and wonderful character set described here. My favourite part of the Wikipedia article is the statement “Most APL symbols are present in Unicode”. In other words, some APL symbols are not present in Unicode (the article goes on to identify the missing symbols). I think this is wonderful. There are over 100,000 characters in Unicode and you still can’t use it to write APL (I realize that APL predates Unicode so was not trying to fit into its character set, and also that there are implementations of APL that do limit themselves to characters generally available on a modern computer). I thought digraphs and trigraphs were bad enough – C++ programmers have it easy compared to APL programmers.

std::iota uses prefix operator ++ to generate successive values. I wish I had a good example other than integers to show. I don’t, but I’ll look out for one.
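That said, anything with a prefix ++ will do; for instance, this sketch (mine, and admittedly not a compelling real-world example) fills a vector with successive iterators into a list:

#include <iostream>
#include <iterator>
#include <list>
#include <numeric>
#include <vector>

int main()
{
    std::list< int > l{ 10, 20, 30, 40 };

    // iota starts at std::begin( l ) and applies prefix ++ to produce each
    // subsequent value, so v_it ends up holding an iterator to every element.
    std::vector< std::list< int >::iterator > v_it( l.size() );
    std::iota( std::begin( v_it ), std::end( v_it ), std::begin( l ) );

    for ( auto it : v_it )
    {
        std::cout << *it << "\n";
    }
}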

One final point: std::iota assumes that you have a sequence already created – it isn’t possible to use it with std::back_inserter (or any other inserter) to create the sequence. For my example it really doesn’t make any difference, but if there is a use case with a more complicated type, possibly one that doesn’t support a default constructor, it would be nice not to have to default-construct the sequence first. Of course, if this ever becomes a real problem it is easy enough to write iota_n.
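An iota_n might look something like this sketch (my own name and signature, not a standard function) – it writes through an output iterator, so it works with std::back_inserter:

#include <iterator>
#include <vector>

template < typename OutputIt, typename Size, typename T >
OutputIt iota_n( OutputIt first, Size n, T value )
{
    for ( Size i = 0; i < n; ++i )
    {
        *first++ = value;
        ++value;
    }
    return first;
}

int main()
{
    std::vector< int > v;
    iota_n( std::back_inserter( v ), 10, 0 ); // v holds 0, 1, ..., 9 without pre-sizing the vector
}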


I am using std::shuffle (added in C++11) rather than std::random_shuffle (available since C++98) so that I can use the bright shiny new random number generators we get in C++11. (See Stephan T. Lavavej’s talk “rand() Considered Harmful” for more on why the new C++ random number generators are much better than rand().)

For this example I haven’t bothered to seed the random number generator, and I am not that worried about perfect randomness anyway, but I am trying to drag my programming style into the C++11 world. I suspect I will succeed at embracing C++11 at about the same time C++14 is unleashed.
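If seeding did matter, one option would be something like this sketch, seeding the engine from std::random_device:

#include <algorithm>
#include <iterator>
#include <random>
#include <vector>

std::vector< int > shuffled( std::vector< int > v )
{
    std::random_device rd;                  // non-deterministic seed source
    std::default_random_engine dre( rd() ); // seeded engine
    std::shuffle( std::begin( v ), std::end( v ), dre );
    return v;
}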

While we are on the subject of randomness, here’s an interesting note from the C++11 standard. The description of the three shuffle algorithms includes this phrase:

Effects: Permutes the elements in the range [first,last) such that each possible permutation of those elements has equal probability of appearance.

Given a perfect random number generator, the output from any of the shuffles should be perfectly uniform. That sounds reasonable enough; however, there is at least one “obvious” shuffling algorithm that fails this test:

Iterate over the elements of the sequence and swap each element with a randomly chosen element (possibly itself).
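In code, the biased version looks something like this sketch (for illustration only): each pass through the loop has n equally likely outcomes, so there are n^n equally likely execution paths, which (for n > 2) cannot be spread evenly over the n! possible permutations.

#include <cstddef>
#include <random>
#include <utility>
#include <vector>

void biasedShuffle( std::vector< int >& v, std::default_random_engine& dre )
{
    if ( v.empty() )
        return;

    std::uniform_int_distribution< std::size_t > dist( 0, v.size() - 1 );
    for ( std::size_t i = 0; i < v.size(); ++i )
    {
        // Swap each element with an element chosen from the whole sequence
        // (possibly itself); this is the biased algorithm described above.
        std::swap( v[ i ], v[ dist( dre ) ] );
    }
}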

Jeff Atwood goes into more details here, and also mentions a common algorithm which does produce uniform results, the Fisher–Yates shuffle, also known as the Knuth shuffle (both links take you to the same Wikipedia page).

Roughly speaking the Fisher–Yates shuffle is:

Pick an element randomly from the input sequence, add it to the result sequence and remove it from the input sequence. Repeat until the input sequence is empty.

It’s equivalent to picking a card at random from the deck, putting it into a separate “shuffled” pile and repeating until all the cards are in the “shuffled” pile.
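In code, the usual in-place formulation of the Fisher–Yates shuffle might look something like this sketch (mine, not any particular library’s):

#include <cstddef>
#include <random>
#include <utility>
#include <vector>

void fisherYatesShuffle( std::vector< int >& v, std::default_random_engine& dre )
{
    // [i, end) is the "shuffled" pile, [0, i) is the remaining "deck".
    // Each step picks a random card from the deck and swaps it onto the
    // top of the shuffled pile.
    for ( std::size_t i = v.size(); i > 1; --i )
    {
        std::uniform_int_distribution< std::size_t > dist( 0, i - 1 );
        std::swap( v[ i - 1 ], v[ dist( dre ) ] );
    }
}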

Many years ago I looked at a couple of standard library implementations of shuffle algorithms and they were using the Fisher–Yates shuffle. I assumed this would still be true. Once again I was wrong. Both Visual Studio and GCC are using another shuffle algorithm whose name I haven’t been able to track down (if anyone knows, please post a comment). As far as I can tell, the algorithm works something like this:

Assume you have a perfectly shuffled sequence of n elements. Add another element to the end of the sequence, then swap it with a randomly chosen element of the new (size n + 1) sequence (possibly itself).

I assume that this procedure produces another perfectly shuffled sequence (although I can’t prove it). Since we can start with a single element (perfectly shuffled) we can build up to any number of elements.
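In code, my understanding of the procedure is something like this sketch (my reconstruction, not the actual Visual Studio or GCC implementation):

#include <cstddef>
#include <random>
#include <utility>
#include <vector>

void buildUpShuffle( std::vector< int >& v, std::default_random_engine& dre )
{
    // Treat the first i elements as an already shuffled sequence, "add"
    // element i to the end, then swap it with a randomly chosen element of
    // the first i + 1 elements (possibly itself).
    for ( std::size_t i = 1; i < v.size(); ++i )
    {
        std::uniform_int_distribution< std::size_t > dist( 0, i );
        std::swap( v[ i ], v[ dist( dre ) ] );
    }
}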

References for shuffling:

  • The C++11 standard, Section 25.3.12, “Random shuffle”
  • Knuth, The Art of Computer Programming, Volume 2: Seminumerical Algorithms, 3rd edition, Section 3.4.2, “Random Sampling and Shuffling”
  • Cormen, Leiserson, Rivest, Stein, Introduction to Algorithms, 2nd edition, Section 5.3, “Randomized algorithms”

6 thoughts on “Algorithms in action – iota and shuffle”

    • Bob says:

      The reason I think the two shuffles are different is that the Knuth/Fisher-Yates/Durstenfeld shuffle starts with a full deck of cards and removes cards one by one whereas our unknown shuffle starts with an empty deck (or a deck containing one card) and adds them one by one.

      There might be something I am missing of course.

      Edit:

      What I wrote above is confusing and unhelpful. Let me try again.

      We have two decks of cards – the original unshuffled deck (starts off with all the cards) and our final shuffled deck (starts off with no cards).

      For the Knuth/Fisher-Yates/Durstenfeld shuffle we pull cards from a random position in the unshuffled deck and put them on top of the shuffled deck.

      For the unnamed shuffle we pull cards from the top of the unshuffled deck and put them at a random position into the shuffled deck.

      The difference is which deck we apply the randomness to.

      • Adrian says:

        K/F-Y: Take randomly from “unshuffled” and put it in the next position in “shuffled”.

        Unnamed: Take the next from “unshuffled” and put it in a random position in “shuffled”, but that might not be its final position.

        It’s that last bit that’s concerning (to me). If it weren’t for the fact that a “shuffled” card may be moved again, I think we could say these approaches are equivalent. So does it matter?

        If you try to implement K/F-Y in place, as Christopher suggested, you’ll do a lot of copying. Suppose we have the shuffled cards at the beginning of the array and the unshuffled cards at the end. Let i be the index of the first unshuffled card. Each time you select a card at random from the unshuffled partition, you effectively rotate it to position i; everything from i to the selected card needs to be shifted in order to make room for the selected card to go at i. At that point, you increment i, since the moved card is now “shuffled.”

        Suppose that instead of rotating the selected card to i, you simply swapped it with the card that was already there. Is that equivalent? I think it is, but I don’t have the mathematical chops to prove it. The hapless card that was at i is still in the unshuffled portion, and all the cards in the unshuffled portion still have an equal probability of being selected on the next iteration. So I believe that this is still equivalent to K/F-Y.

        If I’m right, K/F-Y sounds a lot more like the unnamed algorithm. The remaining discrepancy is, as you’ve said, which deck we apply the randomness to. That is a difference, but it doesn’t strike me as a detail that matters. I suspect the reason it’s done this way is that it’s marginally easier (or faster) to generate a random index in the interval [0..i) than in [i..n).

        • Vlastimil says:

          No matter what swaps those algorithms do, they all draw numbers from [1..i] for each i in [1..n], which means there are exactly n! possibilities (input values) with equal probability. Because the algorithm has no other inputs (the values of the array are not read), that means there are exactly n! paths in the program space with equal probability.

          Now it is sufficient to show that for every permutation, the algorithm can produce it. Then, for every permutation there is a path, and since the number of paths and permutations is the same, there must be a one-to-one correspondence, so all permutations have the same probability.

          You can add any number of swaps (for example by shifting the array), as long as you keep this property.

          • Bob says:

            Vlastimil, thank you for your nice simple proof that our unknown shuffle algorithm does implement a perfect shuffle. I like proofs like this where you step back from the problem and come at it from a different angle.
