Archive for the ‘Math’ Category.

Happy 2019!

Happy 2019, the first 4-digit number to appear 6 times in the decimal expansion of Pi.

By the way:

2019 = 1⁴ + 2⁴ + 3⁴ + 5⁴ + 6⁴.

Also, 2019 is the product of two primes, 3 and 673. The sum of these two prime factors is a square: 3 + 673 = 676 = 26².

This is not all that is interesting about the factors of 2019. Every concatenation of these two prime factors (3673 and 6733) is prime. Even more unusual, 2019 is the largest known composite number such that every concatenation of its prime factors is prime. [Oops, the last statement is wrong, Jan 3, 2019]

Happy Happy-go-Lucky year, as 2019 is a Happy-go-Lucky number: a number that is both Happy and Lucky.

In case you are wondering, here is the definition of Happy numbers: take the sum of the squares of the digits of a number, and iterate this operation. A number is Happy if the iteration eventually reaches 1.
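This definition translates directly into code. Here is a minimal Python sketch (the function name is mine):

```python
def is_happy(n):
    # Iterate the sum-of-squared-digits map; a repeated value
    # means we entered a cycle that never reaches 1.
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

print(is_happy(2019))  # 2019 → 86 → 100 → 1, so True
```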

In case you are wondering, to build the Lucky number sequence, start with the natural numbers. Delete every second number, leaving 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, …. The second number remaining is 3, so delete every third number, leaving 1, 3, 7, 9, 13, 15, 19, 21, …. The next number remaining is 7, so delete every seventh number, leaving 1, 3, 7, 9, 13, 15, 21, …. The next number remaining is 9, so delete every ninth number, etc.
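The sieve above is also easy to program. A Python sketch, assuming we only need Lucky numbers up to a small bound:

```python
def lucky_numbers(limit):
    # Start with the odd numbers (every second number already deleted),
    # then repeatedly delete every k-th survivor, where k is the next
    # remaining number after the one used in the previous round.
    survivors = list(range(1, limit + 1, 2))
    i = 1  # index of the number giving the next deletion step
    while i < len(survivors) and survivors[i] <= len(survivors):
        k = survivors[i]
        del survivors[k - 1 :: k]  # delete every k-th element
        i += 1
    return survivors

luckies = lucky_numbers(2100)
print(luckies[:7])      # [1, 3, 7, 9, 13, 15, 21]
print(2019 in luckies)  # True, as the title promises
```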


Two Dice

My friend Alex Ryba uses interesting math questions in the CUNY Math Challenge. For the 2016 challenge they had the following problem.

Problem. Eve owns two six-sided dice. They are not necessarily fair dice and not necessarily weighted in the same manner. Eve promises to give Alice and Bob each a fabulous prize if they each roll the same sum with her dice. Eve wishes to design her two dice to minimize the likelihood that she has to buy two fabulous prizes. Can she weight them so that the probability for Alice and Bob to get prizes is less than 1/10?

The best outcome for Eve would be if she can weight the dice so that the sum is uniform. In this case the probability that Alice and Bob get the prizes is 1/11. Unfortunately for Eve, such a distribution of weight for the dice is impossible. There are many ways to prove it.

I found a beautiful argument by Hagen von Eitzen on Stack Exchange. Let aᵢ (correspondingly bᵢ) be the probability that die A (correspondingly B) shows i + 1. It will be useful later that i ranges over {0,1,2,3,4,5} for both dice. Let f(z) = ∑ aᵢzⁱ and g(z) = ∑ bᵢzⁱ. Then a uniform distribution of the sum means f(z)g(z) = (1 + z + z² + … + z¹⁰)/11. The roots of the right-hand side are the ten non-real 11th roots of unity. Therefore, neither f nor g has a real root, and since the non-real roots of a real polynomial come in conjugate pairs, both f and g must have even degree. A polynomial of degree at most 5 with even degree has degree at most 4, so a₅ = b₅ = 0, which makes the coefficient of z¹⁰ in the product zero, a contradiction.

Alex himself has a straightforward argument. The probabilities of rolling 2 and 12 both have to equal 1/11; therefore, a₀b₀ = a₅b₅ = 1/11. Then the probability of a total of 7 is at least a₀b₅ + a₅b₀. The geometric mean of a₀b₅ and a₅b₀ is 1/11 (their product equals a₀b₀ · a₅b₅ = 1/121), so their arithmetic mean is at least 1/11 and their sum is at least 2/11. Therefore, the uniform distribution for sums is impossible.

So 1/11 is impossible, but how close to it can you get?
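To get a feeling for the numbers, one can compute the probability that Alice and Bob roll the same sum for any given pair of weightings. A Python sketch (the fair weighting below is just an example, not Eve's best choice):

```python
from fractions import Fraction

def prize_probability(die_a, die_b):
    # P(Alice and Bob roll the same sum) = sum over s of P(sum = s)^2.
    sums = {}
    for i, p in enumerate(die_a, start=1):
        for j, q in enumerate(die_b, start=1):
            sums[i + j] = sums.get(i + j, Fraction(0)) + p * q
    return sum(p * p for p in sums.values())

fair = [Fraction(1, 6)] * 6
print(prize_probability(fair, fair))  # 73/648, about 0.1127
print(Fraction(1, 11))                # the unreachable bound, about 0.0909
```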


3-Inflatable Permutations

We can inflate one permutation with another permutation. Let me define the inflation by examples and pictures. Suppose we have a permutation 132 which we want to inflate using permutation 21. The result is the permutation 216543 that can be divided into three blocks 21|65|43. The blocks are ordered as the first permutation 132, and within each block the order is according to the second permutation. This operation is often called a tensor product of two permutations. The operation is non-commutative: the inflation of 21 with 132 is 465132.

Inflation Definition
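The block construction above is easy to code. Here is a Python sketch of inflation in one-line notation, with values starting from 1 (the function name is mine):

```python
def inflate(outer, inner):
    # Replace each entry v of `outer` with a block patterned after
    # `inner`: the block gets the values (v-1)*len(inner)+1 .. v*len(inner),
    # ordered according to `inner`.
    k = len(inner)
    result = []
    for v in outer:
        base = (v - 1) * k
        result.extend(base + w for w in inner)
    return result

print(inflate([1, 3, 2], [2, 1]))  # [2, 1, 6, 5, 4, 3], i.e., 216543
print(inflate([2, 1], [1, 3, 2]))  # [4, 6, 5, 1, 3, 2], i.e., 465132
```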

As one might guess, this post is related to k-symmetric permutations, that is, permutations that contain all possible patterns of size k with the same frequency. As I mentioned in my recent post 3-Symmetric Permutations, the smallest non-trivial examples of 3-symmetric permutations are 349852167 and 761258943, of size 9.

A permutation is called k-inflatable if its inflation with a k-symmetric permutation is k-symmetric. One of my PRIMES projects was about 3-inflatable permutations. The result of this project is the paper On 3-Inflatable Permutations, written with Eric Zhang and posted on the arXiv.

The smallest non-trivial examples of 3-inflatable permutations have size 17: E534BGA9HC2D1687F and B3CE1H76F5A49D2G8, where capital letters denote numbers greater than nine. Another cool property discovered in the paper is that the tensor product of two k-inflatable permutations is k-inflatable.

Symmetries of k-Symmetric Permutations

I am fascinated by 3-symmetric permutations, that is, permutations that contain all possible patterns of size three with the same frequency. As I mentioned in my recent post 3-Symmetric Permutations, the smallest non-trivial examples have size 9.

When I presented these examples at a combinatorics pre-seminar, Sasha Postnikov suggested drawing the permutations as graphs or matrices. Why didn't I think of that?

Below are the drawings of the only two 3-symmetric permutations of size 9: 349852167 and 761258943.

3-Symmetric Permutations

As I already mentioned in the aforementioned essay, the set of 3-symmetric permutations is invariant under reversal and under subtracting each number from the size of the permutation plus 1. In geometrical terms, reversal is a reflection across the vertical midline, the subtraction is a reflection across the horizontal midline, and their composition is central symmetry. But as you can see, the pictures are also invariant under a 90-degree rotation. Why?

What I forgot to mention was that the set of k-symmetric permutations doesn't change under inversion. In geometrical terms, inversion means reflection with respect to the main diagonal. If you combine a reflection with respect to a diagonal with a reflection with respect to a vertical line, you get a 90-degree rotation. Overall, the symmetries of the set of k-symmetric permutations are the same as the symmetries of a square. This means we only need to look at the shapes of the k-symmetric permutations.

There are six 2-symmetric permutations of size 4: 1432, 2341, 2413, 3142, 3214, 4123. As we can see in the picture below, they have two different shapes.

2-Symmetric Permutations

Here is the list of all twenty-two 2-symmetric permutations of size 5: 14532, 15342, 15423, 23541, 24351, 24513, 25143, 25314, 31542, 32451, 32514, 34152, 34215, 35124, 41352, 41523, 42153, 42315, 43125, 51243, 51324, 52134. The list was posted by Drake Thomas in the comments to my essay. Up to symmetries, the permutations form four groups. Group 1: 14532, 15423, 23541, 32451, 34215, 43125, 51243, 52134. Group 2: 15342, 24351, 42315, 51324. Group 3: 24513, 25143, 31542, 32514, 34152, 35124, 41523, 42153. Group 4: 25314, 41352. The picture shows the first permutation in each group.
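These counts are easy to verify by brute force. A Python sketch that enumerates 2-symmetric permutations, that is, permutations with equally many inversions and non-inversions:

```python
from itertools import combinations, permutations

def is_2_symmetric(p):
    # A permutation is 2-symmetric when exactly half of its pairs
    # are inversions.
    inv = sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])
    return 2 * inv == len(p) * (len(p) - 1) // 2

size4 = [p for p in permutations(range(1, 5)) if is_2_symmetric(p)]
size5 = [p for p in permutations(range(1, 6)) if is_2_symmetric(p)]
print(len(size4), len(size5))  # 6 22
```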

2-Symmetric Permutations of Size 5


3-Symmetric Graphs

In my previous post I described 3-symmetric permutations. Now I want to define 3-symmetric graphs.

A reminder: a k-symmetric permutation is such that the densities of all permutations of size k in it are the same. In particular a 2-symmetric permutation has the same number of inversions and non-inversions. How do we translate this to graphs? I call a graph 2-symmetric if it has the same number of edges as non-edges. 2-symmetric graphs are easy to construct, but they only exist if the number of vertices, n, is such that n choose 2 is even. The simplest non-trivial examples are graphs with 4 vertices and three edges.

The above definition is difficult to generalize. So I rephrase: a graph G is 2-symmetric, if the density of any subgraph H with 2 vertices in G is the same as the expected density of H in a random graph where the probability of an edge equals 1/2. This definition is easy to generalize. A graph G is k-symmetric, if the density of any subgraph H with k vertices in G is the same as the expected density of H in a random graph where the probability of an edge equals 1/2. In particular, here are the densities of all four possible subgraphs with 3 vertices in a 3-symmetric graph:

  • A complete graph with 3 vertices: 1/8,
  • A path graph with 3 vertices: 3/8,
  • A graph with 3 vertices and only one edge: 3/8,
  • A graph with 3 isolated vertices: 1/8.

For a graph G to be 3-symmetric, the number of vertices, n, in G needs to be such that n choose 3 is divisible by 8. The first non-trivial case is n = 8. Here are the pictures of two 3-symmetric graphs. The first one is a wheel, and the second one is its complement.

A Wheel Graph with 8 vertices           A Complement to the Wheel Graph with 8 vertices
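One can verify that the wheel is 3-symmetric by counting triples of vertices directly. A Python sketch, with the wheel built as a hub joined to a 7-cycle:

```python
from itertools import combinations

# Wheel on 8 vertices: hub 0 joined to the cycle 1-2-...-7-1.
edges = {frozenset((0, i)) for i in range(1, 8)}
edges |= {frozenset((i, i % 7 + 1)) for i in range(1, 8)}

# Tally the 56 triples of vertices by how many edges they span.
counts = [0, 0, 0, 0]
for triple in combinations(range(8), 3):
    spanned = sum(1 for pair in combinations(triple, 2) if frozenset(pair) in edges)
    counts[spanned] += 1

print(counts)  # [7, 21, 21, 7] — densities 1/8, 3/8, 3/8, 1/8
```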


3-Symmetric Permutations

I want to study patterns inside permutations. Consider a permutation of size 4: 1432. Today I am excited by the ordered subsets of size 3 inside this permutation. For example, I can drop the last number and look at 143. The ordering in 143 is the same as in 132, or, as a mathematician would say, 143 is order-isomorphic to 132. In my example of 1432, I have four subsets, depending on which number I remove: 432 is order-isomorphic to 321, while 132, 142, and 143 are all order-isomorphic to 132.

The density of the pattern 123 in the permutation 1432 is the ratio of the number of subsets of size 3 that are order-isomorphic to 123 to the number of all subsets of size 3. As I already calculated, the density of 123 in 1432 is zero, while the density of 321 is 1/4, and the density of 132 is 3/4.
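A Python sketch of this density computation (the helper name is mine):

```python
from itertools import combinations
from math import comb

def pattern_densities(perm):
    # Map each 3-element subsequence to the size-3 pattern it is
    # order-isomorphic to, and count how often each pattern occurs.
    counts = {}
    for triple in combinations(perm, 3):
        ranks = sorted(triple)
        pattern = tuple(ranks.index(x) + 1 for x in triple)
        counts[pattern] = counts.get(pattern, 0) + 1
    total = comb(len(perm), 3)
    return {p: c / total for p, c in counts.items()}

print(pattern_densities([1, 4, 3, 2]))
# {(1, 3, 2): 0.75, (3, 2, 1): 0.25}
```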

A permutation is called 3-symmetric if the densities of all six patterns of size 3 (123, 132, 213, 231, 312, 321) are the same. The reason I love this notion is that 3-symmetric permutations are similar to random permutations with respect to patterns of size 3.

My example permutation, 1432, is not 3-symmetric. Thinking about it, no permutation of size 4 can be 3-symmetric. The number of subsets of size 3 is four, which is not divisible by 6.

I wanted to find 3-symmetric permutations. So the size n of the permutation needs to be such that n choose 3 is divisible by 6. The numbers with this property are easy to find. The sequence starts as 1, 2, 9, 10, 18, 20, 28, 29, 36, 37, 38, 45, 46. The sequence is periodic with period 36.
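Here is a quick way to reproduce the beginning of this sequence in Python:

```python
from math import comb

# Sizes n for which n choose 3 is divisible by 6, up to 46.
sizes = [n for n in range(1, 47) if comb(n, 3) % 6 == 0]
print(sizes)
# [1, 2, 9, 10, 18, 20, 28, 29, 36, 37, 38, 45, 46]
```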

Any permutation of size 1 or 2 is 3-symmetric, as all the densities are zero. Duh!

The next interesting size is 9. My student, Eric Zhang, wrote a program and found that there are exactly two 3-symmetric permutations of size 9: 349852167 and 761258943. These numbers are so cool! First, they are reverses of each other. This is not very surprising: if a permutation is 3-symmetric, then its reverse must also be 3-symmetric. There is another property: each permutation is the complement of the other, that is, the sum of the two digits in the same place is always 10. You can see that complementing a 3-symmetric permutation produces a 3-symmetric permutation.

I decided to write a program to find 3-symmetric permutations of the next size: 10. There are none. I do not trust my programming skills completely, so I adjusted my program to size 9 and got the same result as Eric. I trust Eric's programming skills, so I am pretty sure that there are no 3-symmetric permutations of size 10. Maybe there are some 3-symmetric permutations of size 18.

Let’s find 2-symmetric permutations. These are permutations with the same number of ascends and descends inversions and non-inversions. For n to be the size of such permutation n choose 2 needs to be divisible by 2. That means n has to have remainder 0 or 1 modulo 4. The first nontrivial case is n = 4. There are six 2-symmetric permutations: 1432, 2341, 2413, 3142, 3214, 4123. We can also group them into reversible pairs: 1432 and 2341, 2413 and 3142, 3214 and 4123. If we look at rotational symmetry we get different pairs: 1432 and 4123, 2341 and 3214, 2413 and 3142.

You can try to find non-trivial 4-symmetric permutations. Good luck! The smallest nontrivial size is 33. Finding 5-symmetric permutations is way easier: the smallest nontrivial size is 28. The sequence of smallest nontrivial sizes as a function of k is: 1, 4, 9, 33, 28, 165, 54, 1029, 40832, 31752, 28680, 2588680, 2162700, and so on. My computer crashed while calculating it.

Shapes of Symbols in Latin Squares

Once John Conway showed me a cute way to enumerate Latin squares of size 4, up to the movements of the plane. It was a joint result with Alex Ryba, which is now written up in the paper Kenning KenKen.

For starters, I want to remind you that a Latin square of size n is an n by n table filled with integers 1 through n, so that no row or column contains a repeated integer. KenKen is a game that John Conway likes, in which you need to recover a Latin square given some information about it.

Let me start by describing the shapes of four cells that a single digit can occupy in a Latin square of size 4. There are only seven different shapes. To get to the beautiful result, we need to number these seven shapes in a particular order starting from zero. The shapes are shown below.

Shape 0 Shape 1 Shape 2 Shape 3 Shape 4 Shape 5 Shape 6

There are 12 different Latin squares up to movements of the square and relabeling of the digits. Here is how Conway and Ryba match shapes and squares. For each Latin square, take the shapes of all four digits, remove the duplicate shape numbers, and sum the leftover shape numbers. You will get a unique number from 1 to 12 that represents a particular Latin square. For example, consider the square in the picture below.

Square 12

Digit 1 is shape 4, digits 2 and 4 form shape 2, and digit 3 forms shape 6. Shape 2 is used twice, and we ignore multiplicities. So we have shapes 2, 4, and 6 used. The resulting Latin square is number 2 + 4 + 6, that is, 12. It is a fun exercise to try to find all the squares. For example, square 1 can only use shapes 0 and 1. But shape 1 uses exactly one corner. So the first square should use each of the digits in shape 1.

John likes finding interesting ways to remember which shape is which. You can find his and Alex's suggestions in the paper, which Alex submitted to the arXiv.

Oops! While I was writing this essay, the arXiv rejected the paper.

A Domino-Covering Problem

I do not remember where I saw this problem.

Problem. Invent a connected shape made out of squares on the square grid that cannot be cut into dominoes (rectangles with sides 1 and 2), but such that if you add a domino to the shape, the new, bigger shape can be cut into dominoes.

This problem reminds me of another famous and beautiful domino-covering problem.

Problem. Two opposite corner squares are cut out from the 8 by 8 square board. Can you cover the remaining shape with dominoes?

The solution to the second problem is to color the shape like a chessboard and check that the numbers of black and white squares are not equal.
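The count is a short computation. A Python sketch for the standard board with two opposite corners removed:

```python
# Color the mutilated chessboard and count the two colors.
removed = {(0, 0), (7, 7)}  # two opposite corners, which share a color
colors = [0, 0]
for i in range(8):
    for j in range(8):
        if (i, j) not in removed:
            colors[(i + j) % 2] += 1

print(colors)  # [30, 32] — unequal, so no domino tiling exists
```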

What is interesting about the first problem is that it passes the color test. It made me wonder: Is there a way to characterize the shapes on a square grid that pass the color test, but still can't be covered by dominoes?

Fair-Share Sequences

Every time I visit Princeton, or otherwise am in the same city as my friend John Conway, I invite him for lunch or dinner. I have this rule for myself: I invite, I pay. If we are in the same place for several meals we alternate paying. Once John Conway complained that our tradition is not fair to me. From time to time we have an odd number of meals per visit and I end up paying more. I do not trust my memory, so I prefer simplicity. I resisted any change to our tradition. We broke the tradition only once, but that is a story for another day.

Let’s discuss the mathematical way of paying for meals. Many people suggest using the Thue-Morse sequence instead of the alternating sequence of taking turns. When you alternate, you use the sequence ABABAB…. If this is the order of paying for things, the sequence gives advantage to the second person. So the suggestion is to take turns taking turns: ABBAABBAABBA…. If you are a nerd like me, you wouldn’t stop here. This new rule can also give a potential advantage to one person, so we should take turns taking turns taking turns. Continuing this to infinity we get the Thue-Morse sequence: ABBABAABBAABABBA… The next 2n letters are generated from the first 2n by swapping A and B. Some even call this sequence a fair-share sequence.

Should I go ahead and implement this sequence each time I cross paths with John Conway? Actually, the fairness of this sequence is overrated. I probably have 2 or 3 meals with John per trip. If I pay first every time, this sequence will give me an advantage. It only makes sense to use it if there is a very long stretch of meals. This could happen, for example, if we end up living in the same city. But in this case, the alternating sequence is not so bad either, and is much simpler.

Many people suggest another use for this sequence. Suppose you are divorcing and dividing a huge pile of your possessions. A wrong way to do it is to take turns. First Alice choses a piece she wants, then Bob, then Alice, and so on. Alice has the advantage as the first person to choose. An alternative suggestion I hear in different places, for example from standupmaths, is to use the Thue-Morse sequence. I don’t like this suggestion either. If Alice and Bob value their stuff differently, there is a better algorithm, called the Knaster inheritance procedure, that allows each of them to think they are getting more than a half. If both of them have the same value for each piece, then the Thue-Morse sequence might not be good either. Suppose one of the pieces they are dividing is worth more than everything else put together. Then the only reasonable way to take turns is ABBBB….

The beauty of the Thue-Morse sequence is that it works very well if there are a lot of items and their consecutive prices form a power function of a small degree k, such as a square or a cube function. After 2ᵏ⁺¹ turns made according to this sequence, Alice and Bob will have a tie. You might think that if the sequence of prices doesn't grow very fast, then using the Thue-Morse sequence is okay.

Not so fast. Here is a sequence of prices that I constructed specifically for this purpose: 5, 4, 4, 4, 3, 3, 3, 2, 2, 2, 2, 1, 1, 0, 0, 0. The rule is: every time a turn in the Thue-Morse sequence switches from A to B, the value goes down by 1. Alice gets an extra 1 every time she is in the odd position, which is exactly half of her turns. That is, every four turns she gets an extra 1.
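One can check this claim in a few lines of Python, reusing the swap-and-append construction of the Thue-Morse sequence:

```python
def thue_morse(length):
    # Build the Thue-Morse sequence by repeatedly appending
    # a swapped copy of the sequence so far.
    seq = "A"
    while len(seq) < length:
        seq += seq.translate(str.maketrans("AB", "BA"))
    return seq[:length]

prices = [5, 4, 4, 4, 3, 3, 3, 2, 2, 2, 2, 1, 1, 0, 0, 0]
turns = thue_morse(len(prices))
alice = sum(p for t, p in zip(turns, prices) if t == "A")
bob = sum(p for t, p in zip(turns, prices) if t == "B")
print(alice, bob)  # 20 16 — Alice is ahead by 4 after 16 turns
```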

If the prices grow faster than a power function, then the sequence doesn't work either. Suppose your pieces have values that form a Fibonacci sequence. Take a look at what happens after seven turns. Alice will have pieces priced Fₙ + Fₙ₋₃ + Fₙ₋₅ + Fₙ₋₆. Bob will have Fₙ₋₁ + Fₙ₋₂ + Fₙ₋₄. We see that Alice gets more by Fₙ₋₃. This value is bigger than the value of all the leftovers together.

I suggest a different way to divide the Fibonacci-priced possessions. If Alice takes the first piece, then Bob should take the next two pieces to tie with Alice. So the sequence might be ABBABBABB…. I can combine this idea with flipping turns. So we start with a triple ABB, then switch to BAA. After that we can continue and flip the whole thing: ABBBAABAAABB. Then we flip the whole thing again. And again and again. At the end we get a sequence that I decided to call the Fibonacci fair-share sequence.

I leave you with an exercise: describe the Tribonacci fair-share sequence.

Winning Nim Against a Player who Plays Randomly

I recently wrote about my way of playing Nim against a player who doesn't know how to play. If it is my move in an N-position, then I obviously win. If it is my move in a P-position, I remove one token, hoping that more tokens for my opponent means more opportunities for them to make a mistake. But which token should I remove? Does it make a difference from which pile I choose?

Consider the position (2,4,6). If I take one token, my opponent has 11 different moves. If I choose one token from the first or the last pile, my opponent needs to get to (1,4,5) not to lose. If I choose one token from the middle pile, my opponent needs to get to (1,3,2) not to lose. But the first possibility is better, because there are more tokens left, which gives me a better chance to have a longer game in case my opponent guesses correctly.

That is the strategy I actually use: I take one token so that the only way for the opponent to win is to take one token too.

This is a good heuristic idea, but to make such a strategy precise we need to know the probability distribution of the moves of my opponent. So let us assume that s/he picks a move uniformly at random. Suppose I start in a P-position with n tokens and remove one of them. My opponent then faces an N-position with n − 1 tokens and thus has n − 1 possible moves, at least one of which goes to a P-position. That means my chance to get on the winning track after the first move is not more than (n−2)/(n−1).

If there are 2 or 3 heaps, then the best strategy is to go for the longest game. With this strategy my opponent always has exactly one move that reaches a P-position, so I win after the first turn with probability (n−2)/(n−1). I lose the game with probability 1/(n−1)!!.

Something interesting happens if there are more than three heaps. In this case it is possible to have more than one winning move from an N-position, and it is not obvious that I should play the longest game. Consider the position (1,3,5,7). If I remove one token, then my opponent has three winning moves, each to a position with 14 tokens. On the other hand, if I remove 2 tokens from the second or the fourth pile, then my opponent has only one good move, though to a position with only 12 tokens. What should I do?

I leave it to my readers to calculate the optimal strategy against a random player starting from position (1,3,5,7).
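For readers who prefer to experiment, here is a Python sketch of the underlying recursion: the optimizing player maximizes over moves, while the random opponent averages over them (the function names are mine):

```python
from functools import lru_cache
from fractions import Fraction

def moves(piles):
    # All positions reachable by removing 1..k tokens from one pile.
    for i, size in enumerate(piles):
        for take in range(1, size + 1):
            yield tuple(sorted(piles[:i] + (size - take,) + piles[i + 1:]))

@lru_cache(maxsize=None)
def p_win(piles, my_turn):
    # Probability that I take the last token, assuming I play optimally
    # and my opponent moves uniformly at random.
    if sum(piles) == 0:
        return Fraction(0) if my_turn else Fraction(1)
    options = [p_win(nxt, not my_turn) for nxt in moves(piles)]
    return max(options) if my_turn else sum(options) / len(options)

print(p_win((2, 2), True))        # 2/3: best play against a random player
print(p_win((1, 3, 5, 7), True))  # the answer to the exercise
```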