Archive for the ‘Math’ Category.

3-Symmetric Graphs

In my previous post I described 3-symmetric permutations. Now I want to define 3-symmetric graphs.

A reminder: a k-symmetric permutation is one in which the densities of all patterns of size k are the same. In particular, a 2-symmetric permutation has the same number of inversions and non-inversions. How do we translate this to graphs? I call a graph 2-symmetric if it has the same number of edges as non-edges. 2-symmetric graphs are easy to construct, but they only exist if the number of vertices, n, is such that n choose 2 is even. The simplest non-trivial examples are graphs with 4 vertices and three edges.

The above definition is difficult to generalize. So I rephrase: a graph G is 2-symmetric, if the density of any subgraph H with 2 vertices in G is the same as the expected density of H in a random graph where the probability of an edge equals 1/2. This definition is easy to generalize. A graph G is k-symmetric, if the density of any subgraph H with k vertices in G is the same as the expected density of H in a random graph where the probability of an edge equals 1/2. In particular, here are the densities of all four possible subgraphs with 3 vertices in a 3-symmetric graph:

  • A complete graph with 3 vertices: 1/8,
  • A path graph with 3 vertices: 3/8,
  • A graph with 3 vertices and only one edge: 3/8,
  • A graph with 3 isolated vertices: 1/8.

For a graph G to be 3-symmetric, the number of vertices, n, in G needs to be such that n choose 3 is divisible by 8. The first non-trivial case is n = 8. Here are the pictures of two 3-symmetric graphs. The first one is a wheel, and the second one is its complement.

[Figures: a wheel graph with 8 vertices, and the complement of the wheel graph with 8 vertices.]
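To make the definition concrete, here is a minimal Python sketch that counts the four types of 3-vertex subgraphs in a graph and compares them with the densities 1/8, 3/8, 3/8, 1/8. The encoding of the wheel (hub 0 joined to a 7-cycle) is my own illustration of the picture above.

```python
from itertools import combinations

def triple_counts(n, edges):
    """Count 3-vertex subgraphs by the number of edges among the triple (0..3)."""
    edge_set = {frozenset(e) for e in edges}
    counts = {0: 0, 1: 0, 2: 0, 3: 0}
    for triple in combinations(range(n), 3):
        k = sum(frozenset(pair) in edge_set for pair in combinations(triple, 2))
        counts[k] += 1
    return counts

# Wheel on 8 vertices: hub 0 joined to every vertex of the 7-cycle 1..7.
wheel = [(0, i) for i in range(1, 8)] + [(i, i % 7 + 1) for i in range(1, 8)]
print(triple_counts(8, wheel))
# 3-symmetric means proportions 1/8, 3/8, 3/8, 1/8 of C(8,3) = 56,
# i.e. {0: 7, 1: 21, 2: 21, 3: 7} (empty, one edge, path, complete).
```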

3-Symmetric Permutations

I want to study patterns inside permutations. Consider a permutation of size 4: 1432. Today I am excited by ordered subsets of size 3 inside this permutation. For example, I can drop the last number and look at 143. The ordering in 143 is the same as in 132, or, as a mathematician would say, 143 is order-isomorphic to 132. In my example of 1432, I have four subsets depending on which number I remove: 432 is order-isomorphic to 321, while 132, 142 and 143 are all order-isomorphic to 132.

The density of the pattern 123 in the permutation 1432 is the ratio of the number of subsets of size 3 that are order-isomorphic to 123 to the total number of subsets of size 3. As I already calculated, the density of 123 in 1432 is zero, while the density of 321 is 1/4, and the density of 132 is 3/4.
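Here is a small Python sketch of this density computation; it recovers the numbers above for 1432. The function name and the way patterns are written as rank tuples are mine, just for illustration.

```python
from itertools import combinations

def pattern_densities(perm, k=3):
    """Densities of all size-k patterns in perm; a pattern is written as a tuple of ranks."""
    counts = {}
    total = 0
    for subset in combinations(perm, k):   # subsets keep their left-to-right order
        pattern = tuple(sorted(subset).index(x) + 1 for x in subset)
        counts[pattern] = counts.get(pattern, 0) + 1
        total += 1
    return {p: c / total for p, c in sorted(counts.items())}

print(pattern_densities((1, 4, 3, 2)))
# {(1, 3, 2): 0.75, (3, 2, 1): 0.25} -- densities 3/4 for 132 and 1/4 for 321
```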

A permutation is called 3-symmetric if the densities of all patterns of size 3 (123, 132, 213, 231, 312, 321) are the same. The reason I love this notion is that 3-symmetric permutations are similar to random permutations with respect to patterns of size 3.

My example permutation, 1432, is not 3-symmetric. Thinking about it, no permutation of size 4 can be 3-symmetric. The number of subsets of size 3 is four, which is not divisible by 6.

I wanted to find 3-symmetric permutations. So the size n of the permutation needs to be such that n choose 3 is divisible by 6. The numbers with this property are easy to find. The sequence starts as 1, 2, 9, 10, 18, 20, 28, 29, 36, 37, 38, 45, 46. The sequence is periodic with period 36.
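A quick check of this divisibility condition (a sketch; the range up to 49 is arbitrary):

```python
from math import comb

# Sizes n for which C(n, 3) is divisible by 6 -- the necessary condition above.
print([n for n in range(1, 50) if comb(n, 3) % 6 == 0])
# [1, 2, 9, 10, 18, 20, 28, 29, 36, 37, 38, 45, 46]

# The condition depends only on n modulo 36, hence the period of 36.
print(all((comb(n, 3) % 6 == 0) == (comb(n + 36, 3) % 6 == 0) for n in range(1, 200)))  # True
```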

Any permutation of sizes 1 or 2 is 3-symmetric as all densities are zero. Duh!

The next interesting size is 9. My student, Eric Zhang, wrote a program and found that there are two 3-symmetric permutations of size 9: 349852167 and 761258943. These numbers are so cool! First, they are reverses of each other. This is not very surprising: if a permutation is 3-symmetric, then its reverse must also be 3-symmetric. There is another property: the permutations are rotational symmetries of each other. That is, the sum of the two digits in the same place is 10. You can see that rotating a 3-symmetric permutation produces a 3-symmetric permutation.
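These two permutations are easy to double-check by counting all C(9,3) = 84 triples; if they are 3-symmetric, each of the six patterns occurs 14 times. A minimal sketch (the helper below is mine):

```python
from itertools import combinations
from collections import Counter

def pattern_counts(perm):
    """Number of occurrences of each size-3 pattern in perm."""
    rank = lambda t: tuple(sorted(t).index(x) for x in t)
    return Counter(rank(t) for t in combinations(perm, 3))

for p in ("349852167", "761258943"):
    counts = pattern_counts([int(c) for c in p])
    print(p, sorted(counts.values()))   # expect six equal counts: 84 / 6 = 14 each
```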

I decided to write a program to find 3-symmetric permutations of the next size: 10. There are none. I do not trust my programming skills completely, so I adjusted my program to size 9 and got the same result as Eric. I trust Eric’s programming skills, so I am pretty sure that there are no 3-symmetric permutations of size 10. Maybe there are some 3-symmetric permutations of size 18.

Let’s find 2-symmetric permutations. These are permutations with the same number of inversions and non-inversions. For n to be the size of such a permutation, n choose 2 needs to be divisible by 2. That means n has to have remainder 0 or 1 modulo 4. The first nontrivial case is n = 4. There are six 2-symmetric permutations: 1432, 2341, 2413, 3142, 3214, 4123. We can also group them into reversible pairs: 1432 and 2341, 2413 and 3142, 3214 and 4123. If we look at rotational symmetry we get different pairs: 1432 and 4123, 2341 and 3214, 2413 and 3142.

You can try to find non-trivial 4-symmetric permutations. Good luck! The smallest nontrivial size is 33. Finding 5-symmetric permutations is way easier: the smallest nontrivial size is 28. The sequence of the smallest nontrivial sizes for k-symmetric permutations, as a function of k, is: 1, 4, 9, 33, 28, 165, 54, 1029, 40832, 31752, 28680, 2588680, 2162700, and so on. My computer crashed while calculating it.
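Reading "nontrivial size" as the smallest n for which n choose k is divisible by k! (the necessary condition used throughout this post), a short sketch reproduces the beginning of this sequence; this interpretation is my assumption.

```python
from math import comb, factorial

def smallest_size(k):
    """Smallest n with C(n, k) divisible by k! -- the necessary condition for
    a k-symmetric permutation of size n (assuming this is what 'nontrivial size' means)."""
    n = k
    while comb(n, k) % factorial(k):
        n += 1
    return n

print([smallest_size(k) for k in range(1, 9)])
# expected to match the start of the sequence above: 1, 4, 9, 33, 28, 165, 54, 1029
```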


Shapes of Symbols in Latin Squares

Once John Conway showed me a cute way to enumerate Latin squares of size 4, up to the movements of the plane. It was a joint result with Alex Ryba, which is now written in a paper Kenning KenKen.

For starters, I want to remind you that a Latin square of size n is an n by n table filled with integers 1 through n so that no row or column has a repeated integer. KenKen is a game that John Conway likes, where you need to recover a Latin square given some information about it.

Let me start by describing the shapes formed by the four cells that one digit occupies in a Latin square of size 4. There are only seven different shapes. To get to the beautiful result, we need to number these seven shapes in a particular order starting from zero. The shapes are shown below.

[Figure: the seven shapes, labeled Shape 0 through Shape 6.]
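The count of seven shapes is easy to verify by brute force: the cells of one digit form a permutation matrix of size 4, and we only need to count such matrices up to the symmetries of the square. A sketch of that count (it does not reproduce Conway and Ryba's numbering, only the total):

```python
from itertools import permutations

def symmetries(cells, n=4):
    """All 8 images of a cell set under rotations and reflections of the n-by-n square."""
    images = []
    current = cells
    for _ in range(4):
        current = frozenset((j, n - 1 - i) for i, j in current)    # rotate 90 degrees
        images.append(current)
        images.append(frozenset((j, i) for i, j in current))        # then reflect
    return images

shapes = set()
for perm in permutations(range(4)):
    cells = frozenset(enumerate(perm))        # one cell in each row and each column
    shapes.add(min(tuple(sorted(img)) for img in symmetries(cells)))
print(len(shapes))   # 7
```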

There are 12 different Latin squares up to movements of the square and relabeling the digits. Here is how Conway and Ryba match shapes and squares. For each Latin square, take the shapes of all four digits, remove the duplicate shape numbers, and sum the leftover shape numbers. You will get a unique number from 1 to 12 that represents a particular Latin square. For example, consider the square in the picture below.

[Figure: Square 12.]

Digit 1 is shape 4, digits 2 and 4 form shape 2, and digit 3 forms shape 6. Shape 2 is used twice, and we ignore multiplicities. So we have shapes 2, 4, and 6 used. The resulting Latin square is number 2 + 4 + 6, that is 12. It is a fun exercise to try to find all the squares. For example, square 1 can only use shapes 0 and 1. But shape 1 uses exactly one corner. So the first square should use each of the digits in shape 1.

John likes finding interesting ways to remember which shape is which. You can find his and Alex’s suggestions in the paper which Alex submitted to the arxiv.

Oops! While I was writing this essay, arxiv rejected the paper.


A Domino-Covering Problem

I do not remember where I saw this problem.

Problem. Invent a connected shape made out of squares on the square grid that cannot be cut into dominoes (rectangles with sides 1 and 2), but such that if you add a domino to the shape, then the new, bigger shape can be cut into dominoes.

This problem reminds me of another famous and beautiful domino-covering problem.

Problem. Two opposite corner squares are cut out from the 8 by 8 square board. Can you cover the remaining shape with dominoes?

The solution to the second problem is to color the shape as a chess board and check that the numbers of black and white squares are not the same: the two removed corners have the same color, so 32 squares of one color and 30 of the other remain, while every domino covers one square of each color.

What is interesting about the first problem is that it passes the color test. It made me wonder: Is there a way to characterize the shapes on a square grid that pass the color test, but still can’t be covered in dominoes?


Fair-Share Sequences

Every time I visit Princeton, or otherwise am in the same city as my friend John Conway, I invite him for lunch or dinner. I have this rule for myself: I invite, I pay. If we are in the same place for several meals we alternate paying. Once John Conway complained that our tradition is not fair to me. From time to time we have an odd number of meals per visit and I end up paying more. I do not trust my memory, so I prefer simplicity. I resisted any change to our tradition. We broke the tradition only once, but that is a story for another day.

Let’s discuss the mathematical way of paying for meals. Many people suggest using the Thue-Morse sequence instead of the alternating sequence of taking turns. When you alternate, you use the sequence ABABAB…. If this is the order of paying for things, the sequence gives an advantage to the second person. So the suggestion is to take turns taking turns: ABBAABBAABBA…. If you are a nerd like me, you wouldn’t stop here. This new rule can also give a potential advantage to one person, so we should take turns taking turns taking turns. Continuing this to infinity we get the Thue-Morse sequence: ABBABAABBAABABBA… The next 2^n letters are generated from the first 2^n by swapping A and B. Some even call this sequence a fair-share sequence.

Should I go ahead and implement this sequence each time I cross paths with John Conway? Actually, the fairness of this sequence is overrated. I probably have 2 or 3 meals with John per trip. If I pay first every time, this sequence will give me an advantage. It only makes sense to use it if there is a very long stretch of meals. This could happen, for example, if we end up living in the same city. But in this case, the alternating sequence is not so bad either, and is much simpler.

Many people suggest another use for this sequence. Suppose you are divorcing and dividing a huge pile of your possessions. A wrong way to do it is to take turns. First Alice chooses a piece she wants, then Bob, then Alice, and so on. Alice has the advantage as the first person to choose. An alternative suggestion I hear in different places, for example from standupmaths, is to use the Thue-Morse sequence. I don’t like this suggestion either. If Alice and Bob value their stuff differently, there is a better algorithm, called the Knaster inheritance procedure, that allows each of them to think they are getting more than half. If both of them have the same value for each piece, then the Thue-Morse sequence might not be good either. Suppose one of the pieces they are dividing is worth more than everything else put together. Then the only reasonable way to take turns is ABBBB….

The beauty of the Thue-Morse sequence is that it works very well if there are a lot of items and their consecutive prices form a power function of a small degree k, such as a square or a cube function. After 2^(k+1) turns made according to this sequence, Alice and Bob will have a tie. You might think that if the sequence of prices doesn’t grow very fast, then using the Thue-Morse sequence is okay.

Not so fast. Here is the sequence of prices that I specifically constructed for this purpose: 5,4,4,4,3,3,3,2,2,2,2,1,1,0,0,0. The rule is: every time a turn in the Thue-Morse sequence switches from A to B, the value goes down by 1. Alice gets an extra 1 every time she is in an odd position. This is exactly half of her turns. That is, every four turns she gets an extra 1.
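A small sketch that rebuilds this example: generate the Thue-Morse turns, apply the price rule above, and compare the two totals. (Variable names are mine.)

```python
def thue_morse(n):
    """First n turns as 'A'/'B': 'A' when the binary weight of the index is even."""
    return ['A' if bin(i).count('1') % 2 == 0 else 'B' for i in range(n)]

turns = thue_morse(16)                       # ABBABAABBAABABBA
prices, value = [5], 5
for prev, cur in zip(turns, turns[1:]):
    if prev == 'A' and cur == 'B':           # the value drops at every A -> B switch
        value -= 1
    prices.append(value)
print(prices)                                # [5, 4, 4, 4, 3, 3, 3, 2, 2, 2, 2, 1, 1, 0, 0, 0]
alice = sum(p for t, p in zip(turns, prices) if t == 'A')
bob = sum(p for t, p in zip(turns, prices) if t == 'B')
print(alice, bob)                            # 20 16 -- Alice comes out ahead by 4
```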

If the prices grow faster than a power, then the sequence doesn’t work either. Suppose your pieces have values that form a Fibonacci sequence. Take a look at what happens after seven turns. Alice will have pieces priced F_n + F_{n−3} + F_{n−5} + F_{n−6}. Bob will have F_{n−1} + F_{n−2} + F_{n−4}. We see that Alice gets more by F_{n−3}. This value is bigger than the value of all the leftovers together.

I suggest a different way to divide the Fibonacci-priced possessions. If Alice takes the first piece, then Bob should take the next two pieces to tie with Alice. So the sequence might be ABBABBABB…. I can combine this idea with flipping turns. So we start with a triple ABB, then switch to BAA. After that we can continue and flip the whole thing: ABBBAABAAABB. Then we flip the whole thing again. And again and again. At the end we get a sequence that I decided to call the Fibonacci fair-share sequence.
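A sketch of this construction, reading it as: start with ABB and repeatedly append the flipped copy of the whole string built so far.

```python
def flip(s):
    return s.translate(str.maketrans("AB", "BA"))

def fibonacci_fair_share(doublings):
    """Start with ABB and repeatedly append the flip of everything so far."""
    s = "ABB"
    for _ in range(doublings):
        s += flip(s)
    return s

print(fibonacci_fair_share(2))   # ABBBAABAAABB, as in the text
```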

I leave you with an exercise. Describe the Tribonacci fair-share sequence.


Winning Nim Against a Player who Plays Randomly

I recently wrote about my way of playing Nim against a player who doesn’t know how to play. If my move starts in an N-position, then I obviously win. If my move starts in a P-position, I would remove one token hoping that more tokens for my opponent means more opportunity for them to make a mistake. But which token to remove? Does it make a difference from which pile I choose?

Consider the position (2,4,6). If I take one token, my opponent has 11 different moves. If I choose one token from the first or the last pile, my opponent needs to get to (1,4,5) not to lose. If I choose one token from the middle pile, my opponent needs to get to (1,3,2) not to lose. But the first possibility is better, because there are more tokens left, which gives me a better chance to have a longer game in case my opponent guesses correctly.

That is the strategy I actually use: I take one token so that the only way for the opponent to win is to take one token too.

This is a good heuristic idea, but to make such a strategy precise we need to know the probability distribution of the moves of my opponent. So let us assume that s/he picks a move uniformly at random. Suppose the starting P-position has n tokens and I remove one of them. Then my opponent faces an N-position with n − 1 tokens and has n − 1 possible moves. At least one of them goes to a P-position. That means my best chance to get on the winning track after the first move is not more than (n − 2)/(n − 1).

If there are 2 or 3 heaps, then the best strategy is to go for the longest game. With this strategy my opponent always has exactly one move to get to a P-position, so I win after the first turn with probability (n − 2)/(n − 1). I lose the game with probability 1/(n − 1)!! (a double factorial).

Something interesting happens if there are more than three heaps. In this case it is possible to have more than one winning move from an N-position. It is not obvious that I should play the longest game. Consider position (1,3,5,7). If I remove one token, then my opponent has three winning moves to a position with 14 tokens. On the other hand, if I remove 2 tokens from the second or the fourth pile, then my opponent has one good move, though to a position with only 12 tokens. What should I do?

I leave it to my readers to calculate the optimal strategy against a random player starting from position (1,3,5,7).
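For readers who prefer to let a computer do it, here is a minimal expectimax sketch: I play to maximize my winning probability, while the opponent picks uniformly among all legal moves. This is my own illustration of the calculation, not a claim about the answer.

```python
from functools import lru_cache

def moves(pos):
    """All positions reachable in one move (take 1..p tokens from one pile)."""
    result = []
    for i, p in enumerate(pos):
        for take in range(1, p + 1):
            nxt = list(pos)
            nxt[i] = p - take
            result.append(tuple(sorted(nxt)))
    return result

@lru_cache(maxsize=None)
def my_turn(pos):
    """My winning probability when it is my move and I play optimally."""
    if sum(pos) == 0:
        return 0.0                     # the opponent took the last token
    return max(their_turn(nxt) for nxt in moves(pos))

@lru_cache(maxsize=None)
def their_turn(pos):
    """My winning probability when the random opponent moves."""
    if sum(pos) == 0:
        return 1.0                     # I took the last token
    nxts = moves(pos)
    return sum(my_turn(nxt) for nxt in nxts) / len(nxts)

print(my_turn((1, 3, 5, 7)))           # my chance of winning from (1,3,5,7)
```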


Playing with Pascal’s Triangle

The beautiful Pascal triangle has been around for many years. Can you say something new about it?

[Figure: Pascal’s triangle mod 2.]

Of course you can. Mathematicians always find new ways to look at things. In 2012, RSI student Kevin Garbe did some new and cool research related to the triangle. Consider Pascal’s triangle modulo 2; see the picture above, which was copied from a stackexchange discussion.

A consecutive block of m digits in one row of the triangle modulo 2 is called an m-block. If you search the triangle, you will find that all possible binary strings of length 2 appear as blocks. Will this trend continue? Yes, you can find any possible string of length 3, but it stops there. The blocks you can find are called accessible blocks. So, which blocks of length 4 are not accessible?

There are only two strings that are not accessible: 1101 and 1011. It is not surprising that they are reflections of each other. Pascal’s triangle respects mirror symmetry and the answer should be symmetric with respect to reflection.

You can’t find these blocks in the picture, but how do we prove that they are not accessible, that is, that you can’t ever find them? The following amazing property of the triangle can help. We call a row odd/even if it corresponds to the binomial coefficients n choose something, where n is an odd/even number. Every odd row has every digit doubled. Moreover, if we take odd rows and replace every double digit with its single self we get back Pascal’s triangle. Obviously the two strings 1101 and 1011 can’t be parts of odd rows.

What about even rows? The even rows have a similar property: every even-indexed digit is a zero. If you remove these zeros you get back Pascal’s triangle. The two strings 1101 and 1011 can’t be part of even rows. Therefore, they are not accessible.

The next question is to count the number of accessible blocks of a given length: a(n). This and much more was done by Kevin Garbe for his RSI 2012 project. (I was the head mentor of the math projects.) His paper is published on the arxiv. The answer to the question can be found by constructing recurrence relations for odd/even rows. It can be shown that a(2r) = 3a(r) + a(r+1) − 6 and a(2r+1) = a(r) + 3a(r+1) − 6. As a result the number of accessible blocks of length n is n² − n + 2. I wonder if there exists a direct proof of this formula without considering odd and even rows separately.
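An empirical check of these counts (a sketch: it only scans the first 256 rows, which is plenty for blocks this short):

```python
def pascal_row_mod2(n):
    """Row n of Pascal's triangle mod 2 (Lucas: C(n,k) is odd iff k's bits lie inside n's)."""
    return ''.join('1' if (n & k) == k else '0' for k in range(n + 1))

def accessible_blocks(length, rows=256):
    """All distinct blocks of the given length seen in the first `rows` rows."""
    blocks = set()
    for n in range(rows):
        row = pascal_row_mod2(n)
        blocks |= {row[i:i + length] for i in range(len(row) - length + 1)}
    return blocks

print(sorted(set(f'{x:04b}' for x in range(16)) - accessible_blocks(4)))
# ['1011', '1101'] -- the two inaccessible blocks of length 4
for m in range(1, 8):
    print(m, len(accessible_blocks(m)), m * m - m + 2)   # the two counts agree
```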

This RSI result was so pretty that it became a question at our entrance PRIMES test for the year 2013. In the test we changed the word accessible to admissible, so that it would be more difficult for applicants to find the research. Besides, Garbe’s paper wasn’t arxived yet.

The pretty picture above is from the stackexchange, where one of our PRIMES applicants tried to solicit help in solving the test question. What a shame.


Chameleon Coins

We all have played with problems in which we had real coins and fake (counterfeit) coins. For this post I assume that the fake coins are always lighter than the real coins. My coauthor Konstantin Knop invented a new type of coin: a chameleon coin. This coin can mimic a fake or a real coin. It can also choose independently which coin to mimic for each weighing on a balance scale.

You cannot find the chameleon coin in a mix with real coins if it does not want to be found, because it can consistently behave as a real coin. Let’s add classic fake coins to the mix, the ones that are lighter. Still the task of identifying the chameleons using a balance scale cannot be achieved: the chameleons can pretend to be fake coins. We can’t identify the fake coins either, as the chameleons can mess things up for us by consistently pretending to be fake.

What we can do is find a small number of coins, some of which are guaranteed to be fake. Consider the simplest setup, when we have one fake coin and one chameleon in our mix of N coins. That is, we have N − 2 real coins. Our task now is to find TWO coins, one of which has to be fake. As usual we want to do it in the smallest number of weighings that guarantees that we’ll find the two coins. Let me give you a fun problem to solve:

Puzzle. The total number of coins is four. And as above we have one chameleon and one classic fake. In two weighings find two coins so that one of them is guaranteed to be fake.

If you want to learn more, we just wrote a paper titled Chameleon Coins.


3-Pile Nim as an Automaton

In my paper with Joshua Xiong, Nim Fractals, we produced a bijection between P-positions in the three-pile Nim and a three-branch Ulam-Warburton automaton. We also defined a parent-child relationship on games that is induced by this bijection. Namely, two consecutive P-positions in a longest optimal game of Nim are the ones that correspond to a parent-child pair in the automaton. A cell in the Ulam-Warburton automaton has exactly one parent. That means, if (a,b,c) is a Nim P-position, then exactly one of (a − 1,b − 1,c), (a − 1,b,c − 1), and (a,b − 1,c − 1) must be a P-position and a parent of (a,b,c). (See our paper for more details.)

Now I want to explicitly write out the rules of an automaton which will generate the Nim P-positions in 3D.

Let me restrict the evolution of the automaton to the non-negative octant. That is, we consider points (a,b,c) in 3D, where each coordinate is a non-negative integer. We define the neighbors of the point (a,b,c) to be the points that differ from (a,b,c) in exactly two coordinates, by 1 in each. So each point strictly inside the octant has 12 neighbors. (There are three ways to choose the two coordinates, and after that four ways to choose plus or minus 1 in each of them.)

There is a geometric interpretation to this notion of neighborhood. Let us associate a unit cube with each point with integer coordinates. The center of the cube is located at the given point and the sides are parallel to the axes. Then two points are neighbors if and only if the corresponding cubes share an edge. Now it becomes more visual that a cube has 12 neighbors, as it has 12 edges.

Here is the rule of the automaton. Points never die. We start with the patriarch, (0,0,0), being the only alive point. A non-alive point inside the non-negative octant is born if it has exactly one alive neighbor that is closer to the patriarch. In other words, the point (a,b,c) is born if and only if exactly one of the three points (a − 1,b − 1,c), (a − 1,b,c − 1), and (a,b − 1,c − 1) is alive. It follows that the points that are born at the n-th step have coordinate sum 2n.

Consider for example the starting growth. At the first step the points (0,1,1), (1,0,1) and (1,1,0) are born. At the next step the points (0,2,2), (2,0,2) and (2,2,0) are born, while the point (1,1,2) will never be born: starting from the second step it has at least two alive neighbors, (0,1,1) and (1,0,1), that are closer to the patriarch.
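Here is a small simulation sketch of this automaton. It uses the fact noted above that points born at step n have coordinate sum 2n, so at each step it only needs to examine that plane; the assertion checks the theorem below on the first few generations.

```python
def nim_automaton(steps):
    """Grow the automaton: (a,b,c) is born when exactly one of the three
    closer neighbors (a-1,b-1,c), (a-1,b,c-1), (a,b-1,c-1) is already alive."""
    alive = {(0, 0, 0)}
    generations = [{(0, 0, 0)}]
    for _ in range(steps):
        total = 2 * len(generations)          # points born now have coordinate sum 2n
        born = set()
        for a in range(total + 1):
            for b in range(total + 1 - a):
                c = total - a - b
                parents = [(a - 1, b - 1, c), (a - 1, b, c - 1), (a, b - 1, c - 1)]
                if sum(p in alive for p in parents) == 1:
                    born.add((a, b, c))
        alive |= born
        generations.append(born)
    return generations

for n, gen in enumerate(nim_automaton(6)):
    # every point born at step n is a Nim P-position (XOR zero) with 2n tokens
    assert all(a ^ b ^ c == 0 and a + b + c == 2 * n for a, b, c in gen)
    print(n, sorted(gen))
```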

Theorem. In the resulting automaton, the points that are born at step n are exactly the P-positions of Nim with the total of 2n tokens.

Proof. Only the points with an even total can be born. Now we proceed by induction on the total number of tokens. The base case is obvious. Suppose we have proved that at step n exactly the P-positions with 2n tokens are born. Consider a P-position of Nim: (a,b,c) such that a + b + c = 2n + 2. Remember that the bitwise XOR of a, b, and c is zero. Consider the 2-adic values of a, b, and c (that is, the largest powers of 2 dividing a, b, and c). There should be exactly two out of these three integers that have the smallest 2-adic value. Suppose these are a and b. Then (a − 1,b − 1,c) is a P-position, while (a − 1,b,c − 1) and (a,b − 1,c − 1) are not. That means, by the inductive hypothesis, (a,b,c) has exactly one alive neighbor. So the position (a,b,c) is born at time n + 1.

Now we need to prove that nothing else is born. For the sake of contradiction suppose that (a,b,c) is the earliest N-position to be born. That means it has an alive neighbor that is a P-position closer to the patriarch. WLOG we can assume that this neighbor is (a − 1,b − 1,c).

If a − 1 and b − 1 are both even, then (a,b,c) is a P-position, which is a contradiction. Suppose a − 1 and b − 1 are both odd. Then their binary representations can’t have the same number of ones at the end. Otherwise, (a,b,c) is a P-position. That is, a and b have different 2-adic values. Suppose a has the smaller 2-adic value. Then, for (a − 1,b − 1,c) to be a P-position, a and c have to have the same 2-adic value. That means (a,b − 1,c − 1) is a P-position too. Now suppose a − 1 and b − 1 are of different parities. Without loss of generality suppose a − 1 is odd and b − 1 is even; then c is odd. Then (a − 1,b,c − 1) is a P-position too. Thus we can always find a second neighbor with the same number of tokens. That is, both neighbors are alive at the same time, and the N-position (a,b,c) is never born. □

One might wonder what happens if we relax the automaton rule by removing the constraint on the distance to the patriarch. Suppose a new point is born if it has exactly one alive neighbor. This will be a different automaton. Let us look at the starting growth, up to a permutation of coordinates. At step one, positions (0,1,1) are born. At the next step positions (0,2,2) are born. At the next step positions (0,1,3), (1,2,3) and (0,3,3) are born. We see that (0,1,3) is not a P-position. What will happen later? Will this N-position mess up the future positions that are born? Actually, this automaton will still contain all the P-positions of Nim.

Theorem. In the new automaton, the points that are born at step n and have a total of 2n tokens are exactly the P-positions of Nim with the total of 2n tokens.

Proof. Only the points with an even total can be born. Now we proceed by induction on the total number of tokens. The base case is obvious. The birth of the points that have a total of 2n tokens and are born at step n depends only on the points with a total of 2n − 2 tokens that are born at step n − 1. By the inductive hypothesis, those are the P-positions with 2n − 2 tokens. So the points that have a total of 2n tokens and are born at step n match exactly those of the first automaton described above. To reiterate, N-positions with 2n tokens are born after P-positions with 2n tokens, so they do not influence the birth of P-positions with 2n + 2 tokens. □


The Longest Optimal Game of Nim

In the game of Nim you have several piles with tokens. Players take turns taking any number of tokens from one pile. The person who takes the last token wins.

The strategy of this game is well-known. You win if after your move the bitwise XOR of all the tokens in all the piles is 0. Such positions that you want to finish your move with are called P-positions.

I play this game with my students where the initial position has four piles with 1, 3, 5, and 7 tokens each. I invite my students to start the game, and I always win as this is a P-position. Very soon my students start complaining that I go second and want to switch with me. What should I do? My idea is to make the game last long (to have many turns before ending) to increase the chances of my students making a mistake. So what is the longest game of Nim given that it starts in a P-position?

Clearly you can’t play slower than taking one token at a time. The beauty of Nim is that such an optimal game starting from a P-position is always possible. I made this claim in several papers of mine, but I can’t find where this is proven. One of my papers (with Joshua Xiong) contains an indirect proof by building a bijection to the Ulam-Warburton automaton. But this claim is simple enough, so I want to present a direct proof here. Actually, I will prove a stronger statement.

Theorem. In an optimal game of Nim that starts at a P-position the first player can take one token at each turn so that the second player is forced to take one token too.

Proof. Consider a P-position in a game of Nim. Then find a pile with the lowest 2-adic value. That is, the pile such that the power of two in its factorization is the smallest. Suppose this power is 2^k. Notice that there should be at least two piles with this 2-adic value.

The first player should take a token from one of those piles. Then the bitwise heap-sum after the move is 2^(k+1) − 1. Then the Nim strategy requires the second player to take tokens from a pile such that its value decreases after bitwise XORing with 2^(k+1) − 1. All piles with the 2-adic value more than k will increase after XORing with 2^(k+1) − 1. That means the second player has to take tokens from another pile with 2-adic value k. Moreover, the second player is forced to take exactly one token to match the heap-sum. □
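The proof translates directly into a strategy. The sketch below plays out the forced game from (1,3,5,7): at each of my turns I take one token from a pile with the smallest 2-adic value, and the assertion checks that every winning reply available to my opponent removes exactly one token.

```python
from functools import reduce

def xor_all(piles):
    return reduce(lambda x, y: x ^ y, piles, 0)

def v2(x):
    """2-adic value of x > 0: the exponent of the largest power of 2 dividing x."""
    return (x & -x).bit_length() - 1

def longest_game(piles):
    """Play a P-position out: I take one token from a pile of smallest 2-adic value,
    the opponent answers with a winning reply (which is forced to be one token)."""
    piles, history = list(piles), []
    history.append(tuple(piles))
    while any(piles):
        i = min((j for j, p in enumerate(piles) if p), key=lambda j: v2(piles[j]))
        piles[i] -= 1                           # my move: one token
        history.append(tuple(piles))
        if not any(piles):
            break
        replies = [(j, t) for j, p in enumerate(piles) for t in range(1, p + 1)
                   if xor_all(piles[:j] + [p - t] + piles[j + 1:]) == 0]
        assert all(t == 1 for _, t in replies)  # the opponent is forced to take one token
        j, t = replies[0]
        piles[j] -= t
        history.append(tuple(piles))
    return history

game = longest_game([1, 3, 5, 7])
print(len(game) - 1, "moves")                   # 16 moves: one token at a time
```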

In the position (1,3,5,7) all numbers are odd, so I can take one token from any pile for my first move; then the correct reply is to take one token from any other pile. My students do not know that, and I usually win even as the first player. Plus, there are four different ways I can start as the first player. This way my students do not get to try several different options against the same move of mine. After I win several times as the first player, I convince my students that I win anyway and persuade them to go back to me being the second player. After that I relax and never lose. I am evil.
