Archive for the ‘Sequences’ Category.

86 Conjecture

2^86 is conjectured to be the largest power of 2 whose decimal representation contains no zero. This simply stated conjecture has proven itself to be proof-resistant. Let us see why.

What is the probability that the nth power of two will not have any zeroes? The first and the last digits are non-zeroes; suppose that the other digits become zeroes randomly and independently of each other. This supposition allows us to estimate the probability of 2^n not having zeroes as (9/10)^(k−2), where k is the number of digits of 2^n. The number of digits can be estimated as n log_10 2. Thus, the probability is about c·x^n, where c = (10/9)^2 ≈ 1.2 and x = (9/10)^(log_10 2) ≈ 0.97. The expected number of powers of 2 without zeroes starting from the power N is c·x^N/(1−x) ≈ 40 ⋅ 0.97^N.

Let us look at A007377, the sequence of exponents n such that 2^n does not contain zeros: 1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14, 15, 16, 18, 19, 24, 25, 27, 28, 31, 32, 33, 34, 35, 36, 37, 39, 49, 51, 67, 72, 76, 77, 81, 86. Our estimate predicts 32 members of this sequence starting from 6. In fact, the sequence has 30 conjectured members in that range. Similarly, our estimate predicts 2.5 members starting from 86. It is easy to check that the sequence doesn’t contain any more numbers below 200, and our estimate predicts 0.07 members after 200. As we continue checking larger numbers and see that they do not belong to the sequence, the probability that the sequence contains more elements vanishes. With time we check more numbers and become more convinced that the conjecture is true. Currently, it has been checked up to the power 4.6 ⋅ 10^7. The probability of finding something after that is about 1.764342396 ⋅ 10^(−633620).
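
If you want to double-check the data above, a few lines of code suffice. Here is a brute-force sketch in Python (ours, not part of the original argument) that lists the exponents below 200 whose powers of two are zero-free.

```python
# Brute-force check of A007377 below 200: exponents n such that 2^n contains no digit 0.

def zero_free(n):
    """True if 2^n has no zero among its decimal digits."""
    return '0' not in str(2 ** n)

members = [n for n in range(1, 200) if zero_free(n)]
print(members)   # reproduces the list above; the last member below 200 is 86
```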

Let us try to approach the conjecture from another angle. Let us check the last K digits of powers of two. As the number of possibilities is finite, these last digits eventually start cycling. If we can show that all the elements inside the period contain zeroes, then we only need to check the finitely many powers of two that appear before the period starts. If we can find such a K, we can prove the conjecture.

Let us look at the last two digits of powers of two. The sequence starts as: 01, 02, 04, 08, 16, 32, 64, 28, 56, 12, 24, 48, 96, 92, 84, 68, 36, 72, 44, 88, 76, 52, 04. As we would anticipate, it starts cycling. The cycle length is 20, and 90% of numbers in the cycle don’t have zeroes.

Now let’s continue to the last three digits. The period length is 100, and 19 of the elements, written as three-digit strings (possibly with a leading zero), contain a zero. The percentage of elements in the cycle that do not contain zero is 81%.
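
These cycle counts are easy to reproduce by machine. Below is a small Python sketch (written for this post) that walks one full period of 2^n modulo 10^k, taking the period 4 ⋅ 5^(k−1) from the next paragraph on faith, and counts the elements that are zero-free when written as k-digit strings.

```python
# Zero-free elements in the cycle of the last k digits of powers of two
# (the beginning of A181610).

def zero_free_in_cycle(k):
    period = 4 * 5 ** (k - 1)       # the known period, quoted in the text
    modulus = 10 ** k
    value = pow(2, k, modulus)      # by n = k the last k digits are already inside the cycle
    count = 0
    for _ in range(period):
        if '0' not in str(value).zfill(k):   # pad to k digits before looking for zeroes
            count += 1
        value = value * 2 % modulus
    return period, count

for k in range(1, 8):
    print(k, zero_free_in_cycle(k))
# (4, 4), (20, 18), (100, 81), (500, 364), (2500, 1638), (12500, 7371), (62500, 33170)
```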

The cycle length for the last n digits is known. It is 4 ⋅ 5^(n−1). In particular, the cycle length is multiplied by 5 every time we add a digit. The numbers of zero-free elements in these cycles form the sequence A181610: 4, 18, 81, 364, 1638, 7371, 33170. If we continue with our supposition that the digits are random, and study the new digits that appear when we move from the cycle of the last n digits to the cycle of the last n+1 digits, we can expect that 9/10 of those digits will be non-zero. Indeed, if we check the ratio of how many numbers do not contain zero in the next cycle compared to the previous cycle, we get: 4.5, 4.5, 4.49383, 4.5, 4.5, 4.50007. All of these numbers are quite close to our estimate of 4.5. If this trend continues, the proportion of the numbers in the cycle that don’t have zeroes tends to zero; however, the total number of such numbers grows exponentially. We can even estimate that the expected growth is 4 ⋅ 4.5^(n−1). From this estimate, we can derive the conjecture:

Conjecture. For any number N, there exists a power of two such that its last N digits are zero-free.

Indeed, the last N digits of powers of two cycle, and there is an increasing number of elements inside that cycle that do not contain zeroes. The corresponding powers of two don’t have zeroes among their N rightmost digits.

So, how do we combine the two results? First, the expected probability of finding a power of two beyond 2^86 that doesn’t contain a zero is minuscule. And second, we most certainly can find a power of two that has as many zeroless digits at the end as we want.

To combine the two results, let us look at the sequences A031140 and A031141. We can deduce from them that for the power 103233492954 the first zero from the right occupies the 250th spot. The total number of digits of that power is 31076377936. So 250 is a tiny portion of the digits.

As time goes by we grow more and more convinced that 2^86 is the largest power of two without zeroes, but it is not at all clear how we can prove the conjecture or whether it can be proven at all.

My son, Sergei, suggested that I claim that I have a proof of this conjecture, but do not have enough space in the margin to fit my proof in. The probability that I will ever be shamed and disproven is lower than the probability of me winning a billion dollars in the lottery. Though then, if I do win the big bucks, I will still care about being shamed and disproven.


On the Perfidy of Negative Numbers

Tanya Khovanova, Alexey Radul

Perfidy is to parity as odious is to odd and evil is to even. As a reminder, odious numbers are numbers with an odd number of ones in their binary expansions. From here you can extrapolate what the evil numbers are and the fact that the perfidy of an integer is the parity of the number of ones in its binary expansion. We live in a terrible world: all numbers are perfidious.

So why are we writing about the perfidy of negative numbers? One would expect it to be a natural extension of the perfidy of positive numbers, but it turns out that the naive way of defining it doesn’t work at all. Is there hope? Could negative numbers be innocent of evil and free of odiousness? Is zero an impenetrable bulwark against perfidy? Not quite, but something interesting does happen to evil as it tries to cross zero. Read on.

To define perfidy for negative numbers, let us study how perfidy behaves for positive numbers. It is easiest to think about the perfidies of power-of-two-sized chunks of non-negative integers at a time. Let us denote by T_n the string of perfidies of the integers from 0 to 2^n − 1, with evil being zero and odious being 1. So T_0 = 0, T_1 = 01, T_2 = 0110, T_3 = 01101001, …. The recurrence relation governing the T_n is T_{n+1} = T_n T̄_n, where T̄ denotes the bitwise negation of the string T, and juxtaposition is concatenation. The limit of this as n tends to infinity is the (infinite) sequence of perfidies of non-negative integers. This sequence is called the Thue-Morse sequence: 0,1,1,0,1,0,0,1,1,0,0,1,0,1,1,0,1,0,0,….
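
For readers who like to experiment, here is a small Python sketch (ours, not from the original essay) that builds T_n by the recurrence and checks it against the bit-counting definition of perfidy.

```python
# The Thue-Morse sequence two ways: the string recurrence T_{n+1} = T_n followed by
# its bitwise negation, and the parity of the number of ones in the binary expansion.

def thue_morse_string(n):
    """Return T_n, the string of perfidies of 0 .. 2^n - 1."""
    t = "0"
    for _ in range(n):
        negation = "".join("1" if c == "0" else "0" for c in t)
        t += negation
    return t

def perfidy(m):
    """Parity of the number of ones in the binary expansion of m."""
    return bin(m).count("1") % 2

T4 = thue_morse_string(4)
assert T4 == "".join(str(perfidy(m)) for m in range(16))
print(T4)   # 0110100110010110
```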

So defining the perfidy of negative numbers is equivalent to extending the Thue-Morse sequence to the left. If we are to define “the” perfidy of negative numbers, that definition should preserve most of the properties of the Thue-Morse sequence after extension.

So, let’s see. We asked around, and most people said that the binary expansion of a negative integer should be the binary expansion of its absolute value, but with a minus sign. Defining perfidy as the parity of the number of ones in this binary expansion corresponds to the following extended Thue-Morse sequence: … 0, 1, 1, 0, 1, 1, 0, …, where the middle 0 sits at index 0 and the values to its left correspond to the negative indices.

One of the major properties of the Thue-Morse sequence is its fractal property: if you replace every zero of the Thue-Morse sequence by 0,1 and every one by 1,0, you will get the Thue-Morse sequence back. Clearly, our new extended sequence doesn’t have this property.

Another set of properties of the Thue-Morse sequence, called avoidance properties, is a long list of patterns that the sequence avoids. For example, the Thue-Morse sequence doesn’t contain any overlapping squares — patterns axaxa, where a is a character and x is a word. But as you can see above, our first extension contains one: the displayed stretch 0, 1, 1, 0, 1, 1, 0 is axaxa with a = 0 and x = 1, 1. So this definition is wrong, not just once but twice (and two wrongs only make a right under very unusual circumstances). Perfidy is stymied by the cross-over from zero to minus one. Are negative numbers protected from the ravages of evil (and odiousness)?

Unfortunately, there are many people, for example John Conway, who inadvertently extend the reach of perfidy by arguing that the binary expansion of a negative integer should be different. Indulge in a flight of fancy and imagine the binary expansion that consists of infinitely many ones to the left: …1111. What happens when you add 1 to it? The carry gets pushed infinitely far away, and you get …000000 — zero. So it is quite reasonable to let …1111 be the binary expansion of −1. Similarly, the string …1110 represents −2, …1101 represents −3, etc. Continuing this we see that the binary expansion of a negative integer −n is the bitwise negation of the binary expansion of n − 1 (including the leading zeros). This is called the Two’s complement representation.

Why is two’s complement a reasonable representation? Suppose you were trying to invent a binary notation for negative numbers, but you wanted to pursue uniformity by not using a minus sign. The problem is that the standard definition of the binary representation allows you to represent only positive numbers. But you can solve this problem with modular arithmetic: modulo any fixed N, every negative number is equivalent to some positive number (by adding enough multiples of N), so you can just represent it by representing that positive number. If you choose N to be a power of two, modding out by it is just truncation of the binary representation. If you let those powers of two tend to infinity, you get the two’s complement representation described above.
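
The modular description is easy to see in action. Here is a short Python sketch (an illustration we added, with an arbitrary width of 8 bits): the representative of −n modulo 2^k has exactly the bits of the bitwise negation of n − 1, truncated to k bits.

```python
# Two's complement as "pick a representative modulo 2^k".

def twos_complement_bits(m, k):
    """k-bit two's complement representation of the integer m."""
    return format(m % (1 << k), "0{}b".format(k))

k = 8
for n in range(1, 6):
    representative = twos_complement_bits(-n, k)
    negation_of_n_minus_1 = format(~(n - 1) & ((1 << k) - 1), "0{}b".format(k))
    assert representative == negation_of_n_minus_1
    print(-n, representative)   # -1 -> 11111111, -2 -> 11111110, -3 -> 11111101, ...
```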

Aside: When you are building a computer, uniformity is money, because special cases cost special transistors. The two’s complement idea lets one build arithmetic units that just operate on positive numbers with some number of bits (effectively doing arithmetic modulo 2^k), and leave the question of negativeness to the choice of representatives of those equivalence classes.

If we take two’s complement as the binary expansion of negative numbers, how will we define the perfidy? Is the number of ones in the infinite string …1111 corresponding to −1 even or odd?

We can’t answer that question, but for the binary expansion of every negative number we do know the parity of the number of zeroes. Thus we can divide all negative integers into two classes with different perfidy. We just do not know which one is which.

Let us consider two cases. In the first case we consider a negative number odious if the number of zeroes in its binary expansion is odd. The corresponding extended Thue-Morse sequence is: … 0, 1, 1, 0, 0, 1, 1, 0, …. The negative half is the reflection of the classical Thue-Morse sequence. In the second case we consider a negative number odious if the number of zeroes in its binary expansion is even. The corresponding extended Thue-Morse sequence is: … 1, 0, 0, 1, 0, 1, 1, 0, …. The negative half is the bitwise negation of the reflection of the classical Thue-Morse sequence.

Can we say that one of the sequences is better than the other? Both of them respect the fractal property of the classical Thue-Morse sequence. Let us look at the avoidance properties. The avoidance properties are symmetric with respect to switching zeroes with ones and with respect to changing the direction of the sequence. Hence, the negation, the reflection, and the reflection of the negation of the Thue-Morse sequence will continue to respect these properties.

Thus, we only need to check the avoidance properties of the finite subsequences that span both negative and non-negative indices. We claim that for both definitions of perfidy, any finite middle subsequence of the extended Thue-Morse sequence occurs as a subsequence in the classical Thue-Morse sequence. So any avoidance properties that are true for the Thue-Morse sequence will also be true for both extensions.

Indeed, it is easy to show that the strings T_{2n} defined above are palindromes. So for the first definition of perfidy the string in the middle will be a substring of T_{2n}T_{2n} for some large n, and for the second definition a substring of T̄_{2n}T_{2n}. But the classical Thue-Morse sequence contains the subsequence T_{2n}T̄_{2n}T̄_{2n}T_{2n}T̄_{2n}T_{2n}T_{2n}T̄_{2n}. So whichever way we extend the Thue-Morse sequence to the left, any finite middle part will always be a repetition of a piece in the classical Thue-Morse sequence. Thus, all the avoidance properties will hold.

We see that there are two logical ways to define perfidy for negative integers. There are two clear groups of numbers with the same perfidy, but which is called evil and which odious is interchangeable. So evil doesn’t stop at zero after all, but at least it gets an identity crisis.


Ten Coins

Among ten given coins, some may be real and some may be fake. All real coins weigh the same. All fake coins weigh the same, but have a different weight than real coins. Can you prove or disprove that all ten coins weigh the same in three weighings on a balance scale?

When I first received this puzzle from Ken Fan I thought that he mistyped the number of coins. The solution for eight coins was so easy and natural that I thought that it should be eight — not ten. It appears that I was not the only one who thought so. I heard about a published paper with the conjecture that the best you can do is to prove uniformity for 2^n coins in n weighings.

I will leave it to the readers to find a solution for eight coins, as well as for any number of coins less than eight. I’ll use my time here to explain the solution for ten coins that my son Sergei Bernstein suggested.

First, in every weighing we need to put the same number of coins in both pans. If the pans are unbalanced, the coins are not uniform; that is, some of them are real and some of them are fake. For this discussion, I will assume that all the weighings are balanced. Let’s number all coins from one to ten.

Consider two sets. The first set contains only the first coin, and the second set contains the second and the third coins. Suppose the number of fake coins in the first set is a, where a could be zero or one, and the number of fake coins in the second set is b, where b is zero, one, or two. In the first weighing compare the first three coins against coins numbered 4, 5, and 6. As they balance, the set of coins 4, 5, and 6 has to contain exactly a + b fake coins.

In the second weighing compare the remaining four coins 7, 8, 9, and 10 against coins 1, 4, 5, and 6. As the scale balances we have to conclude that the number of fake coins among the coins 7, 8, 9, and 10 is 2a + b.

For the last weighing we compare coins 1, 7, 8, 9, and 10 against 2, 3, 4, 5, and 6. The balance brings us to the equation 3a + b = a + 2b, which means that 2a = b. This in turn means that either a = b = 0 and all the coins are real, or that a = 1, and b = 2 and all the coins are fake.
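
If you prefer to let a computer confirm the case analysis, here is a brute-force sketch in Python (our verification, not part of Sergei's argument). With equally many coins on both pans, a weighing balances exactly when the two pans carry the same number of fake coins, so it is enough to check every possible set of fake coins.

```python
# Check that the only fake/real assignments balancing all three weighings
# are "all real" and "all fake".

from itertools import combinations

WEIGHINGS = [
    ({1, 2, 3}, {4, 5, 6}),
    ({7, 8, 9, 10}, {1, 4, 5, 6}),
    ({1, 7, 8, 9, 10}, {2, 3, 4, 5, 6}),
]

def balances(fakes, left, right):
    return len(fakes & left) == len(fakes & right)

coins = range(1, 11)
for size in range(11):
    for fake_tuple in combinations(coins, size):
        fakes = set(fake_tuple)
        if all(balances(fakes, left, right) for left, right in WEIGHINGS):
            assert len(fakes) in (0, 10), fakes
print("Only the all-real and all-fake assignments balance all three weighings.")
```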

Now that you’ve solved the problem for eight or fewer coins and I’ve just described a solution for ten coins, can we solve this problem for nine coins? Here is my solution for nine coins. This solution shows how to use a solution you already know to build a solution for a smaller number of coins.

Take the solution for ten coins and find two coins that are never on the same pan, for example, coins 2 and 10. Now everywhere we used coin 10, use coin 2 instead. If a weighing needs both of them on different pans, then do not use either of them in that weighing. The solution becomes:

The first weighing is the same as before with the same conclusion. The set containing the coin 1 has a fake coins, the set containing the coins 2 and 3 has b fake coins and the set containing coins 4, 5, and 6 has to have exactly a + b fake coins.

In the second weighing compare the four coins 7, 8, 9, and 2 against 1, 4, 5, and 6. As the scale balances we have to conclude that the number of fake coins among 7, 8, 9, and 2 is 2a + b.

For the last weighing we compare coins 1, 7, 8, and 9 against 3, 4, 5, and 6. If we virtually add the coin number 2 to both pans, the balance brings us to the equation 3a + b = a + 2b, which means that 2a = b. This in turn means, as above, that either all the coins are real or all of them are fake.

It is known (see Kozlov and Vu, Coins and Cones) that you can solve the same problem for 30 coins in four weighings. I’ve never seen an elementary solution. Can you provide one?


The Weights Puzzle

From the 1966 Moscow Math Olympiad:

Prove that you can choose six weights from a set of weights weighing 1, 2, …, 26 grams such that any two subsets of the six have different total weights. Prove that you can’t choose seven weights with this property.

Let us define the sequence a(n) to be the largest size of a subset of the set of weights weighing 1, 2, …, n grams such that any subset of it is uniquely determined by its total weight. I hope that you agree with me that a(1) = 1, a(2) = 2, a(3) = 2, a(4) = 3, and a(5) = 3. The next few terms are more difficult to calculate, but if I am not mistaken, a(6) = 3 and a(7) = 4. Can you compute more terms of this sequence?
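
Here is a brute-force Python sketch (ours) that computes the first few terms; it simply tries subsets of {1, …, n} from the largest size down and keeps the first size for which some subset has all subset sums distinct. It is only practical for small n.

```python
# a(n): the largest subset of {1, ..., n} all of whose subsets have distinct total weights.

from itertools import combinations

def distinct_subset_sums(weights):
    sums = set()
    for size in range(len(weights) + 1):
        for subset in combinations(weights, size):
            s = sum(subset)
            if s in sums:
                return False
            sums.add(s)
    return True

def a(n):
    for size in range(n, 0, -1):
        if any(distinct_subset_sums(c) for c in combinations(range(1, n + 1), size)):
            return size

print([a(n) for n in range(1, 8)])   # expect 1, 2, 2, 3, 3, 3, 4
```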

Let’s see what can be said about upper and lower bounds for a(n). If we take weights that are different powers of two, we are guaranteed that any subset is uniquely determined by its total weight. Thus a(n) ≥ log_2 n. On the other hand, the total weight of a non-empty subset has to be a number between 1 and the total weight of the whole set, n(n+1)/2. That means that our chosen set can have no more than n(n+1)/2 non-empty subsets. Thus a(n) ≤ log_2(n(n+1)/2).

Returning to the original problem, we see that 5 ≤ a(26) ≤ 8. So to solve the original problem you need to find a more interesting set than powers of two and a more interesting counting argument.


Baron Münchhausen and the Riemann Hypothesis

by Tanya Khovanova, Konstantin Knop, Alexey Radul and Peter Sarnak

Let n coins weighing 1, 2, … n be given. Baron Münchhausen knows which coin weighs how much, but his audience does not. Define a(n) to be the minimum number of weighings the Baron must conduct on a balance scale, so as to unequivocally demonstrate the weight of at least one of the coins.

In the paper Baron Münchhausen’s Sequence, three of us completely described the Baron’s sequence. In particular, we proved that a(n) ≤ 2. Here we would like to outline another proof idea, which is interesting in part because it touches on the Riemann hypothesis. We denote the total weight of coins in some set A by |A|.

Lemma. Suppose that 3n can be represented as T_i + T_j + T_k, where i ≤ j < k and T_m denotes the m-th triangular number, and that there is a subset A of coins from j + 1 to k such that n = T_j + |A|. Then the weight of coin n can be demonstrated in two weighings.

Proof. Suppose T_i + T_j + T_k = 3n and there is a subset A of coins from j + 1 to k such that n = T_j + |A|. We propose the two weighings

[1…j] + A = n

and

[1…i] + B = n + A,

where B is the complement of A in {j + 1, j + 2, … , k}.

If we sum up twice the first weighing with the second weighing we get

3[1…i] + 2[(i + 1)…j] + 2A + B = 3n + A.

Subtracting the weight of A from both sides, we can read this as follows: three times the weight of the coins that were on the left cup in both weighings, plus twice the weight of the coins that were on the left cup in the first weighing and off the scale in the second, plus the weight of the coins that were moved from the left cup to the right cup, plus the weight of the coins that were on the left cup only in the second weighing, equals 3n, that is, three times the weight of the coin that sat on the right cup in both weighings. The left-hand side is the total weight, counted with multiplicity, of three sets of coins: the i coins that were on the left cup in both weighings, the j coins that were on the left cup in the first weighing and were not moved to the right cup, and the k coins, other than the right-cup coin, that appeared on the scale at all. These three totals cannot be smaller than T_i, T_j, and T_k respectively. But T_i + T_j + T_k = 3n, so the coin on the right cup in both weighings weighs at least n grams; and since no coin weighs more than n grams, it has to be the n-coin.

We checked that any number up to 600,000 except 20 can be represented so as to satisfy the Lemma. To show how to solve 20 coins in two weighings is easy, and, as usual, is left as an exercise for the reader. Next, we want to look at the following lemma.

Lemma. Given a set of consecutive numbers {(j + 1), … , k}, if k > 2j + 2, then it is possible to find a subset in the set that sums up to any number in the range from j + 1 to (j + k + 1)(k – j)/2 – j – 1.

We won’t prove the lemma, but it means that if k is at least about twice as large as j, then we have a lot of flexibility for building our set A in the weighing above. For moderately large n (where 600000 >> “moderately large”), it is not hard to prove that this flexibility is sufficient.

Now the question becomes: can we find such a decomposition into triangular numbers? It is enough to find a representation T_i + T_j + T_k = 3n, where T_k is at least 81% of 3n.

We know that decompositions into triangular numbers are associated with decompositions into squares. Namely, if T_i + T_j + T_k = 3n, then (2i + 1)^2 + (2j + 1)^2 + (2k + 1)^2 = 24n + 3. If the largest square is at least 81% of 24n + 3, then the largest triangular number in the decomposition of 3n is at least 81% of 3n.
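
To play with this numerically, here is a rough Python sketch (ours, with the 81% threshold hard-coded): it looks for a decomposition T_i + T_j + T_k = 3n whose largest part meets the threshold, and double-checks the odd-square identity along the way. For some n no decomposition meets the threshold and the sketch returns None.

```python
# Search for decompositions T_i + T_j + T_k = 3n in which the largest triangular
# number is at least 81% of 3n, plus a check of the odd-square identity above.

from math import isqrt

def tri(m):
    return m * (m + 1) // 2

def tri_index(t):
    """Return m with T_m = t, or None if t is not triangular."""
    m = (isqrt(8 * t + 1) - 1) // 2
    return m if tri(m) == t else None

def good_decomposition(n, ratio=0.81):
    target = 3 * n
    k = 0
    while tri(k + 1) <= target:
        k += 1                                   # largest k with T_k <= 3n
    while k >= 0 and tri(k) >= ratio * target:
        rest = target - tri(k)
        for i in range(k + 1):
            if 2 * tri(i) > rest:
                break                            # we want i <= j, so T_i <= rest/2
            j = tri_index(rest - tri(i))
            if j is not None and i <= j <= k:
                assert (2*i + 1)**2 + (2*j + 1)**2 + (2*k + 1)**2 == 24 * n + 3
                return i, j, k
        k -= 1
    return None                                  # no decomposition meets the threshold

for n in (10, 20, 100, 1000):
    print(n, good_decomposition(n))
```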

There is a theorem (W. Duke, Hyperbolic distribution problems and half-integral weight Maass forms, in Inventiones Math 92 (1988) p. 73-90) that states that in the limit the decompositions of numbers into three squares are equidistributed. That is, if we take some region on the unit sphere x^2 + y^2 + z^2 = 1 (for example, the region |z| > 0.8) and view decompositions of 24n + 3 into squares as points on the sphere x^2 + y^2 + z^2 = 24n + 3, then, as n grows, decompositions whose projections fall into our chosen region are guaranteed to appear.

This theorem is great, because it tells us that for large enough n we will always be able to find a decomposition of 3n into triangular numbers where one of the triangular numbers is much bigger than the others, and it will be possible to prove the weight of the n-coin in two weighings. Unfortunately, this summary, as stated, does not tell us how large that n needs to be. So we need some exact estimates.

The number of decompositions of m into sums of three squares is about the square root of m. More precisely, it is possible to compute a number N, such that for any number m > N, with at most one exception, the number of decompositions is at least Cm^(1/2 − 1/30), where C is a known constant.

The more specific statement of Duke’s theorem is that if the number of solutions to the equation x^2 + y^2 + z^2 = 24n + 3 is large, for a computable value of “large”, then the solutions are equidistributed. More precisely, let us denote 3n by m and fix a region Ω on the unit sphere. Then the number of solutions (x, y, z) such that the unit vector (x, y, z)/|(x, y, z)| belongs to Ω is

(|Ω|/4π) h(8m + 3) + E(m),

where |Ω| is the area of Ω, h(8m + 3) is the total number of solutions of x^2 + y^2 + z^2 = 24n + 3, and E(m) is an error term, which starting from some number satisfies the inequality E(m) ≤ 1000m^(1/2 − 1/7).

That’s pretty good, because combining these two lets us, at least in principle, actually calculate an N such that for all n > N except maybe one a(n) = 2. After that we hoped to write a program to exhaustively search smaller numbers by computer.

This situation is still somewhat annoying, because that possible exception must then be propagated into the proof, and if we are not careful, possibly into the final theorem. (“No matter how many coins the Baron has, he can prove the weight of one in at most two weighings, except maybe one number of coins, and we don’t know which…”) This is where the Riemann Hypothesis comes in. If the Riemann Hypothesis is true, then that exception isn’t there, and all is sunlight and flowers.

The beauty of the Baron’s puzzle is such that we actually do not need the Riemann hypothesis. As we can use unbalanced weighings, it is enough to find a good decomposition for one out of the four numbers 3n, 3n-1, 3n-2, or 3n-3.

Instead of finding all these exact estimates we found a different elementary proof of our theorem. But we are excited that methods that are used in very advanced number theory can be used to solve a simple math problem that can be described to middle school children.

It would be great if someone decided to finish this proof.


Another Coins Sequence, jointly with Alexey Radul

Konstantin Knop sent me the following coins puzzle, which was created by Alexander Shapovalov and first appeared at the regional round of the All-Russian Math Olympiad in 2000.

Baron Münchhausen has 8 identical-looking coins weighing 1, 2, …, 8 grams. The Baron knows which coin is which and wants to demonstrate to his guests that he is right. He plans to conduct one weighing on a balance scale, so that the guests will be convinced about the weight of one of the coins. Can the Baron do this?

This being a sequence-lover blog, we want to create a sequence out of this puzzle. The sequence is the following: Let the Baron initially have n identical-looking coins that weigh exactly 1, 2, …, n grams. Then a(n) is the minimum number of weighings on a balance scale that the Baron needs in order to convince his guests about the weight of one of those coins.

The original puzzle can be restated as asking whether a(8) = 1. The sequence is defined starting from index 1 and the first several terms are easy to calculate: 0, 1, 1, 1, 2, 1, 1, 1. Can you continue this sequence?
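
Here is a brute-force Python sketch (ours, feasible only for small n) that decides whether a(n) = 1. The guests see only how many coins sit on each pan and how the scale tips; a weighing pins down a coin exactly when that coin is alone in its group and every assignment of the weights 1, …, n consistent with the observation puts the same weight there.

```python
# Which n allow the Baron to prove the weight of one coin in a single weighing.

from itertools import combinations

def sign(x):
    return (x > 0) - (x < 0)

def one_weighing_works(n):
    weights = set(range(1, n + 1))
    for left_size in range(1, n + 1):
        for right_size in range(1, n + 1 - left_size):
            for result in (-1, 0, 1):
                # all weight assignments consistent with the observed sizes and result
                splits = []
                for left in combinations(sorted(weights), left_size):
                    rest = weights - set(left)
                    for right in combinations(sorted(rest), right_size):
                        if sign(sum(left) - sum(right)) == result:
                            splits.append((set(left), set(right)))
                if not splits:
                    continue   # the Baron cannot actually produce this outcome
                # a coin is pinned down only if it is alone in its group and that
                # group carries the same weight in every consistent split
                groups = []
                if left_size == 1:
                    groups.append([min(l) for l, r in splits])
                if right_size == 1:
                    groups.append([min(r) for l, r in splits])
                if n - left_size - right_size == 1:
                    groups.append([min(weights - l - r) for l, r in splits])
                if any(len(set(g)) == 1 for g in groups):
                    return True
    return False

print([n for n in range(2, 9) if one_weighing_works(n)])
# expect [2, 3, 4, 6, 7, 8], matching the terms above
```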

Let’s look at where ones occur in this sequence:

Theorem. If the weight of a coin can be confirmed with one weighing, then one cup of that weighing must contain all the coins with weights from 1 to some i, and the other cup must contain all the coins with weights from some j to n. Furthermore, either the scale must balance, or the cup containing the 1-gram coin must be lighter.

Proof. What does it mean for the Baron to convince his guests about the weight of some coin with one weighing? From the perspective of the guests, a weighing is a number of coins in one cup, a number of coins in the other cup, and a number of coins not on the scale, together with the result the scale shows (one or the other cup heavier, or both the same weight). For the guests to be convinced of the weight of some particular coin, it must therefore be the case that all possible arrangements of coin weights consistent with that data agree on the weight of the coin in question. Our proof strategy, therefore, is to look for ways to alter a given arrangement of coin weights so as to change the weight given to the coin whose weight is being demonstrated, thus arriving at a contradiction.

First, obviously, the coin whose weight k the Baron is trying to confirm has to be alone in its group: either alone on some cup or the only coin not on the scale. After that we can divide the proof of the theorem into several cases.

Case 1. The coin k is on a cup and the scale is balanced. Then we want to show two things: k = n, and the coins on the other cup weigh 1, 2, …, i grams for some i. For the first part, observe that if k < n, then the coin with weight k+1 must not be on the scale (otherwise it would overbalance coin k). Therefore, we can substitute coin k+1 for coin k, and substitute a coin one gram heavier for the heaviest coin that was on the other cup, and produce thereby a different arrangement with the same observable characteristics but a different weight for the coin the Baron claims has weight k.

To prove the second part, suppose the contrary. Then it is possible to substitute a coin 1 gram lighter for one of the coins on the other cup. Now, if coin k-1 is not on the scale, we can also substitute k-1 for k, and again produce a different arrangement with the same observable characteristics but a different weight for the coin labeled k. On the other hand, if k-1 is on the scale but k-2 is not, then we can substitute k-2 for k-1 and then k-1 for k and the weighing is again unconvincing. Finally, if both k-1 and k-2 are on the scale, and yet they balance k, then k=3 and the theorem holds.

Consequently, k = n = 1 + 2 + … + i is a triangular number.

Case 2. The coin k is on the lighter cup of the scale. Then: first, k = 1, because otherwise we could swap k and the 1-gram coin, making the light cup lighter and the heavy cup heavier or unaffected; second, the 2-gram coin is on the heavy cup and is the only coin on the heavy cup, because otherwise we could swap k with the 2-gram coin and not change the weights by enough to affect the imbalance; and finally n = 2 because otherwise we could change the weighing 1 < 2 into 2 < 3.

Thus the theorem holds, and the only example of this case is k = 1, n = 2.

Case 3. The coin k is on the heavier cup of the scale. Then k = n and the lighter cup consists of some collection of the lightest available coins, by the same argument as Case 1 (but even easier, because there is no need to maintain the balance). Furthermore, k must weigh exactly 1 gram more than the lighter cup, because otherwise, k-1 is not on the lighter cup and can be substituted for k, making the weighing unconvincing.

Consequently, k = n = (1 + 2 + … + i) + 1 is one more than a triangular number.

Case 4. The coin k is not on a cup and the scale is not balanced. Then, since k must be off the scale by itself, all the other coins must be on one cup or the other. Furthermore, all coins heavier than k must be on the heavier cup, because otherwise we could make the lighter cup even lighter by substituting k for one of those coins. Likewise, all coins lighter than k must be on the lighter cup, because otherwise we could make the heavier cup even heavier by substituting k for one of those coins. So the theorem holds; and furthermore, the cups must again differ in weight by exactly 1 gram, because otherwise we could swap k with either k-1 or k+1 without changing the weights enough to affect the result on the scale.

Consequently, the weight of the lighter cup is k(k-1)/2, the weight of the heavier cup is 1 + k(k-1)/2. Thus the total weight of all the coins is n(n+1)/2 = k^2 + 1. In other words, case 4 is possible iff n is the index of a triangular number that is one greater than a square.

Case 5. The coin k is not on a cup and the scale is balanced. This case is hairier than all the others combined, so we will take it slowly (noting first that all the coins besides k must be on some cup).

Lemma 1. The two coins k-1 and k-2 must be on the same cup, if they exist (that is, if k > 2). Likewise k-2 and k-4; k+1 and k+2; and k+2 and k+4.

Proof. Suppose they’re not. Then we can rotate k, k-1, and k-2, that is, put k on the cup with k-1, put k-1 on the cup with k-2, and take k-2 off the scale. This makes both cups heavier by one gram, producing a weighing with the same outward characteristics as the one we started with, but a different coin off the scale. The same argument applies to the other three pairs of coins we are interested in, mutatis mutandis.

Lemma 2. The four coins k-1, k-2, k-3 and k-4 must be on the same cup if they exist (that is, if k ≥ 5).

Proof. By Lemma 1, the three coins k-1, k-2, and k-4 must be on the same cup. Suppose coin k-3 is on the other cup. Then we can swap k-1 with k-3 and k with k-4. Each cup becomes heavier by 2 grams without changing the number of coins present, the balance is maintained, and the Baron’s guests are not convinced.

Lemma 3. If coin k-4 exists, that is if k ≥ 5, all coins lighter than k must be on the same cup.

Proof. By Lemma 2, the four coins k-1, k-2, k-3 and k-4 must be on the same cup. Suppose some lighter coin is on the other cup. Call the heaviest such coin c. Then, by choice of c, the coin with weight c+1 is on the same cup as the cluster k-1, …, k-4, and is distinct from coin k-2 (because c is on a different cup from k-3). We can therefore swap c with c+1 and swap k with k-2. This increases the weight on both cups by 1 gram without changing how many coins are on each, but moves k onto the scale. The Baron’s guests are again unconvinced.

Lemma 4. The theorem is true for k ≥ 5.

Proof. By Lemma 3, all coins lighter than k must be on the same cup. Further, if a coin with weight k+4 exists, then by the symmetric version of Lemma 3, all coins heavier than k must also be on the same cup. They must be on the other cup from the coins lighter than k because otherwise the scale wouldn’t balance, and the theorem is true.

If no coin with weight k+4 exists, that is, if n ≤ k+3, how can the theorem be false? All the coins lighter than k must be on one cup, and their total weight is k(k-1)/2. Further, in order to falsify the theorem, at least one of the coins heavier than k must also be on that same cup. So the minimum weight of that cup is now k(k-1)/2 + k+1. But we only have at most two coins for the other cup, whose total weight is at most k+2 + k+3 = 2k + 5. For the scale to even have a chance of balancing, we must have

k(k-1)/2 + k + 1 ≤ 2k + 5 ⇔ k(k-1)/2 ≤ k + 4 ⇔ k(k-1) ≤ 2k + 8 ⇔ k^2 − 3k − 8 ≤ 0.

Finding the largest root of that quadratic we see that k < 5.

So for k ≥ 5, the collection of all coins lighter than k is heavy enough that either one needs all the coins heavier than k to balance them, or there are enough coins heavier than k that the theorem is true by symmetric application of Lemma 3.

Completion of Case 5. It remains to check the case for k < 5. If n > k+3, then coin k+4 exists. If so, all the coins heavier than k must be on the same cup. Furthermore, since k is so small, they will together weigh more than half the available weight, so the scale will be unbalanceable. So k < 5 and n ≤ k+3 ≤ 7.

For lack of any better creativity, we will tackle the remaining portion of the problem by complete enumeration of the possible cases, except for the one observation that, to balance the scale with just the coin k off it, the total weight of the remaining coins, that is, n(n+1)/2 – k must be even. This observation cuts our remaining work in half. Now to it.

Case 5. Seven Coins. n = 7. Then 5 > k ≥ n – 3 = 4, so k = 4. Then the weight on each cup must be 12. One of the cups must contain the 7 coin, and no cup can contain the 4 coin, so the only two weighings the Baron could try are 7 + 5 = 1 + 2 + 3 + 6, and 7 + 3 + 2 = 1 + 5 + 6. But the first of those is unconvincing because k+1 = 5 is not on the same cup as k+2 = 6, and the second because it has the same shape as 7 + 3 + 1 = 2 + 4 + 5 (leaving out the 6-gram coin instead of the asserted 4-gram coin).

Case 5. Six Coins. n = 6. Then 5 > k ≥ n – 3 = 3, and n(n+1)/2 = 21 is odd, so k must also be odd. Therefore k=3, and the weight on each cup must be 9. The 6-gram coin has to be on a cup and the 3-gram coin is by presumption out, so the Baron’s only chance is the weighing 6 + 2 + 1 = 4 + 5, but that doesn’t convince his skeptical guests because it looks too much like the weighing 1 + 3 + 4 = 6 + 2.

Case 5. Five Coins. n = 5. Then 5 > k ≥ n – 3 = 2, and n(n+1)/2 = 15 is odd, so k must also be odd. Therefore k=3, and the weight on each cup must be 6. The only way to do that is the weighing 5 + 1 = 2 + 4, which does not convince the Baron’s guests because it looks too much like 1 + 4 = 2 + 3.

Case 5. Four Coins. n = 4. Then the only way to balance a scale using all but one coin is to put two coins on one cup and one on the other. The only two such weighings that balance are 1 + 2 = 3 and 1 + 3 = 4, but they leave different coins off the scale.

The remaining cases, n < 4, are even easier. That concludes the proof of Case 5.

Consequently, by an argument similar to the one in case 4 we can show that the number of coins in case 5 must be the index of a square triangular number.

This concludes the proof of the theorem.

Now we can describe all possible numbers of coins that allow the Baron to confirm a coin in one weighing, or, in other words, the indices of ones in the sequence a(n). The following list corresponds to the five cases above; a short checking sketch follows the list.

  1. n is a triangular number. For example, for six coins the weighing is 1+2+3 = 6.
  2. n = 2. The weighing is 1 < 2.
  3. n is a triangular number plus one. For example, for seven coins the weighing is 1+2+3 < 7.
  4. n is the index of a triangular number that is a square plus one. For example, the fourth triangular number, which is equal to ten, is one greater than a square. Hence the weighing 1+2 < 4 can identify the coin that is not on the cup. The next number like this is 25, and the corresponding weighing is 1+2+…+17 < 19+20+…+25.
  5. n is the index of a square triangular number. For example, we know that the 8th triangular number is 36, which is a square: our original problem corresponds to this case.
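
Here is the promised sketch (ours): for each n it simply tests the five conditions above and prints the n that should have a(n) = 1.

```python
# Which n admit a one-weighing proof, according to the five cases above.

from math import isqrt

def is_square(m):
    return isqrt(m) ** 2 == m

def is_triangular(m):
    return is_square(8 * m + 1)

def one_weighing_possible(n):
    t = n * (n + 1) // 2                 # the n-th triangular number
    return (is_triangular(n)             # case 1
            or n == 2                    # case 2
            or is_triangular(n - 1)      # case 3
            or is_square(t - 1)          # case 4: T_n is one more than a square
            or is_square(t))             # case 5: T_n is a square

print([n for n in range(2, 60) if one_weighing_possible(n)])
# the list starts 2, 3, 4, 6, 7, 8, 10, 11, ...
```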

If we have four coins, then the same weighing 1+2 < 4 identifies two coins: the coin that weighs three grams and is not in a cup, and the coin weighing four grams that is in a cup. The other case like this is for two coins: comparing them to each other identifies each of them. It is clear that there are no other cases like this. Indeed, for the same weighing to identify two different coins, it must put the n-gram coin alone on the heavier cup and leave the (n−1)-gram coin off the scale, so the lighter cup holds all the coins from 1 to n−2 and must weigh exactly one gram less than n. This gives n = (n−1)(n−2)/2 + 1, which forces n = 4.

As usual we want to give something to think about to our readers. We have given you the list of sequences describing all the numbers for which the Baron can prove the weight of one coin in one weighing. Does there exist a number greater than four that belongs to two of these sequences? In other words, does there exist a total number of coins such that the Baron can have two different one-weighing proofs for two different coins?

To conclude this essay we would like to note that the puzzle we are discussing is related to the puzzle in one of Tanya’s previous posts:

You have 6 coins weighing 1, 2, 3, 4, 5 and 6 grams that look the same, except for their labels. The number (1, 2, 3, 4, 5, 6) on the top of each coin should correspond to its weight. How can you determine whether all the numbers are correct, using the balance scale only twice?

The latter puzzle appeared at the last round of the Moscow Math Olympiad in 1991. The author of this problem was Sergey Tokarev.


Papaya Words and Numbers, jointly with Sergei Bernstein

Here is a strange puzzle that was inspired by the palindrome problem. Suppose you have a sequence of words in some alphabet whose initial term is a and all of whose other terms are b: a, b, b, b, b, etc. Suppose this sequence generates palindromes every time you concatenate the first several terms, not counting the first term itself. So ab, abb, abbb, and so on, are all palindromes. We call a word b a “papaya” word if there exists a word a such that a and b generate a sequence with this palindrome property. Can you describe papaya words?

Theorem. The word b is a papaya word if and only if b is a substring of Reverse(b)Reverse(b).

Proof. After we have added b so many times that the initial part a is much smaller than half of the concatenated string, the middle part of the concatenated string consists of several copies of the word b. The middle of the reversed string consists of several copies of Reverse(b). So the word b must be a substring of Reverse(b)Reverse(b). On the other hand, suppose b is a substring of Reverse(b)Reverse(b). Then Reverse(b)Reverse(b) is of the form xby, and we can choose a = x. Indeed, as the proof of the next theorem shows, b = yx with x and y palindromes, and then xb^k = (xy)^k x is a palindrome.

Theorem. Papaya words are either palindromes or concatenations of two palindromes.

Proof. Suppose our word b is a concatenation of two palindromes, b = cd. Then its reverse is dc and the double of the reverse is dcdc. The word cd is a substring of dcdc; thus, according to the first theorem, cd is a papaya word. Now let’s do the other direction. Suppose the word b is a substring of Reverse(b)Reverse(b). Then Reverse(b)Reverse(b) is of the form xby. It follows that b = yx and Reverse(b) = xy, which equals Reverse(x)Reverse(y). From here, Reverse(x) = x and Reverse(y) = y. If x or y has zero length, then our word is a palindrome. QED.
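
Both theorems are easy to test by machine. Here is a Python sketch (ours): it checks the substring criterion, finds a palindrome factorization b = pq, and verifies that the trailing palindrome q works as the seed a.

```python
# Papaya words: the substring criterion and the palindrome factorization.

def is_papaya(b):
    return b in b[::-1] + b[::-1]

def palindrome_split(b):
    """Return (p, q) with b = p + q and both parts palindromes, or None."""
    for cut in range(len(b) + 1):
        p, q = b[:cut], b[cut:]
        if p == p[::-1] and q == q[::-1]:
            return p, q
    return None

b = "abac"                       # "aba" + "c", so it should be a papaya word
assert is_papaya(b)
p, q = palindrome_split(b)
a = q                            # the trailing palindrome works as the seed
for k in range(1, 6):
    word = a + b * k
    assert word == word[::-1]    # "cabac", "cabacabac", ... are all palindromes
print("ok")
```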

Hey, do you already know why we call these words papayas?

Just for fun we would like to study the structure of papaya words. Any one-character or two-character word is a papaya word, so the patterns are: a, aa, ab. For three-character words there are four patterns: aaa, aab, aba, abb. For four-character words there are 10 patterns: aaaa, aaab, aaba, abaa, aabb, abab, abba, abbb, abac, abcb. In this manner we get the sequence of the number of n-character papaya patterns: 1, 2, 4, 10, 21, 50 etc, which is sequence A165137 in the OEIS. This sequence depends on the number of letters in your alphabet. But the first n terms of these sequences are the same for all alphabets that have at least n letters.

Let us assume that we are working with an infinite alphabet. The complementary sequence would be the number of patterns for non-papaya words. The total number of patterns is described by sequence A000110 — Bell numbers: the number of word structures of length n using an infinite alphabet. So the beginning of this complementary sequence A165610 is: 0, 0, 1, 5, 31, 153, etc. The list of corresponding patterns is abc, aabc, abbc, abcc, abca, abcd, etc.

Historically, we first invented the corresponding sequence for numbers, not for words. We call a number a papaya number if it is a palindrome or a concatenation of two palindromes. If we use numbers instead of words in the problem, we need to carefully look at what happens if we encounter initial zeroes. Let’s take the papaya number 2200100, and see if we can find a number a, such that appending 2200100 repeatedly to the sequence starting with a will always generate a palindrome. The number a must be 00100. But this is not a number. We have two choices: to say that we are working with strings of digits, or to allow several numbers to start the sequence before we add b repetitively and before getting to palindromes. In the latter case our sequence can start 0, 0, 100, 2200100, 2200100, and so on.

As we mentioned before, the number of patterns of papaya numbers will start the same as the number of patterns of papaya words. Later the sequence of patterns of numbers A165136 will be smaller than the corresponding sequence for words. As the sequence of Bell numbers is much more famous than the sequence A164864 of patterns of numbers, we expect the papaya patterns sequence corresponding to the infinite alphabet to be more interesting and important than the sequence of papaya patterns for numbers.

Though papaya numbers might be less important than papaya words over an infinite alphabet, they have an advantage in that we can generate more sequences with them. For example, we can calculate the number of positive papaya numbers with n digits, as in the sequence A165135: 9, 90, 252, 1872, 4464, 29250, etc. And we can also calculate the sequence A165611 of n-digit non-papaya numbers: 0, 0, 648, 7128, 85536, etc.
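
The smaller counts are easy to confirm by brute force. Here is a Python sketch (ours) that counts the n-digit papaya numbers for n up to 5; the expected output matches the values quoted above.

```python
# Counting n-digit papaya numbers: palindromes or concatenations of two palindromic
# digit strings (the second part may start with zeroes).

def is_papaya_number(s):
    if s == s[::-1]:
        return True
    return any(s[:cut] == s[:cut][::-1] and s[cut:] == s[cut:][::-1]
               for cut in range(1, len(s)))

for digits in range(1, 6):
    count = sum(is_papaya_number(str(m))
                for m in range(10 ** (digits - 1), 10 ** digits))
    print(digits, count)   # 9, 90, 252, 1872, 4464
```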


Archive Labeling Sequences, jointly with Gregory Marton

What follows is the story of a pair of integer sequences, which started life as a Google interview puzzle back in the previous century when VHS video tapes were in use:

Suppose you are buying VHS tapes and want to label them using the stickers that came in the package. You want to number the tapes consecutively starting from 1, and the stickers that come with each package are exactly one of each digit “0” through “9”. For your first tape you use only the digit “1”, and save all the other digit stickers for later tapes. The next time you will need a digit “1” will be for tape number 10. By this time you will have several unused “1” stickers. What is the next tape number such that after labeling the tape with that number you will not have any “1” stickers remaining?

We can formalize this puzzle in the following way:

Consider a function f which, for a given whole number x, returns the number of times that the digit 1 is needed to write all of the numbers between one and x. For example, f(13) = 6. Notice that f(1) = 1. What is the next largest x such that f(x) = x?

Thus f(x) is the number of “1” stickers needed to label all the tapes up to tape x. When f(x) = x then we have used all of the “1” stickers in labeling the first x tapes.

Let’s consider any non-zero digit. In the single and double-digit numbers, there are ten of each digit in the ones column, and ten of each digit in the tens column, so 20 altogether. Early on, the tape number is ahead of the digit count. By the time we get to 20-digit numbers, though, there should be, on average, two of any single non-zero digit per number. Thus, the number of times that any digit is used should eventually catch up with the tape numbers.

Encouraged by this assurance of reaching our goal somewhere, we might continue our estimate. In the up-to-three-digit numbers there are 300 of each non-zero digit; in the numbers below 10,000 there are 4,000; then 50,000, and so on up to 10^10, where f(x) and x must (almost) meet. In particular, there are 10,000,000,000 counts of any non-zero digit in the numbers below 10,000,000,000. Hence, were the puzzle asking about any of the digits 2–9, then ten billion could have been an easy answer, or at least a limit on how far we need to search.

Sadly, there is a 1 in the decimal representation of ten billion (and a few zeroes), so we require 10^10 + 1 of the digit “1” to write the numbers [1…10^10]. Thus, f(10^10) ≠ 10^10, so 10^10 cannot be the answer to the original puzzle. Thus stymied, Gregory wrote a program to find the solution to the original Google puzzle. And the answer turned out to be 199,981.
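
The search is tiny by modern standards. Here is a brute-force Python sketch (ours, not Gregory's original program) that keeps a running count of the digit 1 and stops at the first x > 1 where the count equals x.

```python
# The original Google puzzle: the next x > 1 with f(x) = x,
# where f(x) counts the digit 1 in the decimal representations of 1 .. x.

def search(limit=300000):
    count = 0                    # running value of f(x)
    for x in range(1, limit + 1):
        count += str(x).count("1")
        if x > 1 and count == x:
            return x
    return None

print(search())   # 199981
```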

Gregory was overstymied, so he actually wrote a program to solve the puzzle for any non-zero digit. He calculated the beginning of the sequence a(n), where a(n) is the smallest number x > 1 such that the decimal representation of n appears as a substring of the decimal representations of the numbers [1…x] exactly x times.

We already know that a(1) is 199,981. The sequence continues as follows: 199,981,   28,263,827,   371,599,983,   499,999,984,   10,000,000,000,   9,500,000,000,   9,465,000,000,   9,465,000,000,   10,000,000,000, ….

Did you expect this sequence to be increasing? You could have, because smaller numbers tend to contain smaller digits than larger numbers. Then why is the sequence not increasing? As Gregory failed to find a value for the digit 5 below ten billion, he noticed that it’s fairly easy to imagine a scenario where you have one less than the number you need, and then the next value has more than you need for equality, and then you equalize again later. In response, Alexey Radul, a friend of one author and a son of the other, suggested a related sequence:

Let a(n) be the smallest number x > 1 such that the decimal representation of n appears as a substring of the decimal representations of the numbers [1…x] more than x times.

The key difference being “more than” rather than “exactly”. Starting at 1 then, here are the first nine terms of each sequence:

n    "=" sequence       ">" sequence
1    199,981            199,991
2    28,263,827         28,263,828
3    371,599,983        371,599,993
4    499,999,984        499,999,994
5    10,000,000,000     5,555,555,555
6    9,500,000,000      6,666,666,666
7    9,465,000,000      7,777,777,777
8    9,465,000,000      8,888,888,888
9    10,000,000,000     9,999,999,999

Some of these pairs are interesting in their own right. Notice that 199,991 is ten more than the previously found 199,981. For all the numbers in between, the initial equality holds. Likewise for n=3, each of the numbers between 371,599,983 and 371,599,993 has exactly one three. Hence, the increase in a number by one is the same as the increase in the count of threes. A similar situation holds for n=4.

Gregory has submitted these two new sequences to the Online Encyclopedia of Integer Sequences, as they turned out not to be there yet; they can be found using the identifiers A163500 for the “=” sequence and A164321 for the “>” sequence. It’s not surprising that the values matching this relaxed second condition are better behaved than those with equality. Do you think the second sequence is always increasing? Wait a minute, let’s check that sequence for zero.

In counting zeroes, it is important to remember that we start with one, not zero. In this case the smallest number x such that x is less than or equal to the number of 0s in the decimal representations of [1 … x] is 100,559,404,366. But what is the corresponding number for the “=” sequence? It appears that no such number exists. The challenge of accurately proving it, as they say, is left as an exercise to the reader.

There is no reason that we should be constrained to single digits. The formal statement of the problem provides an obvious generalization, where we consider substrings of each of the numbers [1 … x] rather than digits in those numbers. We should note that we count every occurrence of a substring separately. Thus 11 will be counted twice as a substring of 1113.
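
Here is a Python sketch of the generalized counting function (ours); it counts overlapping occurrences, as in the 1113 example, and finds the first crossing point. Brute force like this is only practical when the crossing point is small; the huge values quoted below are far beyond its reach.

```python
# Occurrences of a digit string in 1 .. x, counted with overlaps, and the first
# x > 1 where the running count reaches (or exceeds) x.

def occurrences(pattern, text):
    """Number of possibly overlapping occurrences of pattern in text."""
    return sum(text[i:].startswith(pattern) for i in range(len(text)))

def first_crossing(pattern, limit, strict=False):
    count = 0
    for x in range(1, limit + 1):
        count += occurrences(pattern, str(x))
        if x > 1 and (count > x if strict else count >= x):
            return x
    return None

assert occurrences("11", "1113") == 2
print(first_crossing("1", 300000))               # 199981: the count catches up with x
print(first_crossing("1", 300000, strict=True))  # 199991: the count exceeds x
```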

We can prove that the “more than” sequence is increasing after the first term. Indeed, for two integers i and j, if i is less than j, then for every occurrence of j, by replacing j with i we get a smaller number with an occurrence of i.

Inspired, Tanya wrote another fancier and faster program to find values of this sequence for two-digit numbers. Here is the smallest number x for which the number of “10”s as substrings of the numbers [1…x] is more than or equal to x. And by a lucky strike the equality holds. The number is: 109,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,810. Now the reader can do an exercise and find the number for the “more than” sequence.

The value of a(11) might seem like a miracle: 119,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,811. Note how strikingly similar it is to the tenth element of the sequence! Can you explain that similarity between a(10) and a(11)?

Sadly, a(12) is not so pretty: 1,296,624,070,230,872,986,615,199,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,999,812.

We cannot leave off without at least mentioning that the sequence function should next take one more parameter: the base of representation.

We have found only the first few members of these new sequences, and there are many related sequences to be catalogued. We would love to hear tales from your explorations. Enjoy the sequence hunt!


The Dirtiest Math Problem Ever (Rated R)

You have been warned. You are allowed to read this if you are 17 or over. Otherwise, you can ask your parents to read this to you. Here is a famous old condom puzzle in the version I heard when I was a teenager myself:

A man hires three prostitutes and wants to have sex with all three of them. They all might have different sexually transmitted diseases and they all want to use condoms. Unluckily, they have only two condoms. Plus, they are in the forest and can’t buy new condoms. Can the man have sex with all three of the women without danger to any of the four?

Everyone in this problem is so health-conscious that it might not be such a dirty problem after all. I leave you with the fun challenge of figuring this problem out.

Another fun variation of this problem is when you have two men and two women and two condoms. Every woman wants to have sex with every man. How can they do that?

If you are a teacher and want to use these great puzzles for younger students, you can follow the example of MathWorld and pretend that it’s a glove problem between doctors and patients.

Recently my younger son and his MIT friends invented another variation of this problem:

Suppose three gay men all want to have sex with each other and every pair among them wants to do two penetrative sexual acts, switching roles. They want to avoid contaminating each other, and in addition, each man also does not want to cross-contaminate himself from either region to the other. How can they do that using exactly three condoms?

Let me remind you that they plan to perform six sexual acts altogether, meaning that six condoms would be enough. On the other hand, each of them needs two condom surfaces, so they can’t do it with fewer than three condoms. My son showed me his solution, but I will postpone its publication.

Of course, you can say that this is a glove problem about three surgeons operating on each other.

In addition, you can generalize it to any number of gay men. Here is my solution for four men and four condoms, where letters denote people, numbers denote condoms, and the order of people represents roles: A12D, B32D, C2D, A14C, B34C, D4C, A1B, B3A, C21B, D41B, C23A, D43A.

Can you solve the problem for five or more people?


Will We Get to a Palindrome?

Here is a palindrome problem by Nazar Agakhanov from the All-Russian Mathematical Olympiad, 1996:

Can the number obtained by writing the numbers from 1 to n, one after another, be a palindrome?

Of course, we should assume that n > 1.

When I give this problem to my friends, they immediately answer, “No, of course not.” The reasoning is simple. The last 9 digits of this palindrome should be 987654321 — all different digits. The earliest you can get these digits at the end of your string 1 to n is when your number n is actually 987654321. By that time your string of digits is really huge. If we take a random number with 2k digits, then the probability of it being a palindrome is 10^(−k). There is no reason to think that writing consecutive integers increases the probability of finding a palindrome. So the probability of hitting a palindrome is so low that you can safely answer, “No, of course not.”
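
For the skeptical, a brute-force check is immediate (a Python sketch we added; it obviously proves nothing beyond the range it examines).

```python
# Check that concatenating 1, 2, ..., n is not a palindrome for any n up to 2000.

def concatenation_is_palindrome(n):
    s = "".join(str(m) for m in range(1, n + 1))
    return s == s[::-1]

print(any(concatenation_is_palindrome(n) for n in range(2, 2001)))   # False
```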

After confidently saying “no”, my friends usually stop thinking about the problem altogether. Only my friend David Bernstein suggested a simple solution for this problem. You can try to find the proof, but I do not insist that you do. I am about to give you many other fun problems to solve, and you can choose which ones you want to think about.

Naturally, we can replace the sequence of natural numbers in Agakhanov’s problem with any sequence. So, one problem becomes 160,000 different problems when you plug all current sequences from the Online Encyclopedia of Integer Sequences into it.

For some periodic sequences every concatenation you create is a palindrome. For example, for the sequence of all zeroes, every set of the first n terms is a palindrome. More interestingly, you can think of an increasing sequence such that any first n terms comprise a palindrome, as we see in the sequence of repunits: 1, 11, 111, 1111, etc. Can you think of other sequences like this?

For some sequences, not every concatenation you create is a palindrome, but you can obtain an infinite number of palindromes. One example, suggested by Sergei Bernstein, is the sequence a(n) of the last digit of the greatest power of 2 that divides n. Can you think of other sequences like that?

For some sequences only the initial term itself is a palindrome. Beyond that you can’t obtain a palindrome by concatenating the first n terms. For a few of those sequences, this fact is easy to prove. Take, for example, the sequence of powers of ten, or the sequence of squares, or the sequence of prime numbers. Can you think of other similar sequences?

There are many sequences that do not produce a palindrome beyond the first term, even if you try many times. I suspect that they do not, in fact, ever produce a palindrome, but I have no clue how to prove that. I have in mind the sequence of the digits of π. Can you suggest other sequences like that?

Instead of solving the initial problem, I gave you a variety of other problems. My last challenge for you is to find other sequences that can replace the sequence of natural numbers in Agakhanov’s problem so that the problem becomes solvable and the solution is interesting.
