Given an integer n, you must transform it into 0 using the following operations any number of times:
Operation 1: Change the rightmost (0th) bit in the binary representation of n.
Operation 2: Change the ith bit in the binary representation of n if the (i-1)th bit is set to 1 and the (i-2)th through 0th bits are set to 0.
Return the minimum number of operations to transform n into 0.
Example 1:
Input: n = 0
Output: 0
Example 2:
Input: n = 3
Output: 2
Explanation: The binary representation of 3 is "11".
"11" -> "01" with the 2nd operation since the 0th bit is 1.
"01" -> "00" with the 1st operation.
Example 3:
Input: n = 6
Output: 4
Explanation: The binary representation of 6 is "110".
"110" -> "010" with the 2nd operation since the 1st bit is 1 and 0th through 0th bits are 0.
"010" -> "011" with the 1st operation.
"011" -> "001" with the 2nd operation since the 0th bit is 1.
"001" -> "000" with the 1st operation.
Solution 1:
class Solution {
    public int minimumOneBitOperations(int n) {
        // The answer is the inverse Gray code of n:
        // XOR together n, n >> 1, n >> 2, ... until the shifted value is 0.
        int output = 0;
        while (n > 0) {
            output ^= n;
            n >>= 1;
        }
        return output;
    }
}
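One way to see why this loop is correct: the states visited by an optimal sequence of operations are exactly the reflected Gray code in order, so the answer for n is the position of n in the Gray code sequence, i.e. the inverse Gray code of n. The sketch below checks the round trip; `GrayCheck` is a hypothetical helper class made up for this check.

```java
class GrayCheck {
    // Same loop as Solution 1: the inverse Gray code of n.
    static int inverseGray(int n) {
        int output = 0;
        while (n > 0) {
            output ^= n;
            n >>= 1;
        }
        return output;
    }

    // Standard Gray code: the k-th state on the optimal walk from 0.
    static int gray(int k) {
        return k ^ (k >> 1);
    }
}
```

For every n, `gray(inverseGray(n)) == n`, so `inverseGray(n)` really is the number of steps into the Gray code walk at which the state n appears.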
Solution 2:
Note that the number of operations for n to become 0 is the same as the number of operations for 0 to become n, since every operation flips a single bit and is therefore its own inverse.
Let's see how it can be done for numbers that are powers of 2:

1 -> 0 => 1
10 -> 11 -> 01 -> ... => 2 + 1
100 -> 101 -> 111 -> 110 -> 010 -> ... => 4 + 2 + 1
1000 -> 1001 -> 1011 -> 1010 -> 1110 -> 1111 -> 1101 -> 1100 -> 0100 -> ... => 8 + 4 + 2 + 1

We can see that 2^k needs 2^(k+1) - 1 operations to become 0.
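The pattern for powers of 2 can be checked against the Solution 1 formula, used here purely as an oracle; `PowerCheck` is a name made up for this sketch.

```java
class PowerCheck {
    // Inverse Gray code, as in Solution 1.
    static int f(int n) {
        int output = 0;
        while (n > 0) {
            output ^= n;
            n >>= 1;
        }
        return output;
    }

    public static void main(String[] args) {
        // 2^k should take 2^(k+1) - 1 operations to become 0.
        for (int k = 0; k < 30; k++) {
            System.out.println(f(1 << k) == (1 << (k + 1)) - 1);
        }
    }
}
```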
Now suppose the number we are given is 1110. We know it takes 15 operations for 0 to become 1000, and it takes 4 operations for 1000 to become 1110 (4 is itself the answer for the remainder 110, by the same symmetry). So the answer is 15 - 4 = 11.
From the above intuition, we can reduce n bit by bit, starting from the most significant bit.
int minimumOneBitOperations(int n) {
    if (n <= 1)
        return n;
    // After the loop, 2^(bit-1) is the most significant set bit of n.
    int bit = 0;
    while ((1 << bit) <= n)
        bit++;
    // f(n) = (2^bit - 1) - f(n - 2^(bit-1)):
    // 2^(bit-1) needs 2^bit - 1 operations, and n lies
    // f(n - 2^(bit-1)) steps before it on that path.
    return ((1 << bit) - 1) - minimumOneBitOperations(n - (1 << (bit - 1)));
}
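To tie this back to the worked 1110 example, here is a small driver; the `Recursive` wrapper class is an assumption for the sketch, and its method body repeats the recursion above.

```java
class Recursive {
    // Same recursion as above.
    static int minimumOneBitOperations(int n) {
        if (n <= 1)
            return n;
        int bit = 0;
        while ((1 << bit) <= n)   // after the loop, 2^(bit-1) is the MSB of n
            bit++;
        return ((1 << bit) - 1) - minimumOneBitOperations(n - (1 << (bit - 1)));
    }

    public static void main(String[] args) {
        System.out.println(minimumOneBitOperations(0b1110)); // 15 - 4 = 11
        System.out.println(minimumOneBitOperations(6));      // 4, matching Example 3
    }
}
```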