In mathematics, the binary logarithm (log_{2} n) is the logarithm to the base 2. It is the inverse function of the power of two function. The binary logarithm of a positive number n is the power to which the number 2 must be raised to obtain the value n. That is, x = \log_2 n if and only if 2^{x} = n.
For example, the binary logarithm of 1 is 0, the binary logarithm of 2 is 1, the binary logarithm of 4 is 2, the binary logarithm of 8 is 3, the binary logarithm of 16 is 4 and the binary logarithm of 32 is 5.
The binary logarithm is closely connected to the binary numeral system. Historically, the first application of binary logarithms was in music theory, by Leonhard Euler. Other areas in which the binary logarithm is frequently used include information theory, combinatorics, computer science, bioinformatics, the design of sports tournaments, and photography.
Michael Stifel has been credited with publishing the first known table of binary logarithms, in 1544. His book Arithmetica Integra contains several tables that show the integers with their corresponding powers of two. Reversing the rows of these tables allows them to be interpreted as tables of binary logarithms.^{[1]}^{[2]}
Binary logarithms were considered more explicitly by Leonhard Euler in 1739. Euler established their application to music theory, long before their more significant applications in information theory and computer science became known. As part of his work in this area, Euler published a table of binary logarithms of the integers from 1 to 8, to seven decimal digits of accuracy.^{[3]}^{[4]}
In mathematics, the binary logarithm of a number n is written as log_{2} n. However, several other notations for this function have been used or proposed, especially in application areas.
Some authors write the binary logarithm as lg n.^{[5]}^{[6]} Donald Knuth credits this notation to a suggestion of Edward Reingold,^{[7]} but its use in both information theory and computer science dates to before Reingold was active.^{[8]}^{[9]} The binary logarithm has also been written as log n with a prior statement that the default base for the logarithm is 2.^{[10]}^{[11]}^{[12]}
Another notation that is sometimes used for the same function (especially in the German scientific literature) is ld n, from Latin logarithmus dualis.^{[13]} The DIN 1302, ISO 31-11 and ISO 80000-2 standards recommend yet another notation, lb n. According to these standards, lg n should not be used for the binary logarithm, as it is instead reserved for log_{10} n.^{[14]}^{[15]}^{[16]}
The number of digits (bits) in the binary representation of a positive integer n is the integral part of 1 + log_{2} n, i.e.^{[6]} \lfloor \log_2 n \rfloor + 1.
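As a concrete illustration, the following C sketch counts binary digits by repeated halving; the function name bit_length is ours, not a standard one.

```c
#include <stdio.h>

/* Number of binary digits in n: the integral part of 1 + log2(n).
   Minimal illustrative sketch; bit_length is a hypothetical name. */
unsigned bit_length(unsigned n)
{
    unsigned bits = 0;
    while (n > 0) {   /* each halving strips one binary digit */
        n >>= 1;
        bits++;
    }
    return bits;
}

int main(void)
{
    printf("%u\n", bit_length(32));  /* prints 6: 32 is 100000 in binary */
    return 0;
}
```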
In information theory, the definition of the amount of self-information and information entropy is often expressed with the binary logarithm, corresponding to making the bit the fundamental unit of information. However, the natural logarithm and the nat are also used in alternative notations for these definitions.^{[17]}
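For instance, the entropy of a discrete distribution measured in bits is H = −Σ p_{i} log_{2} p_{i}. A minimal C sketch of this formula (the function name entropy_bits is illustrative):

```c
#include <stdio.h>
#include <math.h>

/* Shannon entropy of a discrete distribution, in bits:
   H = -sum p_i * log2(p_i). Terms with p_i = 0 contribute nothing. */
double entropy_bits(const double *p, int n)
{
    double h = 0.0;
    for (int i = 0; i < n; i++)
        if (p[i] > 0.0)
            h -= p[i] * log2(p[i]);
    return h;
}

int main(void)
{
    double fair[] = { 0.25, 0.25, 0.25, 0.25 };     /* four equal outcomes */
    printf("%f bits\n", entropy_bits(fair, 4));      /* prints 2.000000 */
    return 0;
}
```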
Although the natural logarithm is more important than the binary logarithm in many areas of pure mathematics such as number theory and mathematical analysis, the binary logarithm has several applications in combinatorics.
The binary logarithm also frequently appears in the analysis of algorithms,^{[12]} not only because of the frequent use of binary number arithmetic in algorithms, but also because binary logarithms occur in the analysis of algorithms based on two-way branching.^{[7]} If a problem initially has n choices for its solution, and each iteration of the algorithm reduces the number of choices by a factor of two, then the number of iterations needed to select a single choice is again the integral part of log_{2} n. This idea is used in the analysis of several algorithms and data structures. For example, in binary search, the size of the problem to be solved is halved with each iteration, and therefore roughly log_{2} n iterations are needed to obtain a problem of size 1, which is solved easily in constant time. Similarly, a perfectly balanced binary search tree containing n elements has height 1 + log_{2} n.
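The halving behavior can be seen directly in code. Below is a minimal C sketch of binary search that counts its iterations; the function name and test data are illustrative.

```c
#include <stdio.h>

/* Binary search over a sorted array, counting iterations.
   Each iteration halves the remaining range, so the count
   is roughly log2(n). */
int binary_search(const int *a, int n, int key, int *iterations)
{
    int lo = 0, hi = n - 1;
    *iterations = 0;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        (*iterations)++;
        if (a[mid] == key) return mid;
        if (a[mid] < key)  lo = mid + 1;
        else               hi = mid - 1;
    }
    return -1;  /* not found */
}

int main(void)
{
    int a[32], iters;
    for (int i = 0; i < 32; i++) a[i] = 2 * i;  /* sorted even numbers */
    binary_search(a, 32, 31, &iters);           /* absent key: worst case */
    printf("%d iterations\n", iters);           /* prints 5 = log2(32) */
    return 0;
}
```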
However, the running time of an algorithm is usually expressed in big O notation, ignoring constant factors. Since log_{2} n = (log_{k} n)/(log_{k} 2), where k can be any number greater than 1, algorithms that run in O(log_{2} n) time can also be said to run in, say, O(log_{13} n) time. The base of the logarithm in expressions such as O(log n) or O(n log n) is therefore not important.^{[5]} In other contexts, though, the base of the logarithm needs to be specified. For example, O(2^{log_{2} n}) is not the same as O(2^{ln n}), because the former is equal to O(n) and the latter to O(n^{0.6931...}).
Algorithms with running time O(n log n) are sometimes called linearithmic.^{[24]} The binary search described above runs in O(log n) time, and efficient comparison sorting algorithms such as merge sort run in O(n log n) time.
Binary logarithms also occur in the exponents of the time bounds for some divide and conquer algorithms, such as the Karatsuba algorithm for multiplying n-bit numbers in time O(n^{log_{2} 3}).^{[29]}
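The exponent can be checked numerically from the cost recurrence M(n) = 3M(n/2), M(1) = 1, which counts the elementary multiplications Karatsuba performs, since its solution is M(n) = n^{log_{2} 3}. A small C sketch of the recurrence only, not of the multiplication itself:

```c
#include <stdio.h>
#include <math.h>

/* Elementary multiplications performed by Karatsuba on n-bit inputs:
   M(n) = 3 M(n/2), M(1) = 1, so M(n) = n^(log2 3) when n is a power of 2. */
unsigned long mults(unsigned n)
{
    if (n <= 1) return 1;
    return 3 * mults(n / 2);  /* three half-size subproducts per level */
}

int main(void)
{
    unsigned n = 1024;
    printf("M(%u) = %lu, n^(log2 3) = %.0f\n",
           n, mults(n), pow(n, log2(3.0)));  /* both are 59049 */
    return 0;
}
```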
In the analysis of microarray data in bioinformatics, expression rates of genes are often compared by using the binary logarithm of the ratio of expression rates. With base 2, for instance, a doubled expression rate is described by a log ratio of 1, a halved expression rate by a log ratio of −1, and an unchanged expression rate by a log ratio of zero.^{[30]} Data points obtained in this way are often visualized as a scatterplot in which one or both of the coordinate axes are binary logarithms of intensity ratios, or in visualizations such as the MA plot and RA plot that rotate and scale these log-ratio scatterplots.^{[31]}
In music theory, the interval or perceptual difference between two tones is determined by the ratio of their frequencies. Intervals coming from rational number ratios with small numerators and denominators are perceived as particularly euphonious. The simplest and most important of these intervals is the octave, a frequency ratio of 2:1. The number of octaves by which two tones differ is the binary logarithm of their frequency ratio.^{[32]}
To study tuning systems and other aspects of music theory that require finer distinctions between tones, it is helpful to have a measure of the size of an interval that is finer than an octave and is additive (as logarithms are) rather than multiplicative (as frequency ratios are). That is, if tones x, y, and z form a rising sequence of tones, then the measure of the interval from x to y plus the measure of the interval from y to z should equal the measure of the interval from x to z. Such a measure is given by the cent, which divides the octave into 1200 equal intervals (12 semitones of 100 cents each). Mathematically, given tones with frequencies f_{1} and f_{2}, the number of cents in the interval from f_{1} to f_{2} is^{[32]}

1200 \log_2 \frac{f_2}{f_1}.
The millioctave is defined in the same way, but with a multiplier of 1000 instead of 1200.^{[33]}
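Both measures are direct rescalings of the binary logarithm, as in this illustrative C sketch (the function names are ours):

```c
#include <stdio.h>
#include <math.h>

/* Interval size between frequencies f1 and f2, in cents and millioctaves. */
double cents(double f1, double f2)        { return 1200.0 * log2(f2 / f1); }
double millioctaves(double f1, double f2) { return 1000.0 * log2(f2 / f1); }

int main(void)
{
    /* An equal-tempered perfect fifth above A440: ratio 2^(7/12). */
    double f2 = 440.0 * pow(2.0, 7.0 / 12.0);
    printf("%.1f cents\n", cents(440.0, f2));  /* prints 700.0 */
    return 0;
}
```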
In competitive games and sports involving two players or teams in each game or match, the binary logarithm indicates the number of rounds necessary in a single-elimination tournament required to determine a winner. For example, a tournament of 4 players requires log_{2} 4 = 2 rounds to determine the winner, a tournament of 32 teams requires log_{2} 32 = 5 rounds, etc. In this case, for n players/teams where n is not a power of 2, log_{2} n is rounded up since it is necessary to have at least one round in which not all remaining competitors play. For example, log_{2} 6 is approximately 2.585, which rounds up to 3, indicating that a tournament of 6 teams requires 3 rounds (either two teams sit out the first round, or one team sits out the second round). The same number of rounds is also necessary to determine a clear winner in a Swiss-system tournament.^{[34]}
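A short C sketch of this round count, finding the smallest r with 2^{r} ≥ n (the function name rounds is illustrative):

```c
#include <stdio.h>

/* Rounds needed in a single-elimination tournament with n competitors:
   log2(n) rounded up, i.e. the smallest r with 2^r >= n. */
unsigned rounds(unsigned n)
{
    unsigned r = 0;
    while ((1u << r) < n)
        r++;
    return r;
}

int main(void)
{
    printf("%u %u %u\n", rounds(4), rounds(6), rounds(32));  /* 2 3 5 */
    return 0;
}
```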
In photography, exposure values are measured in terms of the binary logarithm of the amount of light reaching the film or sensor, in accordance with the Weber–Fechner law describing a logarithmic response of the human visual system to light. A single stop of exposure is one unit on a base-2 logarithmic scale.^{[35]}^{[36]} More precisely, the exposure value of a photograph is defined as

\mathrm{EV} = \log_2 \frac{N^2}{t},
where N is the f-number measuring the aperture of the lens during the exposure, and t is the number of seconds of exposure.
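The formula can be evaluated directly, as in this small C sketch (the function name exposure_value is illustrative):

```c
#include <stdio.h>
#include <math.h>

/* Exposure value from f-number N and shutter time t in seconds:
   EV = log2(N^2 / t). */
double exposure_value(double N, double t)
{
    return log2(N * N / t);
}

int main(void)
{
    /* f/8 at 1/125 s */
    printf("EV = %.1f\n", exposure_value(8.0, 1.0 / 125.0));  /* 13.0 */
    return 0;
}
```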
Binary logarithms (expressed as stops) are also used in densitometry, to express the dynamic range of light-sensitive materials or digital sensors.^{[37]}
An easy way to calculate log_{2} n on calculators that do not have a log_{2} function is to use the natural logarithm (ln) or the common logarithm (log or log_{10}) functions, which are found on most scientific calculators. The specific change of logarithm base formulae for this are:^{[36]}^{[38]}

\log_2 n = \frac{\ln n}{\ln 2} = \frac{\log_{10} n}{\log_{10} 2},

or approximately

\log_2 n \approx 1.442695 \ln n \approx 3.321928 \log_{10} n.
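The same identities apply in code; a minimal C sketch comparing the two change-of-base routes with the direct log2 function:

```c
#include <stdio.h>
#include <math.h>

/* Binary logarithm via change of base, as on a calculator offering
   only ln and log10; log2 itself is standard in C99 for comparison. */
int main(void)
{
    double n = 100.0;
    printf("%f\n", log(n) / log(2.0));      /* via natural logarithm */
    printf("%f\n", log10(n) / log10(2.0));  /* via common logarithm  */
    printf("%f\n", log2(n));                /* direct; all ~6.643856 */
    return 0;
}
```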
The binary logarithm can be made into a function from integers to integers by rounding it up or down. These two forms of integer binary logarithm are related by the formula

\lceil \log_2 n \rceil = \lfloor \log_2 (n - 1) \rfloor + 1 \quad \text{for integers } n > 1.
The definition can be extended by defining \lfloor \log_2(0) \rfloor = -1. Extended in this way, this function is related to the number of leading zeros, nlz(x), of the 32-bit unsigned binary representation of x by the formula \lfloor \log_2 x \rfloor = 31 - \mathrm{nlz}(x).
The integer binary logarithm can be interpreted as the zero-based index of the most significant 1 bit in the input. In this sense it is the complement of the find first set operation, which finds the index of the least significant 1 bit. Many hardware platforms include support for finding the number of leading zeros, or equivalent operations, which can be used to quickly find the binary logarithm; see find first set for details. The fls and flsl functions in the Linux kernel^{[40]} and in some versions of the libc software library also compute the binary logarithm (rounded down to an integer, plus one).
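A minimal C sketch of this correspondence, assuming a GCC- or Clang-style compiler that provides the __builtin_clz count-leading-zeros intrinsic:

```c
#include <stdio.h>

/* Floor of log2(x) for nonzero 32-bit x via count leading zeros:
   floor(log2 x) = 31 - nlz(x). Assumes GCC/Clang __builtin_clz,
   which is undefined for x == 0. */
int ilog2(unsigned x)
{
    return 31 - __builtin_clz(x);
}

int main(void)
{
    printf("%d %d %d\n", ilog2(1), ilog2(32), ilog2(100));  /* 0 5 6 */
    return 0;
}
```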
For a general positive real number, the binary logarithm may be computed in two parts:^{[41]} an integer part and a fractional part.
Computing the integer part is straightforward. For any x > 0, there exists a unique integer n such that 2^{n} ≤ x < 2^{n+1}, or equivalently 1 ≤ 2^{−n}x < 2. Now the integer part of the logarithm is simply n, and the fractional part is log_{2}(2^{−n}x).^{[41]} In other words,

\log_2 x = n + \log_2 y, \quad \text{where } y = 2^{-n} x \text{ and } y \in [1, 2).
The fractional part of the result is log_{2} y, and can be computed recursively, using only elementary multiplication and division.^{[41]} To compute the fractional part:

1. If y = 1, the fractional part is zero and the algorithm terminates.
2. Otherwise, square y repeatedly until the result z = y^{2^{m}} lies in the interval [2, 4), where m is the number of squarings performed.
3. Then z/2 lies again in [1, 2), and \log_2 y = 2^{-m} \left( 1 + \log_2 \frac{z}{2} \right), so the same method can be applied recursively to compute \log_2 (z/2).
The result of this is expressed by the following formulas, in which m_i is the number of squarings required in the i-th recursion of the algorithm:

\log_2 x = n + 2^{-m_1} \left( 1 + 2^{-m_2} \left( 1 + 2^{-m_3} \left( 1 + \cdots \right) \right) \right) = n + 2^{-m_1} + 2^{-m_1 - m_2} + 2^{-m_1 - m_2 - m_3} + \cdots
In the special case where the fractional part in step 1 is found to be zero, this is a finite sequence terminating at some point. Otherwise, it is an infinite series that converges according to the ratio test, since each term is strictly less than the previous one (because every m_{i} > 0). For practical use, this infinite series must be truncated to reach an approximate result. If the series is truncated after the i-th term, then the error in the result is less than 2^{-(m_1 + m_2 + \cdots + m_i)}.
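The recursion can be rendered directly in code. The following C sketch implements the squaring method above; the depth limit that truncates the series is an arbitrary choice of ours, and the function names are illustrative.

```c
#include <stdio.h>
#include <math.h>

/* Fractional part of log2(y) for y in [1,2), by repeated squaring:
   square y until the result z reaches [2,4) after m squarings, then
   log2(y) = 2^(-m) * (1 + log2(z/2)), recursing on z/2 in [1,2).
   The depth parameter truncates the series. */
double frac_log2(double y, int depth)
{
    if (y == 1.0 || depth == 0)
        return 0.0;                 /* exact, or series truncated */
    int m = 0;
    while (y < 2.0) {
        y *= y;                     /* each squaring doubles the logarithm */
        m++;
    }
    /* y is now in [2,4); recurse on y/2, which is back in [1,2). */
    return ldexp(1.0 + frac_log2(y / 2.0, depth - 1), -m);
}

int main(void)
{
    /* log2(10): integer part 3 (since 8 <= 10 < 16), then the fraction. */
    printf("%.12f\n", 3.0 + frac_log2(10.0 / 8.0, 60));  /* ~3.321928094887 */
    return 0;
}
```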
An alternative algorithm that computes a single bit of the output in each iteration, using a sequence of shift and comparison operations to determine which bit to output, is also possible.^{[42]}
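A sketch of this bit-at-a-time variant in C, again with an arbitrary precision limit:

```c
#include <stdio.h>

/* One bit of the fraction per iteration: square once, then a
   compare-and-shift decides whether the next bit is 1. */
double frac_log2_bits(double y, int bits)
{
    double result = 0.0, place = 0.5;   /* value of the next bit */
    for (int i = 0; i < bits; i++) {
        y *= y;
        if (y >= 2.0) {                 /* next fraction bit is 1 */
            y /= 2.0;
            result += place;
        }
        place /= 2.0;
    }
    return result;
}

int main(void)
{
    printf("%.12f\n", 3.0 + frac_log2_bits(1.25, 48));  /* ~3.321928094887 */
    return 0;
}
```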
The log2 function is included in the standard C mathematical functions. The default version of this function takes a double-precision argument, but variants of it allow the argument to be single-precision or long double.^{[43]}
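A minimal usage sketch of the three C99 variants:

```c
#include <stdio.h>
#include <math.h>

/* The standard C99 log2 family: double, float, and long double. */
int main(void)
{
    printf("%f\n",  log2(8.0));    /* double:      3.000000 */
    printf("%f\n",  log2f(8.0f));  /* float (promoted for printf) */
    printf("%Lf\n", log2l(8.0L));  /* long double */
    return 0;
}
```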