Number Base Converter
Convert numbers between binary, octal, decimal, and hexadecimal. Enter a value in any base and instantly see the equivalent in all four number systems, plus the bit count.
Number Base Conversion: Understanding Binary, Octal, Decimal, and Hexadecimal
Number base conversion is a foundational concept in computer science, mathematics, and digital electronics. Every number we use in daily life is expressed in base 10 (decimal), but computers process data in base 2 (binary). Programmers routinely work with hexadecimal (base 16) because it provides a compact way to represent binary values, and octal (base 8) appears in file permission systems and legacy computing contexts. Understanding how to convert between these bases is essential for anyone working with digital systems.
How Positional Notation Works
All standard number systems use positional notation. In positional notation, the value of a digit depends on two things: the digit itself and its position within the number. The position determines which power of the base the digit is multiplied by. In decimal, the number 253 means 2×10² + 5×10¹ + 3×10⁰ = 200 + 50 + 3. The same principle applies to every other base.
In binary, the number 1101 means 1×2³ + 1×2² + 0×2¹ + 1×2⁰ = 8 + 4 + 0 + 1 = 13 in decimal. In hexadecimal, the number FF means 15×16¹ + 15×16⁰ = 240 + 15 = 255 in decimal. The conversion process always involves understanding what each digit contributes to the total value based on its position.
To convert from any base to decimal, you multiply each digit by the base raised to the power of its position and sum the results. To convert from decimal to another base, you repeatedly divide by the target base and collect the remainders. These two operations form the foundation of all base conversion.
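The two operations described above can be sketched in JavaScript. The function names toDecimal and fromDecimal are illustrative, not part of this tool; JavaScript's built-in parseInt(string, base) and Number.prototype.toString(base) perform the same conversions.

```javascript
// Base → decimal: multiply each digit by the base raised to its position and sum.
// Written as a left-to-right fold: sum = sum * base + digitValue.
function toDecimal(digits, base) {
  return [...digits].reduce((sum, ch) => sum * base + parseInt(ch, base), 0);
}

// Decimal → base: repeatedly divide by the target base and collect the remainders.
function fromDecimal(value, base) {
  if (value === 0) return "0";
  let out = "";
  while (value > 0) {
    out = (value % base).toString(base) + out; // remainder becomes the next digit
    value = Math.floor(value / base);
  }
  return out;
}

console.log(toDecimal("1101", 2));  // 13
console.log(fromDecimal(255, 16)); // ff
```

Reading the remainders in reverse order (bottom to top) is what the string prepending in fromDecimal accomplishes.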
Binary: The Language of Computers
Binary (base 2) uses only two digits: 0 and 1. Every piece of data inside a computer — from text and images to video and software — is ultimately represented as a sequence of binary digits (bits). A single bit can represent two states (on/off, true/false, 1/0). Eight bits form a byte, which can represent 256 different values (0 through 255).
Binary arithmetic follows the same rules as decimal arithmetic, except that a carry occurs when a column reaches 2 rather than 10. Adding 1 + 1 in binary gives 10 (which is 2 in decimal). Programmers who work at the hardware level or in embedded systems frequently read and write binary values. The bit count — the number of binary digits required to represent a number — indicates how much storage space a value needs.
Common bit widths in computing include 8 bits (byte), 16 bits (short), 32 bits (int), and 64 bits (long). An 8-bit unsigned integer can hold values from 0 to 255. A 32-bit unsigned integer can hold values up to 4,294,967,295. Understanding binary representation helps explain why computers have these specific limits.
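A quick sketch of computing the bit count mentioned above, using JavaScript's built-in radix conversion (the function name bitCount is illustrative):

```javascript
// Number of binary digits needed to represent an unsigned value.
// Zero is a special case: it still needs one digit to write down.
function bitCount(n) {
  return n === 0 ? 1 : n.toString(2).length;
}

console.log(bitCount(255));        // 8  → fits in a byte
console.log(bitCount(256));        // 9  → no longer fits in 8 bits
console.log(bitCount(4294967295)); // 32 → the 32-bit unsigned maximum
```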
Hexadecimal: Compact Binary Representation
Hexadecimal (base 16) uses digits 0–9 and letters A–F (where A = 10, B = 11, ..., F = 15). Hexadecimal is widely used in programming because it provides a direct, compact mapping to binary: each hex digit corresponds to exactly four binary digits. The binary number 1111 1111 can be written as FF in hex — much shorter and easier to read.
Color codes in web design use hexadecimal notation. The color white is #FFFFFF (three bytes: red = FF, green = FF, blue = FF). Memory addresses, MAC addresses, and error codes are also commonly displayed in hexadecimal. When debugging software or examining raw data in a hex editor, hexadecimal provides a readable format that still maps directly to the underlying binary data.
Converting between binary and hexadecimal is straightforward: group binary digits into sets of four (from right to left), then convert each group to its hex equivalent. For example, 1010 1100 becomes AC in hex (1010 = A, 1100 = C). This grouping property is the primary reason hexadecimal is preferred over decimal when working with binary data.
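The grouping procedure above can be sketched directly (binToHex is an illustrative name, not an API of this tool):

```javascript
// Convert binary to hex by grouping bits into nibbles (sets of four),
// padding on the left so the grouping starts from the rightmost bit.
function binToHex(bin) {
  const padded = bin.padStart(Math.ceil(bin.length / 4) * 4, "0");
  let hex = "";
  for (let i = 0; i < padded.length; i += 4) {
    // Each 4-bit group maps to exactly one hex digit.
    hex += parseInt(padded.slice(i, i + 4), 2).toString(16).toUpperCase();
  }
  return hex;
}

console.log(binToHex("10101100")); // AC (1010 = A, 1100 = C)
```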
Octal: A Historical Base
Octal (base 8) uses digits 0–7. Each octal digit maps to exactly three binary digits, making it another convenient shorthand for binary. Octal was popular in early computing systems because many architectures used word sizes that were multiples of three bits (such as 12-bit, 24-bit, or 36-bit machines).
Today, the most common use of octal is in Unix and Linux file permissions. The chmod command uses octal digits to set read (4), write (2), and execute (1) permissions. For example, chmod 755 means the owner has full permissions (7 = 4+2+1), while the group and others have read and execute permissions (5 = 4+0+1). Understanding octal notation is essential for system administrators and developers who work with Unix-like operating systems.
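A minimal sketch of how each octal permission digit is assembled from the read/write/execute weights (permDigit is a hypothetical helper name):

```javascript
// One chmod-style octal digit: read = 4, write = 2, execute = 1.
function permDigit(read, write, execute) {
  return (read ? 4 : 0) + (write ? 2 : 0) + (execute ? 1 : 0);
}

// chmod 755: owner gets rwx (7), group and others get r-x (5).
const mode =
  `${permDigit(true, true, true)}` +   // owner:  4+2+1 = 7
  `${permDigit(true, false, true)}` +  // group:  4+0+1 = 5
  `${permDigit(true, false, true)}`;   // others: 4+0+1 = 5

console.log(mode); // 755
```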
While octal is less commonly encountered in modern application development than hexadecimal, it remains an important part of computing history and continues to serve specific practical purposes.
Practical Applications of Base Conversion
Base conversion appears across many domains. Network engineers use hexadecimal to read IPv6 addresses and MAC addresses. Security researchers examine binary data in hex dumps when analyzing malware. Embedded systems programmers set hardware register values using binary and hexadecimal. Digital logic designers work with binary to understand gate operations and circuit behavior.
In programming languages, number literals can often be written in multiple bases. Most languages support binary prefixes (0b1010), octal prefixes (0o12 or 012), and hexadecimal prefixes (0x0A). Being fluent in multiple bases helps programmers read and write these literals, understand bitwise operations, and debug low-level code more effectively.
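In JavaScript, for example, the three prefixed literal forms above all denote the same value:

```javascript
// The same number written with binary, octal, and hexadecimal prefixes.
const fromBinary = 0b1010; // base 2:  1×8 + 0×4 + 1×2 + 0×1
const fromOctal  = 0o12;   // base 8:  1×8 + 2×1
const fromHex    = 0x0A;   // base 16: 0×16 + 10×1

console.log(fromBinary, fromOctal, fromHex);          // 10 10 10
console.log(fromBinary === fromOctal && fromOctal === fromHex); // true
```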
Understanding base conversion also reinforces broader mathematical thinking about how numbers are represented and manipulated. The concept of positional notation is one of humanity's most important mathematical inventions, and exploring it across multiple bases provides deeper insight into the structure of arithmetic itself.
Frequently Asked Questions
How do I convert a decimal number to binary?
Divide the decimal number by 2 repeatedly and record the remainders from bottom to top. For example, 13 ÷ 2 = 6 remainder 1, 6 ÷ 2 = 3 remainder 0, 3 ÷ 2 = 1 remainder 1, 1 ÷ 2 = 0 remainder 1. Reading the remainders from bottom to top gives 1101 in binary.
Why is hexadecimal used in programming?
Hexadecimal provides a compact representation of binary data. Each hex digit maps to exactly four binary digits (bits), so a byte (8 bits) can be written as just two hex characters. This makes it much easier to read memory addresses, color codes, and raw data compared to long binary strings.
What is the difference between bits and bytes?
A bit is a single binary digit (0 or 1) and is the smallest unit of data in computing. A byte consists of 8 bits and can represent 256 different values (0 to 255). Storage capacities are typically measured in bytes (kilobytes, megabytes, gigabytes), while data transfer rates may be measured in bits per second.
Where is octal notation used today?
The most common modern use of octal is in Unix and Linux file permissions. The chmod command uses three octal digits to set owner, group, and other permissions. Each digit is the sum of read (4), write (2), and execute (1) permissions. Beyond file permissions, octal is occasionally seen in legacy computing contexts and some programming language features.
Can this converter handle very large numbers?
This converter handles numbers within JavaScript's safe integer range (up to 2^53 - 1, which is 9,007,199,254,740,991 in decimal). For most practical purposes — including color codes, file permissions, and typical programming tasks — this range is more than sufficient.
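The ceiling referenced above is exposed directly in JavaScript:

```javascript
// The largest integer JavaScript can represent exactly as a Number.
console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991
console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1); // true
```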
Related Calculators
Battery Life Calculator
Estimate device battery life based on capacity and usage.
Bitrate Calculator
Calculate video/audio bitrate, file size, and storage requirements for your media.
Book Royalties Calculator
Estimate author earnings from book sales. Calculate per-copy royalties, gross earnings, net income, and advance earn-out copies.