Every number system is built on the same idea: a base that determines how many unique digits exist, and positional notation where each position is worth a power of that base. We use base 10 by habit. Computers use base 2 internally. Programmers regularly encounter base 8 and base 16. Understanding how they all work unlocks a large part of how computers represent data.

The same number in four bases

The decimal number 255 looks completely different depending on which base you use to express it:

255         Decimal (base 10)    0d255
11111111    Binary (base 2)      0b11111111
377         Octal (base 8)       0o377
FF          Hex (base 16)        0xFF
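Most languages accept the prefixed literal forms directly. A quick Python check (Python has no 0d prefix, so plain 255 stands in for the decimal form) confirms all four spellings name the same integer:

```python
# The same value written with three base prefixes plus plain decimal.
# Note: Python has no 0d prefix; a bare literal is decimal.
values = [255, 0b11111111, 0o377, 0xFF]

print(values)                 # → [255, 255, 255, 255]
print(len(set(values)) == 1)  # → True
```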

255 is a significant number in computing — it is the maximum value of an 8-bit unsigned integer (one byte), and it is the maximum value of a single RGB color channel.

Binary (base 2)

Binary uses only two digits: 0 and 1. Each position is a power of 2. Binary maps directly to the physical reality of digital electronics — a transistor is either off (0) or on (1).

Binary 1011 = decimal ?
Position:   8   4   2   1
Digit:      1   0   1   1
Value:      8 + 0 + 2 + 1 = 11
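The expansion above can be reproduced in a few lines: walk the digits right to left, multiplying each by the matching power of 2. This is an illustrative sketch; int("1011", 2) does the same job in one call:

```python
def binary_to_decimal(bits: str) -> int:
    """Sum digit * 2**position, scanning from the rightmost bit."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("1011"))      # → 11, i.e. 8 + 0 + 2 + 1
print(binary_to_decimal("11111111"))  # → 255
```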

Every piece of data a computer handles — text, images, programs — is ultimately stored as binary. Understanding binary is how you understand bytes, bits, bitwise operations, and memory sizes.

A single binary digit is a bit. Eight bits form a byte. A byte can hold values from 0 (00000000) to 255 (11111111). Kilobyte, megabyte, and gigabyte are successive multiples of bytes: in the binary convention each is 1,024 times larger than the previous (the SI prefixes use 1,000).
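These sizes fall straight out of powers of two; a small sketch using the binary 1,024-based convention described above:

```python
BITS_PER_BYTE = 8
BYTE_MAX = 2 ** BITS_PER_BYTE - 1  # largest value one byte can hold

KILOBYTE = 1024        # 2**10 bytes
MEGABYTE = 1024 ** 2   # 2**20 bytes
GIGABYTE = 1024 ** 3   # 2**30 bytes

print(BYTE_MAX)               # → 255
print(MEGABYTE // KILOBYTE)   # → 1024: each unit is 1024x the previous
print(GIGABYTE // MEGABYTE)   # → 1024
```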

Octal (base 8)

Octal uses digits 0–7. Each octal digit represents exactly three binary digits, which made it convenient in early computing when word sizes were multiples of 3. Today it is most commonly seen in Unix/Linux file permissions.

Unix file permission: chmod 755
7 = 111 in binary = read + write + execute (owner)
5 = 101 in binary = read + execute (group)
5 = 101 in binary = read + execute (others)

The chmod 755 command makes a file executable by everyone but only writable by the owner. Each digit (7, 5, 5) maps directly to three permission bits.
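The digit-to-bits mapping can be checked with plain bit operations. A minimal sketch that decodes a mode such as 0o755 into permission strings (rwx here is an illustrative helper, not a standard API):

```python
def rwx(bits: int) -> str:
    """Render three permission bits (e.g. 0b101) as an rwx string."""
    return (
        ("r" if bits & 0b100 else "-")
        + ("w" if bits & 0b010 else "-")
        + ("x" if bits & 0b001 else "-")
    )

mode = 0o755
owner = (mode >> 6) & 0b111  # first octal digit:  7 = 111
group = (mode >> 3) & 0b111  # second octal digit: 5 = 101
other = mode & 0b111         # third octal digit:  5 = 101

print(rwx(owner), rwx(group), rwx(other))  # → rwx r-x r-x
```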

Hexadecimal (base 16)

Hexadecimal uses digits 0–9 plus letters A–F for values 10–15. Each hex digit represents exactly four binary digits (a "nibble"), so one byte is always exactly two hex digits. This makes hex a compact, readable representation of binary data.

Hex digits and their values:
0=0  1=1  2=2  3=3  4=4  5=5  6=6  7=7
8=8  9=9  A=10  B=11  C=12  D=13  E=14  F=15

Hex FF = binary 1111 1111 = decimal 255
Hex 1A = binary 0001 1010 = decimal 26

Hex appears constantly in programming: CSS colors (#FF6B6B), memory addresses (0x7fff5fbff8a0), byte-level data in debuggers, cryptographic hashes (sha256: a9b8c7...), and UUID values.
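The one-byte-equals-two-hex-digits relationship is easy to demonstrate with Python's built-in formatting and bytes.hex(); a short sketch:

```python
# One byte always formats as exactly two hex digits.
byte = 255
print(format(byte, "02x"))  # → ff

# The two hex digits are the high and low nibbles of the byte.
high, low = byte >> 4, byte & 0x0F
print(high, low)            # → 15 15 (hex F and F)

# bytes.hex() renders raw data as two hex digits per byte,
# e.g. the three channels of the FF6B6B CSS color.
print(bytes([255, 107, 107]).hex())  # → ff6b6b
```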

Reference table: selected values in all four bases

Decimal   Binary      Octal   Hex
0         0000        0       0
1         0001        1       1
2         0010        2       2
4         0100        4       4
8         1000        10      8
10        1010        12      A
15        1111        17      F
16        10000       20      10
255       11111111    377     FF
256       100000000   400     100

Converting between bases in code

// JavaScript
(255).toString(2)        // → "11111111" (decimal to binary)
(255).toString(16)       // → "ff"       (decimal to hex)
parseInt("ff", 16)       // → 255        (hex to decimal)
parseInt("11111111", 2)  // → 255        (binary to decimal)

# Python
bin(255)       # → '0b11111111'
hex(255)       # → '0xff'
oct(255)       # → '0o377'
int('ff', 16)  # → 255
Tip: In most languages, integer literals can be written in any base using a prefix: 0b for binary, 0o for octal, 0x for hex. 0xFF, 0b11111111, and 255 are all the same value to the compiler.
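Going the other way by hand generalizes to any base from 2 to 16: repeatedly divide by the base and collect the remainders, which come out least-significant digit first. A minimal sketch (to_base is an illustrative helper, not a built-in):

```python
DIGITS = "0123456789ABCDEF"

def to_base(n: int, base: int) -> str:
    """Convert a non-negative integer to a digit string in the given base."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, remainder = divmod(n, base)
        out.append(DIGITS[remainder])
    return "".join(reversed(out))  # remainders arrive lowest digit first

print(to_base(255, 2))   # → 11111111
print(to_base(255, 8))   # → 377
print(to_base(255, 16))  # → FF
```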