Because binary numerals are based on powers of a small number, they get long. (In the example above, a three-digit decimal representation is equivalent to a ten-digit binary representation.) So, we abbreviate binary using base 16—hexadecimal, or just hex—which packs 4 bits together as one digit.
base two:

| ... | 2<sup>9</sup> | 2<sup>8</sup> | 2<sup>7</sup> | 2<sup>6</sup> | 2<sup>5</sup> | 2<sup>4</sup> | 2<sup>3</sup> | 2<sup>2</sup> | 2<sup>1</sup> | 2<sup>0</sup> |
|---|---|---|---|---|---|---|---|---|---|---|
| (higher places) | 512s place | 256s place | 128s place | 64s place | 32s place | sixteens place | eights place | fours place | twos place | ones place |
base 16:

| ... | 16<sup>2</sup> | 16<sup>1</sup> | 16<sup>0</sup> |
|---|---|---|---|
| (higher places) | 256s place | sixteens place | ones place |
This works because 16 is a power of two, namely 2<sup>4</sup>. So every power of 16 is also a power of two: 16<sup>0</sup> = 2<sup>0</sup>; 16<sup>1</sup> = 2<sup>4</sup>; 16<sup>2</sup> = 2<sup>8</sup>; 16<sup>3</sup> = 2<sup>12</sup>; 16<sup>4</sup> = 2<sup>16</sup>; etc. Each group of four bits (counting from the right) represents a value between 0 and 15 (1111<sub>base 2</sub> = 15<sub>base 10</sub>), and those are the sixteen digits of base 16.
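The grouping just described can be sketched in a few lines of Python (the helper name `bits_to_hex` is ours, not from the text):

```python
# Group a bit string into 4-bit chunks from the right, then map each
# chunk to one base-16 digit -- the same process described above.
def bits_to_hex(bits: str) -> str:
    # Pad on the left so the length is a multiple of 4.
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    # Each 4-bit group is a value from 0 to 15, i.e. one hex digit.
    return ''.join('0123456789abcdef'[int(g, 2)] for g in groups)

print(bits_to_hex('1101011101'))  # -> 35d
```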
But we can't use "10," "11," and so on as digits, because that would mess up place value. You wouldn't know whether the hexadecimal numeral 3124 means the four digits 3-1-2-4 or the three digits 3-12-4. Instead we have to use a single character to represent each digit, and by convention we use the letters a–f as the digits for 10 to 15. a = ten, b = eleven, ..., f = fifteen. Either capital or lowercase letters can be used, but it's more common to see them in lowercase.
Hex is used for representing colors (as you saw in Lab 1) and in IPv6 addresses (as you will see on the next page).
So, to convert the binary numeral 1101011101 to hexadecimal, start by dividing it into groups of four bits, from right to left: 11 0101 1101. Then figure out the value of each group and write down the corresponding hex digit: 35d.
$$35\text{d}_\text{base 16}$$

means

$$
\begin{aligned}
&(3 \cdot 16^2) + (5 \cdot 16^1) + (13 \cdot 16^0) \\
={}& (3 \cdot 256) + (5 \cdot 16) + (13 \cdot 1) \\
={}& 768 + 80 + 13 \\
={}& 861_\text{base 10}
\end{aligned}
$$
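You can check this arithmetic with Python's built-in base-conversion tools, which accept an explicit base for `int` and a format code for output:

```python
# The binary numeral 1101011101 and the hex numeral 35d
# should both denote the decimal value 861.
binary_value = int('1101011101', 2)  # parse as base 2
hex_value = int('35d', 16)           # parse as base 16
print(binary_value, hex_value)       # both print 861
print(format(861, 'x'))              # back to hex: '35d'
```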
It's convenient to choose base 16 as the abbreviated form of binary because pretty much all modern computers allocate memory in chunks that are multiples of eight bits. (As we write this, the newest personal computers use 64 bits as their basic unit, but there are still a lot of 32-bit computers around.) Before the eight-bit byte became standard, people often used octal (base 8), in which each digit (0, 1, 2, 3, 4, 5, 6, or 7) represents three bits. Octal has the advantage that you don't have to remember what the digit c represents, but it's seen only rarely today.
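To see the two grouping sizes side by side, here is the same value written in each of the bases discussed (using Python's format codes for binary, octal, and hex):

```python
# Octal groups bits in threes; hex groups them in fours.
n = 861
print(format(n, 'b'))  # '1101011101'
print(format(n, 'o'))  # '1535' (groups of 3 bits: 1 101 011 101)
print(format(n, 'x'))  # '35d'  (groups of 4 bits: 11 0101 1101)
```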
Computers have several ways of representing colors, depending on whether they are intended for controlling three-color (red, green, blue) screen display, four-color (cyan, magenta, yellow, black) printing, or other purposes. On a computer screen, each pixel—each dot that makes up the picture on the screen—is assigned an RGB color defined by the intensity of Red, Green, and Blue in that color. The intensities can range from 0 to 255 (decimal notation), which is 0 to FF (hex notation). If (R,G,B) = (0,0,0) the color is black: no red, no green, no blue.
If (R,G,B) = (128,0,255), the color is purple: some red and a lot of blue, but no green at all. If all three colors are as bright as possible (all are 255), we see white. Instead of writing (255,255,255) for white and (128,0,255) for purple, we often use hex notation: FFFFFF and 8000FF.
And <span style="color:#FF7F00;"> this color is red 255, green 127, and blue 0</span>, which we would code as <span style="color:#FF7F00;">FF7F00</span>.
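Packing an (R, G, B) triple into this six-digit form is just writing each intensity as two hex digits. A minimal sketch (the function name `rgb_to_hex` is our own):

```python
# Each channel is 0-255, i.e. exactly two hex digits;
# concatenating the three pairs gives the RRGGBB color code.
def rgb_to_hex(r: int, g: int, b: int) -> str:
    return f'{r:02X}{g:02X}{b:02X}'

print(rgb_to_hex(255, 127, 0))    # FF7F00 (the color shown above)
print(rgb_to_hex(128, 0, 255))    # 8000FF (purple)
print(rgb_to_hex(255, 255, 255))  # FFFFFF (white)
```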