All digital data used in computer systems is represented using 0s and 1s. Binary coding systems have been developed to represent text, numbers, and other types of data.
All data in a computer system consists of binary information.
‘Binary’ means there are only 2 possible values: 0 and 1. Computer software translates between binary information and the information you actually work with on a computer, such as decimal numbers, text, photos, sound, and video. Binary information is sometimes also referred to as machine language, since it represents the most fundamental level of information stored in a computer system. At a physical level, the 0s and 1s are stored in the central processing unit of a computer system using transistors.
Transistors are microscopic switches that control the flow of electricity. If a current passes through the transistor (the switch is closed), this represents a 1; if a current doesn’t pass through (the switch is open), this represents a 0. Binary information is also stored magnetically: two opposite polarities are used to represent the zeros and ones. An optical disk, such as a CD-ROM or DVD, also stores binary information, in the form of pits and lands (the areas between the pits). No matter where your data is stored, all digital data at the most fundamental level consists of zeros and ones.
In order to make sense of this binary information, a binary notation method is needed, also referred to as a binary code.
Each binary digit is known for short as a bit. One bit can only be used to represent 2 different values: 0 and 1. To represent more than two values, we need to use multiple bits. Two bits combined can be used to represent 4 different values: 0 0, 0 1, 1 0, and 1 1.
Three bits can be used to represent 8 different values: 0 0 0, 0 0 1, 0 1 0, 0 1 1, 1 0 0, 1 0 1, 1 1 0, and 1 1 1. In general, ‘n’ bits can be used to represent 2^n different values. Consider the example of representing the decimal numbers 0 through 10.
The numbers 0 through 10 make up 11 unique values, more than the 8 values that 3 bits can represent, so a total of 4 bits is required. The binary equivalents are: 0 = 0000, 1 = 0001, 2 = 0010, 3 = 0011, 4 = 0100, 5 = 0101, 6 = 0110, 7 = 0111, 8 = 1000, 9 = 1001, and 10 = 1010. This is an example of standard binary notation, or binary code.
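As a sketch of the counting above, this Python snippet (the helper name is illustrative) confirms that 3 bits are too few for 11 values and prints the 4-bit binary pattern for each number from 0 to 10:

```python
# Sketch: how many values n bits can represent, and the 4-bit
# patterns for the decimal numbers 0 through 10.

def values_for_bits(n):
    """n bits can represent 2**n distinct values."""
    return 2 ** n

assert values_for_bits(3) == 8   # too few for the 11 values 0-10
assert values_for_bits(4) == 16  # enough for 0-10

for number in range(11):
    # format(number, '04b') gives the 4-bit binary string
    print(number, format(number, '04b'))
```

Running this prints the same decimal-to-binary pairs listed above, from 0 → 0000 up to 10 → 1010.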
While ASCII is still in use today, the current standard for encoding text is Unicode.
The basic principle underlying Unicode is very much like that of ASCII, but Unicode contains over 110,000 characters, covering most of the world’s printed languages. Its 8-bit encoding (referred to as UTF-8) is backward compatible with ASCII, while the 16- and 32-bit encodings (referred to as UTF-16 and UTF-32) use larger code units; all of them allow you to use just about any character in any printed language. In addition to numbers and text, binary code has also been developed to store other types of data, such as photographs, sound, and video. For example, when you zoom in very closely on a digital photograph, you will start to see the pixels which make up the photograph.
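A quick way to see the relationship between ASCII and the Unicode encodings is Python's built-in `str.encode`; a minimal sketch:

```python
# Sketch: ASCII characters occupy a single byte in UTF-8, while
# characters outside ASCII take more bytes.

ascii_bytes = 'A'.encode('utf-8')
print(ascii_bytes, len(ascii_bytes))   # b'A' 1

euro_bytes = '€'.encode('utf-8')
print(len(euro_bytes))                 # 3 (a non-ASCII character)

# The same ASCII character in UTF-16 and UTF-32 (little-endian):
print(len('A'.encode('utf-16-le')))    # 2
print(len('A'.encode('utf-32-le')))    # 4
```

Notice that the UTF-8 encoding of 'A' is identical to its ASCII encoding, which is why UTF-8 is backward compatible with older ASCII text.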
A single pixel has one color, which is typically represented using a combination of three color values (red, green, and blue). If you are using 8-bit color, each color value can be one of 2^8, or 256, unique values. Sound, video, and other data types can be broken down into binary code in a similar manner. Ultimately, all digital data consists only of binary information.
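As an illustrative sketch (the channel values here are made up), this snippet packs three 8-bit color channels into a single 24-bit pixel:

```python
# Sketch: an 8-bit-per-channel RGB pixel. Each channel holds one of
# 2**8 = 256 values, so one pixel needs 3 * 8 = 24 bits.

red, green, blue = 255, 128, 0   # hypothetical orange pixel
assert all(0 <= c < 2 ** 8 for c in (red, green, blue))

# Pack the three channels into one 24-bit integer
packed = (red << 16) | (green << 8) | blue
print(format(packed, '024b'))    # the pixel as 24 binary digits

# Total number of distinct 24-bit colors
print((2 ** 8) ** 3)             # 16777216
```

With three 8-bit channels, a single pixel can take any of 256 × 256 × 256 ≈ 16.7 million colors.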
Bits and Bytes
You are probably familiar with the term ‘byte’, as in gigabytes (GB) of memory or storage capacity. Bits and bytes are often confused, so a brief note on the distinction is in order when discussing bits and binary code.
What is a byte? One byte consists of 8 binary digits or 8 bits. Historically, computer systems used 8 bits to encode characters. ASCII is an example of 7-bit binary code, but more recent character sets use 8-bit binary code (or 16-bit or 32-bit).
As a result, 8 bits became the unit for storing data, and it was named the byte: 1 byte stores 1 character. The unit symbol for the byte is ‘B’, but it is more common to see kilobytes (kB), megabytes (MB), gigabytes (GB), and terabytes (TB). So while bytes have their origin in 8-bit computer architecture, bytes are now mostly used to describe the size of computer components, such as hard disk drives and random access memory (RAM). The ‘1 byte stores 1 character’ statement no longer holds true for 16-bit and larger character systems, but it helps to understand the origin of the term ‘byte’ and its relationship to storage capacity.
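The byte units above can be sketched in code; note this sketch follows the decimal (SI) prefixes, where 1 kB = 1000 bytes (binary prefixes such as KiB, based on 1024, also exist):

```python
# Sketch: converting between bits, bytes, and the larger
# decimal storage units (kB, MB, GB, TB).

BITS_PER_BYTE = 8
UNITS = {'B': 1, 'kB': 10**3, 'MB': 10**6, 'GB': 10**9, 'TB': 10**12}

def to_bytes(amount, unit):
    """Convert an amount in the given unit to bytes."""
    return amount * UNITS[unit]

print(to_bytes(1, 'kB'))                   # 1000 bytes
print(to_bytes(2, 'GB') * BITS_PER_BYTE)   # 16000000000 bits in 2 GB
```

This also makes the bits/bytes confusion concrete: a "gigabit" network speed is one-eighth of a gigabyte per second.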
All digital data used in computer systems consists of binary information, which contains only 0s and 1s.
A single binary digit is referred to as a bit, and ‘n’ bits can be used to represent 2^n different values. For example, an 8-bit binary coding system represents every unique value using 8 bits, which yields 2^8, or 256, unique values. Binary coding systems have been developed for numbers, text, images, video, sound, and other types of digital data. Commonly used coding systems for text include ASCII and Unicode.
Binary Code – Key Terms
- Binary: only 2 possible values: 0 and 1
- Machine language: represents the most fundamental level of information stored in a computer system
- Transistors: microscopic switches that control the flow of electricity
- Pits: form in which binary information is stored on an optical disk such as a CD-ROM or DVD
- Lands: area between the pits
- Binary code: a binary notation method
- Bit: each binary digit
- Character set: all the characters we want to represent
- ASCII: American Standard Code for Information Interchange; developed from telegraphic codes and adapted to represent text in binary code
- Unicode: the current standard for encoding text; contains over 110,000 characters, covering most of the world’s printed languages
- Pixels: the individual points of color, each stored as binary code, that make up a digital photograph
- Bytes: consists of 8 binary digits or 8 bits
Close out the lesson by ensuring that you can confidently:
- Discuss the use of binary languages for computers
- Identify the basics of binary notation
- Contrast the two main binary coding systems for text (ASCII and Unicode)
- Distinguish between bits and bytes