Basic Data Structures: Computer data, whatever it may represent, is ultimately built from basic components called bits and bytes. The following is an explanation of these basic components, which constitute all data structures:
Bit: A bit (sometimes abbreviated b) is the most basic unit of information used in computing and information theory. A single bit is a one or a zero, a true or a false, a 'flag' that is 'on' or 'off', or, in general, the quantity of information required to distinguish two mutually exclusive states from each other.
Nibble: A nibble is a computing term for an aggregation of four bits, or half an octet (an octet being an 8-bit byte). As a nibble contains four bits, it has sixteen (2⁴) possible values, so a nibble corresponds to a single hexadecimal digit.
Byte: A byte is a collection of bits, originally variable in size but now almost always eight bits. Eight-bit bytes, also known as octets, can represent 256 values (2⁸ values, 0–255). 'Byte' is most often abbreviated as 'B', hence 'MB' for megabyte.
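To make the bit, nibble, and byte relationship concrete, here is a short Python sketch; the sample value 0xB7 is an arbitrary illustration, not from the text:

```python
# Splitting one byte into its two nibbles; each nibble maps to one hex digit.
value = 0xB7                       # a single byte: 183 in decimal, within 0-255
high_nibble = (value >> 4) & 0xF   # upper four bits -> 0xB (11)
low_nibble = value & 0xF           # lower four bits -> 0x7 (7)
print(high_nibble, low_nibble)     # prints: 11 7
print(f"{value:02X}")              # prints: B7 (two hex digits per byte)
```

Because a nibble holds exactly four bits, any byte prints as exactly two hexadecimal digits, one per nibble.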
Kilobyte: A kilobyte is a unit of information or computer storage equal to 1024 bytes. It is commonly abbreviated as KB, kB, or Kbyte. The term "kilobyte" was first loosely used for a value of 1024 bytes (2¹⁰) because 2¹⁰ is roughly one thousand and powers of two are convenient for use with binary digital computers.
Megabyte: A megabyte is a unit of information or computer storage equal to approximately one million bytes. It is commonly abbreviated as MB.
One Megabyte (MB) = 2²⁰ bytes = 1024 kilobytes
Gigabyte: A gigabyte is a unit of information or computer storage equal to approximately one billion bytes. It is commonly abbreviated as GB in writing and as "gig" in speech.
One Gigabyte (GB) = 2³⁰ bytes = 1024 megabytes
Terabyte: A terabyte is a unit of information or computer storage equal to approximately one trillion bytes. It is commonly abbreviated as TB.
One Terabyte (TB) = 2⁴⁰ bytes = 1024 gigabytes

Data Representation and Binary System
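The power-of-two storage units can be checked directly in Python; the variable names here are illustrative, not standard library constants:

```python
# Binary storage units expressed as powers of two.
KB = 2 ** 10   # kilobyte: 1,024 bytes
MB = 2 ** 20   # megabyte: 1,048,576 bytes
GB = 2 ** 30   # gigabyte
TB = 2 ** 40   # terabyte
# Each unit is 1024 times the previous one.
assert MB == 1024 * KB and GB == 1024 * MB and TB == 1024 * GB
print(KB, MB, GB, TB)
```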
- Decimal Numeral System: Decimal usually refers to the base-10 numeral system. Decimal notation is the writing of numbers in the base-10 numeral system, which uses ten distinct symbols (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9, called digits) to represent numbers. These digits are frequently used with a decimal point, which indicates the start of a fractional part, and with one of the sign symbols + (plus) or − (minus) to indicate sign. The decimal system is a positional number system: it has positions for units, tens, hundreds, and so on, and the position of each digit conveys what multiplier is to be used with that digit. Decimal is the most common numeral system used around the world.
- Binary Numeral System: The binary numeral system represents numeric values using two symbols, typically 0 and 1. Owing to its relatively straightforward implementation in electronic circuitry, the binary system is used internally by all modern computers.
- Octal Number System: The octal numeral system is the base-8 number system and uses the digits 0 to 7. Octal numerals can be made from binary numerals by grouping consecutive binary digits into groups of three (starting from the right). For example, the binary representation of decimal 74 is 1001010, which groups as 1 001 010, so the octal representation is 112.
- Hexadecimal Number System: In mathematics and computer science, hexadecimal (or simply hex) is a numeral system with a radix, or base, of 16, usually written using the symbols 0-9 and A-F. The current hexadecimal system was first introduced to the computing world in 1963 by IBM. An earlier version, using the digits 0-9 and U-Z, was used by the Bendix G-15 computer, introduced in 1956. For example, the decimal numeral 79, whose binary representation is 01001111, can be written as 4F in hexadecimal (4 = 0100, F = 1111). It is a useful system in computers because there is an easy mapping from four bits to a single hex digit. A byte can be represented as two consecutive hexadecimal digits.
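The worked examples above (decimal 74 → octal 112, decimal 79 → hex 4F) can be verified with Python's built-in base conversions:

```python
# Checking the conversions from the text with Python's built-ins.
assert bin(74) == "0b1001010"   # decimal 74 in binary
assert oct(74) == "0o112"       # grouped as 1 001 010 -> octal 112
assert bin(79) == "0b1001111"   # decimal 79 in binary (the leading 0 of 01001111 is dropped)
assert hex(79) == "0x4f"        # nibbles 0100 and 1111 -> hex 4F
print("all conversions from the text check out")
```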
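The positional principle shared by all four systems can be sketched in one small Python function; `from_base` is a hypothetical helper written for illustration, and it does not validate that each digit is below the base:

```python
# Interpret a digit string positionally in a given base:
# each step multiplies the accumulated value by the base and adds the next digit.
def from_base(digits: str, base: int) -> int:
    value = 0
    for d in digits:
        value = value * base + int(d, 16)  # int(d, 16) reads 0-9 and A-F
    return value

print(from_base("345", 10))   # 3*100 + 4*10 + 5*1 = 345
print(from_base("1101", 2))   # 1*8 + 1*4 + 0*2 + 1*1 = 13
print(from_base("4F", 16))    # 4*16 + 15 = 79
```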
The Relation between Decimal, Binary, Octal, and Hexadecimal