Is 1 byte always equal to 8 bits?

Is 1 byte always equal to 8 bits, or are there cases where it is not?
teera Asked:
Joesmail Commented:
Bits are rarely seen alone in computers. They are almost always bundled together into 8-bit collections, and these collections are called bytes. Why are there 8 bits in a byte? A similar question is, "Why are there 12 eggs in a dozen?" The 8-bit byte is something that people settled on through trial and error over the past 50 years.

With 8 bits in a byte, you can represent 256 values ranging from 0 to 255, as shown here:

  0 = 00000000
  1 = 00000001
  2 = 00000010
   ...
254 = 11111110
255 = 11111111

In the article How CDs Work, you learn that a CD uses 2 bytes, or 16 bits, per sample. That gives each sample a range from 0 to 65,535, like this:
    0 = 0000000000000000
    1 = 0000000000000001
    2 = 0000000000000010
     ...
65534 = 1111111111111110
65535 = 1111111111111111
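The ranges above can be checked with a short Python sketch (`unsigned_range` is an illustrative helper, not something from the article):

```python
# Value range of an n-bit unsigned integer: 0 .. 2**n - 1.
def unsigned_range(bits):
    """Return (min, max) representable by `bits` unsigned bits."""
    return 0, 2 ** bits - 1

print(unsigned_range(8))     # 1 byte  -> (0, 255)
print(unsigned_range(16))    # 2 bytes -> (0, 65535)

# format() reproduces the bit patterns listed above:
print(format(254, '08b'))    # 11111110
print(format(65535, '016b')) # 1111111111111111
```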

 
Joesmail Commented:
Plagiarism from this article:
http://computer.howstuffworks.com/bytes.htm
 
BudDurland Commented:
There were some variations of Unix that used 6-, 7-, 9-, or 10-bit bytes, though I don't think they were in the majority.

 
giltjr Commented:
A byte is 8 bits.  However, there are also character "sets", or representations.  Example: ASCII is a 7-bit system.  Each character is represented by 7 bits; the 8th bit is (was) used for error checking as a parity bit.  EBCDIC is an 8-bit system: each character is represented by 8 bits.  There is Unicode, which has both 8-bit and 16-bit characters, but a byte is still only 8 bits.  The 16-bit characters are represented by 2 bytes.  There are other 16-bit character systems, which are normally referred to as double-byte character sets (DBCS).

It gets a bit more confusing.  As BudDurland stated, there have been systems that used various other numbers of bits, and for some of these it was because of how the hardware worked.  There have been computer systems that used "bases" other than multiples of 4.  IIRC, HP's HP3000 machines were octal, or 7-bit, systems.  But as I stated, this starts getting confusing because you are mixing data bus widths, word widths, and physical and virtual memory addressing, along with a few other "bits" of the many parts of a computer system.

Today most systems in use represent 1 byte with 8 bits.  However, a single character may be represented by 7, 8, or 16 bits.
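The byte-versus-character distinction is easy to see in Python, where `str.encode` shows how many 8-bit bytes each encoding uses per character (a small sketch, not from the original answer):

```python
# One character, three encodings: the byte count depends on the
# encoding, even though a byte itself is always 8 bits here.
ch = 'A'
print(len(ch.encode('ascii')))      # 1 byte (7 significant bits stored in 8)
print(len(ch.encode('utf-8')))      # 1 byte
print(len(ch.encode('utf-16-le')))  # 2 bytes

# A non-ASCII character needs 2 bytes in UTF-8:
print(len('é'.encode('utf-8')))     # 2 bytes
```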

 
Zolghadri Commented:
1 byte = 8 bits
1 kilobyte = 1024 bytes
1 megabyte = 1024 kilobytes
1 gigabyte = 1024 megabytes
1 terabyte = 1024 gigabytes
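Each step in the list above multiplies by 1024 (2**10), which a few lines of Python confirm (the constant names are just for illustration):

```python
# Binary-prefix unit sizes in bytes, each 1024x the previous.
KILOBYTE = 1024
MEGABYTE = 1024 * KILOBYTE
GIGABYTE = 1024 * MEGABYTE
TERABYTE = 1024 * GIGABYTE

print(MEGABYTE)             # 1048576
print(TERABYTE == 2 ** 40)  # True
```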

Have a nice time
 
Steve Knight (IT Consultancy) Commented:
and a nibble is half a byte.
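Since a nibble is 4 bits, one byte holds exactly two of them; a hypothetical helper makes the split concrete:

```python
# Split a byte into its high and low nibbles (4 bits each).
def nibbles(byte):
    return (byte >> 4) & 0xF, byte & 0xF

print(nibbles(0xAB))  # (10, 11) -- i.e. 0xA and 0xB
```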
 
mbavisi Commented:
stupid question