Big vs Little Endianness

1. What are the arguments for and against making a system Big or Little Endian?

2. Isn't Little Endianness somehow "counter-intuitive"?

3. Why are most (all?) RISCs designed to be (primarily) Big Endian?

Any references/URLs on the Endianness issue would be welcome.
shail_bains Asked:
 
BigRat Commented:
Hello shail_bains, are we still alive? Scrapdog and I seem to be holding a conversation on our own! Perhaps we have bored you to death?
 
gatkinso Commented:
This is primarily a religious issue.  Technically, one works just as well as the other.  (BTW, which side do you think I belong to?)
 
shail_bains (Author) Commented:
Are you sure one works just as well as the other? If so, why did
the x86 designers choose Little E, while RISC designers normally
take up Big E (the new ones have both)?
 
gatkinso Commented:
In every class I took for my master's that touched this issue, the professor basically said, "...one works as well as the other...".  TTFN
 
BigRat Commented:
  The transition from serial to parallel computers in the fifties brought in word sizes of 24 to 40 bits. After all, the mark-up on those machines was so enormous that the actual hardware cost was not a problem; rather, maintenance was the big factor.
   The early mini-computers of the late 60's, the PDP-8 for example, were 8-bit machines. 16- and 32-bit arithmetic was performed by fetching successive bytes from memory, the carry from each addition being saved between "swipes". Thus the lower address contained the low-order byte and the higher address the high-order byte; one simply incremented the address to get the next data byte (see the sketch after this comment).
   The Intel 8080 was an 8-bit machine which provided 16-bit arithmetic on the same basis as the PDP-8; hence the Intel 8086, 80186, 80286, 80386, 80486 and Pentium are all Lendians.
   RISC processors require their data to be aligned on word boundaries (or at least 32-bit boundaries). This is not new; the CDC 6600/7600 required it in the early 70's. They prefetch the data before execution. It is actually slightly easier to fetch 32 bits from one address than to fetch and swap the bytes around, or to present the memory with the higher addresses first. It might also be of historical interest to see how the designers of machines like the IBM 360 series have influenced the micro-designers; perhaps they even downsized themselves!
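A minimal C sketch of the byte-serial addition described above, assuming numbers are held as plain byte arrays (the function name add_le is illustrative, not from any particular machine):

    #include <stdint.h>
    #include <stddef.h>

    /* Add two n-byte little-endian integers stored low byte first.
       Because the low-order byte sits at the lowest address, the loop
       just increments the index while the carry propagates upward,
       exactly the access pattern of a byte-serial mini. */
    void add_le(const uint8_t *a, const uint8_t *b, uint8_t *sum, size_t n)
    {
        unsigned carry = 0;
        for (size_t i = 0; i < n; i++) {       /* walk from low address up */
            unsigned t = a[i] + b[i] + carry;  /* one 8-bit "swipe" */
            sum[i] = (uint8_t)t;               /* keep the low 8 bits */
            carry = t >> 8;                    /* save the carry for the next byte */
        }
    }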
 
BigRat Commented:
I'll go for an answer here with more anecdotes from the past.
   If anybody actually says that Lendian is just as good as Bendian (which it actually is) and that there is no real difference in implementation (which there probably is), then why did the Intel 80x86 designers put immediate operands in Bendian format and not Lendian? For the displacement operands, whose length is encoded in the first bits, such a format is entirely necessary; but why immediate operands?
   Of all the Big machines that I know (IBM, ICL, CDC and so on) the format is consistent. Of all the minicomputers I have run across the format is also consistent; but Intel!?
   Getting back to the early minicomputers like the PDP-8 (a machine you either fall in love with or hate): the 8-bit dataflow from memory to CPU was very cost-effective. The problem with core storage was that the amplifier and logic needed to read the data were much more expensive than the address logic. Consequently storage was tall and thin. The Lendian format is then the only one applicable.
   A very interesting machine I met was the Dietz (a German manufacturer) 600 series. It had 128 8-bit registers, an 8-bit data flow, 16-bit byte addressing, and an order code that performed ONE operation of 8 bits. It had a DO instruction which executed the NEXT instruction n times; you could therefore do, for example, 26-byte arithmetic! The performance is of course a problem, since a 26-byte addition is a one-byte instruction executed 26 times.
   Consequently the RISC designers went back to the large mainframes where the data flow was very wide. 32- or 64-bit RISC machines are now common. The next generation will have 128-bit data flow and of course Bendian format.
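To make the two formats concrete, here is a short C sketch that dumps the bytes of a 32-bit value in address order (the value 0x12345678 is just an example). A Lendian machine prints 78 56 34 12; a Bendian one prints 12 34 56 78.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        uint32_t v = 0x12345678;
        uint8_t bytes[4];
        memcpy(bytes, &v, sizeof v);            /* view the value byte by byte */
        for (int i = 0; i < 4; i++)             /* print from lowest address up */
            printf("%02X ", (unsigned)bytes[i]);
        putchar('\n');
        return 0;
    }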
 
ozo Commented:
The PDP-8 was a 12-bit machine.  Are you thinking of a PDP-11?
 
gatkinso Commented:
Just how does this question apply to UNIX?
 
shail_bains (Author) Commented:
BigRat's reply really doesn't answer my question... I'd like to know why any particular Endianness is better or worse than the other...
 
BigRat Commented:
Ozo is correct; it was a PDP-11 that was in the lab.
 
BigRat Commented:
With great respect, I think I have answered the questions posed.
Question 1 asked why design using one or the other. My argument is that the choice was made on cost grounds. The PDP-11 was not cheap, but a damn sight cheaper than the mainframes of the day!
Question 2 asked if it was counter-intuitive. If you have an 8-bit data flow and read successive memory locations for the next (higher) bytes, then it is certainly NOT counter-intuitive, but very logical.
Question 3 asked why the RISC makers went back to Bendian. The answer is that with a very wide dataflow it is simpler to use the Bendian format. It is of course NOT necessary, just simpler. You will also find, when you look at individual machines closely, preferences chosen by the designers for no logical reason but habit. The ICL 2970 machine had internally various "things" which were also to be found in the architecturally utterly dissimilar ICL 1904S machine. Why? The same design team designed both machines.
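For readers who want to test which convention their own machine uses, a common C idiom (a sketch, not something from the thread): store a known 16-bit value and look at which byte lands at the lower address.

    #include <stdint.h>

    /* Returns 1 on a Lendian (little-endian) host, 0 on a Bendian one. */
    int is_little_endian(void)
    {
        uint16_t probe = 1;                    /* 0x0001 */
        return *(const uint8_t *)&probe == 1;  /* low byte at low address? */
    }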
 
scrapdog Commented:
I found little endian to be very convenient to use while programming the 6502.  I think it makes more sense to put the least significant byte in the lowest memory location.  Maybe it is just me.


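scrapdog's point can be shown in C: the 6502 stores the low byte of a 16-bit operand at the lower address, so reassembling an address is a simple shift-and-or. A sketch, with mem and pc as hypothetical names rather than parts of any real emulator:

    #include <stdint.h>

    /* Rebuild a 16-bit operand from two consecutive bytes stored low
       byte first, as the 6502 does for absolute addresses. */
    uint16_t fetch_addr(const uint8_t *mem, uint16_t pc)
    {
        uint16_t lo = mem[pc];       /* low byte at the lower address */
        uint16_t hi = mem[pc + 1];   /* high byte follows */
        return (uint16_t)(lo | (hi << 8));
    }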
 
BigRat Commented:
Scrapdog,
   Isn't this a four-bit machine which takes two cycles to perform an 8-bit operation?
 
scrapdog Commented:
Actually I used a 6510 (which has the exact same instructions and cycle counts as the 6502).

It is an 8-bit processor.  However, all instructions take 2 or more cycles.

LDA immediate (which has an 8-bit operand) is a 2-cycle instruction, while LDA absolute (which has a 16-bit operand) is a 4-cycle instruction.
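As a concrete illustration of that operand order, here is how LDA $1234 sits in memory (0xAD is the 6502 opcode for LDA absolute; the array name is illustrative):

    #include <stdint.h>

    /* "LDA $1234" encoded as the 6502 stores it: opcode first, then the
       16-bit address with its low byte first. */
    static const uint8_t lda_abs[3] = { 0xAD, 0x34, 0x12 };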