I was thinking about ints and shorts in C#. A short in C# can hold values from -32768 to 32767. I seem to remember that some other languages do this the other way round, -32767 to 32768. I was wondering why either the negative or the positive side gets one extra value. I studied this once and understood it, but that was 20 years ago; I have forgotten and can't work it out again. If we have 16 bits and one of them is used to indicate whether the number is negative or positive, then the remaining 15 bits can hold values from 0 to 32767.
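To make the asymmetry concrete, here is a small sketch in C#. It relies only on the standard `short.MinValue`/`short.MaxValue` constants; the wrap-around at the end assumes the usual two's-complement behaviour, which is what C# uses for its integral types:

```csharp
using System;

class TwosComplementDemo
{
    static void Main()
    {
        // A C# short is a 16-bit two's-complement integer.
        Console.WriteLine(short.MinValue);  // -32768
        Console.WriteLine(short.MaxValue);  //  32767

        // 16 bits give 2^16 = 65536 distinct bit patterns.
        // The "sign bit = 0" half (2^15 = 32768 patterns) must also
        // represent zero, leaving only 32767 positive values.
        // The "sign bit = 1" half is all negatives: -1 down to -32768,
        // so the negative side gets the extra value.

        // The wrap-around shows the two halves are adjacent:
        short s = short.MaxValue;
        unchecked { s++; }           // 32767 + 1 wraps around
        Console.WriteLine(s);        // -32768
    }
}
```

In other words, the extra negative value exists because zero has to live somewhere, and in two's complement it occupies one of the patterns on the non-negative side.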
I knocked a little spreadsheet together to illustrate this and tinker with, but I can't figure out why either the negative or the positive numbers get one extra value. Clearly there is a good reason; it just escapes me at the moment. If you want to look at my spreadsheet you can get it at: