C/C++ Why use typedef for basic data types?


I've often seen C/C++ programs use a typedef for basic data types (below). What's the advantage of doing this?

typedef unsigned int UINT;
typedef unsigned char UCHAR;
typedef unsigned long ULONG;
typedef long double LONGDOUBLE;
One of the uses is for cross-platform programming. That way, all of the code can use UINT, and when porting an application to another platform, all that needs to be changed is the typedef (if necessary).
php-newbie (Author) commented:
Hmm, how is that different from doing a global replace of, say, 'unsigned int'?
It's easier, and involves less work.

It's also more robust, because your global replace will only replace 'unsigned int', but not 'unsigned' or 'unsigned    int' (more than one space between the two), or ...

There is also a difference when the code contains words that merely contain the search string. Suppose 'unsigned integer' appears somewhere (in a comment, say): a global replace of 'unsigned int' with 'ABC' would turn it into 'ABCeger'. Typedefs avoid this whole class of mistakes. The following excerpt explains typedefs in more detail.


A typedef does not define a new type; it is just another name for an existing type. A typedef can be used anywhere a regular type can be used.

Typedefs are used mainly for documentation and legibility purposes. Data type names such as char, int, long, double, and bool are good for describing what kind of value a variable holds, but more often we want to know what purpose a variable serves. For example, long nDistance gives us no clue what units nDistance is holding. Is it inches, feet, miles, meters, or some other unit? With typedef long miles, the declaration miles nDistance makes the unit clear.

typedef int testScore;

int GradeTest();        // returns... what, exactly?
testScore GradeTest();  // clearly returns a test score

What is the first GradeTest() returning? A grade? The number of questions missed? The student's ID number? An error code? Who knows! int does not tell us anything. Using a return type of testScore makes it obvious that the function returns a value representing a test score.

Furthermore, typedefs allow you to change the underlying type of an object without having to change lots of code. For example, if you were using a short to hold a student’s ID number, but then decided you needed a long instead, you’d have to comb through lots of code and replace short with long. It would probably be difficult to figure out which shorts were being used to hold ID numbers and which were being used for other purposes.

However, with a typedef, all you have to do is change typedef short studentID to typedef long studentID and you're done. Still, be careful when changing the type behind a typedef: the new type may have comparison or integer/floating-point division behavior that the old type did not.

Note that typedefs don’t mix particularly well with Hungarian Notation, and allow you to skirt some of the issues that using Hungarian Notation tries to prevent (such as being able to change a variable’s type without having to examine the code for areas where changing the type will be problematic).

Because typedefs do not define new types, they can be intermixed like normal data types. Even though the following does not make sense conceptually, syntactically it is valid C++:

Platform-independent coding

One big advantage of typedefs is that they can be used to hide platform-specific details. On some platforms an int is 2 bytes, and on others it is 4. Thus, using int to store values that need more than 2 bytes can be dangerous when writing platform-independent code.

Because char, short, int, and long give no indication of their size, it is fairly common for cross-platform programs to use typedefs to define aliases that include the type’s size in bits.
The main purposes of using typedefs are:
1) To isolate machine-dependent code: only the typedefs need to change for different platforms.
2) Better readability and documentation of the code.