About __u32

Hi, I have seen some types like __u32 and __u16. May I know where they are defined and what those types are for?

bpmurray Commented:
These are used widely in Linux, but you can use them anywhere. They're simply shorthand for an unsigned 32-bit and 16-bit value, respectively. On a 32-bit PC, they'll resolve to:

typedef unsigned long __u32;
typedef unsigned short __u16;

Of course, there are the signed equivalents:

typedef long __s32;
typedef short __s16;

The reason these are used is that you always know the item is exactly that size. For example, a 32-bit value can be a short, an int or a long, depending on the platform. This is resolved once in the typedef, and the code can then be compiled on any platform without problems.
Hi william007,

These are usually special types defined for internal use within system headers, to ensure exact and predictable integer sizes. The leading __ signals that these are implementation-internal, non-portable names.

__u32, at a guess, is an unsigned 32-bit number.
__u16 ditto, 16-bit.

jkr Commented:
That's right. Usually they are defined/typedef'd in <asm/types.h> as

typedef unsigned char __u8;
typedef unsigned short __u16;
typedef unsigned int __u32;
typedef unsigned long long __u64;

(at least for Linux and the like)

fridom Commented:
The proper C answer would be to use the <stdint.h> header, because there all of these are implemented in a portable way. BTW, gcc ships this header too. A remarkable exception to the rule: MSVC does not have anything like it in any version, not even in the MSVC 2005 development environment. So much for Microsoft supporting standards...

While stdint.h contains definitions like uint32_t, these definitions make assumptions about the architecture: a uint32_t there is always an unsigned long, even though it could be an int or a short on other hardware; a problem the __u32/__u16 types have resolved.

You should check the standard and then the headers; that's nonsense. This file was introduced precisely to provide everyone with guaranteed-size integers.

Actually, that's incorrect. Let's look at the type int32_t. This is defined as:

      typedef long int32_t;

The idea is that a 32-bit value is defined as a long. However, this is only true on LP32, ILP32 and LLP64 models. On ILP64 and LP64, this would be 64 bits - not quite what the developer might be expecting. However, the __u32 data type is *always* 32 bits, irrespective of architecture.
As said, read the standard. And there you'll find, in 7.18, paragraph 2:

Types are defined in the following categories:
  — integer types having certain exact widths;
  — integer types having at least certain specified widths;
  — fastest integer types having at least certain specified widths;
  — integer types wide enough to hold pointers to objects;
  — integer types having greatest width.

If it happens that long is 32 bits on a certain platform, then defining int32_t as long is absolutely OK.

Now in 7.18.1.1 you'll find:

1 The typedef name intN_t designates a signed integer type with width N, no padding
  bits, and a two's complement representation. Thus, int8_t denotes a signed integer
  type with a width of exactly 8 bits.
2 The typedef name uintN_t designates an unsigned integer type with width N. Thus,
  uint24_t denotes an unsigned integer type with a width of exactly 24 bits.
3 These types are optional. However, if an implementation provides integer types with
  widths of 8, 16, 32, or 64 bits, it shall define the corresponding typedef names.

So you are definitely wrong.

Apologies - you're right. However, since include files tend to migrate from machine to machine, I still prefer to trust the __uN values, because they're generated locally when running ./configure.

BTW, you have an odd copy of the standard: in your original paste, the "fi" ligature in words like "defined" came through as a garbled character entity.
No, it's a copy-and-paste problem. I used kpdf to select the text, and obviously it can't copy the ligature character correctly.

I think this was a much-awaited extension, precisely because of your idea of really being assured that a type has a certain size. I now think one should avoid the __u stuff, because those names are reserved for the implementation anyway. The right way is to include <stdint.h> and use it.
