  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 208
  • Last Modified:

C++ program structure: header/source files?

Learning C++, coming from a .NET/Java background.

What is the purpose of placing the signatures of functions in a header file while placing the actual definitions in another source file? This doesn't make sense to me - it seems redundant - but two books on learning C++ have described this approach.

If I am reading and understanding correctly, they advise doing something like this:

____________________________________
//FILE 1 - add_numbers.h - header file for add_numbers

int add_numbers(int number1, int number2);   // note: no "public" keyword at file scope in C++

_____________________________________
_____________________________________
//FILE 2 - add_numbers.cpp - source file for add_numbers

#include "add_numbers.h"

int add_numbers(int number1, int number2){
      return number1 + number2;
}
_____________________________________
____________________________________
//FILE 3 - main.cpp - main program

#include <iostream>
#include "add_numbers.h"
using namespace std;

int main(){

       cout << add_numbers(2,2);

       return 0;
}

__________________________________________

Why do we do something like this? Why not have only a single source file, or why not include the definition in the header file itself?










0
mattososkyAsked:
3 Solutions
 
jkrCommented:
The reason for that is to split up the code into so-called "compilation units" (each .cpp file is such a unit). The corresponding header to some extent defines the interface to that unit. In the early days of C, that was the easiest way of reusing code. Another reason to use more than one .c/.cpp file is simply the turnaround time when compiling. If you have a set of, e.g., 20 .cpp files and you change only one, the others don't have to be compiled again, which speeds up the build process a lot.
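As a minimal sketch of that split (hypothetical file names), shown here as one listing with the file boundaries marked in comments - each .cpp is its own compilation unit, and only a changed unit gets rebuilt:

```cpp
// add.h - the interface to the "add" compilation unit
#ifndef ADD_H
#define ADD_H
int add_numbers(int a, int b);          // declaration only - no body
#endif

// add.cpp - the compilation unit itself (would start with #include "add.h")
int add_numbers(int a, int b) {
    return a + b;                       // the definition lives here
}

// A typical build compiles each unit separately, then links:
//   g++ -c add.cpp  -o add.o
//   g++ -c main.cpp -o main.o
//   g++ add.o main.o -o app
// Touch only add.cpp afterwards and only add.o needs rebuilding.
```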
0
 
 
peetmCommented:
Lots of reasons:

The main reason is that one often wants the definition separate from one's own code [or indeed - *can't have it*].

1. Think about the OS you're writing code for as a 'lead in'.

They don't want you to have access to the source, so they define the various function signatures in a header - so you can compile code *against* those binaries that you can't actually build yourself.

2. You - and others in your team - might want to control their own bits on a project - and are happy *only* to let you/others have pre-built bits - the bits they're responsible for ... after all - they know that if they give you the source - you *will* tinker with it - or worry about it [or break it].  So, they give you compiled libs or objs of one sort or another, but *provide you with the means* to code against them.

3. There are - and most of these are for protection - other times when all of this [above] applies.  For example, research Opaque Structs for a blinding example of just how great it is to have function sigs and structure defs elsewhere from the source.  Really - if you're going to do nothing else - have a look for these!
0
 
peetmCommented:
Let's try that again - I promise I haven't been drinking!  In fact I don't know what happened - so I'll try and tidy that up and add a bit more - fingers crossed!

Lots of reasons:

The main reason is that one often wants to separate definition from declaration - in fact, it's often the case that one is forced to do that - see below, i.e., you *can't* have the implementation.


1. Think about the OS you're writing code on, as a 'lead in'.

They [MS if you want] don't want you to have access to the source code of Windows, so they define *most* of the various function-signatures and structs they use in a header [which they're happy to give you] - you need those so you can compile code *against* those binaries that you can't actually build yourself.

Now move closer to home.

2. You - and others in your team might want to control their own bits of a project - and are *only happy* to let others have pre-built bits - these 'others' get the 'the bits' they're *not* responsible for but that they *need* for their own bits to work ... after all - you know that if they give them the source - they *will* tinker with it - or worry about etc [or break it!]   So, you give them compiled libs or objs of one sort or another, but also *provide them with the means* to code against and use those libs etc.

2a.  How about if you change the implementation of some of your bits - but others have the source!  How can you *know* that they're building with the latest stuff?  A. Don't give 'em the source - give 'em a binary.  Your interfaces stay the same of course - they're defined in your headers.  You might add some more of course - that's what all the Windows' ???Ex stuff is about.

3. There are - and most of these are for 'protection' - other times when all of this [above] applies.  For example, research Opaque Structs for a blinding example of just how great it is to have function sigs and structure defs elsewhere from the source itself.  Really - if you're going to do nothing else - have a look for these!  Windows - and all other OSs - are littered with these - and for very good reasons.
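As a concrete sketch of the Opaque Struct idea (hypothetical names, shown as one listing with file boundaries in comments) - client code only ever sees the header half, so the layout can change without breaking anyone:

```cpp
// widget.h - public interface: Widget is declared but its layout is hidden
struct Widget;                               // opaque/incomplete type
Widget* widget_create(int size);
int     widget_size(const Widget* w);
void    widget_destroy(Widget* w);

// widget.cpp - the ONLY place that knows (and may freely change) the layout
struct Widget {
    int size;                                // clients can't touch this directly
};
Widget* widget_create(int size)      { return new Widget{size}; }
int     widget_size(const Widget* w) { return w->size; }
void    widget_destroy(Widget* w)    { delete w; }
```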


There - hope that sounds a bit more structured!
0
 
cuziyqCommented:
Since you're coming from a .NET/Java background, you know that in Java/C# nearly EVERYTHING (even simple things like ints, via auto-boxing) is an object with an inheritance hierarchy and some built-in methods for manipulating it.  This is a built-in part of the language, and it all gets hidden from you.  When you want to use a certain group of classes, you must import them (Java's import, C#'s using).  These are the rough equivalent of header files for those languages, since they name a group of objects you will be using.

In C++, there are very few built-in object types.  Primitive things like ints, doubles, chars, etc. are not objects at all.  You must define all of your objects yourself.  In order to have an interface to work with, you either have to write it yourself, or use one that somebody else has written (think MFC, ATL, or even SDL).  You would then #include (not import) them into your project and begin working away.

There's certainly no rule that says you can't include the prototype and definition in the same file, but then you would be losing some key functionality.

Firstly, the build system can keep track of which compilation units haven't changed since the last compile if you keep them separate.  That way, if you have a rather large project and you make a change to one of the modules, the compiler does not have to recompile the entire program (just as Java never has to recompile its imports).  It only recompiles the .cpp file you changed.

Secondly, a header file keeps you type safe.  If you declare a prototype in a header file, and then you try to write a method that takes different args or has a different return type, the compiler can catch it for you.  Every experienced programmer will tell you that it's far, far better to have the compiler catch your bugs for you rather than having them only appear at runtime.
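A small illustration of that (hypothetical function), assuming the .cpp file #includes its own header so the compiler can compare signatures:

```cpp
// as it would appear in the header file:
int scale(int x);                       // the declared contract

// in the source file, which #includes that header:
int scale(int x) { return 2 * x; }      // OK - matches the declaration

// double scale(int x) { return 2.0 * x; }
// ^ uncommenting this is a compile error ("functions that differ only in
//   their return type cannot be overloaded") - the header catches the
//   mismatch at compile time instead of letting it surface at runtime.
```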

Thirdly, it assists with code reusability and readability.  Having a header file allows you to "black box" your code.  If you need to use a function that somebody else wrote, you do not have to know their implementation to use it.  All you need to know is what args it takes, and what it returns.  This enables large teams to work together because no one has to care about somebody else's implementations, as long as they work.  (In fact, changing those implementations can break things that depended on the function without you even being aware of it.)

It's a tedious task to maintain separate header and code files for a small project.  That's why it doesn't seem to make sense.  But as soon as you work on a larger project and have to collaborate with others on it, headers are the only way to go.
0
 
peetmCommented:
cuziyq's comment

>>It's a tedious task to maintain separate header and code files for a small project.  That's why it doesn't seem to make sense.  But as soon as you work on a larger project and have to collaborate with others on it, headers are the only way to go.

That's about the best bottom line you'll get on this subject.

0
 
mattososkyAuthor Commented:
Are there such things as interfaces in C++ then, or is that Java/.NET specific, with header files in C++ being the implementation of that functionality?

And/or, what is the relationship between Java/C# interfaces and C++ header files?

0
 
jkrCommented:
Well, an *interface* in terms of OO is different from an interface to a module. Its characteristic is a class declaration with only pure virtual methods, e.g.

struct IVehicle  {

virtual void Drive () = 0;  // pure virtual - no implementation here
virtual ~IVehicle () {}     // virtual dtor so deleting via IVehicle* is safe
};

class Car : public IVehicle {
public:
void Drive (); // implement Drive for cars
};

class Train : public IVehicle {
public:
void Drive (); // implement Drive for trains
};
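To make that sketch self-contained and observable (Drive returns a string here - my adaptation - so the behaviour is visible), here's how callers program against the interface alone:

```cpp
#include <string>

struct IVehicle {
    virtual std::string Drive() = 0;        // pure virtual - the "interface"
    virtual ~IVehicle() {}
};

class Car : public IVehicle {
public:
    std::string Drive() { return "driving on roads"; }
};

class Train : public IVehicle {
public:
    std::string Drive() { return "driving on rails"; }
};

// Callers see only IVehicle - the concrete type doesn't matter:
std::string take_for_a_spin(IVehicle& v) { return v.Drive(); }
```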
0
 
peetmCommented:
The term 'interface' or its plural is often reserved for the more pure OO languages - like Java, C# and - if you must - C++ [I add what might seem like a pejorative wink there because C++ isn't, IMHO, 'pure']

However, the term is subjective and I wouldn't slap down any programmer that used it - as long as I didn't think the context was misleading.

Wikipedia's opening salvo on the term is "An interface defines the communication boundary between two entities", and I'd argue that one could quite easily label a C header file as such an entity.  Foldoc [http://foldoc.org/index.cgi?query=interface&action=Search] has the similar [!] "A boundary across which two systems communicate" -- so, 'ditto' there.



0
