• C

Discussion: How can we avoid reinventing the wheel?


The usual discussion rules apply ...

I recently had a discussion with Axter and it turned out we have both created something that seemed unique to our current situation. We could have saved much time!


I think the problem is that there are so many different ways to write even the simplest things. If there were fewer, then tools could be made to search a pool of code in a sort of auto-complete system. Perhaps the language is at fault. Perhaps standards would help.

What are your views and experiences?


Kent Olsen (DBA) commented:
Hi Paul,

Personally, I've never reinvented the wheel.  But I have invented tires, rims, hubcaps, spokes, valve stems, and even spare tires.  :)

PaulCaswell (Author) commented:

Nice one Kent! :-)

What would be great is if the C/C++ community had a good non-profit web site in which you could store the source code for such tools.

I’ve seen many web sites in which you can add source code and tools, and I’ve posted some of my code on some of these web sites.

One of the main problems I’ve seen with most of these web sites is that they don’t have a really good search engine to find the code.

So even if you create this great tool and post it on these web sites, the chances of developers finding it are slim, unless there's a very distinctive name associated with the tool.

Another problem is creating the documentation for posting your code.
When I posted my code on CodeGuru and CodeProject, I spent more time writing an introduction and describing the code than I did actually creating the code.

One of the things I've started doing now is that whenever I create source code which I think might be useful to others, I document the code using Doxygen tags, and I use Doxygen to generate a web site for the code.

Check out the following link as an example:

Now compare that to my old way of documenting code:

The Doxygen program is free, and I've configured the VC++ 7.1 IDE so I can call Doxygen right from the IDE.

IMHO, once you have some satisfactory documentation for your code, it’s far easier to publish it for others to use.

For anyone who’s interested in Doxygen, check out the following link:

For anyone interested in how to call Doxygen from the VC++ 6.0, 7.x or 8.0 IDE, you can use the following method:
Go to Tools > External Tools, and add the following tool.
Title:          Doxygen
Command: C:\Program Files\doxygen\bin\doxygen.exe
Arguments: $(ProjectDir)\Doxyfile
Initial Dir:    $(ProjectDir)

Check the following field:
"Use Output Window"

For VC++ 6.0 change $(ProjectDir) to $(WkspDir)
On VC++ 7.x and 8.0 use $(ProjectDir)

You have to make sure that your project has a Doxyfile; one is created automatically the first time you run Doxygen from its own front end (Doxywizard) instead of the VC++ IDE.
PaulCaswell (Author) commented:
IMHO, the difficulty comes from the fact that the idea behind the code is not indexable, and once the idea is encoded it's too late. If we could discover a way of describing the idea in some form that is at least structured, if not concrete, then we would be better able to leverage that to get to a truly searchable library of code.

The internet is drastically improving matters. It is easy to extract keywords from your idea, add 'download source C' and search. Unfortunately it will often still take as much time to tease out the nuggets, or to discover that there isn't one, as it would to write the code in the first place.

Where are we going? How will we do this in 10 years' time? Will Java be better once it grows up? Will there be another language that actually makes this possible, or is there already one?

Forget it, you cannot avoid such things. It happens in other areas too, but there the first one gets a patent on it. Should that be the way to go? Patents on ideas you implement, which just happen to do the same as someone else's code?

This non-profit stuff is nonsense too. You have to index all the software, you have to have the mirrors, and you have to inspect the code manually.

Another point is that you cannot restrict the languages to less than some of them offer. Of course you can get away with one loop construct; in the end you can even get away without any loop construct at all. But even if you just have gotos, the same algorithm can look completely different.

Now, another reason for reinventing is licensing issues. You cannot use software under the GPL without thinking, so you have to avoid it in a lot of circumstances. Other software is written in-house, so you do not even know about it.

Another thing is the NIH (not-invented-here) syndrome. Management policy may prevent you from using external libraries.

Will Java improve that situation? Definitely not.

Where are we going? Who knows. At the moment the greatest threat seems to be patents on "trivial" software; just imagine someone being able to patent something like a lexer, or some code-transformation tool, or whatever else these people try and, unfortunately, can get patented.

Another tendency you can watch is GCC development. Only a few insiders know how it works and work on extending it, and they do not document it. You can see the other tendency in MS software: with .NET and VC2005 they deprecated much of the standard C library. You are expected to follow their line of "our tools know how to do it, you don't have to, and we do all we can to keep you from understanding it".

And see how sparse the market is for .NET development tools. Sure, you can get whatever language running there, but development tools? There is nearly nothing; the only usable tool these days is VC2005, and it seems one can forget the Borland tools...

Fortunately you do not have to follow them; if you use the web as your interface you can avoid being limited to what MS offers....


Hi all,

First of all, a very interesting discussion ... thanks Paul :))

I've had similar thoughts about where we are going to be in the next 5, 10 or 50 years.
How will our job change? Will we only describe processes while a *tool* builds code from them?
Actually, that's what we do already, but at a very low level, with today's compilers
and, for instance, UML.
But how will it look? I wish I could have a look at it now --
I think I'll take a journey through time after I've had my coffee later :))

To your thought, Paul: will we ever find a way to describe processes/algorithms in a unique way,
so we can index and search over them? Will we need to define lots of diagrams to find what we
are looking for?

Or do we need to reinvent the wheel again and again so we don't forget how to make a wheel? Personally,
I've learned a lot by reinventing... maybe not everything should be optimized to the limit :))

Very interesting point from Friedrich: almost everything in almost every area was invented more than once,
and lots of inventions are lost because nobody cared about them in their time ...

Looking forward to reading more in this thread ... thanks again Paul .. :)

Friedrich, btw: in the Linux world there is another IDE, called KDevelop, a free one. To me it's a good alternative ..

PaulCaswell (Author) commented:

Your points about market forces are valid. There isn't much to encourage us financially to avoid recoding the smaller things and, as C programmers nowadays work primarily on the smaller things, this is unlikely to change. I hadn't thought of patents and non-profit software being involved, but you are right on both points.

I suspect/hope that the licensing system we have today will change. It is quite young right now.

I hope that an improved software design system will continue to eat away at the NIH issue. Maybe not but we can only hope.

Sparse markets, we have all seen, fix themselves in time. The web WILL become all-pervading, but I believe it will go through a period of overload and unreliability before long; perhaps deliberately caused, but inevitable all the same. Once that phase is complete, with DoS attacks, hacking, trojans and spyware under control, I believe it will become significantly more effective than it is now. How long this takes remains to be seen.


Your vision of describing software with diagrams is inevitable IMHO. I have come to the same conclusion myself. We cannot continue to produce the effective, increasingly complex and integrated code we need while staying with text-based languages. What the programs will look like I have no idea and look forward to still being here when we find out. UML is a good start but it is still not sufficiently intuitive for the layman to code the business model while the programmer codes for the machine.

I love your idea that reinvention is also of benefit as a training and clarification exercise. I have never looked at it that way but I believe you are right. We must balance the time taken against the value of what we learn but it will never completely go away.

I suspect the software diagram will become both 3D and animated, but that may be more of a hope than a prediction. Modern games already demonstrate that possibility with their fast, high-quality rendering. Hardware assistance is a must and is available right now. I sincerely hope that gesture-driven drawing packages like those we've seen in movies recently will become common. We could use them for design and modelling before any code is written and, hopefully, code generation will be automated to an increasing extent.

Any more thoughts? Once we no longer duplicate code but just connect up to a unit that has the capability we want, how would one global program, running on millions of machines on the internet, affect our lives? Sentience?

PaulCaswell (Author) commented:
... Sentience?

And will we need to call in the Governor of California? ;-)

Paul, I believe that the language is at fault and standards would help.
If there are n ways of doing things in C and C++, then there are perhaps only 20% as many ways of doing such things in Java.
In Java, "reinventing the whole wheel" is much rarer than in C and C++.

I believe C and C++ should have their own virtual machine, just like the JVM.
At least that would reduce the number of:
1) bad practices which often go uncaught;
2) non-portable code (at least most code should be portable across platforms);
3) loose type-checking rules;
4) unrestricted uses of pointers;
5) unhandled exceptions;
6) memory and resource leaks.

Look at C and C++: when these languages were already there, why on earth did the Sun guys create Java?
They wanted code that was safe, secure, sensible and reusable, and yet as flexible as C and C++.
The flexibility C and C++ offered became a problem too.
The beauty that C and C++ offered, like pointers, macros and templates, soon became raw material for the "bug-fixing factory".
How far can a tool reach? A tool cannot grasp logic; again, this becomes a subject for AI.
I see the following problems with C and C++:
1) big-endian and little-endian problems;
2) for a single down-to-earth network program you have to struggle in C and C++; you need to take care of a lot of things compared to Java;
3) too many dialects, like Borland C++, ANSI C++, gcc/g++, Sun CC, HP aCC, AIX xlC, Microsoft VC++ and many more; a single piece of code changed for one of these may become totally unportable and unsupportable, and the way of compilation differs greatly from one to another;
4) day after day the rules change.

If you want C, you must be aware of the problems; Java has its own kind of problems.
And therefore it's not a fault of C or Java, it's how things are. And there are definitely not fewer ways of solving a certain problem in Java than there are in C. Saying that Java is more portable than C is questionable too.

If you write ANSI C, you can run it on more platforms than you can run Java on, and a lot of cross-platform libraries exist which you can use for C programming; just a few examples: APR, the AOLserver C stuff, libpcre, OpenSSL and tons of others.

3) Point three reflects a very loose understanding of standards, and has nothing to do with what a standard stands for. The opposite is true: you are talking about extensions to C, which of course are not portable.

4) is a FUD argument. C is much more stable than Java, and C code you once wrote has a good chance of still compiling and running 10 years after its initial version.

DineshJolania, there are so many points in your post I don't agree with that I just had to reply :)

>> I believe  "C and C++ should have there own Virtal  m/c just like JVM ".
Sacrilege lol. Seriously: why? The biggest disadvantage of Java, and you want to use it in C too? C is so successful because it's a low-level language that allows you to manipulate almost anything you want. Adding an extra VM layer would not only slow down your software considerably, it would also seriously limit the usefulness of C. Imagine having to use a VM on an embedded system; what a resource hog that would be!

>> At least that would reduce the number of  
>> 1)Bad ways which are often uncaught.
That is totally up to the programmer. Agreed, Java helps you a lot in making your code stable, but C gives you the advantage of having the choice what to do. If you choose not to add code to handle exceptions (for whatever reason), then you can.

>> 2)At  least most part of code should be portable across platforms.
If you follow the ANSI C standard, it is! Only when you use system-specific libraries does your code become less portable. But even that can be resolved by using ported libraries.

>> 3)Loose type checking rules.
I have never really seen the advantage of that, other than to promote laziness.

>> 4)Unrestricted use of pointers.
How is the use of pointers restricted in C ?

>> 5)Unhandled exceptions.
Same as your point 1)

>> 6)memory and resource Leaks.
And you think they don't exist in Java ? Take it from me : I've spent a lot of time tracking down memory leaks in Java, and they're a lot harder to resolve !!
Again : it all depends on the programmer. C requires an attentive mind, that pays attention to details. This is not a disadvantage, imo, because it helps you to create robust code !

>> Look at C and C++, when these languages are there, then why did on earth "Sun guys created Java" .
>> They wanted safe, secure,sensible  and  reuseable code and yet flexible as C and C++.
Java is maybe safe, secure and sensible, but it's not by far as flexible as C or C++ ! You are limited by the VM for one, but also by the imposed error handling model, the garbage collector, etc.
On top of that, a Java application will NEVER be as fast and reliable as a well written C program.

>> The flexibilty C and C++ offered  became problem too.
If the flexibility creates grave problems for you, then maybe you shouldn't use C. Sure, it's easy to destroy a system if you don't pay attention, but remember this : "With great power comes great responsibility" -- C offers you god-like power over a computer, you have to be able to handle that well, or accidents will happen !
But to call that a problem of the language ? The language offers you the power ... the problems are caused by the programmer ! In Java however, most of the problems ARE created by the Java system itself (notably the VM and garbage collector).

>> I see following problems with  C and C++.
>> 1) Big-endian and little-endian problems.
That's a feature, not a bug! Leave it to the programmer to decide which to use, and when.

>> 2)For a single down to earth  N/W program you need to struggle in C and C++, you need to take care of lot of things as compared to Java.
Java makes it a lot easier to write applications, that is true. However, there are a lot of nice libraries for C too! They're not integrated into the language, but they're still there.

>> 3)Too many standards like  "Borland C++, ANSI C++,gcc,g++, Sun CC, HP aCC,AIX x_Clr,Microsoft VC++" many more. A single piece of code change in one of these standards may created totally unportable and up supportable code.
Java has the same problem between the different Java versions. Ever noticed all those "deprecated" messages spat out by the Java compiler ? Ever noticed the problems these can cause ?

>> 4)Day after day there rule changes.
Not if you follow an official standard.

Now, back to the original question. As always, an interesting subject ... thanks, Paul :)

Describing code in a uniform/standard way is something that I have been thinking about several times. Not only would it be handy for creating a code repository like the subject of this thread; there are lots of other applications. Generating code in the language of choice, starting from that uniform description, is one that comes to mind. It would allow you to describe what you want your software to do at a very high level, and some kind of super-compiler would generate the code based on that. In some ways this already exists (compilers perform this for one specific language, e.g.).

A lot depends on how detailed (or not) you want the description to be. To be useful, ideally it should be something like:

  Retrieve a given web page and save all referenced URL's to a central database.

Just to give an example. How would you write an application that compiles this phrase into executable code that does just what was expected? Not an easy task at all! You can expect to make use of some level of AI (to understand what the code needs to do), language recognition, and in general techniques already used in a compiler, but with a much higher level of complexity.

Quite a challenge :) Anyone up for it ? :)
Dear Friedrich,
>> And saying that Java is more portable as C is questionable also.
Portability is one of the best features of Java. Let's take a down-to-earth example: say I create a window with the title "hello world". In Java you write the code once and it works on all flavours of OS. If you wrote it in C, for every OS you would need to add or modify code.

>> 3) point three is a very loose understanding of Standards ... you are talking about extensions to C which of course are not portable
Yes, quite correct; what I intended to say was extensions to C, not standards. Thanks a lot for correcting my mistake.

>> 4) is a FUD argument. C is much more stable than Java, and C code you once wrote has a good chance of still compiling and running 10 years after its initial version.
Initially we used to follow the K&R style. Then came the ANSI style, and a lot of code was changed to support ANSI. We felt there were a lot of flaws in C, and later C++ evolved (which repaired a lot of things from C).
Then came C++: initially a very poor version. We used to access private members through member pointers; then this was disallowed. Then came a lot of rules.
We got special styles of casting, like const_cast etc.
We got the mutable keyword, exception handling, templates.
We got inlining, etc.
We got local classes, nested classes, etc.

So my point is that this evolution was not all of a sudden. Year after year, I saw "this is supported and that is not supported". After 15 years of attention to the C and C++ languages, I am not in a position to say whether I know C and C++ well. "Good" and "better" are relative.
You get 100 books on C and C++ about how to improve performance, how not to program,
what is bad and what is good, Effective C++. More information creates more complexity here, rather than helping.
C allows every silly way to "solve any damn kind of problem", and this becomes the root cause of the problem too.
But you get very few such books on Java. Why? Why does the programmer have to bother about what is efficient and what is not? Why should he take care of his allocations and deallocations? (In C++ there is a lot of jargon like
auto_ptr, vector, smart pointers.) There are a lot of rules when one uses multiple inheritance.

Year after year, macros were added to allow more functionality, for example:

#ifdef __STRICT_ANSI__
#ifdef VAR_ANSI
#ifdef BSD_COMP
#ifdef __cplusplus
Well, I suggest you keep C and C++ apart; I'm talking about C. And C software has an astonishing way of just "staying". Even when there are changes, backward compatibility is always very high on the priority list, and in fact you can still compile K&R C nowadays.

Now, you picked exactly one example where Java is better than C. But how about command-line utilities? They are legion and they are practical, and the whole power you get on Unices is based on simple command-line programs. Most of these are written in C, so in fact my C toolchain is much better stocked than anything else out there. Want a grep in Java? Good luck.

Now, even for the GUI stuff, portable solutions exist. You can use GTK+ for C, or wxWindows or Qt for C++.

So with the proper libraries, I'd argue C is no worse or better than Java.

I agree fully that having GC is a very, very large plus for Java, and I wish C had something like it too. This whole manual memory management is just stupid and should be done by a machine. For now I'm using the Boehm-Weiser GC and I'm quite happy with it.

I do not try to squeeze out the most extreme micro-optimizations; the C code I write is hopefully boring. So boring that you would even dare to put your hands on it to correct things ;-)

Now, my opinion of C++ is the complete opposite: for me, C++ is an unneeded language. Too much black magic behind everything. I do not use C++, and if I had to, I'd write Eiffel in C++. I wish Objective-C had gotten the attention C++ has caught...

I suggest you check out OCaml, Smalltalk, Ruby, Eiffel, Oberon, D or Objective-C for a much better way of doing OOP.

Hi Paul,
>> IMHO, the difficulty comes from the fact that the idea behind the code is not indexable, and once the idea is encoded its too late. If we could discover a way of describing the idea in some form that is at least structured if not concrete then we will be better able to leverage off that to get to a truly searchable library of code.
No doubt the idea behind the code is not indexable. But at least function names, symbols, macros, file names, include files and global variables can be indexed using a tool called cscope. I have been using it to search 1 GB of source code.
The best feature, I felt, is searching with regular expressions. I can track conflicting functions and duplicate functions. The most powerful of all is "Find this egrep pattern".
If we had names like binary_tree1(), bin_tree2(), Binary_Tree3(), Bin_Tree4(), binarytreecreate(),
then an intelligent egrep search would be [Bb]in.*[Tt]ree.
My point is that you can capture function names, symbols etc. using intelligent egrep patterns.
We can maintain a synonym list, for example:
"delete node" could correspond to "delete item", "delete stack", "pop item".
We can use a heuristic function to rank the search order; for example, "delete node" most commonly corresponds to deleting a node from a linked list, so we should give higher priority to such cases.
The idea is to develop a heuristic function which captures most of the logical inference.
We could very well use AI's semantic networks.

>> IMHO, the difficulty comes from the fact that the idea behind the code is not indexable...

I don't think that is the main problem.
Suppose there were a way to find the algorithm you need. Then you could use it in your project. But a pool of code written by many different people also means many different coding styles. That is not necessarily a problem, but it often is.

Consider error handling. There are many ways of reporting an error back to the caller: returning an error code, returning a boolean and requiring a call to GetLastError, and so on. In a larger software project you do not want to mix these, so it means you have to rewrite some of the code. At that point it is already less useful. And if you find a bug, the fix will not flow back to the original source, because the bug could also have been introduced during the rewriting.

Maybe take a look at the C++ Boost library. They want to write portable libraries for general use. They have written down guidelines (http://www.boost.org/more/lib_guide.htm#Guidelines) which include documentation and automatic testing. This way a consistent library collection is created. Because it is more and more widely used, there are now even books about Boost.

So I think if you want to create a pool of code, it must not be a place to just dump your code; it must be properly thought out from the beginning, and only accept code which meets certain requirements. That also means you have to spend extra time before submitting code. And time is always a problem. But once you save time because of code others have submitted, it should be worth it.
PaulCaswell (Author) commented:
It feels to me like we are moving towards Aspect Oriented Programming.

The concept here, if my understanding is correct, is that you write the code and then add 'aspects' to it, say error handling or a GUI. I am currently looking for an XML/HTML parser that I can use not only to pull data out of a file, but also to modify it and put it back. If it were written in some form of AOP language, I could probably bind the parts together far more easily.

Don't bother looking for one for me; there's far more I need from it than I can describe, and I think that is the core of the problem. I can describe several parts of what I want, but not the whole of it. It would probably take me as long to describe it unambiguously as to write it, and even then the description would only be of use as a reference, not as a search definition. In AOP you can obtain parts and put them together and, instead of the whole becoming the sum of its parts, like the bricks of a house, the whole becomes a true blend of its parts, like a gourmet meal.

Well, maybe, but if you want something more flexible you should try a dynamically-typed language like Smalltalk or Common Lisp. Another question still open for discussion: do you understand what will happen with aspects? How will you be able to find out which aspects are implemented?....

PaulCaswell (Author) commented:
Hi fridom,

I am not deeply knowledgeable about AOP but I understand the intent is to separate different aspects of the code into separate, replaceable parts. For example, error detection and handling or file I/O can be completely removed from the code.

I could then find my XML parser and add aspects that conform to my requirements, because the parser itself is EXACTLY an XML parser, no more. There is always so much hassle when you finally find the code you seek, only to discover that it requires another library, which needs yet another one, etc. With AOP there are no side effects to inclusion; it just folds in, takes over its task and nothing else.

AOP is first of all a new buzzword, albeit with a serious background. The "static programming" people have found that they need more flexibility. Their first tries (especially in the OO camp) were design patterns, many of which are a direct expression of the shortcomings of the popular C++ object model.

Now, these "patterns" are much less needed in languages which offer more flexibility.

Then they found out that they needed something like specializing on more than one parameter. Then they found another thing: objects are not extensible. If you have implemented a class, you can hardly reopen it; you cannot simply add new things, because that would break old things. For that, they started working on AOP.

So the road is IMHO quite clear (and interesting): they postulate "static type safety", but they are really looking to weaken it within their world view. Because the language they use does not permit easy extension, they must start working on giant frameworks to overcome the self-imposed limit.

I bet most of them feel they are working on "really advanced" stuff, and they are right: that is really advanced stuff for their view of the programming world. But others simply do not have these problems, because their tools allow easy extensibility.

So what they really do is reinvent a wheel which others solved ages ago. If you think AOP is the future, then they have simply got you. It sounds impressive and "revolutionary", but in the end it's a workaround for limits imposed on them by their tools.

I suggest you check out the work done by the Haskell community; you'll be surprised....

I suggest also to check

Another good read is "Essential COM", especially the first chapter.

PaulCaswell (Author) commented:
Hi Friedrich,

I had a look at haskell a while ago. You are right! They are doing some astonishing things!

I came across Corn a few months ago! http://corn.telefonia.pl/tutorial/index.html Again, a bit young for commercial use, and it will probably never emerge! Imagine an OOP language where classes can inherit cyclically! They define a boolean class where an object of type 'false' is merely an inverted object of type 'true', and vice versa. Take a peek at the tutorials; they'll mess with your head in a good way! :-)

I'm going back to look at Haskell again. I hope one day it comes out of the purely academic environment into the 'real' world.

Haskell is quite "real"; just check out darcs

and pugs:

A few other things worth checking:
Ocaml http://caml.inria.fr/
Mozart/Oz: http://www.mozart-oz.org/
Common Lisp

PaulCaswell (Author) commented:

We've covered the main topic tidily. We've had some side-discussions about portability and linguistics and several other interesting topics. Let's have one more look into the future.

Assume we do, finally, discover a way of properly taking what is in our minds and building it out of pre-fabricated parts such as sticks or bricks, rather than the houses of sand we build today. Where will the world of computers go?

I have to admit a little trepidation. It worries me somewhat that in a decade's time, the program I run to analyse the attobytes and femtobytes of data I will be working with may consist of millions of processes running on millions of processors around the world. The data transfer rate would have to be astronomical! Who will hold the purse-strings? How will terrorists 'use' this system? Will the system get near-sentient? Will we be negotiating with the Vodafones or the Googles, the communicators or the databases?

And who will the huffers and the puffers be?
