Solved

C# and C++

Posted on 2002-07-29
Medium Priority
290 Views
Last Modified: 2010-04-01
A rather general and tough question to answer, but...

I am trying to find information on the speed/performance of C++ vs. C#.

Does anyone have information related to this?

What about some sample code I can run in each language to test speed/performance for mathematical computations?

Please be as thorough as possible, and provide sample code or links to code if relevant.

Thank you....
Question by:loyaliser
3 Comments
 
LVL 22

Accepted Solution

by: ambience (earned 1200 total points)
ID: 7187587
The following are excerpts from the MSDN article "Microsoft .NET Framework Delivers the Platform for an Integrated, Service-Oriented Web".

I can't get the URL, so I am posting some related information from it; if you have MSDN you can easily read the full article.

The code generated by the C# compiler is MSIL/managed code.

"the code in managed PE files generated by the language compilers is not x86 machine code or machine code targeted to any specific CPU platform"

"MSIL instructions cannot be executed directly by today's host CPUs. So, the common language runtime engine must first compile the managed MSIL instructions into native CPU instructions".

"The need to compile MSIL into native code raises several questions. For example, should the common language runtime convert all of the file's MSIL code to CPU instructions at load time? The advantage of doing this is that the program runs very fast. However, the disadvantage is that compiling the MSIL requires time, and this would significantly prolong the application's initialization time. In addition, very rarely does a user cause an application to execute all of its code. If the common language runtime compiles all of the MSIL to CPU instructions, it is likely that a lot of time and memory would be wasted.
      The consensus is that it would be much more efficient to have the common language runtime compile the MSIL instructions as functions are being called. When the common language runtime loads a class type, it connects stub code to each method. When a method is called, the stub code directs program execution to the component of the common language runtime engine that is responsible for compiling the method's MSIL into native code. Since the MSIL is being compiled just-in-time (JIT), this component of the runtime is frequently referred to as a JIT compiler or JITter. Once the JIT compiler has compiled the MSIL, the method's stub is replaced with the address of the compiled code. Whenever this method is called in the future, the native code will just execute and the JIT compiler will not have to be involved in the process. As you can imagine, this boosts performance considerably.
      The common language runtime is slated to ship with two JIT compilers, the normal compiler and an economy compiler. The normal compiler examines a method's MSIL and optimizes the resulting native code just like the back end of a normal, unmanaged C/C++ compiler. The economy JIT compiler is typically used on machines where the cost of using memory and CPU cycles is high (such as many Windows CE-powered devices). The economy compiler simply replaces each MSIL instruction with its native code counterpart. As you can imagine, the economy compiler compiles code much faster than the normal compiler; however, the native code produced by the economy compiler is significantly less efficient. Even so, the economy compiler will still produce code that executes much faster than interpreted code.
      When the common language runtime is ported to a new CPU platform, Microsoft first creates the economy JITter for that platform. Since the economy JITter is relatively easy to implement, this allows .NET applications to run on the new CPU platform in a very short period of time. Once the economy JITter is complete, Microsoft then focuses its efforts on the normal JITter to improve application performance.
      When Microsoft ships the .NET common language runtime, the normal JITter will be the default on most machines. However, on machines with small footprints, like Windows CE-powered devices, the economy JITter will be the default because it requires less memory to run.
      In addition, the economy JITter supports code pitching. Code pitching is the ability for the common language runtime to discard a method's native code block, freeing up memory used by methods that haven't been executed in a while. Of course, when the common language runtime pitches a block of code, it replaces the method with a stub so that the JITter can regenerate the native code the next time the method is called.
      For those of you who are used to developing in low-level languages like C or C++, you're probably thinking about the performance ramifications of all this. After all, unmanaged code is compiled for a specific CPU platform, and when invoked the code can simply execute. In this managed environment, compiling the code is accomplished in two phases. First, the compiler passes over the source code, doing as much work as possible in producing MSIL. But then, in order to actually execute the code, the MSIL itself must be compiled into native CPU instructions at runtime, requiring that more memory and additional CPU time be allocated to do the work.
      Believe me, since I approached the runtime from a C/C++ background myself, I was quite skeptical and concerned about this additional overhead. The truth is, managed code does not execute as fast and does have more overhead than unmanaged code. However, Microsoft has done a lot of performance work to address these issues. In addition, I've spoken to many developers at Microsoft who truly believe that in the future managed code will actually offer better performance than unmanaged code. Here's why: when the JITter compiles the MSIL code at runtime, it knows more about the execution environment than the compiler knows. For example, the JITter can detect that the host CPU is a Pentium III and generate CPU instructions that take advantage of any performance enhancements that Intel has made to the Pentium III over the Pentium II or Pentium.
      In addition, the JITter may generate code that uses CPU registers that a compiler would normally avoid using. Or, a JITter may be able to detect that a variable always contains a specific value and can generate small, fast code that works solely because the JITter can make a runtime assumption. Furthermore, memory allocations are significantly faster than allocating memory via the Win32 HeapAlloc function. I will address this issue more fully in a future article.
      Microsoft plans to offer a tool, tentatively called PreJit.exe, that can compile an entire assembly to native code and save the result on disk. When the assembly is loaded the next time, this saved version is loaded and the application starts up faster as a result. Because this tool takes advantage of information about other assemblies that have already been loaded when PreJit is run, it is best used in a warm-up mode. You turn PreJit on, and as assemblies are loaded, they are compiled and saved. After your application has run for a sufficient length of time, you turn off PreJit, and from then on the application will start up more quickly. Microsoft uses a variation on this same technique to make the key assemblies (such as the base framework) that will ship with .NET start faster."
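
Not from the article, but here is a minimal sketch of the "compile on first call" behaviour described above. It assumes .NET 2.0 or later for System.Diagnostics.Stopwatch (on .NET 1.x you would have to time with DateTime instead), and the absolute numbers will vary from machine to machine; the point is only that the first call to Work pays the JIT cost and later calls run the already-compiled native code.

using System;
using System.Diagnostics;

class JitWarmupDemo
{
    static double Work(int n)
    {
        double sum = 0.0;
        for (int i = 1; i <= n; i++)
            sum += Math.Sqrt(i);
        return sum;
    }

    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        Work(1000);                               // first call: JIT compile + execute
        sw.Stop();
        Console.WriteLine("First call:  {0} ticks", sw.ElapsedTicks);

        sw = Stopwatch.StartNew();
        Work(1000);                               // second call: native code only
        sw.Stop();
        Console.WriteLine("Second call: {0} ticks", sw.ElapsedTicks);
    }
}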


Hope this helps
 
LVL 22

Expert Comment

by: ambience
ID: 7187593
 
LVL 9

Expert Comment

by: jasonclarke
ID: 7188122
For performance-critical mathematical work, I believe you are likely to suffer a big hit by coding in C#. The managed environment means there is significant overhead associated with the runtime.

For example, C# array accesses are subject to bounds checking, which can clearly be detrimental to performance in tight numeric loops.
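
A quick sketch of what that means in practice (class and variable names are made up for illustration): every out-of-range access is caught by the runtime, although for the canonical for-loop over array.Length the JIT can often prove the index is in range and drop the per-access check.

using System;

class BoundsCheckDemo
{
    static void Main()
    {
        double[] data = new double[1000];

        // Every element access is range-checked by the runtime.
        try
        {
            Console.WriteLine(data[1000]);        // one past the end
        }
        catch (IndexOutOfRangeException)
        {
            Console.WriteLine("out-of-range access caught by the runtime");
        }

        // For this canonical loop the JIT can often hoist or eliminate
        // the bounds check, so the cost is not always the worst case.
        double sum = 0.0;
        for (int i = 0; i < data.Length; i++)
            sum += data[i];
        Console.WriteLine(sum);
    }
}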

Another example is that you have very little control over how memory is managed in C#; in effect, you cannot easily control when or where objects are allocated and freed. In some situations this is a big problem.
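
Again just a sketch, not anything from the .NET docs: in C# you mainly control allocation by choosing between reference types (every instance is a separate, GC-tracked heap object) and value types (stored inline, e.g. contiguously in an array); GC.Collect is only a request, not real control over when memory is freed.

using System;

class PointClass  { public double X, Y; }    // each instance is a GC heap object
struct PointStruct { public double X, Y; }   // value type: stored inline

class MemoryDemo
{
    static void Main()
    {
        // One million separate heap objects for the garbage collector to track.
        PointClass[] refs = new PointClass[1000000];
        for (int i = 0; i < refs.Length; i++)
            refs[i] = new PointClass();

        // One contiguous block: the structs live inline in the array itself.
        PointStruct[] values = new PointStruct[1000000];

        GC.Collect();   // you can request a collection, but you cannot decide
                        // when or where an individual object gets freed
        Console.WriteLine("{0} {1}", refs.Length, values.Length);
    }
}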

But the only way to tell for sure is to try it with an example that is close to your real workload.
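
As a rough starting point for that kind of test, here is a simple C# harness (the iteration count is arbitrary; it assumes .NET 2.0+ for Stopwatch). Transliterate the same Kernel loop into C++ and time it with clock() or another high-resolution timer for a side-by-side number; compile both with optimizations on and ignore the first (JIT warm-up) run in the C# case.

using System;
using System.Diagnostics;

class MathBench
{
    const int N = 10000000;

    static double Kernel()
    {
        double sum = 0.0;
        for (int i = 1; i <= N; i++)
            sum += Math.Sin(i) * Math.Sqrt(i);
        return sum;
    }

    static void Main()
    {
        Kernel();                                 // warm-up: pay the JIT cost once

        Stopwatch sw = Stopwatch.StartNew();
        double result = Kernel();
        sw.Stop();

        Console.WriteLine("result  = {0}", result);
        Console.WriteLine("elapsed = {0} ms", sw.ElapsedMilliseconds);
    }
}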