10*1Gb vs 1*10Gb

I am trying to understand the difference between using 10*1Gb NICs vs 1*10Gb NIC. I hope someone will explain or post a link so I can read up on it.
My understanding: using 10 x 1Gb means each link sends packets at 1 gigabit per second, while a 10Gb link sends packets at 10 gigabits per second.
The Ethernet frame size is 1500 bytes (unless jumbo frames are used). In the time a 1Gb card puts one frame on the wire, a 10Gb card can send ten, because it serializes frames at ten times the rate.
At the end of the day, using 10*1Gb is slower than using 1*10Gb, am I correct?
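To make the frame arithmetic in the question concrete, here is a minimal sketch (assuming a standard 1500-byte payload and ignoring preamble and inter-frame gap) of how long one frame takes to serialize on each link speed:

```python
# Serialization time of a 1500-byte Ethernet payload on a 1 Gb/s
# link vs a 10 Gb/s link. Simplified: ignores preamble, FCS and
# the inter-frame gap.

FRAME_BITS = 1500 * 8  # 12,000 bits

def serialization_time_us(link_bps: float) -> float:
    """Time to put one frame on the wire, in microseconds."""
    return FRAME_BITS / link_bps * 1e6

t_1g = serialization_time_us(1e9)    # ~12 us per frame
t_10g = serialization_time_us(10e9)  # ~1.2 us per frame
print(t_1g, t_10g)
```

So per frame, the 10Gb card is ten times faster on the wire, which is the asker's intuition stated in numbers.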

"At the end of the day using 10*1GB is slower than using 1 *10GB, am i correct?"
Depends on other factors:

How much data throughput you get is related to various and sundry factors:

NIC connection to the system (PCI vs PCIe vs Northbridge, etc.).

HDD throughput.

Bus contention.

Layer 3/4 protocol and associated overhead.

Application efficiency (FTP vs. SMB/CIFS, etc.).

Frame size.

Packet size distribution (as it relates to total throughput efficiency).

Compression (hardware and software).

Buffer contention, windowing, etc.

Network infrastructure capacity and architecture (number of ports, backplane capacity, contention, etc.).

source link:

"The biggest benefit for 10Gb is not bandwidth, it's port consolidation, thus reducing total cost.”
source link:
The big difference is in the speed between 2 endpoints.

The aggregate bandwidth will be 10Gbit in both cases.

But in case of 10*1Gbit, the maximum speed between two endpoints will still be 1Gbit. You can have 10 connections to 10 different endpoints, each 1Gbit, but with an aggregate bandwidth of 10Gbit.

Using 1*10Gbit on both endpoints will allow a real throughput of 10Gbit.
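The distinction above (aggregate capacity vs per-flow speed) can be sketched as follows, under the common assumption that a single flow rides exactly one physical link:

```python
# Per-flow vs aggregate throughput for the two configurations.
# Assumption: one flow is pinned to one physical link, so a single
# flow can never exceed the speed of one member link.

def max_single_flow_gbps(links: list[float]) -> float:
    """Fastest rate one flow can achieve: the best single link."""
    return max(links)

def aggregate_gbps(links: list[float]) -> float:
    """Total capacity across all links (many flows in parallel)."""
    return sum(links)

ten_by_one = [1.0] * 10   # 10 x 1 Gb/s
one_by_ten = [10.0]       # 1 x 10 Gb/s

print(aggregate_gbps(ten_by_one), aggregate_gbps(one_by_ten))           # 10.0 10.0
print(max_single_flow_gbps(ten_by_one), max_single_flow_gbps(one_by_ten))  # 1.0 10.0
```

Same total capacity, but only the single 10Gb link gives one endpoint-to-endpoint transfer the full 10Gbit.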
"At the end of the day, using 10*1Gb is slower than using 1*10Gb, am I correct?"
The logic is a little messy, but if we ignore the other pieces in the chain and just measure throughput on 1*10Gb vs 10*1Gb (ignoring how the data reached the switch; the switch just forwards it), I think you are right, for the following reason.
By pure theoretical calculation it would be the same whether you use 10*1Gb or 1*10Gb, but the real-life problem is load balancing traffic across 10 network cards. Traffic is never balanced ideally, so with 10*1Gb some network paths will be used more than others, while 1*10Gb does not have that problem and can utilize the whole bandwidth.
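The imbalance described above can be simulated with a toy model: link aggregation pins each flow to one member link via a hash, and hashing is rarely perfectly even. The hash below is a stand-in for illustration, not a real switch's LACP distribution algorithm:

```python
# Toy simulation: 50 flows hash-distributed across 10 x 1 Gb/s links.
# Some links end up carrying more flows than others, so the busiest
# 1 Gb/s link saturates before the bundle reaches 10 Gb/s total.
import random
from collections import Counter

random.seed(42)
NUM_LINKS = 10
NUM_FLOWS = 50

# fake (src, dst) port pairs; a real switch hashes MAC/IP/port fields
flows = [(random.randrange(1 << 16), random.randrange(1 << 16))
         for _ in range(NUM_FLOWS)]

# toy per-flow hash pinning each flow to one member link
load = Counter((src * 31 + dst) % NUM_LINKS for src, dst in flows)

# flow counts per link, busiest first: the spread is uneven
print(sorted(load.values(), reverse=True))
```

A single 10Gb link has no such distribution step, which is why it can use its full bandwidth for any traffic mix.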

sara2000Author Commented:
Thank you for the explanations.