Network Architecture





Network architecture, also known as network design and methodology, is the design of a communication network. It is a framework for specifying a network's physical components, their functional organization and configuration, its operational principles and procedures, and the data formats used in its operation. In telecommunication, the specification of a network architecture may also include a detailed description of the products and services delivered via the network, as well as the rate and billing structures under which those services are compensated.

The impedance of an analog PDN differs from that of a digital PDN in many ways. Here’s what you need to know about each type of PDN.
This article is about building route-based site-to-site VPN tunnels with redundant routers in the data centre (hub) on Cisco CSR1000V routers running IOS XE. Four route-based IPsec VPN tunnels are configured across two CSR1000V routers acting as a redundant pair.
When you have a Wi-Fi network, you might want to isolate it as an untrusted network, since Wi-Fi is more vulnerable to attack, as is a guest network. You will still be able to manage the guest/Wi-Fi network from your own network. This is possible to do with an Edge router.

Software-defined infrastructure is the buzz these days and is gaining a lot of importance. With software-defined infrastructure, companies can be more agile and efficient. Gaining that agility and efficiency, however, requires a complete re-engineering of IT processes.

The adoption of software-defined data centres is also increasing because they offer rapid, cloud-like delivery of services to organisations. Another objective of organisations is to save money, which can be achieved by delivering IT services in a restructured and coordinated way. Infrastructure components and services are fully automated, triggered by business policies, integrated, and centrally managed for performance.

A software-defined data centre can track demand and respond automatically in no time by scaling up the appropriate resources. Vendors of software-defined solutions for compute, networking and storage predict results such as 55% opex savings and 75% capex savings. Software-defined data centre technology helps eliminate traditional data centre silos, and it builds on a server virtualisation infrastructure that has matured and improved in many medium and large companies.


Nothing is achieved simply by deploying something; additional work is required to realise the full benefits. Software-defined data centres likewise require robust re-engineering of IT processes to achieve cost savings, business agility and productivity gains. Let’s look at five strategies that will help in reaping the benefits of a successful software-defined data centre deployment.

Start Small

One of the biggest hurdles in deploying software-defined data centres is inertia. Many people wrongly assume that organisations must transform their entire data operations at once. This is not at all necessary.

Rather, begin with one small project, tied to a low-profile activity or service, that addresses a single aspect of the software-defined data centre environment: compute, storage or networking.

Shifting storage capacity, including a database service, is a seamless project to start with. Moving VMs live without causing any disruption can be achieved with technology such as VMware Live Migration. With this, the organisation can absorb the software-defined data centre approach and reap measurable benefits.

On the other hand, targeting an e-commerce website for a first experiment in a software-defined data centre can be risky. Such projects require multiple application services, such as shipping, inventory and order management, and therefore need solutions that enable compute, networking and software-defined cloud storage to work together seamlessly.

Any delay or failure in the new infrastructure, which is quite likely when something new is being implemented, will be highly visible to senior management, and top management does not like systems going down, especially revenue-generating ones. Starting with something small and non-mission-critical lets IT decision makers learn rapidly, refine their processes for the next project, and build software-defined data centre expertise without risk.

Necessary Skills

While deploying a software-defined data centre, the IT team needs people who understand systems orchestration and automation. Such skills are found in individuals who have worked closely with the business, with external service providers, or in cross-departmental roles.

It is important to recognise that software-defined data centre technologies are extremely vendor-specific. If you choose a Cisco solution, you will need people with Cisco networking expertise. It is far easier and less risky to have in-house skills matching your chosen platform than to hire another set of resources or retrofit people onto unfamiliar technology. Even with an excellent, highly skilled team, software-defined data centres require spending on training, development and support.

Evaluation of Vendor Contracts and Legacy Technologies

An intelligent IT leader will never replace all vendor relationships and systems just to deploy software-defined data centre capabilities. IT should rank vendors by business priority, based on purchasing power and long-term contracts, and align software-defined data centre purchases accordingly. Evaluating hardware's end-of-life status is also essential. If an organisation wishes to deploy a Cisco software-defined data centre but has HP networking infrastructure that is only two years old, choosing HP makes better sense.

Reconsideration of the IT Enterprise

With a software-defined data centre, silos are on the verge of extinction. In today's technology world it is very difficult to run IT as separate networking, storage, application and server groups, and for a software-defined data centre, maintaining silos is the real barrier. Data must flow freely, and a high level of coordination is required. Software-defined data centres surface more meaningful information from all their components, which can then be shared across IT for better management and decision making.

Roles also change over time. If a software-defined compute product such as VMware's is deployed, it will affect the network and storage groups. Those groups will have to deliver services based on virtualised infrastructure and on standards optimised for a software-defined data centre. It is therefore important to initiate change and to collaborate in new ways.

Deploy Metrics for Business

Performance monitoring has been done both manually and automatically for years, yet the resulting metrics often deliver little value to the business. Don't use too many metrics: an overload of metrics causes confusion, and no concrete conclusion can be drawn. Select a few metrics that yield clear, measurable conclusions and define success for the newly deployed infrastructure.

Metrics vary from project to project. Choose metrics that demonstrate how much more efficient and effective you can be in assisting users in the new software-defined data centre environment. Common metrics are speed of deployment, agility, the ability to shift storage assets with zero downtime, ease of use, user satisfaction, and total cost incurred.

There has been much conversation and debate about whether the software-defined data centre is a methodology or a technology. In truth, it is a combination of both. A software-defined data centre presupposes a new approach to delivering and managing IT, one based on collaboration, business prioritisation and speed.

This article will show how Aten was able to supply easy management and control for Artear's video walls and the wide range of display configurations in their newsroom.
In this article, we’ll look at how to deploy ProxySQL.

If you’re involved with your company’s wide area network (WAN), you’ve probably heard about SD-WANs. They’re the “boy wonder” of networking, ostensibly allowing companies to replace expensive MPLS lines with low-cost Internet access. But, are they worth the investment?  As someone who makes and sells SD-WANs for a living, I do love the technology. However, even I know that SD-WANs aren’t a fit for every company. Here, then, are five reasons from an SD-WAN insider why not to buy an SD-WAN.

You might not save as much money as you thought

Numerous surveys show that a driver, if not the major driver, for SD-WANs is a reduction in monthly bandwidth spending. Proponents point to the 90 percent price difference between MPLS and Internet bandwidth. You will reduce costs, but actual savings are often much more modest than the quoted 90 percent. Many locations will require dual fiber links for resiliency, increasing costs. Service provider management, an inherent part of any MPLS service, must be assumed by the enterprise with SD-WAN, which is another cost center. There are also security costs to calculate if branch offices are to use local Internet to improve cloud application performance.

So, where will cost savings come from? Depending on the SD-WAN selection, you can save the cost of replacing end-of-life routers at branch offices. Bandwidth costs will almost certainly reduce when replacing MPLS with Internet, unless you happen to be in a region where Internet availability is limited. SD-WANs offered by some Firewall-as-a-Service providers allow you to eliminate or reduce security as well as networking costs. You’ll also reduce your operational costs through the use of centralized configuration and management.
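Weighing those cost centers against each other is simple arithmetic. The sketch below models it in Python; every figure is purely an illustrative assumption, not vendor pricing.

```python
# Hypothetical per-site monthly WAN cost model. All dollar figures are
# made-up assumptions for illustration, not real MPLS or SD-WAN pricing.

def monthly_wan_cost(sites, circuit_cost, security_cost=0, mgmt_cost=0):
    """Total monthly cost across all sites."""
    return sites * (circuit_cost + security_cost + mgmt_cost)

sites = 20
mpls = monthly_wan_cost(sites, circuit_cost=1000)  # managed MPLS circuit
sdwan = monthly_wan_cost(sites,
                         circuit_cost=200,    # dual Internet links
                         security_cost=150,   # branch security services
                         mgmt_cost=100)       # self-managed overhead

savings = 1 - sdwan / mpls
print(f"MPLS: ${mpls}/mo  SD-WAN: ${sdwan}/mo  savings: {savings:.0%}")
```

With these assumed numbers the savings come out around 55 percent, well short of the 90 percent circuit-price gap, which is the point of the paragraph above.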

You might not be able to replace your MPLS networks

To be MPLS-free is the wish of every WAN manager, but there’s an excellent chance that with most SD-WANs, you’ll remain tied to the MPLS umbilical cord. Companies depending on latency-sensitive and loss-sensitive applications will not be able to deliver a consistent, quality experience, day in and day out, over the Internet. As I mentioned, routing dynamics and Internet economics give providers very little incentive to deliver the consistent latency and loss statistics needed by enterprise-grade applications. This is particularly true when delivering services in underserved areas or between Internet regions. For those applications, organizations should retain MPLS or replace it with another SLA-backed backbone.

It will not make everything faster

The quality of experience (QoE) of some applications will improve with an SD-WAN when compared with MPLS, but not for all applications. SD-WANs are not WAN optimization, which applies a variety of compression, caching and protocol optimization, as well as link correction techniques to improve application efficiency, reduce latency, and minimize loss. SD-WANs are about controlling the overall network; WAN optimization improves one path across the network. SD-WANs may include WAN optimization techniques, but that’s the exception -- not the rule.

All SD-WANs can help improve application performance in three ways:

  • Applications requiring a lot of throughput (think: data replication or backup) will benefit from SD-WAN’s ability to leverage high-bandwidth Internet links.
  • Cloud and Internet application performance will improve through direct Internet access (DIA) from the branch office, assuming a secured Internet connection is provided. With MPLS, by contrast, Internet traffic is commonly backhauled to a secured Internet portal, which can introduce significantly more latency into the connection through the so-called trombone effect.
  • Voice, video and other latency-sensitive applications in particular benefit from the SD-WAN’s ability to select the path with the least latency. Normally, Internet routing is application-agnostic, routing traffic based on a combination of hop count and peering economics. By contrast, SD-WANs monitor the characteristics of the underlying transports and use that information, along with policies describing business logic, to select the optimum path to a destination.
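The third point above can be sketched as a simple policy-based selection: measure each transport, filter by the application's loss policy, then pick the lowest latency. The transport names and measurements below are invented for illustration.

```python
# Sketch of SD-WAN path selection: choose the transport with the lowest
# measured latency that still satisfies the application's loss policy.
# Transports and their metrics are hypothetical examples.

def select_path(transports, max_loss_pct):
    eligible = [t for t in transports if t["loss_pct"] <= max_loss_pct]
    if not eligible:
        return None  # no transport meets policy; fall back or alert
    return min(eligible, key=lambda t: t["latency_ms"])

transports = [
    {"name": "mpls",      "latency_ms": 40, "loss_pct": 0.1},
    {"name": "broadband", "latency_ms": 25, "loss_pct": 1.5},
    {"name": "lte",       "latency_ms": 60, "loss_pct": 0.4},
]

# Voice policy: loss must stay under 1%, so broadband is excluded even
# though it has the lowest latency.
best = select_path(transports, max_loss_pct=1.0)
print(best["name"])  # mpls
```

A real controller would also weigh jitter, bandwidth, and business policy per application, but the filter-then-rank structure is the same.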


Networking will not become easy

SD-WANs go a long way toward making wide area networking more plug-and-play, but I don’t think anyone who’s deployed an SD-WAN will say it’s easy. Zero-touch deployment does make deployment far more rapid than configuring dozens of individual routers, but someone still needs to understand routing, policy configuration, network performance and more. Some vendors give you GUIs for those deployments, in which case large-scale deployments may be tedious. Other vendors rely on CLI, in which case you’ll certainly want to retain the expertise of a networking engineer. Adding the multi-tunnel overlay environment makes troubleshooting more challenging: now you need to worry not just about L3 and routing issues, but about the SD-WAN as well.


Security problems will not be solved

SD-WANs do not provide advanced security. They encrypt traffic, like any other VPN, which protects against wiretapping and man-in-the-middle attacks, but they provide none of the advanced security services needed to defend against malware penetration, advanced persistent threats and more. This is particularly important because SD-WANs rely on DIA to improve cloud and Internet performance. But direct internet access is only possible if those Internet connections can be secured against Internet-borne threats. You’ll still need to invest in IPS, malware protection, next generation firewall (NGFW) and other advanced security services, increasing the cost of an SD-WAN deployment.

As with any new technology, there are more than a few misconceptions around the value of SD-WANs. But there’s also real value to the technology around operational savings, end-to-end performance, and more. Understanding those benefits will help you get the most from your SD-WAN.

Microservice architecture adoption brings many advantages but can add complexity. Selecting the right orchestration tool for your specific business needs is critical.
The data center is nowadays referred to as the home of all advanced technologies. In fact, most businesses now build their entire organizational structure around their IT capabilities.
In the world of WAN, QoS is a pretty important topic for most, if not all, networks. Some WAN technologies have QoS mechanisms built in, but others, such as some L2 WANs, don't have QoS control in the provider cloud.
If you are thinking of adopting cloud services, or are just curious as to what ‘the cloud’ can offer, then the leader for Infrastructure as a Service (IaaS), according to Gartner, is Amazon Web Services (AWS). When I started using AWS I was completely new to the offering and I really didn’t know what it was, how it worked, or what to do once I had access. This article will cover some of the main points that may help you when you first start out using the services AWS provides.
The first thing you will want to do is create an account. AWS offers the ability to use some of its services free for a year, as long as usage falls within specific ‘free tier’ limits. I used my own personal account for self-study and learning, and I found these service limits perfectly fine for what I wanted to do and test. By default you will have access to ALL services AWS provides, and you will only be charged for services you use that fall outside the initial free tier.
The complete service limitations of the free tier can be found here. The main services featured, and the most common you will initially use, are:
Compute (EC2)
  • 750 hours worth of EC2 Compute Capacity for RHEL, Linux or SLES t2.micro instances
  • 750 hours worth of EC2 Compute Capacity for Windows t2.micro instances
As an example, you could run a single Windows instance constantly for one month, or two Windows instances for half a month, and so on.
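That arithmetic is easy to check: 750 hours exceeds even a 31-day month (744 hours), so one instance can run continuously. A quick sketch:

```python
# Free-tier arithmetic: 750 instance-hours per month. Even a 31-day month
# has only 31 * 24 = 744 hours, so one t2.micro can run non-stop.

HOURS_FREE = 750

def max_concurrent_instances(days_running, hours_free=HOURS_FREE):
    """How many instances can run non-stop for the given number of days
    without exceeding the free-tier hour allowance."""
    return hours_free // (days_running * 24)

print(max_concurrent_instances(31))  # 1 -> one instance, a full month
print(max_concurrent_instances(15))  # 2 -> two instances for half a month
```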
Creating an OSPF network that automatically (dynamically) reroutes network traffic over other connections to prevent network downtime.
Security is one of the biggest concerns when moving and migrating your data from your on-premise location to the Public Cloud.  Where is your data? Who can access it? Will it be safe from accidental deletion?  All of these questions and more are important, and AWS knows and addresses this. 
Because AWS is a global company deploying exactly the same services in all corners of the globe, it has had to set the highest level of security, conforming to the regulations of each country. As a result, someone who simply uses S3 to store personal photos gets the same level of security as a multi-million-dollar company requiring the most rigorous security controls.
AWS complies with a number of different security standards that can be found here.
When it comes to security, AWS operates a shared responsibility model. This means that security ‘of’ the cloud lies with AWS, and security ‘in’ the cloud lies with you, the user. To break this down a bit further: physical access to the data centres, Availability Zones, Regions, Edge Locations, compute, networking and storage is the responsibility of AWS. Your data and its encryption, the configuration of your VPC security (covering ACLs, Security Groups and IAM), patching of EC2 instances, and so on, are your responsibility.
More information on the Shared model can be found here.

Expert Comment

by:Jim Horn
Excellent article.  Voted Yes

Expert Comment

by:Maidine Fouad
Good. Perhaps as a security suggestion you should include "Amazon Billing Alerts".

Account credentials might one day be compromised; the credit card details are hidden, but not the ability to purchase extra instances, which hackers might abuse if they gain access.
AWS has developed and created its highly available global infrastructure, allowing users to deploy and manage their estates all across the world, through the use of the following geographical components:
  • Regions
  • Availability Zones
  • Edge Locations
When architecting and designing your infrastructure it’s important to know where your data is being stored and where your instances and services are located.  This is fundamental when designing and implementing a highly available and scalable network with low latency that abides by any data laws that may be in operation.
If you are studying for the AWS certifications it’s important to know the differences between Regions/Availability Zones and Edge Locations.

What is an AWS Region?

A Region is essentially just that: a geographic location that Amazon has selected to run and operate its Cloud services from. There are currently 12 different regions spanning the globe at key locations:
North American Regions
  • US East (Northern Virginia)
  • US West (Northern California)
  • US West (Oregon)
  • AWS GovCloud (US) – Reserved for Government agencies only
South American Regions
  • São Paulo
EMEA Regions
  • EU (Ireland)
  • EU (Frankfurt)
Asia Pacific Regions
  • Asia Pacific (Singapore)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Sydney)
  • Asia Pacific (Seoul)
  • China (Beijing) – Limited Public release

Expert Comment

by:prathap C
Hi Scott,

You have mentioned here that "many of the Edge Locations are located some distance away from some of the Regions". I can't get this point. I have a doubt: do Edge Locations come under a Region?

I have just now started to learn about cloud.

Thanks by,

Author Comment

by:Stuart Scott
Hi Prathap,

Thank you for your comment.  

Edge Locations are different from Regions, and as a result do not fall under 'Regions' as a location. To put the global infrastructure in its simplest form, the different elements can be described as follows:

- Availability Zones (AZs): These are essentially the physical data centers of AWS. This is where the actual compute, storage, network, and database resources are hosted.

- Regions: A Region is a collection of Availability Zones that are geographically located close to one another. This is generally indicated by AZs within the same city. Regions do not include Edge Locations, only AZs.

- Edge Locations: These are AWS sites deployed in major cities and highly populated areas across the globe and they far outnumber the number of availability zones available.  These are used to reduce latency to end users by using the AWS CDN service known as CloudFront.  You are unable to deploy your typical compute, storage, and database services in Edge Locations, the Edge Locations are reserved for simply reducing latency using CloudFront and Lambda@Edge services.

I hope this helps.


Hello to you all,

I hear many people congratulate AWS (Amazon Web Services) on how easy it is to spin up and create new EC2 (Elastic Compute Cloud) instances, but then they fail and struggle to connect to them using simple tools such as SSH (Secure Shell) and RDP (Remote Desktop Protocol), and their feelings quickly turn to frustration.

Depending on your deployment method, you may need to connect to your EC2 instances to perform additional configuration, install applications, or troubleshoot incidents that may occur. Without a working method of connecting to your EC2 instances, you lack full manageability of those hosts.

This article has been written to cover the most common configuration problems that prevent connectivity between you and your EC2 instance.

Default or Non Default VPC (Virtual Private Cloud)?

Default VPC: Every AWS account comes with a default VPC already created, which allows users to immediately deploy EC2 instances within it and connect to them. Simple, you may think, and you would be right: many of the AWS networking components have already been set up on your behalf, allowing you to connect to your instances with relative ease. However, these same pre-configured components take away some of the detailed design that your corporate infrastructure may require. The default VPC comes with a predefined IP CIDR (Classless Inter-Domain Routing) block assigned, which might not suit …
This article explores the design of a cache system that can improve the performance of a web site or web application.  The assumption is that the web site has many more “read” operations than “write” operations (this is commonly the case for informational sites) and for this reason, the site should be able to recognize repeated identical requests and return an immediate cached response, rather than going back to the database queries for the reformulation of the original response. 

The rationale for this strategy comes from recognition of the difference in speed between in-memory processes and disk-based processes.  While memory access is typically measured in nanoseconds, even a very fast disk spinning at 7200RPM requires 8.3 milliseconds for a single rotation, and the nature of file lookup or database operations is such that a great many disk rotations may be required for some queries.  Since the ratio of nanoseconds to milliseconds is several orders of magnitude, it follows that cache may produce substantial quantitative improvements in server performance.

Characteristics of a Cache
Popular cache systems include Memcached and Redis, and it is also possible to use the file system for cache storage, but in-memory systems will give the best performance.  All cache systems work in similar ways.  They are key:value data storage systems.  Access to a value in the cache is made …
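The key:value pattern the excerpt describes can be sketched with a plain dictionary plus a time-to-live; Memcached and Redis expose the same get/set idea, just over the network and in shared memory. This is an illustrative sketch under those assumptions, not a drop-in replacement for either system.

```python
import time

# Minimal in-memory key:value cache with a TTL, illustrating the pattern
# the article describes. Entries map key -> (expiry_timestamp, value).

class SimpleCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        self._store[key] = (time.time() + self.ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.time() > expiry:   # stale entry: evict and report a miss
            del self._store[key]
            return None
        return value

cache = SimpleCache(ttl_seconds=60)

def fetch_page(url):
    """Return a cached response for repeated identical requests."""
    page = cache.get(url)
    if page is None:               # cache miss: do the slow work once
        page = f"<html>rendered for {url}</html>"  # stand-in for DB queries
        cache.set(url, page)
    return page

fetch_page("/home")   # slow path, populates the cache
fetch_page("/home")   # served from memory, no database round trip
```

The TTL bounds how stale a cached response can be, which is the usual compromise for read-heavy informational sites where occasional staleness is acceptable.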
This article is a step-by-step guide to creating a basic PTP link using Ubiquiti airOS devices. This guide can be used on the following Ubiquiti AirMAX devices: Nanostation, Bullet, AirBridge, Nanobeam, and NanoBridge, to name a few. Please review all the AirMAX devices here. I will be focusing on the selected part of the diagram below for this guide, using two Ubiquiti Nanostation M2s. You can use this setup to create a link between office buildings up to 50 miles apart (depending on the device).


The factory default IP address for the device is and the subnet mask is (/24). Open Internet Explorer and connect to the device; if you are using one of the latest firmware versions, you will be redirected to HTTPS and you will see a privacy error page.
Note: You either need to be in the same IP address range, or you need to change your PC's IP configuration to a static address. Please follow the quickstart guide from Ubiquiti to get the device connected to your PC.

Let's get started.
  • Click on Advanced and then on Proceed to..


Next you will need to enter the default username and password, “ubnt” for both. In this guide I will be using Compliance Test for Country; please select your appropriate region.


The next screen that appears is the Main screen. On this screen you will see all active connections to your device, the firmware version, and the MAC address.

Author Comment

by:Dirk Mare
Thank You

Expert Comment

You are welcome.
I am interested in Ubiquiti and MikroTik devices. Are you?
There are times when you would like access to information that is only available from a different network. This network could be down the hall, or across the country. If each of the network sites has access to the internet, you can create a network bridge that connects the two networks together.

If all you want to do is connect to your desktop remotely, then this is not for you. You would be better off using one of the commercial options like TeamViewer or SplashTop.

However, if you need to give access to many individuals, or you need systems to be able to access other systems in a different network, like printers or file servers, then this might be what you need.

This article will not cover routing which is required to take full advantage of this network bridge.

The things that you will need for each site;
  • A spare computer system.
  • A copy of PFsense (I used version 2.1.5)
PFSense can be downloaded from

A word of warning: PFSense is designed to take full control of the computer it is installed on. The machine will not be usable for any other purpose.

PFSense Installation

This article will not cover installation instructions for PFSense. However, I will say that I chose the default installation options and configured only one network card. I also made sure that the option "Disable all packet Filtering" was checked. This is found under the System->Advanced->Firewall/NAT tab.

Imagine you have a shopping list of items you need to get at the grocery store. You have two options:
A. Take one trip to the grocery store and get everything you need for the week, or
B. Take multiple trips, buying an item at a time, to achieve the same feat.
Obviously, unless you are purposely trying to get out of the house, you’d choose “A”. But why do we so often choose “B” when it comes to our data transmission performance? The key metric here is efficiency. How many trips do you want to take?

MTU…says you need to buy Milk in 1 Gallon containers rather than by the ounce!

MTU is an acronym that stands for Maximum Transmission Unit: the single largest physical packet size, measured in bytes, that a network can transmit. If messages are larger than the specified MTU, they are broken up into separate, smaller packets (known as packet fragmentation), which slows overall transmission speeds, because instead of making one trip to the grocery store you are now making multiple trips to achieve the same feat. In other words, the MTU value defines the maximum length of a data unit a protocol can send in one trip without fragmentation occurring.
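To make the trips concrete, here is a rough sketch of how many packets a transfer needs at a given MTU. The 40-byte header figure assumes typical IPv4 plus TCP overhead; actual overhead varies with the encapsulation in use.

```python
import math

# How many packets ("trips") does a payload need at a given MTU?
# header_bytes assumes typical IPv4 (20) + TCP (20) overhead.

def packets_needed(payload_bytes, mtu=1500, header_bytes=40):
    data_per_packet = mtu - header_bytes  # usable payload per packet
    return math.ceil(payload_bytes / data_per_packet)

# One 64 KB transfer: a 1500-byte MTU needs far fewer trips than 576.
print(packets_needed(65536, mtu=1500))  # 45
print(packets_needed(65536, mtu=576))   # 123
```

Fewer, fuller packets mean less per-packet header overhead and fewer round trips, which is the efficiency argument the grocery analogy is making.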

Do I Really need to Manually Correct the MTU Value?

The correct MTU value helps you select the correct shopping cart size so that you are most efficient in your grocery shopping and don’t have to take multiple trips. Shouldn’t I just leave…

Expert Comment

by:Jason Shaw
Would changing the MTU on one side of a VPN tunnel cause any issues with the VPN?

Author Comment

by:Blue Street Tech
Hi Jason, I assume you are only changing it on one side of a VPN tunnel. If so, it would only benefit one side of the connection. So if that side is having the issues, it may remedy the problem; however, for greater efficacy I'd do both ends (they will most likely not have the same MTU).
This article is focused on eradicating the confusion around slash notation. It will help you identify and understand the purpose and use of slash notation. A deep understanding of this will help you identify networks quicker, especially when looking at route statements or access-list statements.

Slash notation indicates the number of network bits (the number of bits turned on) in a subnet mask. This is what defines your network range.

We will use the following, the most common Class C address range:
IP Address =
Subnet Mask =
Gateway =

In the above example, our slash notation is /24

Let me answer this by using the /27 notation to kill two birds with one stone. /27 in bits would be represented as
This is also a /32 notation.

Count the number of 1s from left to right, you will have 32 of them.

The common subnet mask people use (Class C) is shown below
Broken down into bits, it would be represented as
Count the number of 1s from the left; you will have 24 of them.
This gives you a slash notation of /24

Get the picture?
So /27 would look something like this
Now our subnet mask has changed in bit value, and we need to convert that to decimal.
One octet is a set of 8 bits.
When all turned on, they have this value
When all …
I wrote this article to help simplify the process of combining multiple subnets.
This can also be used for route summarization, though there are other, better ways to summarize routes.

This article is the result of questions I participate in here at Experts Exchange. This particular question is a practice test question posted at the following link.

I copied it here in case the link breaks years down the line

Question 2

Refer to the exhibit. The Lakeside Company has the internetwork in the exhibit. The Administrator would like to reduce the size of the routing table to the Central Router. Which partial routing table entry in the Central router represents a route summary that represents the LANs in Phoenix but no additional subnets?

A). /22 is subnetted, 1 subnet
        D [90/20514560] via 6w0d, serial 0/1

B.) /28 is subnetted, 1 subnet
        D [90/20514560] via 6w0d, serial 0/1

C.) /30 is subnetted, 1 subnet
        D [90/20514560] via 6w0d, serial 0/1

D.) /22 is subnetted, 1 subnet
        D [90/20514560] via 6w0d, serial 0/1

E.) /28 is subnetted, 1 subnet
        D [90/20514560] via 6w0d, serial 0/1

F.) /30 is subnetted, 1 subnet
        D [90/20514560] via 6w0d, serial 0/1


Answer: D


Expert Comment

by:Sandeep Udgirkar
Good article for a novice.
Auditors face some challenges when reviewing router and firewall configurations. I'm going to discuss a few of them in this article. My assumption is that a device hardening standard is in place, which points out the key elements of configuration. I am also assuming that configuration review is only a small part, and not the most important part, of the audit program (design assessment, change control, access control, etc. have to be done as well).
The first challenge is that auditors don’t have access to the devices, so they cannot pull the configuration files themselves. They have to ask network administrators to deliver configuration files to them. So how do auditors know that the configuration was collected from the devices in scope of the audit, and not from a Cisco simulator (GNS3/Dynamips) or from some vanilla firewall or router? Unfortunately, the only solution is to watch over the administrator’s shoulder. The other bad news could be change control. If there is no good change control mechanism in place (i.e. starting from change logging to Cisco ACS), the configuration file could change too often, and auditing it doesn’t make any sense. So I assume change control was audited before the configuration audit was started. The good news is that auditors can ask for the configuration more than once during the audit period, or they can audit more than once a year. I would recommend grepping for 'Cryptochecksum' (unfortunately this works only on ASA/PIX, not IOS) and retaining the results. …

Expert Comment

Thank you for sharing this information!
