Amazon Web Services (AWS) is a collection of remote computing services, also called web services, that make up a cloud-computing platform operated from 11 geographical regions across the world. The most central and well-known of these services are Amazon Elastic Compute Cloud ("EC2") and Amazon Simple Storage Service ("S3"). Other services include Elastic MapReduce (EMR), Route 53 (a highly available and scalable Domain Name System (DNS) web service), Virtual Private Cloud (VPC), and a range of storage, database, deployment, and application services.


My article is about how we can achieve High Availability (HA) across Data Centers (DCs) located in multiple geographic locations.


We know that IT is the backbone of every business. It is important for critical business applications to keep running around the clock without any downtime, and companies invest a lot in IT to make this possible for their applications.


Under normal circumstances, within a single data center, multiple firewalls, routers, and load balancers are used to provide HA for a server or application. But how can we achieve HA across multiple data centers?


Well, the answer is a Global Server Load Balancer (GSLB).


A GSLB is set up in multiple DCs as an endpoint service. It provides HA, disaster recovery, and business continuity, with data synced via replication among the DCs. A GSLB continuously monitors the health of each DC endpoint and redirects client traffic to a healthy site. The client’s DNS query, coupled with their geo-location, is used to direct traffic to the most optimal site. GSLB uses DNS because DNS resolution is a good indicator of network latency.
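The selection logic described above can be sketched in a few lines. This is a toy illustration, not a real GSLB implementation; the data-center names, health flags, and latency figures are all hypothetical.

```python
# Minimal sketch of GSLB endpoint selection: route the client to the
# healthy data center with the lowest measured latency.

def pick_endpoint(endpoints):
    """endpoints: list of dicts with 'name', 'healthy', 'latency_ms'."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy data center available")
    return min(healthy, key=lambda e: e["latency_ms"])["name"]

dcs = [
    {"name": "dc-east", "healthy": True,  "latency_ms": 40},
    {"name": "dc-west", "healthy": False, "latency_ms": 15},  # failed health check
    {"name": "dc-eu",   "healthy": True,  "latency_ms": 90},
]
print(pick_endpoint(dcs))  # -> dc-east
```

A real GSLB folds the client's resolver location into the latency estimate, but the health-check-then-nearest decision is the core idea.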


Many companies provide GSLB products, such as F5 and NGINX. There are also cloud services, such as AWS Global Accelerator and Azure Traffic Manager, that provide a GSLB solution for businesses that find an on-site GSLB installation cost-prohibitive.

The best part of a GSLB provided by cloud vendors like Microsoft Azure and Amazon AWS is that they offer GSLB not only across their own cloud regions but also across on-premises DCs, or any combination thereof. And secondly, there is no need to perform a DNS flip manually.


Hope you found this article interesting and beneficial.


Thanks for reading.


Ravi Kumar Atrey

Or at least that’s the word according to a new blog from Tech Target on AWS’s new Managed Services (MS) offering. According to the blog, AWS is launching their AWS MS program to expedite the adoption of cloud by Fortune 1000 and Global 2000 companies. The article published last week notes AWS’ belief that companies want:

[T]o add additional automation, make use of standard components that can be used more than once, and to relieve their staff of as many routine operational duties as possible.

Further explanation is provided in AWS’ announcement of their new product, which they claim is designed to take over system monitoring, incident management, change control, provisioning, and patch management. Indeed, these are usually functions that fall under the auspices of IT Ops. And as the Tech Target article goes on to note:

After all of this, the only ones left standing could be application developers, despite — or thanks to — Amazon’s vast array of development tools.

So, if we follow AWS’s logic, we might think that they have sunk their claws into the whole IT management life-cycle. The question then becomes, has AWS set the stakes for IT management to meet its maker? Ummmm, not so fast, Cowboy.

One cloud to rule them all

The first thing to note in reading the Tech Target blog
Monitoring systems evolution, cloud technology benefits, and the business utility of cloud cost calculators.

Expert Comment

by:Steve Alder
Good article. Cloud cost calculators can certainly help to keep costs down to a minimum. Something else worth considering to keep your cloud costs down is a parking service, so you are not paying for compute resources when you are not using them. I'm sure there are a few options out there, but ParkMyCloud is certainly a good one.
Microservice architecture adoption brings many advantages but can add intricacy. Selecting the right orchestration tool for your specific business needs is most important.
If you are thinking of adopting cloud services, or just curious as to what ‘the cloud’ can offer then the leader according to Gartner for Infrastructure as a Service (IaaS) is Amazon Web Services (AWS).  When I started using AWS I was completely new to the offering and I really didn’t know what it was, how it worked or what to do once I had access.  This article will cover some of the main points to be aware of that may help you when you first start out using the Services they provide.
The first thing you will want to do is create an account. AWS offers the ability to use some of its services for free for a year, as long as usage falls within the specific ‘free tier’ limits.  I used my own personal account for self-study and learning, and I found these service limits to be perfectly fine for what I wanted to do and test.  By default you will have access to ALL services that AWS provides, and you will only be charged for any usage that falls outside of the initial free tier.
The complete service limitations on what the free tier offers can be found here.  The main services that feature within this and the most common that you will initially use are:
Compute (EC2)
  • 750 hours worth of EC2 Compute Capacity for RHEL, Linux or SLES t2.micro instances
  • 750 hours worth of EC2 Compute Capacity for Windows t2.micro instances
As an example, you could run a single Windows instance constantly for 1 month, or 2 Windows instances for half a month, etc.
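The free-tier arithmetic above is easy to sanity-check. A small sketch (the helper name is my own):

```python
# The 750-hour monthly allowance is shared across all running instances
# of the eligible type, so hours multiply by instance count.

FREE_TIER_HOURS = 750

def hours_used(instance_count, days, hours_per_day=24):
    return instance_count * days * hours_per_day

print(hours_used(1, 31))                     # -> 744 (one instance, all month)
print(hours_used(2, 15))                     # -> 720 (two instances, half a month)
print(hours_used(2, 16) <= FREE_TIER_HOURS)  # -> False (just over the allowance)
```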
AWS Glacier is Amazon's cheapest storage option and is their answer to a ‘cold’ storage service.  Customers primarily use this service for archival purposes and for storing infrastructure backups.  Its unlimited storage potential and low storage cost make it a popular storage choice.
What can sometimes be overlooked are the retrieval costs of your data; depending on how much you retrieve and over what time period, these can make a huge difference.  This article will cover these costs and help you understand the considerations of data retrieval.  It's important to note that AWS prices are exclusive of any GST or VAT chargeable (more information on these fees can be found here.)
Retrieval Costs
When you find yourself in a situation where you need to retrieve data from Glacier you need to understand some of the costs to ensure you can complete it in the most cost effective way. 
You will probably be aware that AWS boasts a retrieval rate of $0.01 per GB; however this is not as simple to calculate as it seems.
It’s important to note that you can retrieve up to 5% of your total AWS Glacier storage for free each month, monitored on a pro rata daily basis.  Therefore if, for example, you have 15TB of data stored on Glacier, you could retrieve 25GB a day for free (15TB * 5% (0.05) / 30 (days) = 25GB).  25GB would be your daily free allowance.
Any data retrieved over this amount per day will be chargeable based upon other …
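The pro-rata allowance above can be written as a small helper (assuming 1 TB = 1,000 GB and a 30-day month, as the worked example does; the function name is my own):

```python
# Daily free-retrieval allowance for Glacier, per the 5% monthly figure above.

def daily_free_allowance_gb(stored_tb, free_pct=0.05, days=30):
    return stored_tb * 1000 * free_pct / days

print(round(daily_free_allowance_gb(15), 2))  # -> 25.0 (matches the 15TB example)
```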

Expert Comment

by:Jim Horn
Well written and illustrated.  Voting Yes.
Security is one of the biggest concerns when moving and migrating your data from your on-premise location to the Public Cloud.  Where is your data? Who can access it? Will it be safe from accidental deletion?  All of these questions and more are important, and AWS knows and addresses this. 
Due to AWS being a global company deploying exactly the same services in all corners of the globe, it has had to set the highest level of security, conforming to the regulations of each country in which it operates.  As a result, someone who is simply using S3 to store their personal photos gets the same level of security as a multi-million dollar company that requires the most rigorous of security regulations.
AWS complies with a number of different security standards that can be found here.
When it comes to Security, AWS operates within a shared responsibility model.  This means that the security ‘of’ the Cloud lies with AWS, and the security ‘in’ the Cloud lies with you, the user.  To break this down a bit further: physical access to the Data Centres, Availability Zones, Regions, Edge Locations, Compute, Networking, and Storage is the responsibility of AWS.  Your data and its encryption, the configuration of your VPC security (covering ACLs, Security Groups, and IAM), patching of EC2 instances, etc., is your responsibility.
More information on the Shared model can be found here.
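The split described above can be summarised in a simple lookup. The layer names here are illustrative shorthand, not an official AWS taxonomy:

```python
# Shared responsibility model: who owns which layer (illustrative names).

RESPONSIBILITY = {
    "physical data centres":          "AWS",
    "compute, network and storage":   "AWS",
    "guest OS patching":              "customer",
    "security groups and ACLs":       "customer",
    "IAM configuration":              "customer",
    "data and its encryption":        "customer",
}

def owner(layer):
    return RESPONSIBILITY[layer]

print(owner("guest OS patching"))  # -> customer
```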

Expert Comment

by:Jim Horn
Excellent article.  Voted Yes

Expert Comment

by:Maidine Fouad
Good. Perhaps as a security suggestion you should include "Amazon Billing Alerts".

The account credentials might one day be compromised; the credit card details are hidden, but not the ability to purchase extra instances, which hackers might abuse if they get access.

Review of Cloudberry Explorer Pro - Linked with Amazon Web Services (AWS) S3

Being a Cloud technology fan and in particular services that are provided by AWS, I was interested to find a growing number of vendors writing software and applications that claim to blend and mesh with AWS services providing enhanced functionality and a better user experience. 

One of these vendors within this industry is Cloudberry Lab, who are an Advanced Technology Partner with AWS.  Looking into their products further, I could see they heavily centred their solutions on Backup and Storage integration with a number of cloud providers.  I was specifically interested in their products that aligned with the AWS S3 Service.
Following an overview of their range of products, I decided to download Cloudberry Explorer Pro for S3.  It claimed that "Explorer for Amazon S3 provides a user interface to Amazon S3 accounts, allowing you to access, move and manage files across your local storage and S3 buckets".  It also boasted a number of additional features, many of which are listed below:

Some of these I would find more useful than others, but I ultimately wanted to see how this software would change my experience of using S3 from my local PC when moving and transferring data into and out of S3.  I was keen to use features such as the Compare and Sync …

Expert Comment

by:Jim Horn
Nicely done.  Voted Yes.

Expert Comment

This program is very far from being a good tool.
It seems to have been written by people who have never used a commander-type file manager.

Just a few of the many annoyances:

- Usual keyboard shortcuts either don't work at all or don't work as they should (just hit Tab and see if you can find your cursor!)
- Directory comparison opens a new view where everything is different
- You have to click a tiny button to refresh a view; the program is unable to detect changes either locally or remotely.  Ctrl-R of course does not work.
- You can try and ask them to fix/implement things, but there's seemingly no development of this program anymore.

The one redeeming feature: it can copy between two S3 buckets directly.
This article provides a guide on how to optimise your costs within your AWS infrastructure when using some of the common services such as EC2, EBS, S3, Glacier, CloudFront, EIP & ELB.

Expert Comment

by:Jay Mukoja
This greatly helped, as I am in the process of considering a migration to AWS.

Author Comment

by:Stuart Scott
Hi Jay,

Thank you for your comments!

As you are in the middle of potentially migrating to the public cloud (AWS in particular), the following article that I wrote a few weeks ago may also be of interest to you.


Checking the Alert Log in AWS RDS Oracle can be a pain through their user interface.  I made a script to download the Alert Log, look for errors, and email me the trace files.  In this article I'll describe what I did and share my script.
AWS has developed its highly available global infrastructure to allow users to deploy and manage their estates all across the world through the use of the following geographical components:
  • Regions
  • Availability Zones
  • Edge Locations
When architecting and designing your infrastructure it’s important to know where your data is being stored and where your instances and services are located.  This is fundamental when designing and implementing a highly available and scalable network with low latency that abides by any data laws that may be in operation.
If you are studying for the AWS certifications it’s important to know the differences between Regions/Availability Zones and Edge Locations.

What is an AWS Region?

A Region is essentially just that: a geographic location that Amazon has selected to run and operate its Cloud services from.  There are currently 12 different regions spanning the globe at key locations:
North American Regions
  • US East (Northern Virginia)
  • US West (Northern California)
  • US West (Oregon)
  • AWS GovCloud (US) – Reserved for Government agencies only
South American Regions
  • São Paulo
EMEA Regions
  • EU (Ireland)
  • EU (Frankfurt)
Asia Pacific Regions
  • Asia Pacific (Singapore)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Sydney)
  • Asia Pacific (Seoul)
  • China (Beijing) – Limited Public release
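For reference, the regions listed above map to the following API region codes. This is a static snapshot matching the list at the time of writing; in practice you would query the current set at runtime (for example with boto3's `ec2.describe_regions()`):

```python
# The 12 regions above, keyed by their console names, with API region codes.

REGIONS = {
    "US East (Northern Virginia)":   "us-east-1",
    "US West (Northern California)": "us-west-1",
    "US West (Oregon)":              "us-west-2",
    "AWS GovCloud (US)":             "us-gov-west-1",
    "South America (São Paulo)":     "sa-east-1",
    "EU (Ireland)":                  "eu-west-1",
    "EU (Frankfurt)":                "eu-central-1",
    "Asia Pacific (Singapore)":      "ap-southeast-1",
    "Asia Pacific (Tokyo)":          "ap-northeast-1",
    "Asia Pacific (Sydney)":         "ap-southeast-2",
    "Asia Pacific (Seoul)":          "ap-northeast-2",
    "China (Beijing)":               "cn-north-1",
}

print(len(REGIONS))  # -> 12
```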

Expert Comment

by:prathap C
Hi Scott,

You have mentioned here that "many of the Edge Locations are located some distance away from some of the Regions", and I can't get this point. I have a doubt: do Edge Locations come under a Region?

I have just now started to learn about cloud.

Thanks,

Author Comment

by:Stuart Scott
Hi Prathap,

Thank you for your comment.  

Edge Locations are different from Regions, and as a result do not fall under 'Regions' as a location. To put the global infrastructure in its simplest form, the different elements can be described as follows:

- Availability Zones (AZs): These are essentially the physical data centers of AWS. This is where the actual compute, storage, network, and database resources are hosted.

- Regions: A Region is a collection of Availability Zones that are geographically located close to one another, generally indicated by AZs within the same city.  Regions do not include Edge Locations, only AZs.

- Edge Locations: These are AWS sites deployed in major cities and highly populated areas across the globe, and they far outnumber the Availability Zones.  They are used to reduce latency to end users via the AWS CDN service known as CloudFront.  You are unable to deploy your typical compute, storage, and database services in Edge Locations; they are reserved for reducing latency using the CloudFront and Lambda@Edge services.

I hope this helps.



With the spotlight very much on Cloud technology within the IT industry, it’s difficult to avoid the topic these days. The constant flood of new information, and the added pressure and focus on cloud migration, is driving corporations to investigate what ‘the cloud’ actually is and to discuss whether they should utilise this technology as a potential gain to their business.


As this understanding and knowledge of the Cloud grows, many Senior Directors and CTOs are taking a different look at their own infrastructure estate with one question hanging over their heads: Should we consider migrating some/all of our services to the Cloud? 

This article is your guide when asking that very same question.


There are many pieces to consider and some of them will be more applicable to others depending on your line of business, your company size, and your strategy. However, all items should be considered before making a decision to ensure you have sufficient information to deliver a successful Cloud infrastructure environment for you and your customers.

Here are the questions to ask yourself when deciding whether or not to migrate to the cloud:


1. Do you need the Cloud?


The first question you should ask yourself is “Do I need the cloud?”  Before answering this question you need to have an understanding of what the Cloud can provide you and your business, its features and benefits, its potential gains, and its risks and restrictions. You need to have a grasp of all of these elements and also understand your current on-premise infrastructure estate to be able to make a comparison of the two to ascertain the full scope of benefits to you.


What is driving you to take a leap into the Cloud and harness what it has to offer? Is it a strategic reason? Financial? Something else? 

A key aspect and draw of the Cloud is its huge potential for cost savings. One main reason for this is that it removes the need to spend CAPEX (Capital Expenditure: funds used by an organisation to purchase or upgrade physical assets) on your own hardware and all the associated costs, such as provisioning, power, and cooling. Also, Cloud services are typically billed like utilities, in that you only pay for what you use when you use it.  However, be sure to understand ALL of the Cloud costs and how they relate to you and your business.  For example, AWS charges for all outbound data transfer leaving the AWS environment; if your business processes large amounts of data that is then sent externally, this is something that needs closer scrutiny.
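The outbound-transfer point above is worth a back-of-envelope check before committing. The per-GB rate below is purely illustrative; real egress pricing is tiered by volume and region, so check your provider's current price list:

```python
# Rough monthly egress cost estimate. The flat $0.09/GB rate is hypothetical.

EGRESS_RATE_PER_GB = 0.09

def monthly_egress_cost(gb_out):
    return round(gb_out * EGRESS_RATE_PER_GB, 2)

# A business pushing 10 TB (10,000 GB) out of the cloud per month:
print(monthly_egress_cost(10_000))  # -> 900.0
```

Even at an illustrative rate, a data-heavy workload can add hundreds of dollars a month that an on-premise comparison would miss.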


2. What do you want to migrate to the Cloud?


There are so many different service options provided by public cloud vendors, it’s important to understand what it is exactly you want to migrate to a Cloud environment before investing. Do you want to migrate your entire on-premise Data Centre estate to the Cloud, or just your Web services? Perhaps you only want to use the Cloud for its endless Storage capabilities or as a DR function. Maybe you wish to create a testing and development environment that allows you to spin up and shut down instances on demand, preventing you from having to spend vast amounts of CAPEX on hardware within your own Data Centre. Whatever the reason, the use cases for migrating services to the Cloud are endless and it’s important to have a clear understanding of what it is you need to migrate to the Cloud to make the best use of its power and benefits.


Additional care and attention should be taken when looking at storing confidential information in the Cloud, to ensure there are sufficient security controls in place and that it meets your current stringent security standards, as well as any certifications such as ISO/IEC 27001.

3. Will your applications and services benefit from the Cloud?


You may have services and applications that were written ‘in house’ and may not fully support many successful and important elements of Cloud technology. For example, elasticity and scalability may not be supported by a custom-built application. Your application may not be able to function in a decoupled environment where your services work independently of each other (AWS is very good at implementing a decoupled environment, allowing elements of the infrastructure to fail without adverse implications for other aspects of your application). Can your applications benefit from the on-demand element of the Cloud?

It is up to you to ensure that what you want to achieve from the Cloud can be achieved with the services and applications you want to migrate. As a result, you may have to spend time and money to redesign your application to work as expected after migration. 


4. Is this financially a good idea?


Your reasons to move to the cloud may not be financial, however, the cost of this step is still something that needs to have your consideration. You must ensure you have a full understanding of all the costs that the Cloud services will incur. These include the operational costs, such as the services used themselves, the impact of potential change of bandwidth and data transfer, not to mention the costs of educational training for your employees on how to monitor, manage and control the services. If there is a requirement to redevelop existing applications to make sure they mesh into the Cloud environment, then this must also be factored into your decision. 

However, with these costs in mind, there are considerable savings to be had by utilising Cloud services:


  • The Cloud's natural economy of scale - The Cloud itself brings a huge financial benefit that is hard to beat through its use of economies of scale. Cloud providers purchase vast amounts of hardware to deploy globally throughout their data centres. This huge-scale operation allows the provider to pass massive cost reductions onto their consumers, ultimately providing a great saving from the word “go” on Compute and Storage alone.
  • CAPEX reduction - As noted earlier in this article, you do not need to house and provision your own hardware and its associated Data Centre Costs (Power and Cooling - which alone can be 40-50% of the total IT Cost in a DC). In addition to this, depreciation of your own hardware no longer poses a problem.
  • No need to provision hardware for DR - By utilising the Cloud, you can easily plan for disaster recovery without ever having to purchase additional hardware that may never be used, which presents a huge cost saving. Simply configure and architect your infrastructure to spin up another host when you need it, and only pay for it when doing so.
  • No hardware maintenance costs - Without the need to manage your own hardware,  you will no longer need that costly hardware maintenance contract. All hardware failures, upgrades, and maintenance are performed by the Public Cloud vendor. With this in mind, you must architect your estate to cope with such failures.
  • Optimisation of hardware - Static hosts within your on-premise Data Centre very rarely run at full capacity, due to over-provisioning for future capacity management, and as a result cost you significantly more than you actually need to pay out. Utilising the scalability and elasticity of the Cloud (with its ability to resize your Compute and Storage requirements at will and with ease) enables you to consistently optimise for your actual requirements, ensuring your estate is kept efficient and streamlined.
  • Reduced implementation time - There are many ways to deploy Cloud infrastructure that can drastically decrease the time to deployment. Your whole Cloud infrastructure, containing hundreds of instances, security, and load balancers within a highly available architecture, can be deployed via a script with just a few clicks. One example of this is using CloudFormation within AWS. You can have as much compute and storage as you need at your fingertips, available in just a few minutes rather than the days or weeks it might take to have everything operational if provisioned yourself on-premise.
  • Pay as you go - Most services within the Cloud are charged only on what you use and for the time you use it. If you stop your instances, you stop paying. If you delete your storage, you are no longer charged for it. If your DNS service handles a given number of requests, you are charged only for those requests.
  • Reduced Licensing costs - Many cloud providers include the cost of all software licensing deployed with the instance type in their rates. This can save you a significant amount of money if you only need to use these services for a short period of time.
  • Flexibility - Utilising varied sized instances (Compute) within the cloud for short periods of time as and when you need them for testing, allows for great flexibility that would not be possible in a standard in-house scenario. If your instance isn't big enough for your application, with a simple configuration change, you can increase compute/storage capacity.
  • Staff reduction - From a purely business perspective, migrating services to the cloud often means a reduction of staff required within your organization. Some of the job functions are performed and incorporated by the Cloud providers themselves which often works out as a reduced cost in labor.
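The pay-as-you-go point above is the easiest to quantify: you are billed only for the hours an instance actually runs. A minimal sketch, with a purely hypothetical on-demand rate:

```python
# Pay-as-you-go compute billing. The $0.10/hour rate is illustrative only.

HOURLY_RATE = 0.10

def compute_bill(hours_run):
    return round(hours_run * HOURLY_RATE, 2)

print(compute_bill(160))      # -> 16.0 (business hours: ~40 h/week over a month)
print(compute_bill(24 * 30))  # -> 72.0 (the same instance left running 24/7)
```

Stopping an instance outside business hours cuts the bill in proportion, which is exactly the optimisation a static on-premise host can never offer.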

For the service(s) you are migrating, check that the move makes good financial sense compared with your current mode of operations. If not, communicate this to your Project sponsor and other key stakeholders in the migration, to ensure it is highlighted at the right level of seniority and not overlooked.


5. What vendor should I choose?

There are many public cloud vendors in the marketplace. According to Gartner’s May 18, 2015 “Magic Quadrant for Cloud Infrastructure as a Service, Worldwide”, Amazon Web Services (AWS ) and Microsoft are the only vendors positioned in the Leaders’ Quadrant.


Many vendors are trying to get a segment of the Public Cloud market share, and each of these vendors offers and proposes different solutions and services that may suit your needs more than others. Carefully research each to identify who has the best-suited services and tools to meet your business requirements; this may not necessarily be those positioned in the Leaders’ Quadrant (Amazon Web Services and Microsoft).


It is also important to review the SLAs that the vendors offer for each of the services that you will be using. What impact does this have on your current customer contracts? Do the vendors offer any kind of compensation should they breach their SLA? Remember to drill down on the agreement levels for each service and offering you will be utilising to ensure it doesn’t represent issues and problems with your current customer base.


6. What type of Cloud service do you need?


Typically Cloud services are delivered as 3 different models:


  • SaaS (Software as a Service) - This provides the lowest level of customisation and delivers a service that is typically accessed through a front-end portal, such as a web browser. All management of the underlying hardware and operating system is owned by the Cloud vendor, so administration of the system is minimal, as you do not have access to these elements. All users see the same version of the software: anyone can access the application, and everyone sees the same front end.
  • PaaS (Platform as a Service) - This allows users and developers to create, manage, and build applications on top of a managed platform/operating system. Again, the hardware and operating system are managed by the vendor, so you don't need to worry about OS patch updates, as this is taken care of under this model. You can, however, develop and customise your own applications on top of the OS, giving greater flexibility than SaaS.
  • IaaS (Infrastructure as a Service) - This provides the greatest level of customisation to you, as you have full control over the host from the Operating System upwards. You can deploy whichever OS you need, and anything else on top of that stack. However, this also means you are responsible for all patching from the OS level up, too, though even with this model, the hardware is still managed by the Cloud vendor.

You need to be clear on what model you need. You might currently own and control everything to do with your infrastructure on-premise, but perhaps as a part of your migration, you could optimise how much you control and move towards more of a PaaS offering, offloading some of the OS patching responsibility to the Cloud vendor and giving you more time to concentrate on your application development. It is a good idea to use this migration as a means of optimising your current mode of operations; Different offerings could benefit you in more than one way.  When migrating, you need to employ a different mindset as moving like-for-like infrastructure is not necessarily the best way to deploy your environment.


When utilising Public Cloud providers, it is likely that you will not want to migrate ALL of your estate to the cloud. Instead, you want the Cloud to be seamlessly linked to your own corporate network as an extension, also known as a Hybrid Cloud environment. Look at the differences between Public and Hybrid Cloud environments as they each offer advantages.


7. Where will your data be stored?


It is important to know where your data is stored with your chosen public provider. You may have to adhere to data laws whereby your data must remain within a certain geographic location. Ensuring you have control over your data, its storage location, and any backup replication service that takes place is crucial in this instance. 


Keeping your data as close as possible to its users will result in low-latency access, so this is an important factor when architecting your network. Maintaining a strong understanding of where your data is stored and replicated is critical when designing a resilient, highly available environment that can withstand a major disaster.


8. Can I use the migration to re-invent?


When making a change in your infrastructure as huge as migrating one or more services to the Cloud, it generates a fantastic opportunity that can be rare in some environments: it allows you to re-invent your current mode of operations. You have a chance to implement new compute and storage power for your systems, a new network design, and a slicker, more defined environment.

Do not always try to implement on a ‘like-for-like’ basis what you have on-premise. That environment has probably grown and been added to over the years, and will likely not be as efficient as it could be. Learn from previous mistakes and look to mitigate any risks you have in your current infrastructure. Look at your end goal, look at the tools available, and look at the services being offered and how you can utilise their benefits to your gain. Use this chance to implement an efficient, secure, defined environment that is fully controlled and monitored, making full use of the vendor’s Cloud features.


9. Do you have the right resources?


One key element of a successful Cloud migration is ensuring you have the right skill set to identify, analyse, design, implement, and manage the migration. Depending on your requirements, it can be a fairly pain-free exercise to move some services to the Cloud; however, have that service and migration been optimised to get the most out of the Cloud? Has the right instance type been used? Has it been designed to ‘self-heal’ should a problem occur? Has it been configured with the highest level of security possible at both the network level and the instance level?

It’s one thing to implement a Cloud offering, but it’s entirely another to architect your infrastructure as a self-healing, fault-tolerant, highly available, and resilient environment that automatically scales up and down in reaction to traffic demand, while also performing self-monitoring and notifying support teams of events as and when required.
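
The "scales up and down in reaction to traffic demand" behaviour can be sketched as simple target-tracking logic. The following is an illustrative Python sketch, not real cloud-provider API code; the function name, defaults, and thresholds are all assumptions chosen for clarity:

```python
import math


def desired_capacity(current: int, cpu_util: float, target: float = 50.0,
                     minimum: int = 2, maximum: int = 20) -> int:
    """Return the instance count needed to bring average CPU back to target.

    This mirrors, in spirit, what a target-tracking auto-scaling policy does:
    scale the fleet proportionally to how far the observed metric is from the
    target, then clamp to the configured minimum and maximum sizes.
    """
    if current <= 0:
        # No healthy instances: "self-heal" back to the configured floor.
        return minimum
    needed = math.ceil(current * cpu_util / target)
    return max(minimum, min(maximum, needed))
```

For example, a fleet of 4 instances running at 90% average CPU against a 50% target would be scaled out to 8 instances, while a lightly loaded fleet shrinks back towards the minimum.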


Implementing a poorly designed Cloud environment can work against the very business plan and goals that motivated the migration in the first place.


10. What are your timescales?


Do you have specific timescales for getting your infrastructure migrated to a Cloud environment? If so, plan effectively to ensure you have the right resources on hand to implement it correctly and to your specifications. If this is your first migration, be realistic about how long it will take and about the learning curve your employees will inevitably experience; the Cloud is a completely new way of deploying infrastructure. There will be issues and problems, as with any new IT infrastructure design and build. Do not underestimate the time required to implement your solution, especially if it is tied to a deadline that cannot be changed. For example, you may be deploying your web services to the Cloud in time for the launch of a new, highly anticipated product where traffic to your site is expected to rise dramatically, relying on the scalability and elasticity that the Cloud provides.


11.  How will this change the dynamics within my organisation?

Adopting the Cloud changes the dynamics of the IT department within your organisation no matter what. You will essentially be delivering and managing a new IT service. As a result, new roles may need to be created that didn’t exist before. New processes and procedures will need to be created to aid the migration and on-going support. New tools and monitoring will need to be deployed and understood to ensure accurate reporting on the infrastructure and continued compliance with SLAs. As noted earlier, you may also need to reduce your staff count, as some roles, such as hardware and cabling installation engineers, may no longer be required.


12.  Do you have the right Security?  

One element of the Public Cloud that some people are very wary and cautious of is security.  The question of "How safe is my data on the Cloud?" often comes up as many people do not know exactly where their data is. However, it's probably fair to say that Cloud security (when implemented correctly) is likely to be more stringent than you have on your own premises today.    

Physical security of Public Cloud providers is managed by the vendor and governed by robust compliance programs that reassure customers of the security in place. The security and compliance standards that vendors adhere to are readily available to all. For example, see the Amazon Web Services whitepaper, which covers their physical security in detail along with network and service security.

It's important to note that security at the instance and network level is a shared responsibility. The vendor will monitor traffic for vulnerabilities across their estate, while you must architect your infrastructure in a way that complies with your own security controls through Identity and Access Management (IAM), Security Groups, Access Control Lists, firewalls, Multi-Factor Authentication, and any other means of locking down your environment. It is important to have someone with the right skill set who understands Cloud security when migrating your infrastructure. They will be responsible for configuring your data correctly and managing who can access it, so that it remains highly secure. Each vendor has different security controls, so a specialist in your chosen vendor is crucial.
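
As a small illustration of locking down access at the identity level, here is a sketch of an IAM policy granting read-only access to a single bucket. The bucket name and statement ID are hypothetical placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyAccessToReportsBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-reports-bucket",
        "arn:aws:s3:::example-reports-bucket/*"
      ]
    }
  ]
}
```

Because IAM denies everything that is not explicitly allowed, narrowly scoped policies like this are the building blocks of a least-privilege design.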




By no means are these the only factors to consider when thinking of deploying and migrating services onto the Public Cloud, but they will hopefully have made you think of elements that you might not have considered at this stage in your plans. Should you end up migrating to the Cloud, I hope this article has helped you harness the full power of what the Cloud has to offer, so that your organisation can reap its full benefits.

Thank you for taking the time to read my article; if you have any feedback, please do leave a comment below.

If you liked this article or found it helpful, please click the 'Good Article' button at the bottom of this article; it would be very much appreciated.


I look forward to your comments and suggestions.


When using AWS as your chosen public cloud provider you will ultimately come to a point where you need to decide and define what your storage requirements are for your data that you wish to store on AWS. There are a variety of options to choose from depending on your needs, each with different attributes ranging from: temporary storage, permanent storage, highly available object based storage and even cold archival storage.

This article has been written to give you a high-level overview, hopefully containing enough information to guide you in selecting the most appropriate storage service for your needs. Additional information has been provided for each option with links to official AWS documentation.

The remainder of this article will be broken down into the following sections:
  • Defining your Storage Needs
  • AWS Storage Services
  • Moving Data into AWS
  • Costing

Defining your Storage Needs

To understand what you need from your storage solution, you need to ascertain which attributes of the data being stored matter most to you. Ask yourself the following questions to guide you to the correct service for your storage:
  • How critical is this data?
  • How sensitive is the data you need to store?
  • How often will the data be accessed?
  • How large is the data?
  • Who requires access to the data?
  • How much are you prepared to pay to store the data?
  • Where is your data coming from?
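
As a rough illustration of how answers to the questions above might narrow down a choice, here is a hypothetical Python helper. It is not official AWS guidance; the function, its two inputs, and the mapping onto S3 storage classes are simplified assumptions (real decisions weigh many more factors, such as retrieval time and data size):

```python
def suggest_storage_class(critical: bool, access_per_month: int) -> str:
    """Map two of the questions above onto a candidate S3 storage class.

    critical         -- how critical is this data?
    access_per_month -- how often will the data be accessed?
    """
    if access_per_month >= 1:
        return "STANDARD"       # frequently accessed data
    if critical:
        return "STANDARD_IA"    # infrequent access, but rapid retrieval
    return "GLACIER"            # cold archival storage
```

For example, rarely accessed but business-critical data maps to infrequent-access storage, while cold, non-critical data maps to archival storage.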

AWS Storage Services


