






Amazon Web Services (AWS) is a collection of remote computing services, also called web services, that make up a cloud-computing platform operated from 11 geographical regions across the world. The most central and well-known of these services are Amazon Elastic Compute Cloud ("EC2") and Amazon Simple Storage Service ("S3"). Other services include Elastic MapReduce (EMR), Route 53 (a highly available and scalable Domain Name System (DNS) web service), Virtual Private Cloud (VPC), and a range of storage, database, deployment, and application services.



Is there a technology in Azure similar to AWS Snowball?
If not, how can petabytes of data be transferred to Azure?

I have a Lex bot with 3 slots. The user will provide the Lex bot with first name, last name, and birthdate. The user can also fill multiple slots at once, such as 'My name is Steve Jobs.' I'd like the bot to repeat back the user's name (or first name, last name, or DOB, depending on what was said) before asking the next question, i.e. 'Thank you, your name is Steve Jobs. May I please have your birthdate?'

I figure I need to use the ElicitSlot dialog action in my Lambda function. However, I'm not sure exactly how to do this. I think I need some way of keeping track of which slot was just filled. Is there a good way to do that?
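One way to track which slot was just filled is to stash the previous turn's slot values in sessionAttributes and diff them on each Lambda invocation. A minimal sketch for a Lex V1 code hook; the slot names, intent name, and helper functions are assumptions for illustration, not part of the Lex API:

```python
import json

# Order in which the bot should collect slots (assumed slot names).
SLOT_ORDER = ["FirstName", "LastName", "BirthDate"]

def newly_filled(current_slots, previous_slots):
    """Return the names of slots that were empty last turn but are filled now."""
    return [name for name, value in current_slots.items()
            if value and not previous_slots.get(name)]

def next_empty_slot(current_slots):
    """First slot in SLOT_ORDER that still has no value, or None."""
    return next((s for s in SLOT_ORDER if not current_slots.get(s)), None)

def build_elicit_slot(intent_name, slots, slot_to_elicit, message, session_attrs):
    """Build a Lex V1 ElicitSlot dialog-action response."""
    return {
        "sessionAttributes": session_attrs,
        "dialogAction": {
            "type": "ElicitSlot",
            "intentName": intent_name,
            "slots": slots,
            "slotToElicit": slot_to_elicit,
            "message": {"contentType": "PlainText", "content": message},
        },
    }

def lambda_handler(event, context):
    slots = event["currentIntent"]["slots"]
    session_attrs = event.get("sessionAttributes") or {}
    previous = json.loads(session_attrs.get("previousSlots", "{}"))

    # Acknowledge whatever the user just provided.
    confirmed = ", ".join(slots[s] for s in newly_filled(slots, previous))
    ack = f"Thank you, I have {confirmed}. " if confirmed else ""

    # Remember the current slot values for the next invocation.
    session_attrs["previousSlots"] = json.dumps(slots)

    empty = next_empty_slot(slots)
    if empty:
        prompt = ack + f"May I please have your {empty}?"
        return build_elicit_slot(event["currentIntent"]["name"],
                                 slots, empty, prompt, session_attrs)
    # All slots filled -- hand back to Lex to fulfil the intent.
    return {"sessionAttributes": session_attrs,
            "dialogAction": {"type": "Delegate", "slots": slots}}
```

The diff against `previousSlots` is what tells you which slot(s) the last utterance filled, even when the user fills several at once.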
Can anyone help with the best way to run Ansible on launch events in AWS Auto Scaling groups?

I was planning on firing off a Lambda task to call a playbook through Jenkins. But the notification from the ASG only has the instance ID, and I don't really want to use user data for the instance to register itself with Ansible or Jenkins; I would prefer that the ASG notifies Jenkins of the event and Jenkins fires off the build.

Any suggestions on how to use Lambda to take the launch event from an ASG and pass the IP address of the new instance to a Jenkins project as the inventory input for invoking a playbook? Or indeed an SQS queue.

Thank you
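One pattern that avoids user data entirely: have the ASG publish launch notifications to SNS, subscribe a Lambda, resolve the instance's private IP with describe-instances, and hit a parameterized Jenkins remote-trigger URL. A rough sketch under those assumptions; the Jenkins URL, job name, and token are placeholders:

```python
import json
import urllib.request
import urllib.parse

JENKINS_URL = "https://jenkins.example.com"   # placeholder
JOB_NAME = "ansible-provision"                # placeholder job name
JOB_TOKEN = "secret-token"                    # placeholder remote-trigger token

def instance_id_from_event(event):
    """Pull EC2InstanceId out of an SNS-wrapped ASG launch notification."""
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    return message["EC2InstanceId"]

def private_ip(instance_id):
    """Look up the instance's private IP (needs ec2:DescribeInstances)."""
    import boto3  # available in the Lambda runtime
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    return reservations[0]["Instances"][0]["PrivateIpAddress"]

def jenkins_trigger_url(ip):
    """Build the buildWithParameters URL, passing the IP as a job parameter."""
    params = urllib.parse.urlencode({"token": JOB_TOKEN, "TARGET_IP": ip})
    return f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters?{params}"

def lambda_handler(event, context):
    ip = private_ip(instance_id_from_event(event))
    # Fire the build; add auth headers if your Jenkins requires them.
    req = urllib.request.Request(jenkins_trigger_url(ip), method="POST")
    with urllib.request.urlopen(req) as resp:
        return {"status": resp.status, "target_ip": ip}
```

The Jenkins job would then use `TARGET_IP` as the inventory for the playbook run; an SQS queue could sit between SNS and the Lambda if you want buffering and retries.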
Hi experts,

I am planning to migrate a bunch of Windows Server 2003 servers to a cloud service. These servers host a legacy application and consist of DCs, Exchange, SQL Server 2005, MOM 2005, BizTalk, IIS, and ISA 2005.

Is it possible to migrate the current group of servers to a cloud service? These servers also use SQL clustering and NLB, and I have read that NLB is not supported by Azure.

Is there a tool I can run on the servers that can tell me whether they are fully compatible with a cloud service? Or is there a checklist I can use to see whether my servers and applications like MOM 2005, ISA, Exchange, etc. will work in the cloud? Will I have to consider upgrading the OS or applications, or redesigning the cluster and NLB nodes for the migration?

Many thanks
I have a bunch of complicated Node.js APIs deployed on EC2 instances. What would be the easiest way to make them Lambda-compliant (by adding a handler method) and deploy them on Lambda?
I am looking for some steps and guidance.
Hi Experts,

I have been told by AWS support that a private RDS database needs an external IP address to connect to QuickSight. Can anyone let me know how I can safely connect my MySQL RDS data source to a QuickSight instance while keeping the database private?
I did follow an article (https://stackoverflow.com/questions/44207552/aws-unable-to-connect-amazon-quicksight-to-rds),
but I have subsequently been told that this configuration will not work. Any suggestions would be appreciated.

Best Regards

Hello Experts,

I am running into an issue where my Windows 7 Pro client connects to a RAS on Server 2012 using L2TP. After I connect successfully, I can ping the RAS server on the local IP, however, I can't ping any other machines on that same subnet.

If I log into the RAS server using RDP, I can then ping other local machines. Is it wrong of me to expect that I should be able to ping the other machines on the same subnet? Do I need additional routes or VPNs?

The RAS server is on AWS EC2 and so are the other machines. I have allowed all traffic from each subnet using the Security Groups on AWS.

Several times a day for the past 2 days we have been losing connection to our website internally for about 40 minutes. Connection returns with no changes on our part.

The website  moved to AWS several months ago. Before the move to AWS this issue never occurred, as the website resided here.

We disabled EDNS on our DNS servers long ago, but we also use forwarders, so that should not even be an issue.

As far as we can tell, access from outside of our organization remains unaffected, although, obviously, we cannot test from the customers of all ISPs.

Is there something we should look for that we don’t know about? Do AWS websites sometimes send even larger packets that don’t make it through our firewall?
Is there some protocol beyond EDNS that we don’t know about, that would sporadically come into effect, hence causing an intermittent outage?
Long story short, we've tried to migrate a machine from our datacenter (VMware) to AWS. It gets to the Ctrl+Alt+Delete unlock screen, but I cannot RDP into it because it seems to think there is no network connection present. To my knowledge, my team has deleted all of the VMware drivers.

Of course I can take a screenshot through AWS, but no pinging or access in any other way.

I've tried
1. Mounting it and trying some registry changes when mounted on another instance, but that usually ends up with a blue screen.
2. Adding NICs
3. Changing instance type to C3.2XL

Bottom line is, red x = nada.

Any advice would be greatly appreciated.
I’ve installed the GNOME 3 desktop on an Oracle Linux 7.3 instance on Amazon AWS (AMI ID OL7.3-x86_64-HVM-2016-11-09). The desktop seems a bit off, however. As seen in https://imgur.com/a/EgAON the resolution is poor and, more importantly, the drop-down applications menu is missing. I’m using TigerVNC (VNC Viewer 6.17.731) to connect to the server. The desktop was installed with:
yum groupinstall -y "Server with GUI"
Any insights would be welcome.

Hi all,
Not sure if this is possible or not: I have 2 virtual ESXi hosts locally, and I would like to create a vCenter in an AWS EC2 instance to manage the hosts from there. Is this possible? Which Amazon service would help me achieve this?
AWS volume tagging needs to be done for my instances. With the command below I can only get the volume for a single block device on an instance; I need all the volumes and devices attached to it.

 for j in $(aws ec2 describe-volumes --filters Name=attachment.device,Values=/dev/sda1 Name=attachment.instance-id,Values=i-xxxxxxxx --query 'Volumes[*].{ID:VolumeId}' --region us-west-1 --output text); do
      echo $j
      aws ec2 create-tags --resources $j --tags Key=Name,Value=SSVD
 done
Is there a way I can get multiple devices?
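One way to cover every device is to drop the attachment.device filter and iterate over all volumes attached to the instance. A boto3 sketch of the same tagging loop, with the instance ID and tag value as placeholders from the question:

```python
def volume_ids(describe_volumes_response):
    """Extract every VolumeId from a describe_volumes response."""
    return [v["VolumeId"] for v in describe_volumes_response["Volumes"]]

def tag_instance_volumes(instance_id, tag_value, region="us-west-1"):
    """Tag every EBS volume attached to the instance, whatever its device name."""
    import boto3  # lazy import; available wherever the AWS SDK is installed
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )
    ids = volume_ids(resp)
    if ids:
        # create-tags accepts a list, so one call tags all volumes at once.
        ec2.create_tags(Resources=ids,
                        Tags=[{"Key": "Name", "Value": tag_value}])
    return ids
```

The same idea works in the shell loop by filtering only on attachment.instance-id, so every attached device's volume comes back.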
As per the limitation of Amazon RDS not being able to do distributed queries (i.e. linked servers) to an on-premise MSSQL host (per the https://aws.amazon.com/blogs/database/implement-linked-servers-with-amazon-rds-for-microsoft-sql-server/ documentation), I want to know if it's possible to set up something like a 'reverse proxy' that would allow an MSSQL RDS instance to connect to said proxy and send SQL calls to an on-premise SQL host instead.

As per Amazon support -- "It’s an internal IP resolution and routing issue.  When the SQL Server is inside a VPC, the SQL Server isn’t able to use the customer provided DNS entries. This causes DNS lookup failure.  Additionally, the server’s routing tables don’t allow the server to see the customer’s VPC Gateway meaning, there’s no routing path for traffic back to your on-prem servers even if lookup succeeded (or used IP addresses). // If the IP address doesn’t appear in the VPC, we will not route the traffic through the correct network interface, and on-premise database servers would fall into this category." (As per support as well -- "There’s an open enhancement request to fix this but honestly, it’s pretty old and hasn’t gotten much traction for prioritization.")

To get around the limitation of what Amazon has done with the internal IP resolution / routing, primarily with RDS and the TDS Protocol outside of a VPC, I would like to 'trick' the Amazon MSSQL RDS instance into thinking that it is communicating with a Windows EC2 instance running…
Hi everybody,
I have set up an Amazon SES account to send my transactional mail. It looks like it is working well.
But I need to track the remote server response for each email (when it is sent, delivered, and accepted by the server).
In this document they suggest some alternatives http://docs.aws.amazon.com/ses/latest/DeveloperGuide/monitor-sending-activity.html

Among the alternatives, I gave Amazon CloudWatch a try. CloudWatch started giving me some overall information, but nothing at the per-email level (delivery time, etc.).

Can anybody help me with this issue?
Thank you
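For per-email detail, the developer guide's usual route is an SES configuration set with an event destination (e.g. an SNS topic), so each send/delivery/bounce event arrives as its own notification. A hedged boto3 sketch; the configuration-set name and topic ARN are placeholders, and the exact event types you need may differ:

```python
def event_destination(topic_arn, name="delivery-events"):
    """Build the event-destination payload for SES event publishing."""
    return {
        "Name": name,
        "Enabled": True,
        "MatchingEventTypes": ["send", "delivery", "bounce", "complaint"],
        "SNSDestination": {"TopicARN": topic_arn},
    }

def setup_tracking(config_set_name, topic_arn):
    """Create a configuration set that publishes per-message events to SNS."""
    import boto3  # lazy import
    ses = boto3.client("ses")
    ses.create_configuration_set(ConfigurationSet={"Name": config_set_name})
    ses.create_configuration_set_event_destination(
        ConfigurationSetName=config_set_name,
        EventDestination=event_destination(topic_arn),
    )
    # Each send must then reference the set, e.g.:
    #   ses.send_email(..., ConfigurationSetName=config_set_name)
```

Each SNS message then carries the per-message detail (message ID, event type, timestamps) that CloudWatch's aggregate metrics don't expose.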
Is it possible to acquire "SubCondition" via the Amazon Product Advertising API (PA-API)?
I set a request parameter with ResponseGroup=OfferFull and made a request, but the returned XML did not contain a SubCondition.

The API documentation describes SubCondition, but the document is old; has SubCondition been made obsolete?

Can somebody help?
I want to modify the RDS instance size to have higher CPU during peak hours, then lower off hours. It would be much more cost effective. So for example, 7am to 5pm M-F I want to use a db.r3.xlarge, but off hours, I only have a handful of users, so even a db.t2.small would be fine. I understand there would be a little down time during the switch, but I want to automate it. Thanks again.
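This can be automated with two scheduled CloudWatch Events/EventBridge cron rules invoking a small Lambda that calls modify-db-instance with ApplyImmediately. A sketch, with the instance identifier as a placeholder and the class choices taken from the question:

```python
PEAK_CLASS = "db.r3.xlarge"
OFF_PEAK_CLASS = "db.t2.small"

def target_class(hour, weekday):
    """Pick the instance class: peak on weekdays 7am-5pm, else off-peak.

    hour/weekday are assumed already converted to the business time zone;
    weekday is 0=Monday .. 6=Sunday.
    """
    if weekday < 5 and 7 <= hour < 17:
        return PEAK_CLASS
    return OFF_PEAK_CLASS

def resize(db_instance_identifier, new_class):
    """Apply the new class immediately (brief downtime during the change)."""
    import boto3  # lazy import
    rds = boto3.client("rds")
    rds.modify_db_instance(
        DBInstanceIdentifier=db_instance_identifier,
        DBInstanceClass=new_class,
        ApplyImmediately=True,
    )

def lambda_handler(event, context):
    from datetime import datetime
    now = datetime.utcnow()  # adjust for your local UTC offset as needed
    resize("my-db-instance", target_class(now.hour, now.weekday()))
```

In practice you'd schedule one cron rule just before 7am and one just after 5pm so the Lambda only runs at the transition points rather than every hour.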

I have set up a full AD on AWS and am able to join EC2 instances to the domain. However, I need to promote an EC2 instance to a domain controller to host the licensing server for RDC. I am currently unable to do so, as the AD user accounts do not have sufficient privileges to promote a server to a DC.

Is there any way of doing this?
Hi All,

I am having issues trying to access a Tomcat JSP page through an AWS ELB. I can access it from localhost,
but the ELB isn't letting me through.

My EC2 instance's security group allows:
HTTP 80 ::/0
TCP 8095

The ELB has a listener mapping HTTP 80 to HTTP 8095, with security group rules for:
HTTP 80
TCP 8095
I keep getting errors that may be causing some problems. At some point our Exchange server starts rejecting connections, and I get two errors in my Event Viewer logs; see below. Also, the Exchange Data Migration service (Code 2) just says "Starting" but never starts. Is it even necessary?

1. Log Name:      Application
Source:        MSExchange RBAC
Date:          8/17/2017 1:45:21 PM
Event ID:      70
Task Category: RBAC
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      EXCHANGE.Digitus89.hosting
(Process w3wp.exe, PID 1248) Fail to create runspace for user Digitus89.hosting/Microsoft Exchange System Objects/Monitoring Mailboxes/HealthMailbox86a4ec7017034ae8ad56f0bc624f76ce because the user has reached the maximum number of connections allowed. Max allowed connections: 18.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    <Provider Name="MSExchange RBAC" />
    <EventID Qualifiers="49152">70</EventID>
    <TimeCreated SystemTime="2017-08-17T17:45:21.000000000Z" />
    <Security />
    <Data>Digitus89.hosting/Microsoft Exchange System Objects/Monitoring …

Hello team,

We have created a VPN tunnel (VPC) from our physical office to Amazon. The tunnel is active, but I can't connect to any of my EC2 machines using the private IP address. Is there anything additional that needs to be done on the Amazon side to make this work? I'm not sure if there's a firewall or something I will need to configure as well.

Thank you!
Here is what it shows when I did a scan:
 /dev/ram0                  [      16.00 MiB]
  /dev/dbbackupvg/dbbackuplv [    1000.00 GiB]
  /dev/ram1                  [      16.00 MiB]
  /dev/root                  [       9.99 GiB]
  /dev/ram2                  [      16.00 MiB]
  /dev/ram3                  [      16.00 MiB]
  /dev/ram4                  [      16.00 MiB]
  /dev/ram5                  [      16.00 MiB]
  /dev/ram6                  [      16.00 MiB]
  /dev/ram7                  [      16.00 MiB]
  /dev/ram8                  [      16.00 MiB]
  /dev/ram9                  [      16.00 MiB]
  /dev/ram10                 [      16.00 MiB]
  /dev/ram11                 [      16.00 MiB]
  /dev/ram12                 [      16.00 MiB]
  /dev/ram13                 [      16.00 MiB]
  /dev/ram14                 [      16.00 MiB]
  /dev/ram15                 [      16.00 MiB]
  /dev/xvdf                  [     500.00 GiB]
  /dev/xvdg                  [       1.95 TiB] LVM physical volume
  /dev/xvdh                  [     500.00 GiB]
  4 disks
  16 partitions
  1 LVM physical volume whole disk
  0 LVM physical volumes

I modified a volume in AWS from 1 TB to 2 TB and created a new directory; I want to mount that 2 TB volume and will resize it once mounted.
But when I try to mount it, I get an "unknown file system LVM" error message.

Can someone please help me out.
Hi - I have been running UniFi on an AWS Ubuntu instance for over 12 months and in that time have successfully upgraded to new releases. I am managing 10 networks through the UniFi controller.
However, I am having a problem this time with the upgrade from 5.4.14 to 5.5.20.
I will point out at this stage that I am not very experienced with Ubuntu and generally get by on information and instructions which I find online. This is the output I am getting from the server (using PuTTY) when I run the upgrade command:
Preconfiguring packages ...
(Reading database ... 130432 files and directories currently installed.)
Preparing to unpack .../unifi_5.5.20-9565_all.deb ...
Previous setting (UniFi 5.4.14) is found.
Unpacking unifi (5.5.20-9565) over (5.4.14-9202) ...
E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
dpkg: warning: subprocess old post-removal script returned error exit status 100
dpkg: trying script from the new package instead ...
E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
dpkg: error processing archive /var/cache/apt/archives/unifi_5.5.20-9565_all.deb (--unpack):
subprocess new post-removal script returned error exit status 100
Previous setting (UniFi 5.4.14) is found.
E: …

I currently have a few EC2 instances and a bunch of S3 buckets. I need to figure out how to download all of the logs from one of my S3 buckets and then search them to see what IPs accessed what. Is there a built-in method on Amazon AWS to do this, or what is the easiest way to get information out of the log files?

I need to be able to search probably a couple of thousand logs spanning 2016 through 2017 for any IP and a particular file that was accessed.
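There's no built-in search across S3 objects; the usual approach is to sync the bucket locally (or stream each object) and grep. A boto3 sketch that streams every log object under a prefix and keeps the lines mentioning both a given IP and a given file path; the bucket name and patterns are placeholders:

```python
def matching_lines(lines, ip, file_path):
    """Keep log lines that mention both the IP and the requested file."""
    return [line for line in lines if ip in line and file_path in line]

def search_bucket(bucket, ip, file_path, prefix=""):
    """Stream every object under the prefix and collect matching log lines."""
    import boto3  # lazy import
    s3 = boto3.client("s3")
    hits = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
            lines = body.read().decode("utf-8", errors="replace").splitlines()
            hits.extend(matching_lines(lines, ip, file_path))
    return hits
```

For a couple of thousand objects this is workable from a single script; narrowing the Prefix to the 2016-2017 date range keeps the listing and download volume down.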



I have tried a number of times to reduce the size of my EBS volume. I keep running into problems.

I am using Centos 7.3.1611 and Plesk (set up through Amazon marketplace)

I have managed to copy all the files over from the old server to the new using the instructions here (https://superuser.com/questions/1123799/how-to-decrease-size-the-ebs-root-volume-of-the-rhel-instance-in-aws) - as well as similar other guides.

The only difference is that, because I am using XFS instead of ext4, I was unable to do the e2label step.

When I attach the new volume to the instance as /dev/sda1 it does not load. I cannot connect to it using SSH and websites do not load.

I am at a loss now. I have tried so many times. I made a mistake making my initial EBS volume size way too big and it is costing me quite a bit of money every day.

I am unsure if the problem is because I am formatting my drive as XFS. This was the format of the original drive when I first set up the instance. I have not been able to find a guide detailing how to shrink an XFS-formatted EBS volume; they are all for ext4.

If anyone could provide some assistance, that would be much appreciated.
Hi All,

We have configured a CloudFront URL. When I hit it in a browser it works fine, whereas hurl.it is a tool and when we hit the URL through it we get an internal server error.

Why am I seeing this result in the hurl tool? Is there any config mismatch on CloudFront?





