Amazon Web Services (AWS) is a collection of remote computing services, also called web services, that make up a cloud-computing platform operated from 11 geographical regions across the world. The most central and well-known of these services are Amazon Elastic Compute Cloud ("EC2") and Amazon Simple Storage Service ("S3"). Other services include Elastic MapReduce (EMR), Route 53 (a highly available and scalable Domain Name System (DNS) web service), Virtual Private Cloud (VPC), and storage, database, deployment, and application services.

Hi Experts,

When I run Apache on localhost:80 it shows the following error:
"Not Found, HTTP Error 404. The requested resource is not found"

Please note I have installed Apache on CentOS 7 on an AWS instance.

For your information, I did a further test on Linux using the telnet command, "telnet 80", which showed the following:

# telnet 80
Connected to
Escape character is '^]'.

I have tested that outbound traffic on HTTP port 80 is open, and inbound traffic on port 80 is open.

Appreciate your prompt response.

I wish to route TCP traffic from one colo location onto Amazon Web Services (Central Canada), then on to Amazon Web Services (Singapore). What is the preferred way to do this using AWS services?
Hi Experts

Could you point out how to run MS SQL Server queries directly in the AWS server environment instead of using SQL Server Management Studio?

Today the company started using AWS. My queries were developed entirely locally. I was warned that if I continue to run the queries locally through SQL Server Management Studio, as I used to do, the traffic costs between AWS and my local machine would be too expensive. So I'm planning to run them in the AWS environment, avoiding those costs.

Thanks in advance!
Hello all,
        I am currently trying to create an Ansible playbook to back up all of our EC2 instances in AWS.
I have tried a couple of different ways of going about this and I am at a loss.

        If anyone has done this, could you please share the playbook? I would be most grateful.
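A minimal sketch of such a playbook (untested, and assuming the `amazon.aws` collection is installed and AWS credentials are available in the environment; the region name is an assumption) would be to enumerate the instances and create a dated AMI of each:

```yaml
# Sketch only: back up every EC2 instance in a region by creating an AMI.
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Collect all instances in the region
      amazon.aws.ec2_instance_info:
        region: us-east-1          # assumption -- adjust to your region
      register: ec2

    - name: Create a dated AMI for each instance (without rebooting it)
      amazon.aws.ec2_ami:
        region: us-east-1
        instance_id: "{{ item.instance_id }}"
        name: "backup-{{ item.instance_id }}-{{ lookup('pipe', 'date +%F') }}"
        no_reboot: true
        wait: false
      loop: "{{ ec2.instances }}"
```

You would typically run this from cron or an AWS-side scheduler, and pair it with a cleanup task that deregisters AMIs older than your retention window.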
I have over 10 years of experience/exposure in the streams below:
@ Administration: Linux/Unix, Oracle DB, WebLogic servers, Fusion Middleware

Kindly advise: now that I am moving to the AWS platform, what could I leverage for current/future trends?
Hi Guys,

I have a test environment in AWS, which consists of 5 EC2 instances. Each EC2 instance is in a separate Auto Scaling group with min 1 and max 4.
What I want to do is terminate all 5 servers every day at 10 pm and recreate them at 8 am. This can't be achieved by stopping an EC2 instance, because Auto Scaling spins up a new instance straight away.

I found this article, which talks about setting the desired capacity.

My question is: is it possible to write a Lambda function that will amend the Auto Scaling capacity to 0? If so, can I have an example please?
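Yes, this is possible. Here is a sketch of such a Lambda (the group names, region, and the `capacity` event field are assumptions; substitute your own):

```python
# Sketch of the requested Lambda: set each Auto Scaling group's minimum and
# desired capacity to 0 so its instances terminate; a second scheduled rule
# passing capacity=1 at 8 am brings them back.

ASG_NAMES = ["test-asg-1", "test-asg-2"]  # hypothetical group names

def set_asg_capacity(client, names, capacity):
    """Apply the given min/desired capacity to every named group."""
    for name in names:
        client.update_auto_scaling_group(
            AutoScalingGroupName=name,
            MinSize=capacity,
            DesiredCapacity=capacity,
        )

def lambda_handler(event, context):
    import boto3  # deferred import keeps set_asg_capacity testable offline
    set_asg_capacity(
        boto3.client("autoscaling"),
        ASG_NAMES,
        int(event.get("capacity", 0)),  # 0 at 10 pm, 1 at 8 am
    )
```

Two CloudWatch Events cron rules, e.g. `cron(0 22 * * ? *)` and `cron(0 8 * * ? *)`, can then invoke the function with `{"capacity": 0}` and `{"capacity": 1}` as the event payload.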
I'd like to have a subdomain (or something similar, a CNAME?) of a domain that I own redirected to an Amazon S3 .htm file, without the user seeing the name of the S3 file in the address bar when they are redirected (GoDaddy calls this "masking"), all while the subdomain address shows SSL encryption.

i.e. I would like: 

to redirect to: 

I've attempted:
Creating a custom bucket in S3 with the same name as my domain, then going to Route 53 to get the name servers, and changing the nameservers in GoDaddy to the Route 53 nameservers so I can point Route 53 to the S3 files.

This didn't work for me. So I...

Looked at setting up CloudFront to serve HTTPS requests to my Amazon S3 buckets...
I've been instructed to do this:
"- Since your files in bucket "v-tours" reside inside folder "Canton Museaum of Art/Salon Style 2/Tablet & Web Files", you should use this as value for 'Origin Path'.

In step 5, you should choose 'redirect HTTP to HTTPS' so that HTTP requests from client will be redirected to HTTPS by CloudFront.

As you are using custom domain " ", you must use this value for 'Alternate Domain Names (CNAMEs)'. Also, kindly install a custom certificate on CloudFront…
I have a new Server 2016 instance running on the new AWS Lightsail platform. I would like to set up a site-to-site VPN between the local office network and the server. There isn't much documentation on VPCs with Lightsail compared to EC2. The subnet that the Lightsail instance is on is restricted in the VPC config, so I can't add that subnet to the VPC. I can set up VPC peering between EC2 instances and Lightsail, but I can't figure out how to make the Lightsail instance visible when setting up the VPN connection.
Hi Experts,

Could you suggest the best way to convert a custom application text log file to JSON format, please?

Read the whole file and convert it line by line to JSON on a Windows server.

Since the application generates logs in its own custom location, maybe I should read this directory for each file, convert each line to a JSON object, and place it in a different location. Also, we should not duplicate or read the same line again. :)

Also, is it possible to apply similar logic to the Windows event log?

Please advise.
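A minimal sketch of the converter in Python (the space-delimited "timestamp level message" layout is an assumption; adjust the parsing to your application's actual format). The byte offset of the last read is kept in a side file, so a line is never converted twice even when the script runs repeatedly:

```python
import json
import os

def convert_new_lines(src_path, dst_path, offset_path):
    """Append new log lines as JSON objects, remembering how far we read
    so the same line is never converted twice (offset kept in a side file)."""
    offset = 0
    if os.path.exists(offset_path):
        with open(offset_path) as f:
            offset = int(f.read() or 0)
    with open(src_path) as src, open(dst_path, "a") as dst:
        src.seek(offset)  # skip everything already converted
        for line in src:
            # Assumed layout: "<timestamp> <level> <message...>"
            parts = line.rstrip("\n").split(" ", 2)
            record = dict(zip(("timestamp", "level", "message"), parts))
            dst.write(json.dumps(record) + "\n")
        new_offset = src.tell()
    with open(offset_path, "w") as f:
        f.write(str(new_offset))
```

The same loop could walk a directory of log files, keeping one offset file per source. For the Windows event log a converter script isn't needed: tools such as `wevtutil` or PowerShell's `Get-WinEvent` can already export events in structured form.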
Hi Experts,

I get the following error for Wagtail (a Django application) inside a Docker container. Please see the uWSGI logs from inside the Docker container below.

*** Starting uWSGI 2.0.18 (64bit) on [Mon Mar  4 03:56:36 2019] ***
compiled with version: 5.4.0 20160609 on 04 March 2019 01:00:36
os: Linux-4.4.0-1057-aws #66-Ubuntu SMP Thu May 3 12:49:47 UTC 2018
nodename: e56d42de8c73
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 8
current working directory: /home/ntdl/code
writing pidfile to /tmp/
detected binary path: /usr/local/bin/uwsgi
setgid() to 33
setuid() to 33
chdir() to /home/ntdl/code
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/ntdl.sock fd 8
uwsgi socket 1 inherited UNIX address @ fd 0
inherit fd0: chmod(): No such file or directory [core/socket.c line 1797]
Python version: 3.6.2 (default, Jul 17 2017, 23:14:31)  [GCC 5.4.0 20160609]
Python main interpreter initialized at 0x971510
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 543168 bytes (530 KB) for 20 cores
*** Operational MODE: threaded ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x971510 pid: 24 (default app)
*** uWSGI is running in 


I have a web server in an AWS VPC.  
Only about 50 IP addresses are allowed to see the page on this server. I filter the IP addresses through a Security Group, so if someone is not authorized, they cannot touch the machine at all.
 The issue I am running into is that all of the folks who cannot get in simply spin and time out. I would like them to get a message: "You are not authorized to view this site". How can I do that before they hit the page?
 I need to be able to check the user's IP address.

Thank you in advance,
Hi Windows Server experts,

Just need some general guidance on this scenario, please.

On an AWS EC2 Windows Server 2016 instance (serverA), I have manually installed software and imported/configured SSL certificates, etc. for one of my applications. My application works fine over HTTPS. All good at this stage.

As you know, in AWS we have the option to create an AMI (snapshot) from an EC2 instance. Using this AMI we can create subsequent server instances (serverB, serverC, etc.), so we don't have to reinstall and configure the software again.

My request is:
If I launch and create a new EC2 Windows server based on the above AMI, do I have to do anything extra for the SSL certificates to work on serverB, serverC, and so on? Would it work as it did on serverA?

Please suggest and advise.

Thanks in advance

So I use Amazon S3 to store uploaded files.

Now what I'm trying to do is attach a custom document to an email, which I am sending with PHPMailer. My issue is that I'm not sure how to get the file from Amazon S3 and attach it to PHPMailer.

I only have the front end and jQuery set up: you click on an attachment, which sends an AJAX request to the back end, which uses a file path to go to S3, but I'm not sure how to bring the file back to pass into PHPMailer.

Any tips or advice?
Hi Experts,

I get the following error when I run Docker:

root@ip-10-252-14-11:/home/ubuntu/workarea/sourcecode/NTDL-TEST/Harvest-Trove-Pictures# sudo docker start trove_pull
Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"start\\\": executable file not found in $PATH\"\n"
Error: failed to start containers: trove_pull

The Dockerfile contents are as follows:

FROM ubuntu:16.04


RUN apt-get update -y
RUN apt-get install -y software-properties-common python-software-properties curl
RUN add-apt-repository -y ppa:fkrull/deadsnakes

RUN apt-get update -y && apt-get install -y curl
RUN apt-get update -y && apt-get install -y \
        git \
        python3.6 \
        python3.6-dev \
        nginx \
        sqlite3 \
        nodejs \
        build-essential \
        libmagickwand-dev \
        cron
RUN rm -f /usr/bin/python3
RUN ln -s /usr/bin/python3.6 /usr/bin/python3
#RUN curl | python3

RUN mkdir -p /home/trove/trove
WORKDIR /home/trove

COPY . .
RUN chmod -R 755 /home/trove
RUN chown -R www-data:www-data /home/trove

COPY build/docker/uwsgi_params .
COPY build/docker/uwsgi.ini .
COPY trove-variables.env .
RUN pip3 install --no-cache-dir uwsgi
RUN pip3 install --no-cache-dir -r requirements.txt

COPY build/docker/start 

Monitor and alert recommendations request.

As our company is moving services out to cloud-based services, I am seeking some real-world feedback on what system monitors are being used. My current platform is pretty much a VMware house running Microsoft everything. Syslog and performance monitors were chosen based on ease of use and the preference of the party responsible for operating them, i.e. vROps and Log Insight for me, and Redgate for the SQL DB admins. I have held a preference for systems that do not require agents, just to avoid possible compatibility issues when the monthly MSFT updates come out (this may be wrong; feel free to say so if you think so). I expect the cloud service of choice will largely be AWS, as that is getting many in our dev team excited. Ease of use is of course a priority, because no one wants to do more work than they have to. Budget comes into the picture somewhere, but I have never had a problem paying for good tools. Cost management of cloud services is expected to be a priority.

So what is working for you? If you represent a product, please say so. I still want your input, but would appreciate acknowledgment of bias.

Thanks in advance for your time.
I cannot for the life of me figure out what IPv4 address to put in this.

I am trying to add a new adapter to my EC2 server, and I have to assign an IP address first,

which entails the subnet etc...

On the current EC2 server, in IIS, I can see the addresses that are in use.
In my AWS Console I have tried everything I know to add this IP, and I get either an "Overlap" message or an "Invalid" message.
A little help?
Hi Powershell experts,

please help.

I'm not able to fix the issue below. Please note, I'm initiating/sending this command from AWS CloudFormation.

Step 1:
Within the CloudFormation script/scope, I get a value for RDSInstance.Endpoint.Address, which I'm passing as a variable to Step 2 below. Assume the value is ''.

Step 2: The CloudFormation script looks like this when invoking:
  command: !Sub |
    powershell.exe foreach($line in Get-Content C:\filepath.txt) {If (Test-Path -Path $line) {(Get-Content $line).replace('', ${RDSInstance.Endpoint.Address}) | Set-Content $line}}

Step 3: But on the target server it failed. Sorry, I didn't capture the exact log, but it failed because of a missing single quote around the variable value:
powershell.exe foreach($line in Get-Content C:\filepath.txt) {If (Test-Path -Path $line) {(Get-Content $line).replace('', | Set-Content $line}}

How do I place a single quote, or otherwise handle this situation, please?

Please help, thanks in advance.
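One way to handle this (a sketch, untested; `old-db-host` is a hypothetical placeholder for the elided search string) is to wrap the whole pipeline in double quotes for `powershell.exe -Command`, so the shell passes it through intact, and put ordinary single quotes around the `!Sub` substitution so PowerShell receives the endpoint as a string literal:

```yaml
command: !Sub |
  powershell.exe -Command "foreach($line in Get-Content C:\filepath.txt) { If (Test-Path -Path $line) { (Get-Content $line).replace('old-db-host', '${RDSInstance.Endpoint.Address}') | Set-Content $line } }"
```

CloudFormation replaces `${RDSInstance.Endpoint.Address}` before the command ever reaches the server, so the single quotes only need to survive the shell, not CloudFormation.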
Hi Powershell experts,

Getting this strange error while executing the PS line below. Please note, I'm initiating/sending this command from AWS CloudFormation to execute it on a target Windows server.

How the script looks in CloudFormation:
  command: !Sub |
    powershell.exe foreach($line in Get-Content C:\filepath.txt) {If (Test-Path -Path $line) {(Get-Content $line).replace('', 'new_db_url') | Set-Content $line}}

Error from the log file on the target Windows server:
2019-02-25 10:06:40,687 [ERROR] Command 5-replaceDBURLinEachFile (powershell.exe foreach($line in Get-Content C:\filepath.txt) {If (Test-Path -Path $line) {(Get-Content $line).replace('', 'new_db_url') | Set-Content $line}}
) failed
2019-02-25 10:06:40,687 [DEBUG] Command 5-replaceDBURLinEachFile output: 'Set-Content' is not recognized as an internal or external command,
operable program or batch file.

But if I execute the line below on the target server, PowerShell does the replacement and works fine.
foreach($line in Get-Content C:\filepath.txt) {If (Test-Path -Path $line) {(Get-Content $line).replace('', 'new_db_url') | Set-Content $line}}

Please help, thanks in advance.
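The error message itself hints at the cause: `'Set-Content' is not recognized as an internal or external command` is a cmd.exe error, which suggests the command shell is interpreting the unquoted `|` itself and trying to run `Set-Content` as a separate program, rather than letting PowerShell see the pipeline. That also explains why pasting the line into a PowerShell session works. A sketch of one fix (untested; the elided `replace('', ...)` arguments are left as in the original) is to hand the entire pipeline to PowerShell as a single quoted `-Command` argument:

```yaml
command: !Sub |
  powershell.exe -Command "foreach($line in Get-Content C:\filepath.txt) { If (Test-Path -Path $line) { (Get-Content $line).replace('', 'new_db_url') | Set-Content $line } }"
```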
AWS Design help

I have an Ubuntu server (basically a LAMP stack) hosted on Linode. There's a PHP script that gets heavy usage that I'd like to move to AWS (along with the DB it uses).

I was thinking of putting the PHP script on AWS Elastic Beanstalk and having it connect to an Aurora auto-scaling MySQL database. Most of the PHP script does read operations, with the exception of a call to increment a hit counter.

How do I go about keeping the master database (on Linode) and the AWS RDS Aurora database in sync, so I can offload the heavily-used PHP script?

Or am I going about this completely the wrong way? Please advise.

Hi AWS and Powershell experts.

As part of CloudFormation, I'm having trouble executing a script and PowerShell as part of the userdata section.
Please help.

I have attached the EC2 section for your quick reference.

Quick update:
1. The PowerShell script in the commands section executes fine. I can validate this on the target server.

However, as per the requirement, I need to execute it with persist, runAsLocalSystem, and as part of the userdata execution:

            - <persist>false</persist>
            - <runAsLocalSystem>true</runAsLocalSystem>
            - <powershell>
            - New-ItemProperty -Path "HKLM:\Software\LIFERAY" -Name 'Environment' -Value 'dev'
            - New-ItemProperty -Path "HKLM:\Software\LIFERAY" -Name 'KEY' -Value 'LIFERAY'
            - </powershell>

Hi Powershell Experts,

I'm trying to execute a PS script as part of AWS userdata.

This is how I invoke it from the AWS CloudFormation userscript:

    powershell.exe (Set-Content -Path C:\filepath.txt -Value "C:\Program Files\app\folder\Agility.Server.Web\web.config`nC:\Program Files\app\folder\Agility.Server.Web\web.config`nC:\Program Files\app\folder\CoreWorkerService\Agility.Server.Core.Executor.exe.config`nC:\Program Files\app\folder\CoreWorkerService\Agility.Server.Core.ExportService.exe.config`nC:\Program Files\app\folder\CoreWorkerService\Agility.Server.Core.WorkerService.exe`nC:\Program Files\app\folder\CoreWorkerService\Agility.Server.StreamingService.exe`nC:\Program Files\app\folder\Transformation Server\productname.CEBPM.CPUServer.ServiceHost.exe.config")

2019-02-22 10:44:30,799 [INFO] Waiting 60 seconds for reboot
2019-02-22 10:45:30,849 [DEBUG] Running command 2-addConfigFilesPath
2019-02-22 10:45:30,849 [DEBUG] No test for command 2-addConfigFilesPath
2019-02-22 10:45:31,256 [ERROR] Command 2-addConfigFilesPath (powershell.exe (Set-Content -Path C:\filepath.txt -Value "C:\Program Files\app\folder\Agility.Server.Web\web.config`nC:\Program Files\app\folder\Agility.Server.Web\web.config`nC:\Program Files\app\folder\CoreWorkerService\Agility.Server.Core.Executor.exe.config`nC:\Program Files\app\folder\CoreWorkerService\Agility.Server.Core.ExportService.exe.config`nC:\Program 

An odd error is now appearing when trying to connect to a OneDrive account via the AWS environment.

I have a colleague who is using our OneDrive (with an SA account) to collaborate on files. The other day this error appeared when they logged into the AWS environment. Up until then the OneDrive connection was working fine.

[error message from OneDrive on AWS]
We have attempted the basic troubleshooting, and there isn't a lot online about this error message as it relates to AWS.

Any suggestions? Is there something that may have changed?

To provide further context, our team has their own AWS environment, and I am able to connect to OneDrive without any errors. The only difference is that the account we use for OneDrive is the exact same one that I use to log in to AWS, so maybe it is a security change of some kind?

Any guidance would be appreciated.



1. In AWS,
2. I created an EC2 Windows instance.
3. I installed a few pieces of software as part of the project.
4. After that, I created an image (AMI) from the above EC2 instance.
5. Now, to set up another environment (e.g. SIT1), I'm creating a CloudFormation script that uses the above AMI (from step 4) to create an instance.
6. Instances are getting created. All good up to this point.
The issue is:
I'm trying to execute PowerShell as part of userdata to replace some values in config files, but the userdata is not getting executed. I checked the Windows event logs as well.
Since this is based on a custom AMI, is there any limitation/restriction, or do I need to enable some service to execute userdata?

If someone could help me with the steps, that would be very helpful.

thanks in advance
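One likely cause: on Windows Server 2016, userdata is executed by EC2Launch, and by default it runs only on the first boot of the original instance, so an instance launched from a custom AMI will skip it unless initialization was re-armed before the image was taken. A common fix (sketch; the path below is the standard EC2Launch install location) is to run this on the source instance immediately before creating the AMI:

```powershell
# Run on the source instance, then create the AMI from it (do not reboot first):
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1 -Schedule
```

Alternatively, setting `<persist>true</persist>` in the userdata tells EC2Launch to run it on every boot rather than only the first.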
Is it possible on a Cisco ASA to advertise a network via BGP if the network is not in the routing table?

I have a Cisco ASA firewall with:

1. one interface uplinking to a (third-party) router that connects to our AWS estate;
2. one interface with an internet link carrying an IPsec tunnel to a second office, which has an IP address range of

Because it is an IPsec tunnel, the network does not appear in the routing table, but I want to advertise it out to the AWS estate via BGP. Normally I would just add a route to null for the supernet, then advertise that, and all would work.

However, adding a route to null stops the tunnel passing traffic, so I want to know if there is any way to advertise a route to the network out to the AWS router via BGP without having it in the routing table, or any supernet of it.

Any thoughts?
I had created AWS EC2 servers with Windows Server and SQL Server Web Edition before, but it looks like it's no longer available in the AMI list.

Only Standard and Enterprise editions are available.

Would anyone know when the Web Edition was removed? Or is its availability based on region now?

I only have basic support in AWS, so I am hoping someone can shed some light on this here.

Thanks in advance.

