Amazon Web Services (AWS) is a collection of remote computing services, also called web services, that make up a cloud-computing platform operated from 11 geographical regions across the world. The most central and well-known of these services are Amazon Elastic Compute Cloud ("EC2") and Amazon Simple Storage Service ("S3"). Other services include Elastic MapReduce (EMR), Route 53 (a highly available and scalable Domain Name System (DNS) web service), Virtual Private Cloud (VPC), and a range of storage, database, deployment, and application services.



I have an EC2 instance, as well as a S3 bucket.

I have mapped to the S3 bucket from the EC2 instance successfully, using a third-party app called ExpanDrive; it maps as a Z: drive.

I have done the same from my local PC. Also mapped as a Z drive.

I can save a file into my new Z drive, and it appears immediately in the corresponding Z drive on the server.

The problem is that the document management system we have installed on the EC2 instance has an automated collection facility that collects and ingests any files dropped into the shared folder, but it is failing. The log files say "access denied".

We tried running the application as administrator - no change.

Can an AWS guru please advise on what might be necessary here?

Many thanks in advance.

I have an Oracle database in AWS, but I need to create an Azure function that will allow me to invoke a PL/SQL function. I do not know how to create a connection between the Azure function and the AWS Oracle database.

I am not an admin, so I would need some sort of idiot's guide to follow to help get around the problem.

A very basic example would be

From the Azure function I would want to issue the following SQL:

Select sysdate from dual;

If I had a typical SQL client I would just create a connection first (username/password@db), but in this case I don't know how to get that connection to be made.
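A minimal sketch of that "username/password@db" connection in Python, using the python-oracledb driver (which an Azure Function written in Python could call). The host, port, service name, and credentials are placeholders, not values from the question; the AWS security group would also have to allow inbound traffic on the listener port (1521 by default) from the function's outbound IPs.

```python
# Sketch: connect to an Oracle DB from Python and run the sample query.
# Host/port/service/credentials below are placeholder assumptions.

def make_dsn(host: str, port: int, service_name: str) -> str:
    """Build an EZConnect-style DSN string: host:port/service."""
    return f"{host}:{port}/{service_name}"

def query_sysdate(user: str, password: str, dsn: str):
    """Open a connection and run the query from the question."""
    import oracledb  # lazy import: pip install oracledb
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("select sysdate from dual")
            return cur.fetchone()[0]

if __name__ == "__main__":
    # Placeholder endpoint; swap in the RDS/EC2 Oracle endpoint.
    dsn = make_dsn("my-oracle-host.example.com", 1521, "ORCLPDB1")
    print(dsn)
```

The same DSN string works in most Oracle clients, so it can be verified with SQL*Plus or SQL Developer before wiring it into the function.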
Is it possible to customize the "look and feel", layout, etc for Amazon AWS's SAML Federation Landing Page?

When we sign in to our company "federation page" it redirects us to "" (as it should), which displays all of our AWS accounts/AWS roles to sign into. However, it's ugly and not really organized or categorized in any way, so you always have to scroll up/down to find the account you want, or use Ctrl-F.

I know we can customize our first landing page (company federation landing page), but can we change anything for the redirect/second SAML landing page?
Hello Everyone,

I am trying to set up a PowerShell script to automate transferring a directory to an S3 bucket. I have been following instructions listed at but when I run the script I get the following error:

Unable to find type [Amazon.AWSClientFactory].
At line:18 char:9
+ $client=[Amazon.AWSClientFactory]::CreateAmazonS3Client($accessKeyID, ...
+         ~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Amazon.AWSClientFactory:TypeName) [], RuntimeException
    + FullyQualifiedErrorId : TypeNotFound


The code I have is pasted below... If someone has some insight, that would be awesome :)

# Constants
$sourceDrive = "C:\"
$sourceFolder = "Users\Administrator\AppData\Roaming\folder"
$sourcePath = $sourceDrive + $sourceFolder
$s3Bucket = "bucket"
$s3Folder = "Archive"

# Constants – Amazon S3 Credentials

# Constants – Amazon S3 Configuration
$config=New-Object Amazon.S3.AmazonS3Config
$config.ServiceURL = ""

# Instantiate the AmazonS3Client object

# FUNCTION – Iterate through subfolders and upload files to S3
function RecurseFolders([string]$path) {
  $fc = New-Object -com 


I have over 10 Amazon EC2 instances running and I want to automate their backups to an Amazon S3 bucket. I have been told that I can do this using an Amazon Linux AMI with Python code, but I am unsure how to accomplish this.

Could someone please provide a simple solution for how to accomplish this, specifically in Python code?

Thank you for your time.
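A minimal Python sketch of one common approach: create an AMI of each instance, which snapshots its EBS volumes (AWS stores those snapshots in S3-backed storage it manages, rather than in a bucket you own — an assumption worth noting against the question's wording). Instance IDs and region are placeholders.

```python
# Sketch: back up a list of EC2 instances by creating AMIs.
# Instance IDs and region below are placeholder assumptions.
from datetime import date

def ami_name(instance_id: str, day: date) -> str:
    """Deterministic AMI name: one image per instance per day."""
    return f"backup-{instance_id}-{day.isoformat()}"

def backup_instances(instance_ids, region="us-east-1"):
    """Create an AMI for each instance; return the new image IDs."""
    import boto3  # lazy import: pip install boto3
    ec2 = boto3.client("ec2", region_name=region)
    images = []
    for iid in instance_ids:
        resp = ec2.create_image(
            InstanceId=iid,
            Name=ami_name(iid, date.today()),
            NoReboot=True,  # snapshot without stopping the instance
        )
        images.append(resp["ImageId"])
    return images
```

Run from cron on an Amazon Linux AMI (as suggested in the question), this gives a daily restore point per instance; old AMIs and snapshots would need a separate cleanup pass to control cost.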

Can Oracle move data easily between AWS and Azure? Can a local Oracle installation work with the cloud easily, e.g. moving archive data back and forth to the cloud?
I have been able to create the 2012 domain using AWS AD. I am able to log in to an EC2 instance I created to add users and groups, but I don't have access to the directory in order to add them. Is this a different user name and password? If so, how do I manage this account for managing the domain? I know I can't log in to the domain controllers directly, but from what I gathered I should be able to manage and add users and computers to the domain with admin access. How do I configure this?
Is there a means of creating a GRE or IPSec tunnel over a Direct Connect connection between
AWS and a corporate network?
If you have two accounts in AWS with one Linux host in each, what options do you have to copy files (sftp) between the two hosts?
What would be the options? What would be the fastest? The least costly?
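One option besides host-to-host sftp is to stage the files in an S3 bucket that both accounts can access, then do a server-side bucket-to-bucket copy. A hedged sketch with boto3 (bucket and key names are placeholders; the second account needs a bucket policy granting the copier read access):

```python
# Sketch: server-side S3 copy between buckets, e.g. across accounts.
# Bucket/key names are placeholder assumptions.

def copy_source(bucket: str, key: str) -> dict:
    """CopySource dict in the shape S3's copy_object API expects."""
    return {"Bucket": bucket, "Key": key}

def cross_bucket_copy(src_bucket: str, src_key: str,
                      dst_bucket: str, dst_key: str) -> None:
    """Copy one object; the data never transits the calling host."""
    import boto3  # lazy import: pip install boto3
    s3 = boto3.client("s3")
    s3.copy_object(CopySource=copy_source(src_bucket, src_key),
                   Bucket=dst_bucket, Key=dst_key)
```

Because the copy happens inside S3, it tends to be both faster and cheaper than pulling the data down to a host and pushing it back up, especially within one region.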

When comparing SQL Server and MariaDB on Azure with MS SQL and MariaDB on AWS, which is the preferred choice?

SQL Workbench is unable to connect to my Amazon Redshift cluster. I am very new to AWS Redshift. I have created the cluster and configured the workbench to connect to it, but it is not connecting to the Redshift cluster. Sending the screenshot.
I was trying to enter a dynamic DNS address entry in my Juniper SRX, but it complained that the name was over the 63-character limit. We had earlier tried to put a CNAME into DNS for this dynamic resolution, but it seems that the Juniper only wants A records. Is there any way to work around the 63-character limit? Is it known whether using a CNAME for this functionality works as well as putting in an A record?

set security zones security-zone untrust address-book address db2-beta dns-name ipv4-only
Re Amazon S3 Modified Date... Yesterday we had to download a file from S3 and realized that the new Modified Date of our file, after we downloaded it, was yesterday's date rather than the original Modified Date. We searched around for conversations about this and learned that it is a long-standing issue with S3 (though some would suggest it is by design, in that S3 is not a "file" system; rather, it is an "object" store).

Ok, but the problem still exists, and we're not interested in devising some VB solution or using a 3rd party program to maintain that attribute.  Is there yet a "standard" solution for this in S3?  Assuming not, can someone recommend another well-supported repository for our backups that WILL maintain the file attributes (i.e. OneDrive, Azure)?

Thank you...

Update: one solution we found was that, when using Cloudberry Restore, Cloudberry retains the Modified Date.
I need help with static routing for an AWS managed VPN connection to either a Greenbow VPN client or another AWS VPC. I am a developer, not a network engineer, but I have set up both hardware and software VPNs in the past, just never AWS managed. I know the easiest way is to just peer the VPCs together, but that is not how we want to set it up.

First off, I am not sure if you can even connect two VPCs using the AWS managed VPNs. From what I have read they cannot initiate connections, only receive them, so managed-to-managed may not be possible. We tried setting up a Greenbow VPN client on a VPC with no managed VPN connection, and we are able to connect. The problem is that we cannot ping any machines. All firewall rules are configured properly (we can ping using the external IPs); basically we have the Windows firewalls turned off and the AWS security groups allowing all traffic between the two VPCs.

Here is where I believe the problem is, and it comes with a question. We set the VPN connections to use static routing and added a route to the managed VPN subnet's routing table specifying the subnet that the Greenbow client is in, with the managed gateway as the target. On the Greenbow side we configured it with the subnet we want to reach on the managed VPN side. So here is an example to make it more clear:

Our side of the VPN (the one with the Managed VPN connection) subnet: mask
Client side of the VPN (the one with the Greenbow client) …

I am having an issue with my Amazon EC2 instance. I want to modify the information panel that appears at the top right of the instance when you access it (as displayed in the image below). Is there a way to add lines of information from sources such as one of the instance tags?

If someone has a solution to this that would be excellent. Thank you for taking the time to read this :).

Could someone advise me on how to set up something on AWS that would be like, or exactly, a streaming media server? I use that term because I have heard others call it that. I want to play media from a website, and I do not want the user to be able to download the content. I do not mean something simple like omitting a download button; I mean that when a user comes to the site with something like IDM, downloading just cannot be done. I would like to accomplish this as simply as possible. I do not want to pay anybody for this; I want to learn about the technology and accomplish it myself. I am not asking how to do it; I am asking what technology I have to learn about and implement on AWS to accomplish this (keywords, links, etc.). Thanks
Bezos' annual letter to shareholders reveals Prime membership has surpassed 100M subscribers:

They've come a long way since the days of selling used books.

13 years post-launch, we have exceeded 100 million paid Prime members globally. In 2017 Amazon shipped more than five billion items with Prime worldwide, and more new members joined Prime than in any previous year – both worldwide and in the U.S. Members in the U.S. now receive unlimited free two-day shipping on over 100 million different items.

Expert Comment

by:Craig Kehler
One thing I love about customers is that they are divinely discontent. Their expectations are never static – they go up. It’s human nature. We didn’t ascend from our hunter-gatherer days by being satisfied.

That was my favorite part. :)

I am not able to create an EC2 instance; instead I get this error:
Status Code: 400; Error Code: AccessDeniedException; Request ID: 138445dc-43d9-11e8-9ee9-69ba7680aa98
There is no data for the CloudWatch disk metrics of the EC2 instance that is causing problems. I am using a C4.xlarge instance. In CloudWatch the following metrics are shown as zero:

 - Disk Reads(Bytes)
 - Disk Read Operations
 - Disk Writes(Bytes)
 - Disk Write Operations

The Minimum, Maximum, Average, and Sum values of the above items are all zero.

Network and CPU monitors return data fine.

Any idea why ?


For users of MySQL and MariaDB: what tools and methods do you use to debug performance down to the query level, e.g. to find which part of the query slows everything down?

Please share.

If we move to RDS on AWS, is it still easy to debug?

I am analyzing a new project that is trying to move an Oracle DB to the cloud: MS SQL + Azure, or MariaDB + cloud. Which one is good? Which one do you prefer, and why?

Which one allows us to debug more easily?

I have a Django application running on an AWS server. It was working fine for the past month; suddenly the default Apache2 page is coming up.

Can anyone please help me with this?


I have a client that was told Amazon WorkSpaces was a good solution for allowing their remote appraisers to access their data remotely. I've been looking into it, but I haven't found a good fit for what they're specifically looking for. They only have 4 workstations in their office, and one is the "server". They're adding a fifth person, but they're not sure if they need to add a fifth PC. They use a third-party appraisal application that stores data locally on the "server". They would like to set the new user up using Amazon WorkSpaces and have that workspace access the data on their "server". Is this possible, or is there a better solution? I was thinking that all the computers would need to be on the same network, so all of them would need their own WorkSpace on Amazon WorkSpaces in order to put all of the workspaces on the same network. How would you recommend adding another user, with the ability for that user and all users to work remotely? Thanks for your advice!
I am attempting to have logical names like this instance "Master01"

    Type: "AWS::EC2::Instance"
    DependsOn: AttachInternetGateway
          Timeout: PT12M
show up in the CloudWatch dashboards. I have a set of working dashboards now with code like what is attached; also see my attachment showing what the CloudWatch dashboard looks like now. I want this dashboard to show the LOGICAL NAMES defined in the template, like Master01, Master02, Infra01, etc.

Thanks in advance; I would be lost without the help I get here!
dashboardtoEE.txt — this is the screenshot showing the instance IDs circled in red; I want to change this value to a logical name from the CloudFormation template (e.g. Master01, Master02).
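One hedged approach: CloudFormation's DescribeStackResources call can map each physical instance ID back to its template logical ID, and those names can then be used as the "label" field on dashboard metrics. The stack name below is a placeholder.

```python
# Sketch: map physical EC2 instance IDs in a CloudFormation stack
# back to their template logical IDs (Master01, Infra01, ...).
# The stack name passed in is a placeholder assumption.

def logical_labels(resources) -> dict:
    """From DescribeStackResources output rows, build
    {physical instance ID: logical ID} for EC2 instances only."""
    return {r["PhysicalResourceId"]: r["LogicalResourceId"]
            for r in resources
            if r["ResourceType"] == "AWS::EC2::Instance"}

def stack_instance_labels(stack_name: str) -> dict:
    import boto3  # lazy import: pip install boto3
    cfn = boto3.client("cloudformation")
    resp = cfn.describe_stack_resources(StackName=stack_name)
    return logical_labels(resp["StackResources"])
```

A script could regenerate the dashboard JSON (via `put_dashboard`) using these names as metric labels whenever the stack is updated, so the widgets read "Master01" instead of a raw instance ID.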
We have our corporate public website hosted on an internal server. We want to use Amazon Route 53, and the Health Checks feature to monitor this website, and automatically redirect visitors to an "under maintenance" page (hosted in an S3 bucket) in the event that the primary site goes down.

I have everything working like this:
- Route53 PRIMARY record: --> [IP Address of internal Web Server]
- Route53 SECONDARY record: --> [ALIAS for S3 bucket] --> S3 Web Redirect for all requests --> (S3 bucket)

This all works great. If the primary goes down, visitors are redirected to

However, the problem is that browsers seem to be caching the redirect and maintenance page, so even once Route 53 has switched back to the PRIMARY record, if the visitor tries to go back, they go straight back to the maintenance page. Even closing and reopening the browser doesn't help. Only manually clearing the browser cache, or switching to a different browser, seems to help.

Any idea on how I could change anything to get the 'failback' working better?
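One thing to check is the Cache-Control metadata on the maintenance page itself: S3 serves whatever Cache-Control header the object was uploaded with, so re-uploading the page with "no-store" (or a very short max-age) should stop browsers from pinning it. A hedged sketch (bucket, key, and file path are placeholders):

```python
# Sketch: upload the S3 maintenance page with caching disabled so
# browsers re-fetch it (and thus pick up the failback) promptly.
# Bucket/key/path values are placeholder assumptions.

def no_cache_args(content_type: str = "text/html") -> dict:
    """ExtraArgs for upload_file that disable browser caching."""
    return {"CacheControl": "no-store, max-age=0",
            "ContentType": content_type}

def upload_maintenance_page(path: str, bucket: str, key: str) -> None:
    import boto3  # lazy import: pip install boto3
    s3 = boto3.client("s3")
    s3.upload_file(path, bucket, key, ExtraArgs=no_cache_args())
```

Note that permanent (301) redirects are also cached aggressively by browsers, so if the S3 website redirect can be made temporary, that may help the failback as well; the DNS side should already follow the Route 53 record TTLs.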

