


Amazon Web Services (AWS) is a collection of remote computing services, also called web services, that together make up a cloud-computing platform operated from 11 geographical regions across the world. The most central and well-known of these services are Amazon Elastic Compute Cloud ("EC2") and Amazon Simple Storage Service ("S3"). Other services include Elastic MapReduce (EMR), Route 53 (a highly available and scalable Domain Name System (DNS) web service), Virtual Private Cloud (VPC), and storage, database, deployment, and application services.



I'm new to building an Amazon Alexa skill and the course I'm taking seems to be a bit outdated. In the course, it says to use event.request.intent to get the intent of the user. However, I'm getting an undefined error. Should I be using something else to get the intent?

exports.handler = function(event, context) {


Produces this error
TypeError: Cannot read property 'intent' of undefined
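For context, in the standard Alexa request envelope only requests of type IntentRequest carry an intent object; the first event a session receives is usually a LaunchRequest, which has no intent field, so reading event.request.intent there yields undefined. A minimal sketch of a defensive lookup (shown in Python, with hand-built sample events):

```python
# Defensive intent lookup for an Alexa request envelope: only
# IntentRequest events carry an "intent" object; LaunchRequest and
# SessionEndedRequest do not.
def get_intent_name(event):
    request = event.get("request") or {}
    if request.get("type") == "IntentRequest":
        return request["intent"]["name"]
    return None  # LaunchRequest etc. have no intent

# Hand-built sample events mimicking the Alexa envelope shape
launch = {"request": {"type": "LaunchRequest"}}
ask = {"request": {"type": "IntentRequest", "intent": {"name": "HelloIntent"}}}

print(get_intent_name(launch))  # None
print(get_intent_name(ask))     # HelloIntent
```

The same check applies in a Node handler: branch on `event.request.type` before touching `event.request.intent`.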

I am trying to deny access to AWS services for users outside an accepted IP address range. I am trying to use a CloudFormation template in YAML to create the policy. I am a bit new to YAML, so any advice/help would be appreciated.
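For reference, this kind of policy usually hinges on a Deny statement with a NotIpAddress condition on aws:SourceIp. A hedged sketch of what the CloudFormation resource might look like (the logical name, policy name, and CIDR range below are placeholders):

```yaml
# Sketch of an IAM managed policy that denies all actions when the
# caller's source IP falls outside the allowed range. Names and the
# CIDR block are placeholders.
Resources:
  DenyOutsideIpRangePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: deny-outside-ip-range
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Deny
            Action: "*"
            Resource: "*"
            Condition:
              NotIpAddress:
                aws:SourceIp: "203.0.113.0/24"
```

One caveat worth checking in the IAM docs before rolling this out: a blanket Deny like this can also block calls AWS services make on your behalf, so the condition is often combined with additional keys to carve those out.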
We currently have a Cisco ASA 5512-X, v9.2.

We are currently using a split-tunnel VPN; however, we want to move away from split tunneling because it causes routing issues for us to AWS.

Is there a good way for me to build out another VPN interface and apply new profiles/rules to test?
I have a WordPress website on AWS EC2 (Ubuntu Linux). I am not strong in this area of coding, but I get by. I just created a Load Balancer and attached it to my EC2 instance. I am trying to force SSL (HTTPS) on anyone who visits my site. I have 90% of it correct. If you visit:

http://www.Example.com (Redirects to https://www.Example.com)

it works perfectly and shows as secure. But if you go to

then it goes to an unsecured site and stays on Example.com.

In my ".htaccess" file, at the very top, I have the code below. So what is the problem? I thank you for the help.

#Force www:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^Example.com [NC]
RewriteRule ^(.*)$ https://www.Example.com/$1 [L,R=301,NC]

# Begin force ssl
<IfModule mod_rewrite.c>
# RewriteEngine On
 RewriteCond %{SERVER_PORT} 443
 RewriteRule ^(.*)$ https://Example.com/$1 [R,L]
</IfModule>

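One thing worth noting: when a load balancer terminates TLS, the instance usually sees plain HTTP on port 80, so a SERVER_PORT 443 test on the instance never matches; the scheme the visitor actually used arrives in the X-Forwarded-Proto request header instead. A hedged sketch of rules keyed off that header (the domain is a placeholder standing in for the real one):

```apache
# Force HTTPS and www behind a load balancer that terminates TLS.
# The balancer forwards the original scheme in X-Forwarded-Proto.
<IfModule mod_rewrite.c>
RewriteEngine On
# Redirect any request that originally arrived over plain HTTP
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^(.*)$ https://www.Example.com/$1 [L,R=301]
# Redirect the bare domain to www
RewriteCond %{HTTP_HOST} ^Example\.com$ [NC]
RewriteRule ^(.*)$ https://www.Example.com/$1 [L,R=301]
</IfModule>
```

This is a sketch, not a drop-in fix; the exact header depends on the load balancer configuration.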

I am trying to verify some AWS prerequisites for Server Migration. Could someone help me with the three prerequisites listed below? Specifically:

a) verify whether the following prerequisite connections are allowed;
b) if they are blocked, how to open the required ports on the FortiGate.

1) DNS: allow the connector to initiate connections to port 53 for name resolution.

2) HTTPS on WinRM port 5986 on your SCVMM or standalone Hyper-V host.

3) Inbound HTTPS on port 443 of the connector: allow the connector to receive secure web connections on port 443 from Hyper-V hosts containing the VMs you intend to migrate.
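For part (a), a quick way to check reachability without touching the FortiGate is a plain TCP connect test against each prerequisite port. A small stdlib sketch (the addresses below are placeholders from the documentation range; substitute the connector and Hyper-V host IPs):

```python
# TCP reachability check for the Server Migration prerequisites:
# DNS (53), WinRM over HTTPS (5986), and inbound HTTPS (443).
import socket

def can_connect(host, port, timeout=1):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder addresses; replace with the real connector / Hyper-V hosts.
checks = [("192.0.2.10", 53), ("192.0.2.20", 5986), ("192.0.2.10", 443)]
for host, port in checks:
    print(host, port, "open" if can_connect(host, port) else "blocked/closed")
```

A failed connect only proves the path is blocked somewhere, not specifically at the FortiGate, but it narrows things down before editing firewall policies.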
AWS is not picking up the laptop camera. The camera definitely works in the Windows 10 environment. My understanding is that webcams do not get picked up by the AWS client; I am not sure whether that is also true for a laptop's built-in camera.
I am trying to reset the disk identifier on one of my Hyper-V virtual disks. AWS support tells me that there is no identifier, and this is preventing me from migrating the virtual disk to the AWS cloud.

I am using the following PS:
 Set-VHD -Path "M:\VMs\Virtual Hard Disks\NC-LBL.vhdx" -ResetDiskIdentifier[1]

I get the following error:

A positional parameter cannot be found that matches parameter name 'ResetDiskIdentifier'

CategoryInfo : InvalidArgument: (:) [Set-VHD], ParameterBindingException
FullyQualifiedErrorID : NamedParameterNotFound, Microsoft.vhd.powershell.setvhdcommand

Could someone assist me with the error?
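For what it's worth, in the Hyper-V module -ResetDiskIdentifier is a switch, so nothing follows it; the trailing [1] in the command above (which looks like a copied footnote marker) gets treated as a positional value, which matches the binding error. The command would normally look like:

```powershell
# -ResetDiskIdentifier is a switch and takes no value;
# add -Force to skip the confirmation prompt.
Set-VHD -Path "M:\VMs\Virtual Hard Disks\NC-LBL.vhdx" -ResetDiskIdentifier
```

This is a sketch based on the cmdlet's documented parameter set, not a tested run against this particular VHDX.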
Dear Experts,

I have a basic idea of Amazon Web Services.

I know that you create an instance: a virtual server/PC in the cloud.

The wizard is there to guide you, and you have to generate and download the key pair in order to access it.

I also know that an S3 bucket is used to store the backup of an Amazon EC2 instance, but where can I get information on how to do the backup to the S3 bucket using the GUI instead of the CLI?
Very long story short: I am trying to manually migrate a Hyper-V server to AWS EC2 without using SMS.

I am copying the VHDX virtual disk as I type. I plan to use the CLI to import the image into a new instance. My question is: can I also somehow use the XML file from Hyper-V Manager to copy the config of the VM? Or should I redefine the VM instance in EC2 with a fresh start? If I do the latter, will it affect any of the Windows drivers? (Maybe a dumb question.)
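On the import itself: the aws ec2 import-image CLI takes a --disk-containers argument pointing at a JSON description of the uploaded disk in S3, and as far as I can tell it only consumes the disk image; the Hyper-V XML config is not read, so instance-level settings (CPU, RAM, NICs) are re-declared on the EC2 side when launching from the resulting AMI. A stdlib sketch of building that JSON (bucket and key names are placeholders):

```python
# Build the disk-containers JSON that
#   aws ec2 import-image --disk-containers file://containers.json
# expects. Bucket/key names below are placeholders.
import json

containers = [
    {
        "Description": "Hyper-V system disk",
        "Format": "VHDX",
        "UserBucket": {
            "S3Bucket": "my-import-bucket",
            "S3Key": "NC-LBL.vhdx",
        },
    }
]

with open("containers.json", "w") as f:
    json.dump(containers, f, indent=2)

print(json.dumps(containers))
```

After the import completes you launch a new instance from the AMI it produces, choosing instance type and networking there.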
15 or 20 times a day we see an error like the one below on our Lambda instance. One thing that jumps out is that the source address is a link-local address instead of a normal private (or public) address. Is it normal for Lambdas to use a link-local address as the source for their connections?

There appear to be no network errors over DX, no bandwidth problems, and thousands of other connections per hour are successful. It's this small subset we're trying to figure out.

read tcp> read: connection reset by peer
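For reference, addresses in 169.254.0.0/16 are the IPv4 link-local block (RFC 3927) and are not meant to be routed off the local link, which may explain why just this subset of connections fails across Direct Connect. A quick stdlib check of the classification (the address below is a hypothetical example of the kind seen in such logs):

```python
# Classify a suspicious source address: 169.254.0.0/16 is the
# IPv4 link-local block (RFC 3927) and is not routable off-link.
import ipaddress

LINK_LOCAL = ipaddress.ip_network("169.254.0.0/16")
src = ipaddress.ip_address("169.254.76.1")  # hypothetical logged source

print(src in LINK_LOCAL)   # True
print(src.is_link_local)   # True
print(ipaddress.ip_address("10.0.0.5").is_link_local)  # False
```

If the failing connections consistently show a link-local source, that points at how the Lambda's network interface is being selected rather than at DX itself.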

Our Active Directory domain is contoso.com, and our corporate URL is the same, https://contoso.com. The URL is publicly hosted on AWS with an elastic FQDN. In order to make the URL accessible on the internal network, the IT team tried to create a CNAME record against the public FQDN, but DNS services don't let us create a CNAME with a blank name, stating: "A new record cannot be created. An alias (CNAME) record cannot be added to this DNS name. The DNS name contains records that are incompatible with the CNAME record."

For now we have created a CNAME for www, and with this we can open the URL as www.contoso.com, but we want to open it without www internally.
An Amazon migration job fails at "Uploading 99%, Step 2 of 4 in progress."

The connector and service are both running. I am able to view the Hyper-V server list on the connector. The network here is good; we have cable and fiber.
When I run the AWS VM Import Prerequisites Checker on the VM to be imported, everything passes except "Only Local Disks Attached" and "Windows Firewall Disabled". The issue is that the firewall is disabled, and there are no mapped drives, network connections, or media connected to the VM. Any ideas as to why it's failing?
ASP.NET Core web client (Razor): log in using an AWS Cognito user pool and the AWS .NET SDK.

How do I use an AWS Cognito user pool to authenticate and authorize an ASP.NET Core web client and an ASP.NET Core Web API?

I already created an AWS Cognito user pool and app client. I followed the below article from AWS


and reached this point:

var cognito = new AmazonCognitoIdentityProviderClient(_region);
//var cognito = new AmazonCognitoIdentityProviderClient(credentials);

var request = new AdminInitiateAuthRequest
{
    UserPoolId = _aWSConfig.PoolID,
    ClientId = _clientId,
    AuthFlow = AuthFlowType.ADMIN_NO_SRP_AUTH
};

request.AuthParameters.Add("USERNAME", "test@test.com");
request.AuthParameters.Add("PASSWORD", "P@ssword12");

var response = await cognito.AdminInitiateAuthAsync(request);

return strToken = response.AuthenticationResult.AccessToken;

1. What are the next steps so that the ASP.NET Core web client is aware that the user is logged in?

For example, the below are set:




2. What other details from the token need to be stored, where, and how in the ASP.NET client so that they can be sent in HttpClient requests …
I am in the process of replicating a server using the AWS migration tool. When the server is done copying, what do I do next to turn off the original in-house server and activate the VM?
I recently configured a local certificate authority server. Since then, our website hosted on AWS with a valid certificate gives this error message:

Your connection is not private
Attackers might be trying to steal your information from <mydomainname>.org (for example, passwords, messages, or credit cards). Learn more

Can you please let me know what I need to do to make sure that accessing the website from the internal network does not use, or bypasses, the local certificate?

Thank you

I am working through the process of automating my backups in AWS. I have a process to take VSS snapshots of my volumes, and I also have a separate script in AWS Lambda which can automatically take an AMI. What I'm now looking to do is combine these into one single function. The Lambda script does take a snapshot; however, I'm not certain that it's a VSS snapshot. I've googled the issue, but all the articles I've come across seem to describe the two processes as separate entities.

This is the Python script I'm using in Lambda to take an AMI:

# Automated AMI Backups
# @author Robert Kozora <bobby@kozora.me>
# This script will search for all instances having a tag with "Backup" or "backup"
# on it. As soon as we have the instances list, we loop through each instance
# and create an AMI of it. Also, it will look for a "Retention" tag key which
# will be used as a retention policy number in days. If there is no tag with
# that name, it will use a 7 days default value for each AMI.
# After creating the AMI it creates a "DeleteOn" tag on the AMI indicating when
# it will be deleted using the Retention value and another Lambda function

import boto3
import collections
import datetime
import sys
import pprint

ec = boto3.client('ec2')
#image = ec.Image('id')

def lambda_handler(event, context):
    reservations = ec.describe_instances(
        Filters=[{'Name': 'tag-key', 'Values': ['backup', 'Backup']}]
    ).get('Reservations', [])
How do I find an SQS header value in an AWS message? We are using a Java POJO to access the message and set the message value. How can we find the message header value?

Can somebody help?

Thank you
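For reference, in a received SQS message the custom "headers" travel in the MessageAttributes map (requested at receive time via MessageAttributeNames), separate from the Body. A sketch of pulling one out, using a stub payload of the shape the ReceiveMessage API returns (the attribute names and values here are hypothetical):

```python
import json

# Stub of a message as returned by SQS ReceiveMessage when
# MessageAttributeNames=['All'] was requested; names/values are made up.
message = {
    "MessageId": "11111111-2222-3333-4444-555555555555",
    "Body": json.dumps({"order": 42}),
    "MessageAttributes": {
        "trace-id": {"DataType": "String", "StringValue": "abc-123"},
    },
}

def header(msg, name, default=None):
    """Read one message-attribute ('header') value from an SQS message."""
    attr = msg.get("MessageAttributes", {}).get(name)
    return attr["StringValue"] if attr else default

print(header(message, "trace-id"))  # abc-123
```

The Java SDK exposes the same map via the received Message object's message attributes, so the POJO mapping only covers the Body; the attributes are read separately.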
We have a client with a new AWS-based Windows 2016 VPC. They have a domain controller and a Remote Desktop server that uses TSplus to serve the remote clients. Any Windows-based thin client or Microsoft RDP client is fine, but the Linux-based HP thin clients display the time one hour behind, and no matter what we do we can't sort it. Any suggestions would be most welcome at this point.

Can anyone advise how to write a Python script template to create 2,000 VMs (with different flavours, images, VPCs, and system volumes) on a private cloud, calling the API of Huawei FusionCloud 6.3 or AWS?

The script template should also be able to shut down, restart, destroy, recreate, and back up the VMs, VPCs, system volumes, etc. when the staging DC configuration needs to be redone.
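As a starting point, such a template usually separates the VM spec (flavour, image, VPC, volumes) from the API client, so the same loop can target either provider. A hedged sketch with a stub client (all class and method names below are made up; real implementations would wrap the Huawei FusionCloud or AWS SDK behind the same interface):

```python
# Template for bulk VM lifecycle operations. StubApiClient is a
# stand-in; a real client would wrap the FusionCloud or AWS SDK
# behind the same create/shutdown/destroy methods.
from dataclasses import dataclass

@dataclass
class VmSpec:
    name: str
    flavour: str
    image: str
    vpc: str
    sys_volume_gb: int

class StubApiClient:
    """Records calls instead of hitting a cloud API."""
    def __init__(self):
        self.calls = []
    def create(self, spec):
        self.calls.append(("create", spec.name))
    def shutdown(self, name):
        self.calls.append(("shutdown", name))
    def destroy(self, name):
        self.calls.append(("destroy", name))

def provision(client, count, prefix="vm"):
    """Create `count` VMs from numbered specs; returns the specs used."""
    specs = [
        VmSpec(f"{prefix}-{i:04d}", "s3.large.2", "ubuntu-20.04",
               "vpc-staging", 40)
        for i in range(count)
    ]
    for spec in specs:
        client.create(spec)
    return specs

client = StubApiClient()
provision(client, 5)      # use 2000 for the real run
print(len(client.calls))  # 5
```

Shutdown/restart/recreate then become similar loops over the same client interface; at 2,000 VMs you would also want batching and retry handling around each call.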


I just started using AWS EC2. I have an Ubuntu box that is running, but I can't connect to it like I would a Windows box with RDP.
I want to connect to this instance using RDP or something similar. I cannot use the Java browser extension that is recommended.
Hi All,

I need a Python boto3 script to scan all my EC2 instances and write the hostnames of instances that have unencrypted EBS volumes into a text file.

Please help!
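As a sketch of the filtering logic: describe_volumes carries the Encrypted flag plus the attached instance IDs, so the core of the script can be a pure function over data of that shape, unit-testable without credentials. In a real run you would feed it boto3.client('ec2').describe_volumes() output and a hostname lookup; the sample data below is hypothetical:

```python
# Find instances with at least one unencrypted EBS volume. The sample
# data mirrors the shape of boto3's describe_volumes response; swap it
# for boto3.client('ec2').describe_volumes()['Volumes'] in a real run.
def hosts_with_unencrypted_volumes(volumes, instance_names):
    """volumes: list of describe_volumes 'Volumes' entries.
    instance_names: instance-id -> hostname mapping."""
    hits = set()
    for vol in volumes:
        if vol.get("Encrypted"):
            continue
        for att in vol.get("Attachments", []):
            name = instance_names.get(att["InstanceId"])
            if name:
                hits.add(name)
    return sorted(hits)

# Hypothetical sample data
volumes = [
    {"VolumeId": "vol-1", "Encrypted": False,
     "Attachments": [{"InstanceId": "i-aaa"}]},
    {"VolumeId": "vol-2", "Encrypted": True,
     "Attachments": [{"InstanceId": "i-bbb"}]},
]
names = {"i-aaa": "web-01", "i-bbb": "db-01"}

# Write the affected hostnames to a text file, as asked
with open("unencrypted_hosts.txt", "w") as f:
    f.write("\n".join(hosts_with_unencrypted_volumes(volumes, names)))
```

The hostname mapping itself would come from describe_instances (e.g. a Name tag or PrivateDnsName per instance).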

We're looking to install the WDS role on a server to host the keys for BitLocker's Network Unlock feature. We will only be using WDS for the unlock, nothing else. Network Unlock is our only option, since pre-boot PINs are not an option. We have about 50 desktops scattered throughout different locations, and the goal is to enable Network Unlock with BitLocker.

We'd like to install the WDS role on multiple hosts to avoid a single point of failure.  If one WDS host goes down, this would prevent our desktops from booting which would be very bad.  What do folks do in this situation to allow for redundancy?

Next question:
We're in AWS. The concern is that WDS relies on DHCP, and since DHCP is hosted in AWS, will this cause problems with WDS? Keep in mind, we're only using WDS for Network Unlock.
We'd like to host a server on-prem for WDS, but this is not an option at this time.

thank you
I have set up CodePipeline to build a Docker image and deploy it on an ECS cluster. I have created my buildspec.yml and everything is working; however, I need to adjust my buildspec.yml to print image definitions that set up a health check, and so far it's not working. Here is the current code from my buildspec.yml:

version: 0.2

      - echo Entered the update phase...
      # Updates Docker Instance
      - apt-get update -y
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region ap-southeast-2 --no-include-email)
      # ECS Repository URI
      - REPOSITORY_URI=###########.dkr.ecr.ap-southeast-2.amazonaws.com/###########
      - IMAGE_TAG=${COMMIT_HASH:=latest}
      - echo Build started on `date`
      - echo Building the Docker image...          
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      # Prints Task Definitions
      - printf '[{"name":"website","imageUri":"%s","healthCheck":{"retries":3,"command":["/bin/bash curl -f http://localhost/ || exit

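One thing worth checking against the CodePipeline docs: for the standard "Amazon ECS" deploy action, imagedefinitions.json only carries name/imageUri pairs, and container healthCheck settings live in the ECS task definition instead, which would explain why embedding them in the printf has no effect. A sketch of the usual shape (the repository URI below is a placeholder; "website" is the container name from the question):

```shell
# imagedefinitions.json for the CodePipeline ECS deploy action carries
# only name/imageUri pairs; health checks belong in the task definition.
# REPOSITORY_URI below is a placeholder.
REPOSITORY_URI=123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/website
IMAGE_TAG=latest
printf '[{"name":"website","imageUri":"%s"}]' "$REPOSITORY_URI:$IMAGE_TAG" > imagedefinitions.json
cat imagedefinitions.json
```

The healthCheck block (retries, curl command, etc.) would then go into the containerDefinitions of the task definition the service uses.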

We have installed a PBX on AWS and connected it to our on-prem router via VPN.

My on-prem router is connected to the SIP provider via a physical connection through another on-prem MUX device (supplied by the SIP provider).

All connections are working fine, EXCEPT my SIP provider has a condition that all connections to their server must originate from a specific IP that they have assigned to us.

Since the AWS machine is connected via VPN, all calls from the PBX pick up the IP of the AWS machine as the source IP.

To resolve this, I need to replace/masquerade/NAT the source IP of all connections, from the AWS machine's IP to the SIP provider's assigned IP. Someone suggested I need NAT loopback/reflection for this; someone also suggested packet forwarding; someone suggested IP masquerading.

Please guide how can this be done?
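If a Linux box (or the on-prem router) sits in the path toward the provider, this is classic source NAT: rewrite the source address of packets leaving toward the SIP server to the assigned IP. A hedged iptables sketch, with all three addresses as placeholders (the equivalent on most commercial routers is a static source-NAT/policy-NAT rule):

```shell
# Rewrite the source of traffic from the AWS PBX (placeholder 10.8.0.5)
# heading to the SIP provider (placeholder 198.51.100.10) so it appears
# to come from the provider-assigned IP (placeholder 203.0.113.7).
iptables -t nat -A POSTROUTING -s 10.8.0.5 -d 198.51.100.10 -j SNAT --to-source 203.0.113.7
```

One caveat: SIP embeds addresses in its own payload as well, so a SIP ALG or the PBX's external-address setting usually has to agree with the NATed IP for calls to work both ways.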

AWS Snowball help.  

We received an AWS Snowball and I'm trying to make it work. AWS support SUCKS with a capital SU, so here I am. :)

The Snowball is connected and pulling an IP.

Problem one is that the LCD screen on the front does not show that it has an IP. The LCD screen shows that the device timed out and that I need to try again. However, in my DHCP leases, I can see the IP and the DNS name, and I can ping it. So I figured the LCD screen is wrong. Maybe...

Second thing is the CLI snowball start command. I downloaded the manifest and the unlock key, and I "think" I have the command written correctly, but it's failing. It's telling me it can't find the manifest in the path I specified. Here's the command I'm using:

snowball start -i -m c:/snowball/JID54a9420f-455e-462b-8d8f-caecc2be40a3_manifest.bin -u 8aaaf-8febd-xxxxx-b580d-xxxx

Notice my path... it points to the location on my C: drive where the manifest file lives. The message I get is that the client cannot communicate with the Snowball. However, as I said, I CAN ping it.

Does anyone have any experience with these things? At this point, it's a brick.
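One detail that stands out in the command above: -i is followed directly by -m, so no Snowball IP address is actually supplied, which would match the "cannot communicate" symptom. The documented shape of the unlock command is (the address below is a placeholder; use the IP from your DHCP lease):

```shell
snowball start -i 192.168.1.50 -m c:/snowball/JID54a9420f-455e-462b-8d8f-caecc2be40a3_manifest.bin -u 8aaaf-8febd-xxxxx-b580d-xxxx
```

This is a sketch from the client's documented usage, not a tested run against this device.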
