AWS

Amazon Web Services (AWS) is a collection of remote computing services, also called web services, that together make up a cloud-computing platform operated from 11 geographical regions across the world. The most central and well-known of these services are Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3". Other services include Elastic MapReduce (EMR), Route 53 (a highly available and scalable Domain Name System (DNS) web service), Virtual Private Cloud (VPC), and a range of storage, database, deployment, and application services.

Hi all, I have three AWS VMs. VM01 is the domain controller; VM02 and VM03 are joined to the domain. I am able to ping the VMs by IP address, by full name ("host.domain.com"), and by AWS private DNS name, but not by the short name ("host"). Can you please help me out with this?
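In case it helps: resolution by bare host name usually depends on the instances' DNS suffix search list, and one common way to hand the AD domain out as the default suffix is the VPC's DHCP options set. A minimal boto3 sketch, assuming placeholder values (domain.com, the DC at 10.0.0.10, and vpc-01234567):

import boto3

ec2 = boto3.client("ec2")

# Create a DHCP options set that hands out the AD domain as the DNS suffix
# and the domain controller as the resolver (all values are placeholders).
opts = ec2.create_dhcp_options(DhcpConfigurations=[
    {"Key": "domain-name", "Values": ["domain.com"]},
    {"Key": "domain-name-servers", "Values": ["10.0.0.10"]},
])

# Associate it with the VPC; instances pick it up when their DHCP lease renews.
ec2.associate_dhcp_options(
    DhcpOptionsId=opts["DhcpOptions"]["DhcpOptionsId"],
    VpcId="vpc-01234567",
)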
0

Using Kotlin, I have been trying to connect an Android app to Amazon Web Services. I have carefully followed all the instructions in the "Getting Started" guide, and have repeated them about six times thinking I was making a little error, but I honestly cannot figure out what I have done wrong. I added the imports, dependencies, and permissions without trouble, but after I modified onCreate it no longer works on the emulator and I don't get the handshake back from AWS. Please note: a text file of all the code is attached!
errorAWS.txt
0
I have a non-public S3 bucket XXX on account X, and a CloudFront distribution on account Y that needs to use that bucket as the origin.

What I did so far:

* Added the canonical ID of account Y to the permissions of bucket XXX - I get 403 errors.
* Added a bucket policy to bucket XXX - I still get 403 errors:
{
    "Version": "2012-10-17",
    "Id": "Policy1234567890",
    "Statement": [
        {
            "Sid": "AllowRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::accountY:root"
            },
            "Action": "s3:Get*",
            "Resource": "arn:aws:s3:::XXX/*"
        },
        {
            "Sid": "AllowList",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::accountY:root"
            },
            "Action": "s3:List*",
            "Resource": "arn:aws:s3:::XXX"
        }
    ]
}

Any verified suggestions on how to do this?
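One pattern that is commonly used for CloudFront-to-private-S3 across accounts is an origin access identity (OAI): create the OAI under account Y, reference it on the distribution's S3 origin, and grant its canonical user read access in the bucket policy on account X. (Note that s3:List* actions apply to the bucket itself, not to XXX/*.) A minimal boto3 sketch, assuming placeholder names (the CallerReference is an arbitrary unique string):

import json
import boto3

# On account Y: create an origin access identity for the distribution.
cf = boto3.client("cloudfront")
oai = cf.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "xxx-distribution-oai",  # any unique string
        "Comment": "Read access to bucket XXX",
    }
)
canonical_user = oai["CloudFrontOriginAccessIdentity"]["S3CanonicalUserId"]

# On account X: allow that canonical user to fetch objects from the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIRead",
        "Effect": "Allow",
        "Principal": {"CanonicalUser": canonical_user},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::XXX/*",
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="XXX", Policy=json.dumps(policy))

The distribution's S3 origin configuration also has to reference the OAI (the OriginAccessIdentity field on the origin) so that CloudFront signs its requests as that identity.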
0
We are using Amazon WorkDocs as storage.  We have a website that processes orders and sends out emails with attached Word documents.  We would like these Word documents to be sent into a specific Amazon WorkDocs location automatically.  

I was thinking I might be able to set up an email filter that forwards these emails to an address provided by WorkDocs, which would then save the attachments from those emails accordingly. Maybe wishful thinking? I'm just not seeing anything about this, but before I give up on it I figured I'd ask here and see if somebody else has done something like this.

Any information on this would be greatly appreciated. Thanks!
0
I have a Node project written in TypeScript where the application takes the user's input arguments, does some calculation, and prints out the results. Take the example of a calculator: the user inputs two numbers and it prints out their sum, e.g. npm run calc 3 5 prints 8.

The application is working, but I want to make this an AWS Lambda function and deploy it to AWS. The examples I see everywhere are "hello" Lambda functions. Can anyone help me with how to make a handler and deploy it to AWS?

How do I convert the hello function below to a calc Lambda function and deploy it to AWS?
export const hello: Handler = (event: APIGatewayEvent, context: Context, cb: Callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless Webpack (Typescript) v1.0! Your function executed successfully!',
      input: event,
    }),
  };

  cb(null, response);
};

This is my addition class

export class Calc {
  public static add() {
    // argv[0] is the node binary and argv[1] the script path, so the
    // user-supplied numbers start at argv[2]
    console.log(this.addition(Number(process.argv[2]), Number(process.argv[3])));
  }

  public static addition(num1: number, num2: number) {
    return num1 + num2;
  }
}

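For illustration, here is one minimal sketch of what a calc handler could look like, reusing the Calc class and the same types as the hello example (the import path and the query-parameter names are assumptions; with the Serverless Framework you would point a function entry in serverless.yml at this handler and run serverless deploy):

import { Handler, APIGatewayEvent, Context, Callback } from 'aws-lambda';
import { Calc } from './calc'; // hypothetical path to the class above

export const calc: Handler = (event: APIGatewayEvent, context: Context, cb: Callback) => {
  // Expect the two numbers as query string parameters, e.g. ?num1=3&num2=5
  const params = event.queryStringParameters || {};
  const num1 = Number(params.num1);
  const num2 = Number(params.num2);

  const response = {
    statusCode: 200,
    body: JSON.stringify({ result: Calc.addition(num1, num2) }),
  };

  cb(null, response);
};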
0
I am trying to write a PowerShell script to query my AWS Glacier vault to verify files have been uploaded before I delete them from the source.

I am using the CloudBerryLab.Explorer.PSSnapIn cmdlets within my PowerShell script. Here is the script, which I basically copied from the CloudBerry forum:


Add-PSSnapin CloudBerryLab.Explorer.PSSnapIn
$conn = Get-CloudGlacierConnection -Key [YOUR ACCESS KEY] -Secret [YOUR SECRET KEY]
$vault = $conn | Select-CloudFolder -Path "[region]/[Vault Name]"
$invJob = $vault | Get-Inventory    # starts an inventory-retrieval job on the vault
$archives = $vault | Get-CloudItem  # lists the vault's archives once an inventory is ready

The connection appears to work, and $invJob.Status shows Succeeded. It also takes about four hours for the inventory to complete.

When I execute "$archives = $vault | Get-CloudItem", I get the error "Get-CloudItem: No ready inventory retrieval jobs for vault: [my region/my vault]".

I have a feeling I'm missing a step somewhere that would involve using $invJob, but I can't find any methods on the object that would let me query the folders in my vault.

Any thoughts or ideas would be much appreciated.
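For comparison, the underlying Glacier flow has two steps: initiate an inventory-retrieval job, wait for it to complete (typically several hours), then download that job's output; listing calls generally only work once a completed inventory job exists. A minimal boto3 sketch of the same flow, assuming a placeholder vault name ("-" means the account owning the credentials):

import json
import boto3

glacier = boto3.client("glacier")

# Step 1: kick off an inventory-retrieval job for the vault.
job = glacier.initiate_job(
    accountId="-",
    vaultName="my-vault",
    jobParameters={"Type": "inventory-retrieval"},
)

# Step 2 (hours later): check the job and, once complete, fetch its output.
status = glacier.describe_job(accountId="-", vaultName="my-vault", jobId=job["jobId"])
if status["Completed"]:
    out = glacier.get_job_output(accountId="-", vaultName="my-vault", jobId=job["jobId"])
    inventory = json.loads(out["body"].read())
    for archive in inventory["ArchiveList"]:
        print(archive["ArchiveId"], archive.get("ArchiveDescription", ""))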
0
I have a 10Gbps Direct Connect circuit from our enterprise data center to AWS. Each VPC has a different
sub-interface and a different BGP peer; see the snippet below. What's happening is that ping tests to some BGP
peers show NO loss, but other BGP peers are seeing 2 to 10% packet loss from the perspective
of our monitoring system in the data center. If I look at the Ethernet port or the port-channel port, there
are no incrementing errors or discards that I can see. If I try to show anything about the subinterface
(say, show interface port-channel3.1002), error information is not available. I'm not sure how I can
look at the interface of the router on the AWS side of the connection.

My question: how can I go about troubleshooting the ping loss to these sub-interfaces/bgp peer addresses?

neighbor 172.18.1.189
inherit peer aws-dx-peering
description peering to preprod

neighbor 172.18.1.195
inherit peer aws-dx-peering
description peering to prod

interface port-channel3.1001
  description DX for preprod
  encapsulation dot1q 1001
  bfd interval 300 min_rx 300 multiplier 3
  no ip redirects
  ip address 172.18.1.130/31
  ip router ospf 1 area 0.0.0.0


interface port-channel3.1002
  description DX for prod
  encapsulation dot1q 1002
  bfd interval 300 min_rx 300 multiplier 3
  no ip redirects
  ip address 172.18.1.132/31
  ip router ospf 1 area 0.0.0.0
0
Hey, this is just a decision-making problem, and I am seeking a well-thought-out answer.
I have to develop an Android XMPP chat application (the application also has a Node.js server API, connecting to MongoDB and AWS S3 for picture uploads).

Which will be better:

1. Running an Openfire server on AWS, connecting it to the Android application, and implementing an XMPP client on the Android device using the Smack library.

2. Implementing an XMPP client on the Node.js server side and scraping the results from this API to the Android device.
0
Hi, I'm building a big-data streaming pipeline that takes streams from a camera through Kinesis to trigger a Lambda function. The Lambda function then uses AWS machine learning to detect objects; the images are stored in S3 and their metadata is stored in DynamoDB. My problem is that the first frame of the video is being stored in S3 and DynamoDB repeatedly (the same image is being stored over and over). Here is the Lambda code (the main function):

def process_image(event, context):

    #Initialize clients
    rekog_client = boto3.client('rekognition')
    s3_client = boto3.client('s3')
    dynamodb = boto3.resource('dynamodb')

    s3_bucket = ...
    s3_key_frames_root = ...

    ddb_table = dynamodb.Table(...)
    rekog_max_labels = ...
    rekog_min_conf = float(...)
    label_watch_list = ...
    label_watch_min_conf = ...

    #Iterate on frames fetched from Kinesis
    for record in event['Records']:
        
        frame_package_b64 = record['kinesis']['data']
        frame_package = cPickle.loads(base64.b64decode(frame_package_b64))
        # Use this record's frame bytes directly; accumulating them in a
        # module-level list re-sends the first frame on every warm invocation.
        img_bytes = frame_package["ImageBytes"]
        frame_count = frame_package["FrameCount"]

        rekog_response = rekog_client.detect_labels(
            Image={
                'Bytes': img_bytes
            },
            MaxLabels=rekog_max_labels,
            MinConfidence=rekog_min_conf
        )

        #Iterate on rekognition labels. Enrich and prep them for storage in DynamoDB
        labels_on_watch_list = []
        

0
AD box in EC2, AWS VPC. One DC and about five members. _MSDCS.

Error when running DCDiag:
Testing server: \EC2AMAZ-XYZ
      Starting test: Connectivity
         The host 56789e91-a5fe-4d05-8c0d-698f5d2c9330._msdcs.domain.local could not be resolved to an
         IP address.  Check the DNS server, DHCP, server name, etc.
         Although the Guid DNS name

         (56789e91-a5fe-4d05-8c0d-698f5d2c9330._msdcs.domain.local) couldn't

         be resolved, the server name (EC2AMAZ-9G30JPN.domain.local) resolved

         to the IP address (10.x.y.169) and was pingable.  Check that the IP

         address is registered correctly with the DNS server.
         ......................... EC2AMAZ-XYZ failed test Connectivity

Doing primary tests
   
   Testing server: IRON\EC2AMAZ-9G30JPN
      Skipping all tests, because server EC2AMAZ-9G30JPN is
      not responding to directory service requests
   
   Running partition tests on : ForestDnsZones
      Starting test: CrossRefValidation
         ......................... ForestDnsZones passed test CrossRefValidation
      Starting test: CheckSDRefDom
         ......................... ForestDnsZones passed test CheckSDRefDom
   
   Running partition tests on : DomainDnsZones
      Starting test: CrossRefValidation
         ......................... DomainDnsZones passed test CrossRefValidation
      Starting test: CheckSDRefDom
         ......................... DomainDnsZones passed test …
0
I want to know the average and maximum amount of bandwidth being consumed by all the EC2 instances in a VPC. I am particularly interested in traffic
initiated by hosts in the VPC and received from sites on the Internet. Is there something standard built into AWS that would give me this visibility?
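CloudWatch publishes per-instance NetworkIn/NetworkOut metrics out of the box, which can be pulled for every instance in the VPC and aggregated. A minimal boto3 sketch, assuming a placeholder VPC ID; note these metrics count all traffic, so separating Internet-initiated flows from internal ones would additionally require VPC Flow Logs:

import boto3
from datetime import datetime, timedelta

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")

# Collect the instance IDs in the VPC (vpc-01234567 is a placeholder).
reservations = ec2.describe_instances(
    Filters=[{"Name": "vpc-id", "Values": ["vpc-01234567"]}]
)["Reservations"]
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

end = datetime.utcnow()
start = end - timedelta(days=1)
for iid in instance_ids:
    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="NetworkIn",  # bytes received; use NetworkOut for sent traffic
        Dimensions=[{"Name": "InstanceId", "Value": iid}],
        StartTime=start,
        EndTime=end,
        Period=3600,
        Statistics=["Average", "Maximum"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(iid, point["Timestamp"], point["Average"], point["Maximum"])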
0
Can you disable TLS 1.0 in an existing AWS Beanstalk Application without having to destroy the existing implementation and create a new one?
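For what it's worth, when the environment fronts a classic ELB this can usually be changed in place, without rebuilding the environment: attach a predefined SSL negotiation policy that excludes TLS 1.0 to the HTTPS listener. A minimal boto3 sketch, assuming a placeholder load-balancer name:

import boto3

elb = boto3.client("elb")  # classic load balancer, the Beanstalk default
lb_name = "awseb-e-x-AWSEBLoa-XXXXXXXX"  # placeholder: your environment's ELB

# Create a policy from a predefined reference policy that drops TLS 1.0...
elb.create_load_balancer_policy(
    LoadBalancerName=lb_name,
    PolicyName="No-TLS10-Policy",
    PolicyTypeName="SSLNegotiationPolicyType",
    PolicyAttributes=[{
        "AttributeName": "Reference-Security-Policy",
        "AttributeValue": "ELBSecurityPolicy-TLS-1-1-2017-01",
    }],
)
# ...and swap it onto the HTTPS listener in place.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName=lb_name,
    LoadBalancerPort=443,
    PolicyNames=["No-TLS10-Policy"],
)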
0
Hello,

I'm struggling to set up subdomains on an AWS EC2 Linux AMI.

The primary ServerName should be student.dsportal.co.uk (using Cloudflare DNS to point the A record to the EC2 Elastic IP).

The subdomain should be admin.dsportal.co.uk (again using Cloudflare DNS to point the A record to the EC2 Elastic IP), with its document root at the /admin folder.

I've tried adding the following to httpd.conf, but it's returning a 500 error on the admin.dsportal.co.uk subdomain:

<VirtualHost *:80>
    ServerName admin.dsportal.co.uk
    ServerAlias www.admin.dsportal.co.uk
    DocumentRoot "/var/www/html/admin"
    <Directory "/var/www/html/admin">
    AllowOverride All
    Require all granted
    </Directory>
</VirtualHost>
<VirtualHost *:80>
    ServerName student.dsportal.co.uk
    ServerAlias www.student.dsportal.co.uk
    DocumentRoot "/var/www/html"
</VirtualHost>

Any help much appreciated.
0
I would like to know if it is possible to add a timestamp column to a table when it is loaded by an AWS Glue job.

First scenario:

Column A | Column B | TimeStamp
A|2|2018-06-03 23:59:00.0

When a crawler updates the table in the data catalog and the job runs again, the job should add the new data to the table with a new timestamp:

Column A | Column B | TimeStamp
A|4|2018-06-04 05:01:31.0
B|8|2018-06-04 06:02:31.0

Here is my code:


import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext, DynamicFrame
from awsglue.job import Job
from pyspark.sql.functions import current_timestamp

## @params: [TempDir, JOB_NAME]
args = getResolvedOptions(sys.argv, ['TempDir','JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "test_stack", table_name = "sample_json", transformation_ctx = "datasource0")
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("array", "array", "array", "string"), ("boolean", "boolean", "boolean", "boolean"), ("null", "string", "null", "string"), ("number", "int", "number", "int"), ("object.a", "string", "`object.a`", "string"), ("object.c", "string", "`object.c`", "string"), ("object.e", "string", "`object.e`", "string"), ("string", 
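The pasted job is cut off before the write step, but the timestamp can be added between the mapping and the sink. A minimal sketch, assuming the applymapping1 frame from the script above (the frame name "stamped" is a placeholder):

from awsglue.dynamicframe import DynamicFrame
from pyspark.sql.functions import current_timestamp

# Convert to a Spark DataFrame, stamp every row with the load time,
# then convert back so the usual Glue sink can write it out.
df = applymapping1.toDF().withColumn("TimeStamp", current_timestamp())
stamped = DynamicFrame.fromDF(df, glueContext, "stamped")
# ...write 'stamped' with glueContext.write_dynamic_frame as in the generated script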

0
I am trying to troubleshoot a VPN connection to Amazon AWS.

I have set up a VPC with a CIDR range of 172.30.1.0/24, and I have gotten the tunnel to come up with the customer gateway, but I can't ping any of the EC2 instances in my VPC.

I set up a security group with 0.0.0.0/0 for inbound traffic, and I set the route for my LAN subnet in the routing table as LAN8.LAN16.0.0/16.

I can't think of what is going wrong.
0
I am interested in using PowerShell against our company SharePoint portal. We have some automation tools like MS Flow and Alteryx Server that we are leveraging, and I am trying to connect to a Workflow History list. However, it is hidden from these other tools, and I am hoping PowerShell might be an option.

I am trying to install it and get it to work in our AWS EC2 environment, and I keep having a lot of trouble adding the add-in for SharePoint.

I tried following these instructions but had zero success:
https://blogs.msdn.microsoft.com/opal/2010/03/07/sharepoint-2010-with-windows-powershell-remoting-step-by-step/

We are SharePoint Online/O365.

Any suggestions?

Regards,

Adam
0
Hi everyone,
I am trying to configure Windows Failover Clustering on AWS EC2 (Windows Server 2012 R2) without shared storage. Basically, I want to configure a DR solution with MS SQL Server AlwaysOn.
My question is: when we configure Windows failover between two nodes with local storage, where does the quorum setting reside? In the case of failover, where does the shared setting reside?

Regards
Abdul Wahab
0
I am very new to asynchronous programming. I am using the Amazon Web Services SDK (AWSSDK) in C#, more precisely the AWSSDK.SimpleEmail package.

I am creating a function to run on the AWS Lambda platform. This requires the use of .NET Core, which apparently means everything needs to be asynchronous.

I am trying to send an email using the AWS Simple Email Service. There is a method in the AmazonSimpleEmailServiceClient class called SendEmailAsync.

I am not sure if I am calling this method correctly, as it doesn't seem to work. I'm getting a TaskCanceled response.

I have attached my function code and here is the output of a log I have captured.

If anyone can help me out and point me in the right direction, that would be appreciated.

Async Error message: System.Threading.Tasks.TaskCanceledException: A task was canceled.
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at System.Net.Http.HttpClient.<FinishSendAsyncUnbuffered>d__59.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at 

0
I connected the DNS from AWS to my web host (DreamHost) with a WordPress extension from AWS, and I have waited quite a while but still do not see the default WordPress blog on my website. I do not know what to do.
0
Hi experts,
Is it possible to have portable Ansible and Python for Linux, please?

At least portable Ansible.
0
Connect to an Oracle database created on Amazon Web Services.

I have a Windows 10 laptop: 16 GB memory, solid-state hard drive.

Using Amazon Web Services, I created an Oracle database on AWS.

I have created the appropriate tnsnames.ora entries, like:

ALPHA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = alpha.cmkgmvrzmjf4.us-east-2.rds.amazonaws.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ALPHA)
    )
  )

LISTENER_ALPHA =
  (ADDRESS = (PROTOCOL = TCP)(HOST = alpha.cmkgmvrzmjf4.us-east-2.rds.amazonaws.com)(PORT = 1521))



I keep getting the error below.  

C:\>sqlplus /nolog

SQL*Plus: Release 12.2.0.1.0 Production on Mon May 28 13:50:05 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

SQL>
SQL> connect adam/AD_am$12@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=alpha.cmkgmvrzmjf4.us-east-2.rds.amazonaws.com) (PORT=1521))(CONNECT_DATA=(SID=ALPHA)))
ERROR:
ORA-12505: TNS:listener does not currently know of SID given in connect
descriptor
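ORA-12505 is the listener saying it does not know the SID in the connect descriptor; an RDS Oracle instance registers ALPHA as a service name, so using SERVICE_NAME=ALPHA (as the tnsnames entry above already does) instead of SID=ALPHA in the ad-hoc connect string may be all that is needed. For a scripted check, a minimal cx_Oracle sketch under the same assumption (the password is a placeholder):

import cx_Oracle

# Build a DSN that uses SERVICE_NAME rather than SID, matching the
# tnsnames.ora entry above.
dsn = cx_Oracle.makedsn(
    "alpha.cmkgmvrzmjf4.us-east-2.rds.amazonaws.com", 1521, service_name="ALPHA"
)
conn = cx_Oracle.connect("adam", "your-password", dsn)
cur = conn.cursor()
cur.execute("select sysdate from dual")
print(cur.fetchone()[0])
conn.close()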
0
I cannot get a scheduled task to run consistently on a Windows 2016 system. The system is an EC2 instance in AWS, and I need it to run a task that essentially calls a .bat file on the local system. The .bat file then runs a .vbs file on a mapped drive, in the SysWOW64\cscript.exe context.

Initially I was getting errors related to the CLSID and APPID not being able to run under the configured account, and I used the DCOM configuration to add the required permissions. After doing that, I have occasionally been able to get the task to run, but for the most part it does not. When I look at the history of the task, it indicates that the task completes successfully. The end result of the task should be that a new folder is created on the mapped drive and two .xml files are deposited there, which in turn are used to update a database for one of our applications; in spite of the history showing that the task completes, the new folders and files are not created.

I can run the batch file manually and it succeeds every time. I have tried configuring the task to open the batch file on the local drive, to open the cscript.exe application with the path to the .vbs file as an argument, giving it a "Start in" value, etc., and nothing works consistently. I'm looking for other ideas to get this thing to run. Any thoughts or help is greatly appreciated!
0
I have an Oracle database in AWS, and I need to create an Azure Function that will allow me to start a PL/SQL function. But I do not know how to create a connection between the Azure Function and the AWS Oracle database.

I am not an admin, so I would need some sort of idiot's guide to follow to help get around the problem.

A very basic example: from the Azure Function I would want to issue the following SQL:

Select sysdate from dual;

If I had a typical SQL client I would just first create a connection with username/password@db, but in this case I don't know how to get that connection made.
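A minimal sketch of the shape this could take, assuming a Python Azure Function with an HTTP trigger and the cx_Oracle driver (the hostname, credentials, and service name are placeholders; the Oracle client libraries have to be available to the function, and the database must be reachable from Azure, e.g. a publicly accessible RDS endpoint or a VPN between the two clouds):

import azure.functions as func
import cx_Oracle  # assumes the Oracle client libraries are deployed with the function

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Placeholder endpoint and credentials for the AWS-hosted Oracle database.
    dsn = cx_Oracle.makedsn(
        "mydb.xxxx.us-east-2.rds.amazonaws.com", 1521, service_name="ORCL"
    )
    conn = cx_Oracle.connect("username", "password", dsn)
    try:
        cur = conn.cursor()
        cur.execute("select sysdate from dual")
        row = cur.fetchone()
    finally:
        conn.close()
    return func.HttpResponse(str(row[0]))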
0
Is it possible to customize the "look and feel", layout, etc. of Amazon AWS's SAML federation landing page?

https://signin.aws.amazon.com/saml

When we sign in to our company "federation page" it redirects us to "https://signin.aws.amazon.com/saml" (as it should), which displays all of
our AWS accounts/AWS roles to sign in to. However, it's ugly and not really organized or categorized in any way (so you always have to scroll up
and down to find the account you want, or do a "Ctrl-F").

I know we can customize our first landing page (company federation landing page), but can we change anything for the redirect/second SAML landing page?
0