AWS

Amazon Web Services (AWS) is a collection of remote computing services, also called web services, that make up a cloud-computing platform operated from 11 geographical regions across the world. The most central and well-known of these services are Amazon Elastic Compute Cloud, also known as "EC2", and Amazon Simple Storage Service, also known as "S3". Other services include Elastic MapReduce (EMR), Route 53 (a highly available and scalable Domain Name System web service), Virtual Private Cloud (VPC), and a range of storage, database, deployment, and application services.


I tested an AWS endpoint with Talend API Tester (a Chrome extension) and got a successful 200 response with data. I set the method to GET, with an id query parameter and an X-API-Key header, as follows:

URL: https://abc-api.us-east-1.amazonaws.com/xyz?id=8483943984
X-API-Key: e8fefjei303jfermnf

After that, I created a basic web page and added a script to make a call to the same API.

<script>
    fetch('https://abc-api.us-east-1.amazonaws.com/xyz?id=8483943984', {
        method: 'GET',
        headers: {
            'X-API-Key': 'e8fefjei303jfermnf'
        }
    })
    .then(res => res.json())
    .then(data => console.log(data));
</script>

But this time I get a 403 status code. When I look at the developer tools in Chrome, I see that one of the keys under the response headers is:

x-amzn-errortype: MissingAuthenticationTokenException

and in the request headers, the request method is OPTIONS. Shouldn't this be set to GET?

From the error type, it looks like the call is not sending the "X-API-Key". Does anyone know what might be wrong with this?
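For context, the OPTIONS request is the browser's CORS preflight: before a cross-origin GET that carries a custom header such as X-API-Key, Chrome first sends OPTIONS, and API Gateway answers 403 MissingAuthenticationTokenException when the resource has no OPTIONS method to receive it. Enabling CORS on the resource is the usual fix; with a Lambda proxy integration, the handler can also answer the preflight itself. A minimal sketch under that assumption (the handler name and the wildcard origin are illustrative, not from the post):

import json

# Headers the browser needs to see on both the preflight and the real response
CORS_HEADERS = {
    "Access-Control-Allow-Origin": "*",  # tighten to the page's origin in production
    "Access-Control-Allow-Methods": "GET,OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type,X-API-Key",
}

def handler(event, context):
    if event.get("httpMethod") == "OPTIONS":
        # CORS preflight: answer with the allow-headers, no body needed
        return {"statusCode": 200, "headers": CORS_HEADERS, "body": ""}

    record_id = (event.get("queryStringParameters") or {}).get("id")
    return {
        "statusCode": 200,
        "headers": {**CORS_HEADERS, "Content-Type": "application/json"},
        "body": json.dumps({"id": record_id}),
    }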
Hi experts,

I"m part of DevOps, and as part of my CICD pipeline (on AWS EC2), I send  notification/status report.

1. Build status (success / failed)
2. Code quality report
3. Deployment status
4. Health-check report for an environment
5. A health-check report from every environment each morning, afternoon, and evening

These go out as an email or a Microsoft Teams notification, as a single line, in table format, etc.

I'm just wondering: is there a more attractive way to present these reports and notifications than the traditional old table format?

I was thinking maybe flash-card-style colourful cards with details, graphs, etc. But still delivered as an email, while avoiding a simple table.

Is there any open-source utility you can suggest?
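One hedged option for the Teams side: an incoming webhook accepts a "MessageCard" payload, which renders as a colour-coded card with facts instead of a plain table. A rough Python sketch (the webhook URL and all report values are placeholders):

import requests

WEBHOOK_URL = "https://outlook.office.com/webhook/your-webhook-id"  # placeholder

card = {
    "@type": "MessageCard",
    "@context": "https://schema.org/extensions",
    "summary": "Pipeline status",
    "themeColor": "2EB886",  # e.g. green for success, "CC0000" for failure
    "title": "Build #123: SUCCESS",
    "sections": [{
        "facts": [
            {"name": "Code quality", "value": "A (0 blockers)"},
            {"name": "Deployment", "value": "prod - done"},
            {"name": "Health check", "value": "12/12 endpoints OK"},
        ],
    }],
}

requests.post(WEBHOOK_URL, json=card, timeout=10)

For the email variant, the same facts can go into a small HTML template with inline CSS "cards", since most mail clients ignore external stylesheets.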


thanks in advance
I have an on-premises Exchange 2013 environment with 2 servers in a DAG. I am trying to add a third server, but on AWS. I have created the server, and the server OS is exactly the same as the on-premises servers. Both on-premises servers can ping the AWS server, both by IP and FQDN. All permissions on the AWS server match the on-premises servers.

However, when working in the ECP I continually get this error regarding the AWS server:
An error occurred while accessing the registry on the server "ServerName". The error that occurred is: "The network path was not found.". Is there a special way that AWS has to be set up so that my on-premises Exchange servers can "locate" my AWS server?

Any help is much appreciated.
I built a Red Hat server in AWS that runs an application with 3 IPs and 3 URLs (1 URL per IP). In ifcfg-eth0 I have the config listed as:

BOOTPROTO=static
DEVICE=eth0
HWADDR=06:a6:8c:d3:2c:f0
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
IPADDR0=172.30.74.62
IPADDR1=172.30.74.237
IPADDR2=172.30.74.37
GATEWAY0=172.30.74.1
NETMASK=255.255.255.0

I had it working for some time, but for whatever reason I can't access the RDS database anymore, and I'm not sure what happened to the server.

I can ping externally (google.com), but when I try to ping the RDS instance, which is on the 172.30.72.x network, I get "no route to host", when it used to work. Thoughts?
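Since the instance can still reach the internet, one hedged place to look on the AWS side is the VPC routing and security groups: the 172.30.72.x subnet should be covered by the VPC's "local" route, and the RDS security group must still allow the instance's three addresses. A boto3 sketch to dump the routes for the instance's subnet (the subnet ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")  # region comes from the environment/config

tables = ec2.describe_route_tables(
    Filters=[{"Name": "association.subnet-id", "Values": ["subnet-0123456789abcdef0"]}]
)
for table in tables["RouteTables"]:
    for route in table["Routes"]:
        # 172.30.72.0/24 should fall under the VPC's "local" route and be "active"
        print(route.get("DestinationCidrBlock"), route.get("GatewayId"), route.get("State"))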
I am working with an AWS SSM document, trying to create an AMI out of the instance. While executing it, I get the following error. How do I add the VPC part in the SSM document?
Screen-Shot-2020-02-11-at-12.50.26-P.png
I am trying to automate containerization of VMs to AWS ECR. As part of this, I am automating Python and AWS CLI installation on multiple Windows servers using an Ansible playbook. Can someone please share sample playbooks to install Python/awscli on Windows servers?
Hi,

I have a lot of experience setting up VPNs between our Palo Alto firewall (and others, like Cisco ASA, SonicWall, etc.) and on-premises Meraki MX firewalls. Most work perfectly the first time.

I have an issue at the moment that has taken me weeks of troubleshooting. Google hasn't yielded any results; Meraki support help up to a point but can't advise on the AWS config, and AWS support help up to a point but can't advise on the Meraki config.

Basically, I've set up both ends the same way we set up connections to on-premises MX devices, but the VPN won't initiate.

I can ping both ways (the public IPs of the firewalls) and have checked that the IPsec policies and shared secret keys match. I've done the AWS routing bits: I've added the remote subnets (on the Palo Alto side) to the route table in AWS and pointed them at the vMX interface. I've gone so far as to make the AWS security group assigned to my vMX "allow all" to eliminate port issues.

Meraki say they aren't seeing any VPN initiation traffic on the internet side of the device and suspect it's an upstream NAT issue, which would suggest something I need to change on the AWS side.

Has anyone managed to get this kind of scenario working? Any help would be greatly appreciated.
Palo_config.png
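Two AWS-side details that commonly matter for a vMX, offered as hedged suggestions rather than a confirmed diagnosis: the instance forwards traffic for remote subnets, so its EC2 source/destination check must be disabled, and its public address is an Elastic IP NAT'ed 1:1 to the private IP, so the Palo Alto peer must target the EIP while the vMX only ever sees its private address (NAT-T required). A boto3 sketch for the first check (the instance ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# The vMX instance - placeholder ID
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    SourceDestCheck={"Value": False},  # let the vMX forward traffic it doesn't own
)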
I have a question on Apache Airflow. We currently have an Airflow setup running in AWS with one core (master) node and 5 worker nodes, all running in Docker containers set up and managed by an ECS service and an EC2 auto-scaling group. Right now all the workers are m5.xlarge.

According to the developers, the reason they all have to be m5.xlarge is that one job has a dataset that would otherwise not fit in the memory of a single instance. But the majority of the jobs are small and don't need a lot of resources, so the 5 instances are basically idle most of the time.

I know little or nothing about Apache Airflow. My questions, specifically about this setup (Airflow in Docker on ECS), are:

1. Does Airflow by default support a "fleet" of different instance sizes, and can it then, based on the job type (or some other identification), send specific jobs to a certain type of worker? (See the sketch after this list.)

2. Can the worker nodes in a default Airflow setup be spot instances? In other words, when a worker dies, will Airflow pick the job up again and re-run it, or does it have no idea of the state of a job?

3. Is Airflow aware of how many workers there are and which jobs are running where? Is there any way from within Airflow to see what jobs are running and how many resources they are using?

4. I see a lot of Airflow core nodes with a very flat CPU utilisation line, which seems strange to me and suggests a process in some kind of loop. What is the best way to …
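On question 1: with the Celery executor, tasks can be routed by queue name, so a mixed fleet is possible - the big instances subscribe to a dedicated queue and everything else stays on the default one. A rough sketch under those assumptions (DAG, task, and queue names are made up; the import path matches Airflow 2.x, while 1.10 uses airflow.operators.bash_operator):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG("mixed_fleet_example", start_date=datetime(2020, 1, 1),
         schedule_interval="@daily", catchup=False) as dag:

    small_job = BashOperator(
        task_id="small_job",
        bash_command="echo cheap work",  # default queue, runs on any worker
    )

    big_job = BashOperator(
        task_id="big_job",
        bash_command="python process_large_dataset.py",
        # only consumed by workers started with: airflow celery worker -q highmem
        queue="highmem",
    )

    small_job >> big_job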
Hello,

We'd like to set up the following:
  • We need to collect data being written by an external industrial device on a customer's PC
  • The data has to be uploaded to S3 to a specific customer-specific folder using a custom written client application that monitors a directory
  • Config changes (config updates) for the client have to be downloaded from a specific customer-dedicated folder
  • The client currently has a client name and an access key / secret key combination linked to an IAM user that has rights to its own folder. In this setup every customer needs a dedicated IAM user and a dedicated IAM policy, which is not ideal.

2 questions:

1. Is it possible to write a generic IAM (or S3) policy that allows access based on a parameter (for example "customername") that is filled out in the client config? That would mean customer1 has access to s3://bucket1/customer1 and customer2 has access to s3://bucket1/customer2, each using their own access key and secret key from the upload client's config. By generic I mean not needing a policy for every IAM user. (See the sketch after question 2.)

2. What would be the best/easiest solution to define these users somewhere other than IAM (which is not meant for this number of users)? Simple AD, Cognito? How would we then map these users to IAM and S3 bucket policies?
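On question 1, IAM policy variables come close: a single managed policy can reference ${aws:username}, confining each customer's IAM user to the matching prefix without per-user policies, assuming the IAM username equals the customer folder name. A sketch (bucket and policy names are assumptions):

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            # ${aws:username} is resolved by IAM at request time, per user
            "Resource": "arn:aws:s3:::bucket1/${aws:username}/*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bucket1",
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
        },
    ],
}

iam = boto3.client("iam")
iam.create_policy(PolicyName="per-customer-s3-prefix",
                  PolicyDocument=json.dumps(policy))

On question 2, Cognito identity pools are the usual escape hatch from one-IAM-user-per-customer: they hand out temporary credentials under a single role whose policy uses ${cognito-identity.amazonaws.com:sub} the same way.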
Hello,

I am getting the attached error when I log in to a Windows Server 2016 domain member server in AWS (please see the first attachment).

I log in using my domain admin credentials.

Under System Properties > Remote, "Allow connections only from computers running Remote Desktop with NLA" is already checked, and my domain admin account is already configured as a selected user (please see the second attachment).

I have no problem logging in as the local administrator. For some reason, my domain admin credentials just do not work.

We also have a domain controller in AWS. Please advise where we should check to troubleshoot.

Thanks.
nla.jpg
system-remote.png
Hello Experts,

I need to make all emails going out of my AWS EC2 instance relay through SES. I have authorised mydomain.com as a verified domain in SES. The problem now is masquerading all FROM addresses to noreply@mydomain.com in sendmail. My sendmail.mc is below:

dnl FEATURE(`genericstable',`hash -o /etc/mail/genericstable.db')dnl
dnl GENERICS_DOMAIN_FILE(`/etc/mail/generics-domains')dnl
EXPOSED_USER(`root')dnl
DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
FEATURE(`accept_unresolvable_domains')dnl
LOCAL_DOMAIN(`localhost.localdomain')dnl
define(`SMART_HOST', `email-smtp.ap-southeast-2.amazonaws.com')dnl
define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
define(`confAUTH_MECHANISMS', `LOGIN PLAIN')dnl
FEATURE(`authinfo', `hash -o /etc/mail/authinfo.db')dnl
MASQUERADE_AS(`mydomain.com')dnl
dnl MASQUERADE_DOMAIN(`otherdomain.com')dnl
FEATURE(masquerade_envelope)dnl
FEATURE(masquerade_entire_domain)dnl
MAILER(smtp)dnl

Hi Experts,

Recently I started seeing "manager engineer" as a job designation, which I would like to target and get closer to (if not reach completely).

I'm hands-on technical and still continually learning in and around my area; e.g., I have a few Oracle certifications, two AWS certifications, and one Azure certification (fundamentals).

My request is (sorry, I'm just writing randomly):

  1. What should I practice to become a successful manager engineer?
  2. What do they do? :)
  3. What would their day-to-day tasks/activities be?
  4. What should the thought process be?
  5. What should the proactive areas be?

Is there training available that touches on these areas? Any YouTube channel or podcast? :)

Please suggest.
Hi:

I am unable to make my AWS EC2 instance connect to my RDS MySQL DB through SSL using PHP.

AWS EC2 Linux 2, Apache 2.4.39, PHP 7.3.10, MySQL 5.7.26

In order for my application that resides in EC2 to have a secure connection in transit, it must utilize SSL/TLS. My understanding is that, given my PHP/MySQL application, I need to run the code below. In order not to affect my DB, I have set up a test DB on the same DB server. The new user is called new-user, with its own password. The bundled PEM file is rds-combined-ca-bundle.pem.

From various sources I put together the following code.

In AWS-test-ssl-script.php ..
34 require_once('AWS-test-config.php');
35 require(MYSQLI);

44 $sel = "CREATE USER IF NOT EXISTS 'new-user'@'%' IDENTIFIED BY 'password' REQUIRE SSL";
45 $sel_qry = mysqli_query($dbc, $sel);
46 mysqli_close($sel_qry);

// Simple test query ..

In AWS-test-config.php ..
define ('MYSQLI', 'AWS-test-connect.php');

In AWS-test-connect.php ..
12 $dbc=mysqli_init();
13 mysqli_ssl_set($dbc, NULL, "/dir/rds-combined-ca-bundle.pem", NULL, NULL, NULL);
14 mysqli_real_connect($dbc,"DB_server","new-user","password");

16 $res = mysqli_query($dbc, 'SHOW STATUS like "Ssl_cipher"');
17 print_r(mysqli_fetch_row($res));
18 mysqli_close($dbc);

Output ..
Warning: mysqli_real_connect(): Unable to set private key file `rds-combined-ca-bundle.pem' in AWS-test-connect.php on line 14
Warning: mysqli_real_connect(): Cannot 

Hi,

I need help configuring a VPN between our AWS instance and our office SonicWall Firewall. Please help.
Hi Experts,
I'm trying to read up and put together technical details on how to design a multi-cloud architecture for a proof of concept.

Basically, what I'm thinking:
1. A simple application (hello world) running in a container.
2. Deploy the app in Azure as the primary site.
3. As a secondary site, deploy to AWS and shut it down as a standby.
4. Global DNS set up and pointing to the Azure LB (see the sketch below).
5. Deliberately fail Azure, so it fails over to AWS and scales out.

I'm just trying to get high-level technical details for the above scenario. The later plan is to automate it through Terraform.

This is just for a POC :)
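For step 4, Route 53 failover routing is one way to do the DNS piece: a PRIMARY record pointing at the Azure LB with a health check attached, and a SECONDARY pointing at the AWS side. A hedged boto3 sketch (the zone ID, names, targets, and health check ID are all placeholders):

import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",  # placeholder hosted zone
    ChangeBatch={"Changes": [
        {"Action": "UPSERT",
         "ResourceRecordSet": {
             "Name": "app.example.com.",
             "Type": "CNAME",
             "SetIdentifier": "azure-primary",
             "Failover": "PRIMARY",
             "TTL": 60,
             "HealthCheckId": "00000000-aaaa-bbbb-cccc-000000000000",  # placeholder
             "ResourceRecords": [{"Value": "myapp.azurewebsites.net"}]}},
        {"Action": "UPSERT",
         "ResourceRecordSet": {
             "Name": "app.example.com.",
             "Type": "CNAME",
             "SetIdentifier": "aws-secondary",
             "Failover": "SECONDARY",
             "TTL": 60,
             "ResourceRecords": [{"Value": "my-alb.eu-west-1.elb.amazonaws.com"}]}},
    ]},
)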

thanks in advance
I've been looking into AWS with the intention of hosting an application, but for any Microsoft .NET application one has to pay for the Windows OS image licence on AWS, compared to a free Linux version. However, if the application is in .NET Core, it can be hosted on any platform.

Will the pricing be significantly different between hosting a .NET Core application on a free version of Linux in AWS (if possible) vs hosting it in Microsoft Azure?
Hi,

I've just built functionality in my Node.js app that allows me to create Jira issues via the Jira API. I'm using a simple login with an e-mail address and token, plus the 'request' library to make requests. It works great when I launch it in my local environment, but I keep getting 401 Unauthorized when I deploy the app to AWS EC2. What could be the reason for this issue? Below is the example request that works when fired locally:

const headers = {
  'Authorization': 'Basic ' + Buffer.from(jiraUser + ':' + jiraPassword).toString('base64'),
  'X-Atlassian-Token': 'no-check',
  'Content-Type': 'application/json',
};

router.get('/jira/priority', AuthGuard.verify, (req, res, next) => {
  request.get({url: jiraUrl + 'priority', headers: headers}, (err, resp) => {
    // logic
  });
});


The only thing that comes to mind is the request origin, which might be different when firing requests from AWS.
Hello,

I have a table named contactDetails that stores a phone number for each contact. I have also enabled streams and created a trigger using a Lambda function to send an email whenever a new record is created in the table.

I do receive an email, but I get it twice. It looks like the same record is modified more than once for different attributes, so it generates multiple stream events.

Lamdbafunction.txt
'use strict';
var AWS = require("aws-sdk");
var sns = new AWS.SNS();

exports.handler = (event, context, callback) => {

   event.Records.forEach((record) => {
     console.log('Stream record: ', JSON.stringify(record, null, 2));
  // console.log(event);
       
        if (record.eventName == 'MODIFY') {
           
           // console.log(event);
         
             let tabledetails = JSON.parse(JSON.stringify(event.Records[0].dynamodb));
            //    console.log(tabledetails.NewImage.address.S);
                let customerPhoneNumber = tabledetails.NewImage.customerPhoneNumber.S;
                   
             
           var params = {
                Subject: 'A new voicemail received from' + customerPhoneNumber,
                Message: 'A new voicemail received from Phone Number ' + customerPhoneNumber,
                TopicArn: 'arn:aws:sns:xxxxxx:xxxxxxx:xxxxxxxsnstopic'
        };
           
            }
                   
          sns.publish(params, function(err, data) …
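A hedged Python sketch of the same trigger that reacts only to brand-new items (eventName == "INSERT") and iterates every record, which sidesteps the duplicate mails fired by later MODIFY events on the same item (the topic ARN stays a placeholder, as in the original):

import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:xxxxxx:xxxxxxx:xxxxxxxsnstopic"  # placeholder, as above

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue  # skip MODIFY/REMOVE stream events
        phone = record["dynamodb"]["NewImage"]["customerPhoneNumber"]["S"]
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="A new voicemail received from " + phone,
            Message="A new voicemail received from phone number " + phone,
        )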
AWS domain forwarding. I'm not as AWS-savvy as I thought. I have forwarding working on the www instance but not on the domain without the www. AWS seems to be adding a double slash, and I cannot figure it out.

I am forwarding a site, https://www.creditwoutborders(DOT)com, to https://getcoverr(DOT)com, and that works fine. But with the exact same setup in AWS I have https://creditwithoutborders(DOT)com (without the www), and it goes to https://getcoverr(DOT)com// - please note the trailing //.
I have no idea what to do to fix it.

AWS > CloudFront > Route 53, with an alias pointing to the CloudFront address
I am not as familiar with AWS as I thought I was. I am having an issue forwarding 2 domains - the same domain, one with www and the other without. I edited the A record set to point to the endpoint, but something is not working: the www domain forwards correctly and the non-www one does not. I have been looking for a solution, but everything I find is about forwarding to AWS, not forwarding away from AWS to a different domain. I want traffic for the domain hosted on AWS to be forwarded to a non-AWS-hosted site. Help?
We have a production site on AWS hosted on Windows Server 2012 R2 / IIS / ASP.NET 4.0.3 that had been functioning for several years without issue.
On 11/20/2019 the site suddenly started failing requests. The errors in the event log are consistent and coincide with the time frame in which the failure began. Below is an example.

Web Event ASP.NET 4.0.30319.0 - Event code: 3012
Event message: An error occurred processing a web or script resource request. The resource identifier failed to decrypt.
Event time: 11/20/2019 6:22:10 PM
Event time (UTC): 11/20/2019 6:22:10 PM
Event ID: 9810a512ee0e4dc99a5ae2f172ea3fe8
Event sequence: 2242
Event occurrence: 72
Event detail code: 0

I have researched this issue and followed the steps outlined in several similar cases. For example:
I have tried updating the machine key configuration to use "static" validation and decryption keys and updated the web.config files for the applications to match the "validationKey" and "decryptionKey" values for the IIS server. This course of action has not resolved the issue for me. This is a single server being accessed.

The current certificate on the server will expire on 12/19/2019, and warnings are also being logged on the server about the pending expiration. It is interesting that the site went down one month before the certificate expires. Coincidence?

I am hoping someone might be able to offer some further suggestions.
Hello All,

We have a trading server (an EC2 instance in London) and 3 data centers that feed information to it. Of the 3 DCs, 1 is hosted on AWS in HK; the other 2 are hosted outside AWS, also in HK.

How can I go about setting up a direct connection between all 4 sites in order to reduce network latency? Happy to offer further clarification.

Thanks!
I am trying to set up an autostart function in AWS Lambda with Python.
I already have an autostop function that works:
ec2.instances.filter(InstanceIds=StartedInstances).stop()

But the matching autostart function does not turn the instances on.
ec2.instances.filter(InstanceIds=StoppedInstances).start()

StartedInstances and StoppedInstances are arrays of instance IDs found by applying a filter for a specific tag.

The result I get when running a Test on the start Lambda is as follows:
START RequestId: 8cd154b2-390e-46d0-9a20-831247c4bc1f Version: $LATEST
[{'StartingInstances': [{'CurrentState': {'Code': 0, 'Name': 'pending'}, 'InstanceId': 'i-eae111caa 'PreviousState': {'Code': 80, 'Name': 'stopped'}}], 'ResponseMetadata': {'RequestId': '8135e59a-ab7d-4329-b7b4-7725ea9c7223', 'HTTPStatusCode': 200, 'HTTPHeaders': {'content-type': 'text/xml;charset=UTF-8', 'content-length': '570', 'date': 'Thu, 07 Nov 2019 22:14:59 GMT', 'server': 'AmazonEC2'}, 'RetryAttempts': 0}}]
END RequestId: 8cd154b2-390e-46d0-9a20-831247c4bc1f
REPORT RequestId: 8cd154b2-390e-46d0-9a20-831247c4bc1f      Duration: 977.91 ms      Billed Duration: 1000 ms      Memory Size: 128 MB      Max Memory Used: 87 MB      Init Duration: 395.54 ms      



Can someone please help me figure out why this is not actually starting the server?
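For what it's worth, the pasted response already shows 'CurrentState': 'pending' with an HTTP 200, so EC2 accepted the start request. A hedged sketch that also waits for "running", so anything that stops the instance again right afterwards surfaces in the Lambda log (the tag filter is a placeholder, and the function timeout needs to be raised well above the default for the waiter):

import boto3

ec2 = boto3.resource("ec2")

def lambda_handler(event, context):
    stopped = [i.id for i in ec2.instances.filter(
        Filters=[{"Name": "tag:AutoStart", "Values": ["true"]},  # placeholder tag
                 {"Name": "instance-state-name", "Values": ["stopped"]}])]
    if not stopped:
        print("nothing to start")
        return
    ec2.instances.filter(InstanceIds=stopped).start()
    # Block until EC2 reports "running"; raises WaiterError if it never does
    boto3.client("ec2").get_waiter("instance_running").wait(InstanceIds=stopped)
    print("running:", stopped)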
Trying to set up HTTPS access through an SSH tunnel: I have a web server I want to protect access to.

So, what I have:

(1) An AWS Ubuntu server with only port 22 open in the firewall.
(2) PuTTY set up with tunnel L444 127.0.0.1:443.
(3) Entering https://sub.domain.com:444 into Chrome.

But I get a timeout error, although https://sub.domain.com:443 does work...
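With a local ("L") tunnel, port 444 only exists on the machine running PuTTY, so the browser has to target https://127.0.0.1:444 (or a hosts-file entry aliasing sub.domain.com to 127.0.0.1 to keep the certificate name happy); https://sub.domain.com:444 resolves to the AWS box itself, where only port 22 is open, which would explain the timeout. For reference, the equivalent tunnel in Python with the sshtunnel package (hostname, username, and key path are placeholders):

from sshtunnel import SSHTunnelForwarder

with SSHTunnelForwarder(
    ("sub.domain.com", 22),                  # the Ubuntu server, SSH only
    ssh_username="ubuntu",
    ssh_pkey="/path/to/key.pem",
    local_bind_address=("127.0.0.1", 444),   # what the browser connects to
    remote_bind_address=("127.0.0.1", 443),  # the web server on the far side
) as tunnel:
    input("Tunnel up - browse https://127.0.0.1:444, press Enter to stop")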
I have a customer with a 2008 (not R2) forest level whose FSMO DC is hosted on AWS-hosted servers. Their sites all have 2008 R2 servers, and somewhere there is a 2010 Exchange server floating around...

Here begin the caveats of forest levels and Exchange organizations...

My end goal is to get the customer onto Exchange 2016 and their forest level up to 2012 R2 with zero disruption. Even better, there are, and probably will continue to be, Windows 7 PCs in this environment.

I had considered building a new environment with a knife-edge cutover, but I'm not sure how to create a trust between 2016 domain controllers and a 2008 forest with a 2008 FSMO DC. My reading leads me to believe that you cannot join a 2016 server to a 2008 forest.

Anyone have recommendations on what order to perform which task and things to avoid or watch out for?

Thanks in advance!