AWS

Amazon Web Services (AWS) is a collection of remote computing services, also called web services, that make up a cloud-computing platform operated from 11 geographical regions across the world. The most central and well-known of these services are Amazon Elastic Compute Cloud ("EC2") and Amazon Simple Storage Service ("S3"). Other services include Elastic MapReduce (EMR), Route 53 (a highly available and scalable Domain Name System (DNS) web service), Virtual Private Cloud (VPC), and storage, database, deployment, and application services.


Our WooCommerce DB just moved to a new Amazon instance. I did a mysqldump of the old instance and then imported it via mysql. The back office comes up and the pages are listed, but when you try to view them this error shows:

https://gyazo.com/6a19c26d8530df1b2d98e82253534dd6

This is a snapshot of the posts table.  As you can see there are over 50,000 entries.

https://gyazo.com/2fb664b3a1b2acaa0f55bce5bc6f0612

Please let me know what to do; I am down completely.
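
For reference, the dump-and-import being described boils down to the following, with hypothetical hostnames and credentials. It can be worth re-running with --single-transaction and an explicit character set, since a charset mismatch or an incomplete dump between the old and new servers is one plausible cause of pages that list in the back office but error when viewed:

# Re-runnable dump/import of the WooCommerce DB between instances.
# Hosts, user, password, and database name are placeholders.
import subprocess

dump = subprocess.run(
    ['mysqldump', '-h', 'old-host', '-u', 'wp_user', '-pSECRET',
     '--single-transaction', '--default-character-set=utf8mb4', 'wordpress_db'],
    check=True, capture_output=True)

subprocess.run(
    ['mysql', '-h', 'new-host', '-u', 'wp_user', '-pSECRET', 'wordpress_db'],
    input=dump.stdout, check=True)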
Hi all, I have 3 AWS VMs. VM01 is the domain controller; VM02 and VM03 are joined to the domain. I am able to ping the VMs by IP, by full name "host.domain.com", and by AWS private DNS name, but not by the short name "host". Can you please help me out with this?
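
Short names only resolve when the resolver appends a search suffix. One possibility, assuming the VPC's DHCP options don't currently hand out the AD domain (the domain, DNS server IP, and VPC id below are placeholders), is to set domain-name so that "host" is searched as "host.domain.com":

import boto3

ec2 = boto3.client('ec2')

# Hand out the AD domain as the DNS suffix and the DC as the resolver.
opts = ec2.create_dhcp_options(DhcpConfigurations=[
    {'Key': 'domain-name', 'Values': ['domain.com']},
    {'Key': 'domain-name-servers', 'Values': ['10.0.0.10']},
])

ec2.associate_dhcp_options(
    DhcpOptionsId=opts['DhcpOptions']['DhcpOptionsId'],
    VpcId='vpc-0123456789abcdef0',
)

Instances pick up the new options after a DHCP lease renewal or a reboot.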
Using Kotlin, I have been trying to connect an Android app with Amazon Web Services. I have carefully followed all the instructions in the "Getting Started" guide. I have done it over and over about 6 times, thinking I was making a little error, but I honestly cannot figure out what I have done wrong. I installed the imports, dependencies, and permissions without trouble. After I modified onCreate, it would no longer work on the emulator and I didn't get the handshake back from AWS. Please note: a text file of all the code is attached!
errorAWS.txt
I am looking to use a My Cloud Pro 4100 series NAS box for backing up Windows Server / SQL data locally and then sending it to the Amazon S3 cloud. The servers I am talking about are 3 Windows servers.

2 of the 3 servers are VMs, and the servers do have SQL databases.

I would like to back up the data locally on the NAS and then send it to Amazon.
Has anyone done this?

What I originally wanted to do was install the backup software on the NAS, have the data back up to it, and then go to the Amazon S3 cloud.
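
If the NAS software can't push to S3 itself, the S3 leg can be scripted from any machine that can reach the share; a minimal boto3 sketch, with a hypothetical UNC path and bucket name:

# Upload local NAS backups to S3 (share path and bucket are placeholders).
from pathlib import Path
import boto3

s3 = boto3.client('s3')
backup_root = Path(r'\\MyCloudPR4100\backups')

for f in backup_root.rglob('*.bak'):
    key = f.relative_to(backup_root).as_posix()
    s3.upload_file(str(f), 'my-backup-bucket', key)
    print('uploaded', key)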
I am getting this error when I run ifup eth0
ifup eth0
ERROR     : [/etc/sysconfig/network-scripts/ifup-eth] Device eth0 does not seem to be present, delaying initialization.

I am using CentOS 7 on an Amazon instance.

This is my config file for eth0
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
IPV6INIT="no"
PERSISTENT_DHCLIENT="1"

When I run service network restart I get this message
 service network restart
Restarting network (via systemctl):  Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.
                                                           [FAILED]

This is what it says in the journal file
[root@ip-172-31-31-237 network-scripts]# journalctl -xe
--
-- Unit network.service has begun starting up.
Jul 08 09:20:56 ip-172-31-31-237.ec2.internal network[2050]: Bringing up loopback interface:  [  OK  ]
Jul 08 09:20:56 ip-172-31-31-237.ec2.internal network[2050]: Bringing up interface ens5:
Jul 08 09:20:56 ip-172-31-31-237.ec2.internal dhclient[2180]: dhclient(761) is already running - exiting.
Jul 08 09:20:56 ip-172-31-31-237.ec2.internal network[2050]: Determining IP information for ens5...dhclient(761) is already running - exiting.
Jul 08 09:20:56 ip-172-31-31-237.ec2.internal network[2050]: This version of ISC DHCP is based on the release available
Jul 08 09:20:56 
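
Note that the journal brings up ens5, not eth0, which matches the "Device eth0 does not seem to be present" error: the instance exposes the newer predictable interface name, so ifcfg-eth0 configures a device that doesn't exist. A quick way to confirm which names the kernel actually has (stdlib only):

# Print the network interface names the kernel exposes; if this shows
# ['ens5', 'lo'], the config belongs in ifcfg-ens5 with DEVICE="ens5".
import os

print(os.listdir('/sys/class/net'))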

Hi Experts,

      I have two containers running on Docker.

root@ip-10-252-14-11:/home/ubuntu/workarea/sourcecode/ntdl# docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                  NAMES
596874f0eedb        dcf3be75c970        "start"             8 days ago          Up 8 days           0.0.0.0:8009->80/tcp   iiif
91c61a7ea455        8a38b977270d        "start"             8 days ago          Up 8 days           0.0.0.0:8008->80/tcp   ntdl

The wagtail (Django) application (ntdl) runs on port 8008, and an IIIF image server runs independently on port 8009. The wagtail (ntdl) application is not communicating with the IIIF image server, so pages come up without the zoom image from the image server. The browser console logs show net::ERR_CONNECTION_REFUSED when accessing the IIIF image server.

nginx is installed with the wagtail ntdl application.
Please help me in resolving this issue.
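
net::ERR_CONNECTION_REFUSED means the TCP connection to port 8009 was never opened. Since IIIF viewers fetch image tiles from the browser, the image server has to be reachable from the client, not just between containers. A small probe, run once on the EC2 host and once inside the ntdl container (docker exec -it ntdl python3 probe.py), can narrow down where it breaks; the host IP below is taken from the shell prompt above and may differ in your setup:

# probe.py: try to open TCP connections to the IIIF port from this vantage point.
import socket

for host in ('localhost', '10.252.14.11'):  # container-local vs. the docker host
    try:
        socket.create_connection((host, 8009), timeout=3).close()
        print(host + ':8009 reachable')
    except OSError as exc:
        print(host + ':8009 refused/unreachable:', exc)

If the host succeeds but the container (or your browser) fails, the wagtail config should point at the docker host or a shared docker network rather than localhost, and the instance's security group needs to allow port 8009 for browser access.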



With many thanks,
Bharath AK
I have a non-public S3 bucket XXX on account X, and a CloudFront distribution on account Y that needs to use that bucket as the origin.

What I did so far:

* Added the canonical id of account Y to the permissions of bucket XXX - I get 403 errors.
* Added a bucket policy to bucket XXX - I still get 403 errors
{
    "Version": "2012-10-17",
    "Id": "Policy1234567890",
    "Statement": [
        {
            "Sid": "AllowRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::accountY:root"
            },
            "Action": "s3:Get*",
            "Resource": "arn:aws:s3:::XXX/*"
        },
        {
            "Sid": "AllowList",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::accountY:root"
            },
            "Action": "s3:List*",
            "Resource": "arn:aws:s3:::XXX/*"
        }
    ]
}

Any verified suggestions on how to do this?
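
One pattern that is known to work for a private bucket behind CloudFront is an Origin Access Identity (OAI): create the OAI on the distribution in account Y, then grant that OAI, rather than account Y's root, read access in account X's bucket policy. A sketch, where E2EXAMPLE0AI is a placeholder for the real OAI id shown in the distribution's origin settings:

import json
import boto3

s3 = boto3.client('s3')  # run with account X credentials

# Grant CloudFront's OAI (created in account Y) read access to the objects.
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'AllowCloudFrontOAI',
        'Effect': 'Allow',
        'Principal': {'AWS': 'arn:aws:iam::cloudfront:user/'
                             'CloudFront Origin Access Identity E2EXAMPLE0AI'},
        'Action': 's3:GetObject',
        'Resource': 'arn:aws:s3:::XXX/*',
    }],
}

s3.put_bucket_policy(Bucket='XXX', Policy=json.dumps(policy))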
Amazon is launching a new business where you can franchise a branded Amazon courier service:
https://logistics.amazon.com/

Startup cost is $10k

They estimate annual profit for a fleet size of 20-40 vans at $75k - $300k a year:
https://logistics.amazon.com/marketing/opportunity

Is this the "Uber"ization of the Post Office?

Expert Comment by Scott Fell, EE MVE:
I saw that the other day too.  I think the vans turn out to be daily rentals from Enterprise. Managing 20 to 40 vans probably means managing 30 to 60 people, 12 hour shifts, 7 days a week.  If you are at the low end of $75K, that is a lot of work for that amount.

Author Comment by Lucas Bishop:
Yeah, package/mail delivery is a real grind. My parents are retired postmasters, so I'm interested to hear their take.

I could see how this would be an interesting fit for someone with experience in managing a delivery/logistics model, to be able to work for themselves. At the same time, you would have no negotiating power with Amazon as far as payment per package/van. Seems like an extremely high risk scenario.

Seems like the only way you could step into this safely, is if you could look at the historic package volume for the geographic region you'd like to service.
We are using Amazon WorkDocs as storage.  We have a website that processes orders and sends out emails with attached Word documents.  We would like these Word documents to be sent into a specific Amazon WorkDocs location automatically.  

I was thinking I may be able to set up an email filter that forwards these emails to an address provided by WorkDocs, which would save the attachments from that email accordingly. Maybe wishful thinking? I'm just not seeing anything about this, but before I give up on it I figured I'd ask here and see if somebody else has done something like this.

Any  information on this would be greatly appreciated.  Thanks!
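
The email-in route may or may not exist for your WorkDocs site, but the upload itself can be automated, for example from a Lambda hanging off an SES inbound rule that has already parsed the attachment out of the message. A hedged sketch of the WorkDocs side, with a hypothetical folder id (the three steps, initiating the version upload, PUTting the bytes to the signed URL, then marking the version ACTIVE, follow the boto3 WorkDocs upload sequence):

import urllib.request
import boto3

workdocs = boto3.client('workdocs')

def upload_to_workdocs(folder_id, name, data):
    # 1) Reserve a document version and get a signed upload URL.
    resp = workdocs.initiate_document_version_upload(
        ParentFolderId=folder_id, Name=name,
        ContentType='application/msword')

    # 2) PUT the raw bytes to the signed URL with the returned headers.
    upload = resp['UploadMetadata']
    req = urllib.request.Request(upload['UploadUrl'], data=data, method='PUT',
                                 headers=upload.get('SignedHeaders', {}))
    urllib.request.urlopen(req)

    # 3) Activate the version so it appears in the WorkDocs folder.
    workdocs.update_document_version(
        DocumentId=resp['Metadata']['Id'],
        VersionId=resp['Metadata']['LatestVersionMetadata']['Id'],
        VersionStatus='ACTIVE')

# upload_to_workdocs('hypothetical-folder-id', 'order.doc', open('order.doc', 'rb').read())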
I have a Node project written in TypeScript where the application takes the user's input arguments, does some calculation, and prints out the results. Take the example of a calculator: the user inputs two numbers and it prints out their sum. For example, npm run calc 3 5 prints out 8.

The application is working, but I want to make this an AWS Lambda function and deploy it in AWS. The examples I see everywhere are hello-world Lambda functions. Can anyone help me with how to make a handler and deploy it to AWS?

How do I convert the below hello function to a calc Lambda function and deploy it to AWS?
export const hello: Handler = (event: APIGatewayEvent, context: Context, cb: Callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless Webpack (Typescript) v1.0! Your function executed successfully!',
      input: event,
    }),
  };

  cb(null, response);
};

This is my addition class

export class Calc {
  public static add() {
    // argv[0] is the node binary and argv[1] the script path,
    // so the user-supplied numbers start at argv[2].
    console.log(Calc.addition(Number(process.argv[2]), Number(process.argv[3])));
  }

  public static addition(num1: number, num2: number) {
    return num1 + num2;
  }
}
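
A minimal sketch of the conversion, staying in the same TypeScript style as the handler above. It assumes the two numbers arrive as query-string parameters when the function sits behind API Gateway (the event shape and the parameter names num1/num2 are assumptions; adjust them to however you invoke the function). Point a new function entry in serverless.yml at this export and deploy the same way as the hello example:

export const calc: Handler = (event: APIGatewayEvent, context: Context, cb: Callback) => {
  // queryStringParameters can be null on APIGatewayEvent, so default to {}.
  const params = event.queryStringParameters || {};
  const num1 = Number(params.num1);
  const num2 = Number(params.num2);

  const response = {
    statusCode: 200,
    body: JSON.stringify({ sum: Calc.addition(num1, num2) }),
  };

  cb(null, response);
};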


I am trying to write a PowerShell script to query my AWS Glacier vault to verify files have been uploaded before I delete them from the source.

I am using the CloudBerryLab.Explorer.PSSnapIn cmdlets within my PowerShell script. Here is the script, which I've basically copied from the CloudBerry forum:


add-pssnapin CloudBerryLab.Explorer.PSSnapIn
$conn = Get-CloudGlacierConnection -Key [YOUR ACCESS KEY] -Secret [YOUR SECRET KEY]
$vault = $conn | Select-CloudFolder -Path "[region]/[Vault Name]"
$invJob = $vault | Get-Inventory
$archives = $vault | get-clouditem

The connection appears to work, and checking $invJob shows its status as succeeded. Also, it does take about 4 hours for the inventory to complete.

When I execute "$archives = $vault | get-clouditem", I get the error "Get-CloudItem: No ready inventory retrieval jobs for vault: [my region/my vault]".

I have a feeling I'm missing a step somewhere that would involve using $invJob, but I can't find any methods on the object that would let me query the folders in my vault.

Any thoughts or ideas would be much appreciated.
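
As a cross-check outside CloudBerry, the same inventory can be pulled with boto3; Glacier inventories are asynchronous, so a job has to be initiated and then polled until complete (typically several hours) before its output can be read. The region and vault name below are placeholders:

import json
import boto3

glacier = boto3.client('glacier', region_name='us-east-1')

# Kick off an inventory-retrieval job ('-' means the credential owner's account).
job = glacier.initiate_job(
    accountId='-', vaultName='my-vault',
    jobParameters={'Type': 'inventory-retrieval'})

# ...hours later, once describe_job reports Completed...
if glacier.describe_job(accountId='-', vaultName='my-vault',
                        jobId=job['jobId'])['Completed']:
    output = glacier.get_job_output(accountId='-', vaultName='my-vault',
                                    jobId=job['jobId'])
    inventory = json.loads(output['body'].read())
    for archive in inventory['ArchiveList']:
        print(archive['ArchiveId'], archive['Size'], archive['ArchiveDescription'])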
I have a 10Gbps Direct Connect circuit from our enterprise data center to AWS. Each VPC has a different sub-interface and a different BGP peer; see the snippet below. What's happening is that ping tests to some BGP peers show NO loss, but other BGP peers are seeing 2 to 10% packet loss from the perspective of our monitoring system in the data center. If I look at the Ethernet port or the port-channel port, there are no incrementing errors or discards that I can see. If I try to show anything about the subinterface (say, show interface port-channel3.1002), error information is not available. I'm not sure how I can look at the interface of the router on the AWS side of the connection.

My question: how can I go about troubleshooting the ping loss to these sub-interface/BGP peer addresses?

neighbor 172.18.1.189
inherit peer aws-dx-peering
description peering to preprod

neighbor 172.18.1.195
inherit peer aws-dx-peering
description peering to prod

interface port-channel3.1001
  description DX for preprod
  encapsulation dot1q 1001
  bfd interval 300 min_rx 300 multiplier 3
  no ip redirects
  ip address 172.18.1.130/31
  ip router ospf 1 area 0.0.0.0


interface port-channel3.1002
  description DX for prod
  encapsulation dot1q 1002
  bfd interval 300 min_rx 300 multiplier 3
  no ip redirects
  ip address 172.18.1.132/31
  ip router ospf 1 area 0.0.0.0
I am working for a utility company. For hosting we use Linode right now.

I wanted to set up some form of redundancy in the event that the website goes down.

So I was thinking about using failover. As I understand it, I would need two (or more) web servers and two DNS servers. Is this correct?

I was going to have one web server with Linode and one with AWS. For DNS I was going to have one with Linode and one with AWS.

Does this sound like a good setup?
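
If the AWS half of the DNS ends up on Route 53, the usual mechanism is a health-checked failover record pair rather than two independent DNS providers; a minimal sketch with hypothetical zone id, domain, and IPs (primary on Linode, secondary on AWS):

import boto3

route53 = boto3.client('route53')

# Health-check the primary web server (IP is a placeholder).
check = route53.create_health_check(
    CallerReference='failover-demo-001',
    HealthCheckConfig={'Type': 'HTTP', 'IPAddress': '203.0.113.10', 'Port': 80,
                       'ResourcePath': '/', 'RequestInterval': 30,
                       'FailureThreshold': 3})

# PRIMARY answers while healthy; SECONDARY takes over when the check fails.
route53.change_resource_record_sets(
    HostedZoneId='Z_EXAMPLE',
    ChangeBatch={'Changes': [
        {'Action': 'UPSERT', 'ResourceRecordSet': {
            'Name': 'www.example.com.', 'Type': 'A', 'TTL': 60,
            'SetIdentifier': 'primary', 'Failover': 'PRIMARY',
            'HealthCheckId': check['HealthCheck']['Id'],
            'ResourceRecords': [{'Value': '203.0.113.10'}]}},
        {'Action': 'UPSERT', 'ResourceRecordSet': {
            'Name': 'www.example.com.', 'Type': 'A', 'TTL': 60,
            'SetIdentifier': 'secondary', 'Failover': 'SECONDARY',
            'ResourceRecords': [{'Value': '198.51.100.20'}]}},
    ]})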
Hey, this is just a decision-making problem. I am seeking a well-thought-out answer.
I have to develop an Android XMPP chat application (the application also has a NodeJS server API, connecting to MongoDB and AWS S3 for picture uploads).

Which will be better:

1. Having an Openfire server on AWS, connecting it to the Android application, and implementing an XMPP client on the Android device using the Smack library.

2. Implementing an XMPP client on the NodeJS server side and serving the results from this API to the Android device.
Hi, I'm building a big-data streaming pipeline that takes streams from a camera through Kinesis to trigger a Lambda function. The Lambda function then uses AWS machine learning to detect objects, and the images are stored in S3 with their metadata in DynamoDB. My problem is that the first frame of the video is being stored in S3 and DynamoDB repeatedly (the same image is being stored). Here is the Lambda code (the main function):

def process_image(event, context):

    #Initialize clients
    rekog_client = boto3.client('rekognition')
    s3_client = boto3.client('s3')
    dynamodb = boto3.resource('dynamodb')

    s3_bucket = ...
    s3_key_frames_root = ...

    ddb_table = dynamodb.Table(...)
    rekog_max_labels = ...
    rekog_min_conf = float(...)
    label_watch_list = ...
    label_watch_min_conf = ...

    #Iterate on frames fetched from Kinesis
    for record in event['Records']:
        
        frame_package_b64 = record['kinesis']['data']
        frame_package = cPickle.loads(base64.b64decode(frame_package_b64))
        # Use this record's frame bytes directly; appending to a list that
        # persists across records keeps re-sending the first frame's bytes.
        img_bytes = frame_package["ImageBytes"]
        frame_count = frame_package["FrameCount"]

        rekog_response = rekog_client.detect_labels(
            Image={
                'Bytes': img_bytes
            },
            MaxLabels=rekog_max_labels,
            MinConfidence=rekog_min_conf
        )

        #Iterate on rekognition labels. Enrich and prep them for storage in DynamoDB
        labels_on_watch_list = []
        

AD box in EC2, in an AWS VPC. 1 DC and about 5 members. _msdcs

Error when running DCDiag.

Testing server: \EC2AMAZ-XYZ
      Starting test: Connectivity
         The host 56789e91-a5fe-4d05-8c0d-698f5d2c9330._msdcs.domain.local could not be resolved to an
         IP address.  Check the DNS server, DHCP, server name, etc.
         Although the Guid DNS name

         (56789e91-a5fe-4d05-8c0d-698f5d2c9330._msdcs.domain.local) couldn't

         be resolved, the server name (EC2AMAZ-9G30JPN.domain.local) resolved

         to the IP address (10.x.y.169) and was pingable.  Check that the IP

         address is registered correctly with the DNS server.
         ......................... EC2AMAZ-XYZ failed test Connectivity

Doing primary tests
   
   Testing server: IRON\EC2AMAZ-9G30JPN
      Skipping all tests, because server EC2AMAZ-9G30JPN is
      not responding to directory service requests
   
   Running partition tests on : ForestDnsZones
      Starting test: CrossRefValidation
         ......................... ForestDnsZones passed test CrossRefValidation
      Starting test: CheckSDRefDom
         ......................... ForestDnsZones passed test CheckSDRefDom
   
   Running partition tests on : DomainDnsZones
      Starting test: CrossRefValidation
         ......................... DomainDnsZones passed test CrossRefValidation
      Starting test: CheckSDRefDom
         ......................... DomainDnsZones passed test …
Echo customer service is an online service provided by Amazon to its customers and Amazon users. Echo customer service has professional executives to solve all kinds of customer queries.
Echo customer service is a helpline for users of Echo products. Echo customer service tells you about special offers and is available 24/7 for its customers.

http://allcustomerservicephonenumber.com/echo-customer-service/
I want to know the average and maximum amount of bandwidth being consumed by all the EC2 instances in a VPC. I am particularly interested in the traffic initiated by hosts in the VPC and received from sites on the Internet. Is there something standard built into AWS that would give me this visibility?
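
The built-in option is CloudWatch's per-instance NetworkIn/NetworkOut metrics (VPC Flow Logs add per-flow detail if you need to see which remote sites the traffic involves). A sketch for one instance; the instance id and region are placeholders, and each NetworkIn datapoint is a byte count per period, so dividing by the period gives bytes per second:

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='NetworkIn',  # 'NetworkOut' for traffic the instance sends
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=300,  # matches the default 5-minute sampling interval
    Statistics=['Average', 'Maximum'])

for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'] / 300, 'B/s')

Looping over the instances from describe_instances and summing would give the VPC-wide totals.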
Can you disable TLS 1.0 in an existing AWS Beanstalk Application without having to destroy the existing implementation and create a new one?
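
If the Beanstalk environment is load-balanced by a classic ELB, the listener's TLS versions come from the ELB's SSL negotiation policy, and that policy can be swapped in place without rebuilding the environment. A hedged sketch; the load balancer name is a placeholder (it's listed in the environment's resources), and the referenced predefined policy is one of AWS's policies that excludes TLS 1.0:

import boto3

elb = boto3.client('elb')  # classic ELB, as used by older Beanstalk stacks

# Define a policy that references a predefined security policy without TLS 1.0.
elb.create_load_balancer_policy(
    LoadBalancerName='my-beanstalk-elb',
    PolicyName='tls12-only',
    PolicyTypeName='SSLNegotiationPolicyType',
    PolicyAttributes=[{'AttributeName': 'Reference-Security-Policy',
                       'AttributeValue': 'ELBSecurityPolicy-TLS-1-2-2017-01'}])

# Attach it to the HTTPS listener in place of the current policy.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName='my-beanstalk-elb',
    LoadBalancerPort=443,
    PolicyNames=['tls12-only'])

To keep the setting across future environment updates, the same policy can also be declared via .ebextensions.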

Hello,

I'm struggling to set up subdomains on an AWS EC2 Linux AMI.

The primary ServerName should be student.dsportal.co.uk (using Cloudflare DNS to point the A record to the EC2 elastic IP)

The subdomain should be admin.dsportal.co.uk (again using Cloudflare DNS to point the A record to the EC2 elastic IP), with its document root in the /admin folder.

I've tried adding the following to httpd.conf, but it's returning a 500 error on the admin.dsportal.co.uk subdomain:

<VirtualHost *:80>
    ServerName admin.dsportal.co.uk
    ServerAlias www.admin.dsportal.co.uk
    DocumentRoot "/var/www/html/admin"
    <Directory "/var/www/html/admin">
    AllowOverride All
    Require all granted
    </Directory>
</VirtualHost>
<VirtualHost *:80>
    ServerName student.dsportal.co.uk
    ServerAlias www.student.dsportal.co.uk
    DocumentRoot "/var/www/html"
</VirtualHost>

Any help much appreciated.
If I wanted to transfer data from my data center to AWS S3 at a rate of 80Gbps - what would be my options?
Hello, we are currently migrating some websites into AWS VPS hosting (or whatever they call it; I'm not in billing), and want to know what other people use as a DNS provider. Does AWS provide DNS services? If not, what DNS providers do you use to host websites?
In particular we are looking for DNS services that can do things like SPF and DKIM, etc. (so not a bare-bones DNS).

I thank you for your help in advance.
I would like to know if it is possible to add a timestamp column to a table when it is loaded by an AWS Glue job.

First Scenario:

Column A | Column B| TimeStamp
A|2|2018-06-03 23:59:00.0

When a Crawler updates the table in the data catalog and the job runs again, the new data will be added to the table with a new timestamp:

Column A | Column B| TimeStamp
A|4|2018-06-04 05:01:31.0
B|8|2018-06-04 06:02:31.0

Here is my code:


import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext, DynamicFrame
from awsglue.job import Job
from pyspark.sql.functions import current_timestamp

## @params: [TempDir, JOB_NAME]
args = getResolvedOptions(sys.argv, ['TempDir','JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "test_stack", table_name = "sample_json", transformation_ctx = "datasource0")
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("array", "array", "array", "string"), ("boolean", "boolean", "boolean", "boolean"), ("null", "string", "null", "string"), ("number", "int", "number", "int"), ("object.a", "string", "`object.a`", "string"), ("object.c", "string", "`object.c`", "string"), ("object.e", "string", "`object.e`", "string"), ("string", 
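
A minimal sketch of the timestamp step, reusing the current_timestamp import already at the top of the job: convert the mapped DynamicFrame to a DataFrame, add the column, and convert back before writing. The names applymapping1 and glueContext come from the code above; "with_ts" is just an arbitrary transformation name:

# Add a load timestamp to every row: DynamicFrame -> DataFrame -> DynamicFrame.
df_with_ts = applymapping1.toDF().withColumn("TimeStamp", current_timestamp())
with_ts = DynamicFrame.fromDF(df_with_ts, glueContext, "with_ts")
# ...then write with_ts to the sink instead of applymapping1.

Each run stamps its rows with that run's own load time, which produces exactly the two-run pattern shown above.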

This is using an AWS Amazon Linux instance. I want to set up ClamAV on this instance. Please see the setup as follows:

     1. sudo yum install clamav clamav-scanner-sysvinit clamav-update; setup completed successfully
     2. sudo freshclam; anti-virus signatures downloaded successfully
     3. chown -R clamscan.clamscan /var/log/clamd.scan
         chown -R clamscan.clamscan /var/run/clamd.scan
         chown -R clamscan.clamscan /usr/lib/clamd.scan

         all applied successfully
     4. However, the attempt to start the daemon clamd.scan failed:

         service clamd.scan start
         result:  Starting clamd.scan         [Failed]
     5. Manual scanning using "clamscan -r" completes successfully.

Please see the attached /etc/clamd.d/scan.conf.
scan.txt
If your data center is in the same facility as an AWS Availability Zone data center, would it be possible to create a Direct Connect just by patching from your gear to theirs? That is, is there a way you could avoid some of the expense of a carrier getting packets to/from AWS by virtue of being physically located in one of their data centers?