AWS

Amazon Web Services (AWS) is a collection of remote computing services, also called web services, that make up a cloud-computing platform operated from 11 geographical regions across the world. The most central and well-known of these services are Amazon Elastic Compute Cloud ("EC2") and Amazon Simple Storage Service ("S3"). Other services include Elastic MapReduce (EMR), Route 53 (a highly available and scalable Domain Name System (DNS) web service), Virtual Private Cloud (VPC), and a range of storage, database, deployment, and application services.

Adding a Windows 10 Hosted VDI on AWS

I thought this would be simple on AWS, but when I try to "Launch an Instance" and look through the various Windows options, I see only Windows Server.

Isn't a Windows 10 Desktop VDI considered an Instance?

What am I missing?

Thanks

We currently have a large amount of data that we want to back up to a safe location. All of our IT infrastructure, including servers and storage (Dell EMC), is located on premises. We have a gateway server that allows our servers to connect to the Internet. The data doesn't need to be constantly accessible; we only need to retrieve it after a catastrophic event on premises. We are looking into AWS Glacier as our solution; however, we have a couple of concerns (see the sketch after this list):

1. Can we use the AWS Glacier service on its own, without any other AWS services, to upload and retrieve our data?
2. Our data currently sits on an Isilon that has no direct access to the outside Internet. Can we set things up to upload/retrieve data from the Isilon to AWS Glacier?
3. The files and folders we want to back up to AWS contain very large graphic files; individual files can run to a couple of hundred GiB. Will speed be an issue?
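
On the first point: Glacier can be driven directly through its own API, with nothing else from AWS beyond IAM credentials. A minimal sketch of upload and retrieval, assuming boto3 and a pre-created vault; the vault name and file path are placeholders:

# Minimal Glacier upload/retrieve sketch (boto3). Vault name and path are
# hypothetical; files in the hundreds of GiB should use multipart uploads.
import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

# Upload one archive. Glacier has no filenames, only archive IDs, so the
# returned archiveId must be recorded somewhere yourself.
with open("/mnt/isilon/export/scene-001.psd", "rb") as f:
    resp = glacier.upload_archive(vaultName="dr-archive", body=f)
print("archiveId:", resp["archiveId"])

# Retrieval is asynchronous: initiate a job, then download its output
# hours later (the delay depends on the retrieval tier).
job = glacier.initiate_job(
    vaultName="dr-archive",
    jobParameters={"Type": "archive-retrieval", "ArchiveId": resp["archiveId"]},
)
print("jobId:", job["jobId"])

On the Isilon point, something with Internet access (such as the gateway server) has to run the transfer; Glacier will not pull from inside your network. And on speed: uploads are bounded by your uplink, and retrievals add hours of job latency, which is usually acceptable for a catastrophic-recovery-only archive.
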
I have a WordPress site running a LAMP stack in AWS (Amazon Linux).

The version of PHP I have is:

PHP 7.0.33 (cli) (built: Jan  9 2019 22:04:26) ( NTS )


I noticed that the latest version of PHP is 7.3, available via the AWS Package Manager as:

sudo yum install -y php73


If I were to install this version of PHP, are there any WordPress configuration changes I'd need to make?

Thanks!

We have a Synology NAS and would like to do a daily cloud backup.

I am looking for a suggestion as to the overall best cloud service for this.

We do our local backups using Hyper Backup and it seems to work OK.

I have tried using Hyper Backup to the AWS cloud; for me, it was not a good solution.

We have about 2 TB of data, with about 400 MB getting modified daily.

This whole external cloud backup is new to me.

I need to install a Let's Encrypt SAN certificate so that multiple domains can use Let's Encrypt. I am currently running AWS Linux (basically Red Hat). Does anyone have a good "go to" set of directions on how to create a Let's Encrypt SAN certificate on AWS Linux?

Thanks
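
Not a full walkthrough, but the key detail for the SAN part: certbot issues a single SAN certificate whenever you pass several -d flags in one run. A small sketch, assuming certbot is already installed and a webroot at /var/www/html; the domain names are placeholders:

# Hypothetical wrapper: a Let's Encrypt SAN cert is just one certbot run
# with multiple -d flags. Domains and webroot path are placeholders.
import subprocess

domains = ["example.com", "www.example.com", "shop.example.com"]
cmd = ["certbot", "certonly", "--webroot", "-w", "/var/www/html"]
for d in domains:
    cmd += ["-d", d]
subprocess.run(cmd, check=True)  # the first -d becomes the cert's primary name

A cron'd certbot renew then renews all of the names together.
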
I have a WordPress site on a LAMP stack running in AWS. I have Let's Encrypt running to automatically update the TLS certificate, and I redirect all traffic to the "www." instance for the purposes of SEO.

The site works as expected in EVERY browser EXCEPT Safari; I've verified Chrome, Firefox, Opera, Vivaldi and Internet Explorer. I made no changes to the server itself, and I'm a little perplexed as to what happened. Below is a screenshot from some testing I was doing using BrowserStack. This seems to be happening on all Safari browsers back to iOS 7.

Any idea what's going on here? Is this fixable from my end?

wordpress-site-safari-anomaly.jpg

I'm having issues with my haproxy servers (running Ubuntu 16.04) rejecting new connections (or timing them out) after a certain threshold. The proxy servers are AWS c5.large EC2 instances with 2 CPUs and 4 GB of RAM. The same configuration is used for both connection types on our site: one server handles websocket connections, which typically have between 2K-4K concurrent connections and a request rate of about 10/s; the other handles normal web traffic with nginx as the backend, with about 400-500 concurrent connections and a request rate of about 100-150/s. Typical CPU usage for both is about 3-5% on the haproxy process, with 2-3% of memory used on the websocket proxy (40-60 MB) and 1-3% on the web proxy (30-40 MB).

Per the attached config, haproxy is pinned across both CPUs, with one process and two threads running. Both types of traffic are typically 95% (or higher) SSL. I've been watching the proxy counters using watch -n 1 'echo "show info" | socat unix:/run/haproxy/admin.sock -' to see if I'm hitting any of my limits, which does not seem to be the case.

The issues start during high-traffic periods, when our websocket concurrent connections get up to about 5K and the web request rate gets up to 400 requests/s. I mention both servers here because I know the config can handle the high concurrent connections and request rate, but I'm missing some other resource limit that's being reached. Under …
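
For watching those counters over time instead of eyeballing watch output, here is a small sketch that polls the same admin socket from Python; the socket path is taken from the question, and the fields pulled out are my guesses at the relevant limits:

# Poll haproxy's stats socket and print limit-related counters from
# "show info". Socket path comes from the question above.
import socket
import time

def show_info(path="/run/haproxy/admin.sock"):
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(path)
    s.sendall(b"show info\n")
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)
    s.close()
    return dict(line.split(": ", 1)
                for line in b"".join(chunks).decode().splitlines()
                if ": " in line)

for _ in range(60):
    info = show_info()
    print({k: info.get(k) for k in ("CurrConns", "Maxconn", "ConnRate", "SslRate", "Ulimit-n")})
    time.sleep(1)

If nothing there is pegged, the ceiling may sit below haproxy: conntrack table size, ephemeral-port exhaustion toward the backends, or SSL-handshake CPU spikes are common suspects at these rates.
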
Hi
I think I am ready to roll on my final proper game coding.

I can get the html/javascript code running from my local OS X XAMPP 7.2 server, of course, with my MacBook's IP address, but it isn't always up, and I need it always on.

But, for an intended final product, I've been looking at the options for a real-world type dev system.

It looks like Amazon Web Services has very reasonable options for eventual monthly billing.

I'll use the free tier.
For now, I need only the ability to put a simple page up that can load a device .io game in html and serve the game code to the player browser / device correctly.
Is that what GameLift is suited for? If I put my working .io game directory in my GoDaddy space, it doesn't work. I apologize for bringing up GoDaddy again; it'll be the last time, I hope. So, can GameLift serve an .htm that pulls up its .io code?
Is GameLift mainly for MMO games / games with player accounts?
I'd say that my first game - an .io game - is most similar to games like word-scapes and Drag-'n-Merge, not yet Fortnite or Slither.io.

Thanks
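
Not an answer on GameLift itself - GameLift is aimed at hosting dedicated multiplayer game servers (fleets, matchmaking), so it is likely overkill for a casual single-player-style .io game - but if the immediate need is just an always-on page that serves the HTML/JS to the browser, S3 static website hosting fits the free tier. A sketch assuming boto3; the bucket name, region and files are placeholders:

# Hypothetical sketch: publish a static HTML/JS game directory to S3's
# website hosting. Bucket name, region, and file list are placeholders.
import boto3

bucket = "my-io-game-bucket"
s3 = boto3.client("s3", region_name="us-west-2")
s3.create_bucket(Bucket=bucket,
                 CreateBucketConfiguration={"LocationConstraint": "us-west-2"})
s3.put_bucket_website(Bucket=bucket,
                      WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}})
for name, mime in [("index.html", "text/html"), ("game.js", "application/javascript")]:
    s3.upload_file(name, bucket, name, ExtraArgs={"ContentType": mime})

You would still need a public-read bucket policy (like the JSON policy shown further down this page) for browsers to fetch the files, and any server-side multiplayer logic would need its own host, e.g. a small EC2 instance.
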
I've got questions about GDPR and CCPA data-deletion requests and backup sets. It's pretty straightforward to remove a person who has asked for data deletion from our production environment. My problem is our backup sets and machine snapshots stored in AWS or Azure. I can't find much information about whether we would be in compliance if we didn't delete that person's data from encrypted/password-protected incremental backup sets. Does anyone have any experience with this?

I am new to AWS RDS. We have SQL Server in Azure (on VMs, not SQL Azure). I think AWS RDS, like SQL Azure, is Platform as a Service (PaaS), not Infrastructure as a Service; I want to know if this is a correct statement.

I have a SQL Server 2016 Always On cluster. What are the benefits if we move to AWS RDS?

What are the pros and cons of AWS RDS versus plain AWS (running SQL Server ourselves on EC2)?

private void submitCallablesWithExecutor()
		throws InterruptedException, ExecutionException, TimeoutException {

	ExecutorService executorService = null;

	try {
		executorService = Executors.newCachedThreadPool();

		// Submit the export (which performs the S3 upload) on a pool thread.
		Future<String> task1Future = executorService.submit(new Callable<String>() {

			public String call() {
				try {
					processExportRequest(xmlPutRequest_, customizedRequest_, response_);
					return "Success";
				} catch (Exception ex) {
					return ex.getMessage();
				}
			}
		});
		// Note: task1Future is never read; task1Future.get() would block
		// until the upload finishes (or rethrow its failure).

	} finally {
		executorService.shutdown();

		try {
			// Waits only 800 ms, then shutdownNow() interrupts the pool
			// thread; an S3 upload still in flight sees that interrupt.
			if (!executorService.awaitTermination(800, TimeUnit.MILLISECONDS)) {
				executorService.shutdownNow();
			}
		} catch (InterruptedException e) {
			executorService.shutdownNow();
		}
	}
}


Within processExportRequest I am calling an upload to S3. I have tried both S3Client and S3AsyncClient. In both cases, I am getting the following error:

Failed to upload to S3: java.lang.IllegalStateException: Interrupted waiting to refresh the value.

I don't see anything in my code that calls Thread.interrupt(), and everything else seems to work fine, just not the S3 upload. Maybe the multithreaded nature of Java Future is not compatible with the AWS SDK? Thanks.

I have a WordPress site on a LAMP stack running in AWS EC2 that got compromised today. The attacker encrypted the small MySQL database, leaving a Bitcoin address in place of the expected tables.

I would like to install some AntiVirus and Malware software as a future deterrent. It wouldn’t have done me a lot of good in this case, but I realized that the folks before me didn’t set this up.

1/ Do you have any recommendations for software that plays nicely with Amazon Linux (basically Red Hat)?

2/ Do you have a favorite set of "go-to" installation and configuration instructions that you could share? I need something fairly simple to set up that automates updating heuristics and protects the system.

Thanks for your help!

I am looking to set up an EC2 server to process files, potentially user-uploaded, but they could be saved in S3 and processed later as a nightly job or something similar. I don't know if having an EC2 server is better or whether this can simply be done by a Lambda.

My org has a lot of EC2 servers and S3 buckets. If I wanted to add some code, what else do I need to set up apart from giving read/write permission on the S3 objects/buckets? Do I need to set up a different user, or any other rules? I don't think a VPC is needed.

And what about security, if I allow users to upload files that eventually get saved to the S3 bucket?
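
If each file can be processed within Lambda's limits (15 minutes, limited memory/disk), a common shape is: users upload to S3 (for example via pre-signed URLs, which keeps them out of your account), and the bucket's event notification invokes a Lambda per object. A minimal handler sketch; the bucket wiring and process() are placeholders:

# Hypothetical Lambda handler for an S3 "ObjectCreated" event
# notification. The processing step is a stand-in.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        result = process(body)
        s3.put_object(Bucket=bucket, Key="processed/" + key, Body=result)

def process(data):
    return data  # placeholder transformation

Permissions then live on the Lambda's execution role rather than on a user, so no extra user is needed; an EC2 worker polling a queue makes more sense once jobs outgrow Lambda's time limit.
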
I want to upload a file to an S3 bucket, but my company wants to use an IAM role as opposed to access keys. This is the AWS documentation's example of uploading to S3 asynchronously:

S3AsyncClient client = S3AsyncClient.create();
		CompletableFuture<PutObjectResponse> future = client.putObject(
				PutObjectRequest.builder().bucket(BUCKET)
						.key(fileName)
						.build(),
				AsyncRequestBody
						.fromFile(fromFile.toPath()));
		future.whenComplete((resp, err) -> {
			try {
				if (resp != null) {
					System.out.println("my response: " + resp);
				} else {
					// Handle error
					err.printStackTrace();
				}
			} finally {
				// Lets the application shut down. Only close the client when
				// you are completely done with it.
				client.close();
				
			}
		});

		future.join();
	}


I don't see anywhere to put in the IAM role info. I tried putting it in ~/.aws/credentials in this form:

[useraccount]
aws_access_key_id=<key>
aws_secret_access_key=<secret>

[somerole]
role_arn=<the ARN of the role you want to assume>
source_profile=useraccount

but so far I haven't gotten it to work. I read somewhere that you need to use STSAssumeRoleSessionCredentialsProvider, but didn't see any good examples. My main question is: do I even need to do anything if I already assigned the IAM role to an ECS instance? Can someone help me? Thanks.
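
If the IAM role is already attached to the ECS task (or EC2 instance), the answer is usually: do nothing - every SDK's default credentials provider chain discovers the role's temporary credentials automatically, with no keys and no ~/.aws/credentials needed. Explicitly assuming a second role is a separate step; here is a sketch of that flow in boto3 terms (it is the same STS AssumeRole call the Java credentials providers wrap), with the ARN and names as placeholders:

# Sketch of the assume-role flow. Role ARN, session name, bucket and
# file are placeholders; the initial STS client uses the instance/task role.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/somerole",
    RoleSessionName="s3-upload",
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("report.xml", "BUCKET", "report.xml")

So the role_arn/source_profile credentials-file approach is only needed when the role is not already attached to the compute you run on.
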
I plan to use Amazon Aurora Serverless (MySQL-compatible), but for local development can I install MySQL locally, or do we have to connect to AWS right from the start?
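
Since Aurora's MySQL compatibility means it speaks the normal MySQL wire protocol, a common approach is to develop against a local MySQL (matching the engine version Aurora advertises) and point the identical code at the Aurora endpoint later. A sketch assuming PyMySQL, with placeholder environment-variable names:

# Same code path for local MySQL and Aurora; only host/credentials differ.
import os
import pymysql

conn = pymysql.connect(
    host=os.environ.get("DB_HOST", "127.0.0.1"),  # Aurora cluster endpoint in prod
    user=os.environ.get("DB_USER", "root"),
    password=os.environ["DB_PASSWORD"],
    database="app",
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())

What won't reproduce locally are serverless-specific behaviors such as pause/resume and capacity scaling, so leave time to test against the real cluster before launch.
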
I am running the following from my Mac laptop:

ssh -f -N -T -R2222:localhost:22 ec2-user@app.my_aws_host.com


and per my understanding, when I then run the following from any other ssh client, I should be connected (via ssh) to my Mac laptop:

ssh ec2-user@app.my_aws_host.com -p 2222


But I am getting a connection refused error. I'd appreciate any help here.

P.S.: port 2222 is open in my security group in AWS.

We are subscribing to Teammate SaaS (hosted in AWS), and the data we will host there is deemed sensitive.

Q1:
Is data at rest encrypted by default (whether it's a default offering by AWS or by Teammate)?

Q2:
Is backup offered by default (by Teammate or by AWS?), or is this an optional item that we must subscribe to/purchase separately?

Q3:
For data sovereignty purposes, can we specify to Teammate (or is it AWS?) that the data must be hosted in an AWS DC in the local country only and not 'synced' overseas?

To get started with AWS, I have created an S3 bucket and created an index.html and a few other pages to test my static web pages. I also created an EC2 instance. I'm writing a web application in .NET Core where I want to read some input files (XML or JSON), do some processing, and write the output files back to the S3 bucket.

How do I read/write files in the S3 bucket?
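
The read-process-write loop is the same shape in every SDK; the .NET AWSSDK.S3 package exposes the same GetObject/PutObject operations used in this boto3 sketch. Bucket, keys, and the transformation are placeholders:

# Read an input object, transform it, and write the result back to S3.
import json
import boto3

s3 = boto3.client("s3")
raw = s3.get_object(Bucket="my-input-bucket", Key="in/data.json")["Body"].read()
data = json.loads(raw)
data["processed"] = True  # stand-in for the real processing
s3.put_object(Bucket="my-input-bucket",
              Key="out/data.json",
              Body=json.dumps(data).encode("utf-8"))

On EC2, attach an IAM role with s3:GetObject/s3:PutObject on the bucket and the SDK picks the credentials up automatically.
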
Hi experts,
I'm trying to perform a proof of concept: clicking an IoT device button invokes a webhook or REST API, e.g. triggering a Jenkins build.

Could you suggest some devices, please? AWS IoT is not available in my region.

I'm looking at the Samsung SmartThings hub. Would this work? It's expensive, though :)

This is going to sound really stupid - so be it - but in the AWS examples I see a lot of this coding. Can someone explain to me, basically, what the code is describing?
I think I know: it's giving substance/value to "Version", "Statement", etc., but when I try this in Java, especially when compiling it in a package, it does not fly. And I found this example in the AWS SDK examples for JavaScript/Java.
Could someone clear the air for me? And yes, I feel pretty stupid.
Thank you.

{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"PublicRead",
      "Effect":"Allow",
      "Principal": "*",
      "Action":["s3:GetObject"],
      "Resource":["arn:aws:s3:::examplebucket/*"]
    }
  ]
}

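
That snippet is not Java at all: it's a JSON IAM policy document (specifically a bucket policy that allows anyone to read every object in examplebucket), which is why pasting it into a .java file won't compile. It is data you hand to AWS, through the console or an SDK call. A sketch of attaching it programmatically, assuming boto3:

# The policy is data, not code: build it as a dict, serialize to JSON,
# and attach it to the bucket.
import json
import boto3

policy = {
    "Version": "2012-10-17",   # policy language version, not a date you change
    "Statement": [{
        "Sid": "PublicRead",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::examplebucket/*"],
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="examplebucket",
                                     Policy=json.dumps(policy))

In Java you would likewise keep the policy as a JSON string (or build it with the SDK's policy builder) and pass it to the equivalent set/putBucketPolicy call.
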
I am new to AWS - I am using localhost and Win 10. I plan to use JavaScript and Java within HTML for some of it. This is my first attempt at accessing and listing the contents of my bucket programmatically (I am using the free tier for learning). I found this code when reading through the documentation, and for the life of me I cannot get it to execute. And I have no clue.
Could someone be so kind as to set me on the right path to learning this? My eventual goal is getting objects, granting permissions, etc.
But I thought a good starting point would be something as simple as listing the bucket contents - HA! The bucket is public and everyone has read rights. I have provided both the code from the AWS SDK example and, underneath it, my HTML coding.
I have verified that my Java is installed correctly, have set my environment path, and am able to compile packages etc.
Below is the code from the AWS SDK example for JavaScript; I inserted my bucket name.
// The SDK must be loaded and a client constructed before this snippet:
// include the aws-sdk script tag, configure a region and credentials
// (e.g. a Cognito identity pool), then create the client.
AWS.config.region = 'us-west-2';
var s3Bucket = new AWS.S3();  // "s3Bucket" was never defined in the original

var params = {
  // Bucket must be the bucket name only, not the endpoint hostname:
  Bucket: 'elasticbeanstalk-us-west-2-768711936919',
  Delimiter: '/',
  Prefix: 'foldername/'
};
s3Bucket.listObjects(params, function(err, data) {
  if (err) {
    return 'There was an error viewing your album: ' + err.message;
  } else {
    console.log(data.Contents, "<<<all content");
    data.Contents.forEach(function(obj, index) {
      console.log(obj.Key, "<<<file path");
    });
  }
});

And below is my HTML coding:

I am new to AWS. I have done a great deal of background reading/research, which always helps when you are trying to resolve a problem.
That being said, I have a problem which I do not understand.
We have an ASP legacy system which will remain in ASP indefinitely.
We have stored PDFs with images in S3 in the AWS cloud.
The images and PDFs are "public".
Part of the URL is dynamically constructed each time, based on the "user" that logs in (tagged onto the AWS info). We have a newer system - CodeIgniter/PHP - which displays the PDFs and images just fine with the dynamically created URL.
But they will not display in our ASP legacy system.
I understand that by the simple use of a URL I should be able to access the PDF etc., particularly since the PDFs and images are "public".
I develop using localhost (PHP/WampServer) but can also develop ASP.
So my question is twofold.
Do we need to install something on our ASP server to enable us to view the images as well?
And what SDK must I install on my localhost? (I am thinking the JavaScript and PHP ones.)
I thought it might be a security/permission issue with the ASP server accessing the cloud, but that doesn't make sense, because on the ASP side it will display the PDF but not the image.
This is becoming urgent, and any help/advice/pointers would really be appreciated. Thank you.

For compliance we need to maintain native SQL backups. Our on-prem systems use Veeam backup, which easily gathers daily SQL backups and manages a retention policy of daily for 2 weeks, end-of-month for 6 months, and end-of-year for 7 years. The company is now building its next technology stack on Amazon Web Services, and S3 bucket version management is woefully simplistic. I have created a Lambda to trigger a native SQL backup once a day, hoping to be able to manage the version retention; it simply overwrites the file in S3. This DB is expected to grow to around 4 TB by the end of the year, so paying to store every version every day for 7 years is out of the question.
Has anyone in this group come across, or written, a Lambda (or other widget) that can be triggered to look through previous S3 versions and prune the excess according to a selected or defined retention policy as described (see the sketch below)?
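
Worth checking first whether lifecycle rules cover part of this: S3 lifecycle policies can expire noncurrent versions after N days, but they cannot express a grandfather-father-son schedule. For the full policy, a hedged sketch of the pruning Lambda's core loop, assuming the daily backup overwrites one key so that its versions are the daily copies; bucket and key are placeholders:

# Hypothetical GFS pruning: keep dailies for 14 days, month-end versions
# for 6 months, year-end versions for 7 years; delete everything else.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "sql-backups", "native/backup.bak"  # placeholders

def keep(ts, now):
    if now - ts <= timedelta(days=14):
        return True  # daily tier
    month_end = ts.month != (ts + timedelta(days=1)).month
    if month_end and now - ts <= timedelta(days=183):
        return True  # month-end tier
    return month_end and ts.month == 12 and now - ts <= timedelta(days=7 * 365)

def handler(event, context):
    now = datetime.now(timezone.utc)
    for page in s3.get_paginator("list_object_versions").paginate(
            Bucket=BUCKET, Prefix=KEY):
        for v in page.get("Versions", []):
            if not v["IsLatest"] and not keep(v["LastModified"], now):
                s3.delete_object(Bucket=BUCKET, Key=v["Key"],
                                 VersionId=v["VersionId"])

At 4 TB per copy, also compare writing each backup to a dated key and transitioning old ones to a Glacier storage class, which can come out cheaper than keeping many versions in standard storage.
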
Hi,

I'm trying to modify the Python script below to produce nice tabular output. (Right now it's not in a readable format.)

Thanks in advance

Script source:
https://github.com/hjacobs/aws-cost-and-usage-report/blob/master/aws-cost-and-usage-report.py

Current output
./aws-cost-and-usage-report.py
TimePeriod	LinkedAccount	Service                                 	Amount	Unit	Estimated
2019-11-08 	 21212121212121	AWS CloudTrail 	 	 	                       0.153943 	 USD 	 False
2019-11-08 	 21212121212121	AWS Config 	 	 	                          9.213 	 USD 	 False
2019-11-08 	 21212121212121	AWS Direct Connect 	 	 	                   0.2797877163 	 USD 	 False
2019-11-08 	 21212121212121	AWS Key Management Service 	 	 	                   1.4141780112 	 USD 	 False
2019-11-08 	 21212121212121	AWS Lambda 	 	 	                   0.0804225759 	 USD 	 False
2019-11-08 	 21212121212121	Amazon DynamoDB 	 	 	                   0.3836161225 	 USD 	 False
2019-11-08 	 21212121212121	Amazon EC2 Container Registry (ECR) 	 	 	                   0.0783308328 	 USD 	 False
2019-11-08 	 21212121212121	Amazon EC2 Container Service 	 	 	                              0 	 USD 	 False
2019-11-08 	 21212121212121	EC2 - Other 	 	 	                   6.8639388761 	 USD 	 False
2019-11-08 	 21212121212121	Amazon Elastic Compute Cloud - Compute 	 	 	                  73.1890902202 	 USD 

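
The misalignment comes from printing tab-separated fields of wildly different widths. One approach, hedged since I'm only going from the sample output above: collect the rows, then print them with fixed-width format specs (or feed them to the tabulate package):

# Print the report with fixed-width columns instead of tabs.
# Widths are guesses based on the sample output above.
rows = [
    ("2019-11-08", "21212121212121", "AWS CloudTrail", "0.153943", "USD", "False"),
    ("2019-11-08", "21212121212121", "Amazon DynamoDB", "0.3836161225", "USD", "False"),
]
header = ("TimePeriod", "LinkedAccount", "Service", "Amount", "Unit", "Estimated")
fmt = "{:<12} {:<15} {:<42} {:>16} {:<5} {:<9}"
print(fmt.format(*header))
for r in rows:
    print(fmt.format(*r))

In the linked script, that means replacing its tab-joined print statements with one shared format string like fmt so every row gets the same column widths.
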
Hi experts

I'm part of a new team managing cloud projects (AWS).

As part of DevOps, I would like to introduce some automation to streamline CI/CD.

My request: could you please help me with some pointers - essential processes, best practices, housekeeping, monitoring automation (I know it's a wide topic)? Even a link to a third party would be very helpful.

Basically, below are the areas where I would like some help.

GITHUB
1. GitHub on-commit deployment to DEV or QA (we already have a basic branching and release strategy).
2. Housekeeping: deleting old branches.
3. Automated git commit report (generate release notes from git commits; see the sketch at the end of this question).

Atlassian JIRA:
1. Automation around JIRA.
2. Integration with GitHub and Confluence.
3. Essential alerts and reports.

Jenkins:
1. On-commit builds, reports, etc.

Please let me know if you want me to create an individual question for each topic.

Thanks
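
On the release-note item under GITHUB, much of it is plain git plumbing; a minimal sketch that turns the commits between two tags into a note (tag names are placeholders, run inside the repo):

# Generate a simple release note from commits between two tags.
import subprocess

def release_note(prev_tag, new_tag):
    log = subprocess.run(
        ["git", "log", "--pretty=format:- %h %s (%an)", f"{prev_tag}..{new_tag}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return f"Release {new_tag}\n\n{log}\n"

print(release_note("v1.4.0", "v1.5.0"))

Wired into Jenkins as a post-build step this covers GITHUB item 3; branch housekeeping is similar plumbing around git branch --merged.
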
