Docker is a computer program used to run software packages called containers in an operating-system-level virtualization process called containerization. It’s developed by Docker, Inc. and was first released in 2013.

I have a Java backend application which is containerised using Docker. The front end for this application is built with npm, and the static files are deployed in nginx, and it works perfectly. The API calls to the backend are proxy-passed to the Docker host using an nginx reverse proxy.

Now I need to set up Docker Swarm for this backend application using manager and worker nodes.

So this is the question I need to ask you experts:

To which host should I configure the proxy_pass in nginx while using Docker Swarm?

Thanks in advance :)
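With Docker Swarm's ingress routing mesh, a port published on a service is reachable on every node in the swarm, so nginx can proxy_pass to any node; listing several nodes in an upstream adds failover. A minimal sketch, with placeholder node IPs and a hypothetical published port 8080:

```nginx
# Hypothetical upstream of swarm nodes; via the routing mesh, any node
# forwards the published port to a healthy task of the service.
upstream swarm_backend {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location /api/ {
        proxy_pass http://swarm_backend;
    }
}
```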
Can anyone provide me with a link to how to uninstall Docker Compose from Windows Server 2016?

There is plenty out there about how to uninstall Docker from Windows Server 2016 and RHEL, but nothing I can find that is specifically about Docker Compose on Windows Server 2016.
The following command is not returning any IP address; it just returns an empty string:

docker inspect --format '{{.NetworkSettings.Networks.nat.IPAddress}}' sql2
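An empty string usually means the container is not attached to a network literally named nat. One way to see which network keys actually exist for that container (container name sql2 taken from the question):

```shell
# Dump all network entries; the JSON keys are the names usable in
# .NetworkSettings.Networks.<name>.IPAddress
docker inspect --format '{{json .NetworkSettings.Networks}}' sql2

# Then query the network that is actually present, e.g. the default bridge:
docker inspect --format '{{.NetworkSettings.Networks.bridge.IPAddress}}' sql2
```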
I managed to run Docker and install WordPress on Ubuntu Linux, but I can't seem to get the hang of editing the files within the container, as I get permission issues.
I think I am looking at it the wrong way.
Could someone get me thinking the right way? Because I love the performance for local development :).
(PHP, WordPress, MySQL on nginx.)

Why can't I run the Docker container? It fails with an error. Your advice please, thanks.


Can anyone advise how to retrieve accurate information to determine the overhead and to size the hardware requirements (CPU, RAM, storage, network) by calculation, for Docker container apps (elastic load balancing, fault tolerance) running on a Kubernetes orchestration layer, in a bare-metal design meant to share the load of 500,000 incoming live video feeds?

I'm just starting to learn Docker, and I think I have some of the basic concepts of creating containers down.  My intention is to have multiple containers on my server, each serving one unique website.

Now, here's my question.  I don't know how to handle the ports if there are multiple containers all set to respond to port 80.  Won't it cause some sort of problem if there are multiple containers, each running their own instance of apache, each reacting to port 80?  Is there some sort of internal IP addressing then that needs to take place to handle that?

I've got a pretty decent idea how the routing/responding through Apache works on a single server - but isn't this conceptually multiple servers all tied together with the same IP?
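Each container gets its own network namespace, so several containers can all listen on port 80 internally; only the host-side port of each mapping has to be unique, and a reverse proxy on the host can then route by Host header. A sketch with hypothetical image names:

```shell
# Internal port 80 is fine for every container; host ports must differ.
docker run -d --name site-a -p 8081:80 my-apache-site-a
docker run -d --name site-b -p 8082:80 my-apache-site-b

# A proxy listening on host port 80 then forwards to 127.0.0.1:8081 or
# 127.0.0.1:8082 based on the requested hostname.
```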
I have set up CodePipeline to build a Docker image and deploy it on an ECS cluster. I have created my buildspec.yml and everything is working; however, I need to adjust my buildspec.yml to print image definitions that set up a health check, but so far it's not working. Here is my current code for my buildspec.yml:

version: 0.2

      - echo Entered the update phase...
      # Updates Docker Instance
      - apt-get update -y
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region ap-southeast-2 --no-include-email)
      # ECS Repository URI
      - IMAGE_TAG=${COMMIT_HASH:=latest}
      - echo Build started on `date`
      - echo Building the Docker image...          
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      # Prints Task Definitions
      - printf '[{"name":"website","imageUri":"%s","healthCheck":{"retries":3,"command":["/bin/bash curl -f http://localhost/ || exit

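For what it's worth, the image-definitions file that CodePipeline's ECS deploy action reads only uses the name and imageUri fields; a container health check is normally declared in the task definition, not in this file. A hypothetical, complete version of the final printf (repository URI and container name are placeholders):

```shell
# Write imagedefinitions.json for the ECS deploy action; only "name" and
# "imageUri" are consumed here. healthCheck belongs in the task
# definition's containerDefinitions instead.
REPOSITORY_URI="123456789012.dkr.ecr.ap-southeast-2.amazonaws.com/website"
IMAGE_TAG="latest"
printf '[{"name":"website","imageUri":"%s"}]' "$REPOSITORY_URI:$IMAGE_TAG" > imagedefinitions.json
cat imagedefinitions.json
```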

How do I access an AWS S3 bucket from inside a Docker container running on AWS?

How do I run aws configure in the container, and how do I configure it in the Dockerfile?

My Dockerfile:

FROM ubuntu

RUN apt-get update && apt-get install -y awscli

cmd docker pull mariadb
cmd  docker pull mysql
CMD ["/"]
 aws configure set aws_access_key_id default_access_key xxxxxx
 aws configure set aws_secret_access_key default_secret_key xxxxxx
aws configure set default.region us-west-2
aws s3 cp s3://mariadbs3bucket/test.txt /test

 mysql -u xxx-pxxx --host xxx -P 3306  --socket=TCP/IP  -e "USE myDB; insert into values(50000);"

Both of the above commands work fine when run on the EC2 command line, where I have set up aws configure.
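As a sketch of one common alternative to running aws configure inside the image: pass the credentials as environment variables at run time, which the AWS CLI picks up automatically (the key values and image name below are placeholders; the bucket path is from the question):

```shell
# Standard AWS SDK/CLI environment variables; nothing is baked into the image.
docker run --rm \
  -e AWS_ACCESS_KEY_ID=xxxxxx \
  -e AWS_SECRET_ACCESS_KEY=xxxxxx \
  -e AWS_DEFAULT_REGION=us-west-2 \
  my-awscli-image \
  aws s3 cp s3://mariadbs3bucket/test.txt /test
```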
How do I connect to an already-running MariaDB container?

I found this command:

$ docker run --name appName --link some-mariadb:mysql -d application-that-uses-mysql

I am confused by the syntax. Can anyone tell me how to use it?
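In that command, --link some-mariadb:mysql connects the new container to an existing container named some-mariadb and makes it reachable under the hostname mysql. Note that --link is a legacy feature; a user-defined network is the usual replacement. A sketch with placeholder names:

```shell
# Modern equivalent of --link: put both containers on one user-defined
# network; containers on it resolve each other by container name.
docker network create appnet
docker network connect appnet some-mariadb      # the already-running DB
docker run -d --name appName --network appnet \
  -e DB_HOST=some-mariadb application-that-uses-mysql
```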
Hi Experts,
How do I install AppArmor in an Alpine Linux Docker image?
Please share the steps to do it.

Which packages do I need so that a Docker shell script can read a file from an AWS S3 bucket, and how do I write a Docker script to update MariaDB?

1. How to connect Docker to MariaDB
2. How to do inserts and updates in MariaDB
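A sketch covering both points, assuming the awscli and mariadb-client packages are installed in the image; the bucket, host, credentials, and table names are all placeholders:

```shell
#!/bin/sh
# Sketch: read a file from S3, then insert/update rows in MariaDB
# from inside a container.
aws s3 cp s3://mybucket/data.txt /tmp/data.txt

mysql -h db.example.com -P 3306 -u appuser -psecret -e \
  "USE myDB; INSERT INTO items (qty) VALUES (50000); \
   UPDATE items SET qty = 60000 WHERE id = 1;"
```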

I have a task in AWS that is started by ECS. I have developed a CloudFormation script that creates the ECS cluster, service, task definitions and containers.

The EC2 instances (2 to begin with) are initiated and are healthy. ECS then creates the tasks on each EC2 instance. However, after about a minute, the tasks are stopped by ECS and it tries to recreate them. I presume this is something to do with the scheduler on AWS not getting a healthy check back.

It is a Node app running on port 3000, which is mapped to the container. If I log in to the EC2 instance and do a simple curl (host port), then I get "no reply from server". OK, so it is something wrong with the image or container.

However, if I launch my own container of the same image publishing the ports ("docker run -d -p 32810:3000 <image> yarn start") and then do curl then I get a response from the server.

I cannot figure out how to debug this, as there are no logs with any errors. Anyway (I created the one on port 32810), the image is the same in both. I can only figure it is something to do with what the ECS agent does when it boots a task.

48ccad832ad2        <image>   "yarn start"        9 seconds ago       Up 8 seconds>3000/tcp   ecs-test-InterfaceCogTaskDefinition-1UE2FONT2RQU3-1-InterfaceCog-f092fec2d9d1e8a90300
36016de55b81        <image>   "yarn start"        23 minutes ago      Up 23 minutes>3000/tcp   practical_banach
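A few places worth checking when ECS stops tasks like this (the container ID is taken from the listing above; the agent-log path is the standard location on the ECS-optimized AMI):

```shell
# App output from the ECS-launched task (ID from the listing above):
docker logs 48ccad832ad2

# Compare the actual port bindings of the two containers:
docker inspect --format '{{json .HostConfig.PortBindings}}' 48ccad832ad2

# The ECS agent's own log usually says why a task was stopped:
tail -n 100 /var/log/ecs/ecs-agent.log
```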

Can anybody give me an exact docker-compose.yml file which will install nginx, PHP and MySQL to run WordPress?
The final WordPress needs to run in a container, reachable via localhost or the server IP on port 80.
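A minimal sketch of such a file, assuming the official wordpress image (which bundles PHP with Apache rather than nginx; an nginx service could be put in front the same way) and placeholder passwords:

```yaml
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder password
      MYSQL_DATABASE: wordpress
    volumes:
      - db_data:/var/lib/mysql
  wordpress:
    image: wordpress
    ports:
      - "80:80"                       # reachable on localhost or the server IP
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: changeme
volumes:
  db_data:
```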
I have to admit that I'm lost on a lot of these apps and languages, which appear to be endless rabbit holes to me, but for my own enjoyment and education I'm trying to build a really nice media appliance.
Everything was working fine, and then I took the step of installing Super Transfer 2. It installed Docker, which I believe was the cause of the pages not being found. The web server is running, because I get "Not Found
The requested URL /setup/ was not found on this server.

Apache/2.4.18 (Ubuntu) Server at Port 80"
So I'm kind of stuck, and there doesn't appear to be any support on the topic in the plexguide forums.
While running docker-compose up for the Git project Linked-Data-Theatre, I am getting the
error standard_init_linux.go:195: exec user process caused "no such file or directory"

Below is the console output:

ifour.techno@ifour-137 MINGW64 /d/test/Docker/LinkData_Theater_Repo/Linked-Data-Theatre (master)
$ docker-compose up
Starting virtuoso ...
Starting ldt ... done
Attaching to virtuoso, ldt
virtuoso    | standard_init_linux.go:195: exec user process caused "no such file or directory"
ldt         | Mar 01, 2018 7:35:47 AM org.apache.catalina.startup.VersionLoggerListener log
ldt         | INFO: Server version:        Apache Tomcat/7.0.85
ldt         | Mar 01, 2018 7:35:47 AM org.apache.catalina.startup.VersionLoggerListener log
ldt         | INFO: Server built:          Feb 7 2018 18:52:33 UTC
ldt         | Mar 01, 2018 7:35:47 AM org.apache.catalina.startup.VersionLoggerListener log
ldt         | INFO: Server number:
ldt         | Mar 01, 2018 7:35:47 AM org.apache.catalina.startup.VersionLoggerListener log
ldt         | INFO: OS Name:               Linux
ldt         | Mar 01, 2018 7:35:47 AM org.apache.catalina.startup.VersionLoggerListener log
ldt         | INFO: OS Version:            4.4.111-boot2docker
ldt         | Mar 01, 2018 7:35:47 AM org.apache.catalina.startup.VersionLoggerListener log
ldt         | INFO: Architecture:          amd64
ldt         | Mar 01, 2018 7:35:47 AM org.apache.catalina.startup.VersionLoggerListener log


We have a Docker container running nginx, and the certificate expired a week ago. I am now stuck trying to figure out which system to generate a CSR on. Would it be the virtual machine Docker is running on?
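The CSR can be generated on any machine that has openssl; what matters is the key pair and the request, not where they are created. The new key and the signed certificate are then mounted into the nginx container. A sketch with a placeholder domain:

```shell
# Generate a fresh private key and CSR; the CSR is what gets submitted
# to the CA. The domain is a placeholder.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout example.com.key -out example.com.csr \
  -subj "/CN=example.com"

# Sanity-check the request before submitting it:
openssl req -in example.com.csr -noout -subject
```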
Hello, how can I start a container and open new ports?
Hi guys,

we are looking into a way to use the Docker LVM plugin, or a similar volume-management plugin, multiple times per host. We'd actually like to run 3 Docker daemons on the same host, with each one using its own volume group for its LVM volumes. The problem is that there is only one configuration file, /etc/docker/docker-lvm-plugin, where we set the volume group in which the logical volumes should be created.

I've recently started working for a company that wants to break their monolithic SaaS application up into containerized microservices. I'm having a hard time grasping a fundamental part of persistent storage, though. Why are there so many different competing platforms? Portworx, RexRay, StorageOS, Flocker, Infinit, etc.

My Questions

  1. Why wouldn't someone simply spin up an NFS server and use a hierarchical folder structure there as their storage backend? What gains do you get when using one of these tools?

  2. How dangerous is it to use something like this with Docker? What are the common causes of catastrophic data loss in a Docker-based environment?

  3. What persistent storage solution would you recommend and why? My company operates a SaaS platform. The data payloads are small in size (5kb-100kb). Data processing is small-medium in resource consumption. Overall volume is medium, but continues to grow. We're hoping to completely move our monolithic application to the cloud as separate containerized microservices, including our data warehouse.

  4. Somewhat unrelated, but it ties in: what are the strengths of using Kubernetes as an orchestrator as opposed to Rancher/Cattle? Isn't Kubernetes over-engineered for a small-medium sized platform? Are there any strengths to using Kubernetes in Rancher aside from the one-click installation?

*Thank …
Hello Experts,

Currently the Atlassian applications along with the Postgres databases reside on 1 VM.

I am looking for any recommendations for splitting up the applications/databases and platform options to integrate the Atlassian applications on (VM, Docker or AWS).

We have 100 users split between the US and UK.
How would a nginx reverse proxy fit in the following setup?
An AWS VPC attached to an internet gateway.
A public Subnet with a Route Table attached to the VPC.
2 Classic Load Balancers inside the VPC, each CLB has one EC2 (Amazon Linux) added to each of the CLBs.
Each EC2 runs a docker container serving a different domain running Meteor app.
Currently each domain can be visited via http, https as well as their Elastic IP.
Route 53 type A records for both domains Alias the Load Balancer for each domain.

I want visitors to be directed to https, and I read somewhere that this can be done using nginx, which I installed on one of the EC2 instances to try.

I am not an expert, just following some tutorial in the process of learning with an objective of setting my own 2 domains.

I also created a third EC2 (Ubuntu 14.04 LTS) in the same VPC and installed nginx on it to try how far I can go on my own.
If all inbound traffic must first come through the internet gateway, isn't it better to put nginx right behind the gateway? Or what's the most effective fix?
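For the http-to-https redirect specifically, a server block like the following (the domain is a placeholder) is the usual nginx pattern, wherever nginx ends up sitting; behind a Classic Load Balancer, the same effect is often achieved by redirecting based on the X-Forwarded-Proto header instead:

```nginx
server {
    listen 80;
    server_name example.com;
    # Send all plain-http visitors to https
    return 301 https://$host$request_uri;
}
```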
I set up two Swarm manager nodes (mgr1, mgr2), but when I try to connect to the container it throws an error message.

[root@ip-10-3-2-24 ec2-user]# docker run --restart=unless-stopped -h mgr1 --name mgr1 -d -p 3375:2375 swarm manage --replication --advertise consul://

[root@ip-10-3-2-24 ec2-user]# docker exec -it mgr1 /bin/bash
rpc error: code = 2 desc = "oci runtime error: exec failed: exec: \"/bin/bash\": stat /bin/bash: no such file or directory"

It's happening on both servers (mgr1, mgr2). I'm also running a Consul container on each node and I am able to connect to the Consul containers.
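The error itself says /bin/bash does not exist inside the container: the classic swarm image is minimal and ships little beyond the swarm binary, so it may contain no shell at all. Some things to try from the host instead (container name from the commands above):

```shell
# /bin/bash is missing in the swarm image; try a minimal shell first:
docker exec -it mgr1 /bin/sh

# If no shell is present at all, inspect and debug from outside:
docker logs mgr1
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' mgr1
```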

