Docker is a computer program used to run software packages called containers in an operating-system-level virtualization process called containerization. It’s developed by Docker, Inc. and was first released in 2013.

I have an SSH/SFTP container on my Ubuntu server.
From Ubuntu I get the container's IP (docker inspect "container_id"),
then I can ssh and sftp from my Ubuntu server into the container.
How do I do this from outside the Ubuntu server (on my domain)?
Where do I add the container's IP so I can ssh or sftp into the container?
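The container's internal IP is only routable on the Docker host itself, so clients outside the Ubuntu server should connect to the host's domain with the container's SSH port published. A minimal sketch (the image name and host port are assumptions):

```shell
# Publish the container's SSH port 22 on host port 2222; "my-sftp-image" is hypothetical.
docker run -d --name sftp -p 2222:22 my-sftp-image

# From any machine that can reach the host's domain:
ssh -p 2222 user@your-domain.example
sftp -P 2222 user@your-domain.example
```

With the port published, no container IP needs to be configured anywhere; clients only ever see the host's address.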
Is it possible to run a docker commit from a docker-compose file?
What I want is: when I run docker-compose stop, I want to commit the containers before they stop.
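docker-compose has no pre-stop hook, so a commit cannot be expressed in the compose file itself; the usual workaround is a wrapper script that commits each project container and then stops the project. A sketch (the "backup/" repository prefix is an assumption):

```shell
#!/bin/sh
# Commit every container in the current compose project, then stop the project.
for id in $(docker-compose ps -q); do
  name=$(docker inspect --format '{{.Name}}' "$id" | tr -d '/')
  docker commit "$id" "backup/${name}:latest"
done
docker-compose stop
```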
Hi, I have a requirement to use Kerberos authentication for an ASP.NET WebAPI application deployed in Docker Swarm .NET Core Linux containers. The WebAPI will be used by web clients with Kerberos support. The application should also connect to Active Directory to subscribe to and fetch the list of all users from AD. Docker Swarm is deployed on premises in the organization's network. Does anyone have experience with such a configuration?
- What should be done to enable Kerberos authentication in ASP.NET and its Docker Linux image? Will this require third-party Kerberos tools, or can it be handled by .NET Core?
- To enable such a configuration, what should be configured in the Swarm cluster?
- What should be used as service principal names (SPNs)? And how do I get the user's AD identity inside ASP.NET?
- Is it possible to use multiple container instances of the same application?
- How could I use a background worker service inside the Swarm cluster to sync the users list with the AD database?
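On Linux, .NET Core's Negotiate/Kerberos support generally relies on the MIT Kerberos libraries being present in the container, together with the realm configuration and a keytab for the service account. A run-time sketch (all paths and the image name are assumptions):

```shell
# Mount the Kerberos client config and the service keytab into the container.
# KRB5_KTNAME tells the Kerberos libraries which keytab holds the service key.
docker run -d \
  -v /etc/krb5.conf:/etc/krb5.conf:ro \
  -v /etc/keytabs/webapi.keytab:/app/webapi.keytab:ro \
  -e KRB5_KTNAME=/app/webapi.keytab \
  mywebapi:latest
```

The SPN for a web service is conventionally HTTP/<fqdn>@<REALM>, registered against the AD account whose keytab is mounted; all replicas can share the same keytab if they are reached through one service hostname.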
I have SonarQube 7.7-community in Docker with Postgres 11.0.3.
This is my compose file:

version: "2"
services:
  db:
    image: postgres:11.3
    user: "${UID}:${GID}"
    restart: unless-stopped
    container_name: sonar-postgresql
    ports:
      - 5430:5432
    environment:
      POSTGRES_DB: sonar
      POSTGRES_USER: sonar
    volumes:
      - /containers/postgres/sonar/sonar_data:/var/lib/postgresql/data
#    restart: always
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
  sonarqube:
    image: pf-sonar:1.0
    restart: unless-stopped
    container_name: PF-sonar
    ports:
      - 9100:9000
      - 9092:9092
    volumes:
      - /containers/sonar/sonarqube_conf:/opt/sonarqube/conf
      - /containers/sonar/sonarqube_data:/opt/sonarqube/data
      - /containers/sonar/sonarqube_extensions:/opt/sonarqube/extensions
      - /containers/sonar/sonarqube_plugins:/opt/sonarqube/lib/bundled-plugins
    environment:
      - sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
      - DB_TYPE=postgresql
      - DB_USER=sonar
      - DB_PASSWORD=Sonar
#    restart: always
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000

It tries to use port 9000 and then port 9100, and I get this error:

2019.05.22 18:12:20 ERROR web[][o.s.s.a.EmbeddedTomcat] Fail to start web server
PF-sonar     | …
SonarQube in Docker on Ubuntu:
I tried to install SonarQube 7.7-community with Postgres 11.3.
It failed with:
 ERROR web[][o.a.c.h.Http11NioProtocol] Failed to initialize end point associated with ProtocolHandler ["http-nio-"my-ip-9000"]
sonar      | Cannot assign requested address
ERROR web[][o.s.s.a.EmbeddedTomcat] Fail to start web server
at org.apache.catalina.util.LifecycleBase.init(
at org.apache.catalina.core.StandardService.initInternal(
 WARN  app[][o.s.a.p.AbstractProcessMonitor] Process exited with exit value [es]: 143

Port 9000 is not occupied.
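"Cannot assign requested address" on "http-nio-<my-ip>-9000" usually means sonar.web.host is set to the host machine's IP, which does not exist inside the container; inside a container the web server should bind all interfaces and leave the host mapping to the ports: entry. A sonar.properties fragment illustrating this (assuming your current config pins the host IP):

```
# /opt/sonarqube/conf/sonar.properties
# Bind all interfaces inside the container; expose to the host via "ports:".
sonar.web.host=0.0.0.0
sonar.web.port=9000
```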
I have the nginx 1.16.0 Docker image on Ubuntu. I share a volume like this:
 docker run --name my-nginx -v /etc/nginx/nginx-file/nginx.conf:/etc/nginx/nginx.conf -v /etc/nginx/src:/usr/share/nginx/html -p 80:80 nginx:1.16.0

I moved all my pages to /etc/nginx/src, and they have the right permissions (uid:gid 101:101 and mode 775). I can see the files inside the container when I log in to it.
I can see my index.html file (<p> hello world </p>), but I can't see any file except index.html.
I can't see my jenkins (build."domain") file. What is wrong?
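A quick way to narrow this down is to compare the host directory with the container's view of the same bind mount:

```shell
# Compare the host directory and the in-container view of the bind mount.
ls -la /etc/nginx/src
docker exec my-nginx ls -la /usr/share/nginx/html
```

If the listings differ, the mount path is wrong; if they match, the problem is in the nginx configuration or in the URLs being requested rather than in the volume.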
I tried the following docker run on Ubuntu 16.04:
docker run --name jenkin-nginx -v /containers/nginx/jenkins/conf.d/nginx.conf:/etc/nginx/nginx.conf -p 80:80 nginx
nginx.conf is:

events {}

http {
server {

    listen 80;
    server_name build;

    location ^~ /jenkins/ {

        proxy_set_header        Host $host:$server_port;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto $scheme;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass              http://build:8080/;
        proxy_read_timeout      90;

        proxy_redirect          http://jenkins:8080/ http://build/;

        # Required for new HTTP-based CLI
        proxy_http_version 1.1;
        proxy_request_buffering off;
        # workaround for
        add_header 'X-SSH-Endpoint' 'build:50022' always;
    }
}
}

I browse to http://build and then I get this error:

 *1 "/etc/nginx/html/index.html" is not found (2: No such file or directory),
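The configuration only defines a location for /jenkins/, so a request for the bare http://build/ falls through to nginx's default document root and fails with the missing index.html. One way to handle the site root (a sketch, not part of the original config) is to redirect it into the proxied prefix:

```
# Inside the same server { } block: send the site root to the Jenkins prefix.
location = / {
    return 302 /jenkins/;
}
```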
How to move Artifactory?
I have Artifactory on a standalone Ubuntu server.
I want to move it to another server, using Docker.
How do I move Postgres (with all databases) from Ubuntu into a Docker container (using docker-compose)?
How do I move Jenkins from Ubuntu to another server, running Jenkins in Docker?
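For the Postgres part, the usual path is a full dump on the old host and a restore into the containerised server. A sketch, assuming the compose service is named db and runs as the postgres superuser:

```shell
# On the old Ubuntu server: dump every database, roles included.
pg_dumpall -U postgres > all.sql

# On the new server: start the database service, then restore the dump.
docker-compose up -d db
docker-compose exec -T db psql -U postgres < all.sql
```

For Jenkins and Artifactory the same pattern applies at the filesystem level: their home directories (JENKINS_HOME, the Artifactory data directory) are copied onto the new host and bind-mounted into the official images.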
Hi Docker / DevOps Experts,
I would like to implement essential unit tests as part of the Docker build (CI/CD pipeline).

Could you please suggest what options we have for Docker unit testing?
I found "Clair", but I am still trying to find more options, and whether it is an open-source or paid service. Is anything available within AWS or from Docker?

Please help.
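Clair is open source; other open-source scanners that drop into a CI stage include Trivy and Anchore, and AWS ECR offers built-in image scanning on push. A pipeline-stage sketch using Trivy (the image name is an assumption):

```shell
# Fail the build if the image has known HIGH or CRITICAL vulnerabilities.
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest
```

Note these are vulnerability scanners, not unit-test frameworks; actual unit tests usually run as a separate build stage before the image is scanned and pushed.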
Architecture advice

So from a full pipeline infrastructure to develop and then deploy a web application what would you all recommend as best practice.

The items I do know at this point is I will be using the latest .net and angular frameworks for the application.  There will be a SQL backend.

My company wants this application to be deployed using CI/CD techniques.

We will have an Azure storage along with utilizing Docker.

I don't know much more, but I wanted to get an idea of what this architecture should look like as a developer.

I imagine working locally in Visual Studio; perhaps there are some items you need to think about upfront for .NET and Angular apps while developing, knowing that you will push to Azure via a Docker container.

Also, at this point should I be thinking about .NET versus .NET Core?

I plan on watching some videos and doing more reading, but wanted to start here to get ideas of how this should be set up at a high level,
from the architecture point of view, through dev, test, and prod.
Hello Experts,
I am trying to create a microservice based on the Microsoft 'Hello World' microservice tutorial.
Are there any prerequisites for this, or any ready-to-go microservice?

How do I install Docker for microservices? I tried to install it on my home laptop, but it failed as it requires Windows 10 Pro.

I need to be a quick study in Docker

I am trying to digest Azure, Containers, Docker, Kubernetes, etc.

and quickly.

I am rebuilding my Mac OS environment to contain:

- a Windows 10 VM on Parallels
- a LINUX VM on Parallels
- Visual Studio 2019 Community Edition
- access to Azure Cloud

So my first hands-on practice could be to learn about Docker.

What kind of work-along tutorial can you suggest? Or video training?

I have a server that has 32GB of RAM and 8TB of HDD with RAID 1 for 4TB of total HDD space.

I would like to split that with 2 TB being NFS and 2TB being Samba.

This is sort of a discussion oriented question.

I was wondering about running NFS and Samba under LXD or Docker containers. Would there be advantages to doing this (at least for learning)?
Hi Experts,

I get the following error for Wagtail (a Django application) inside a Docker container. Please see the uWSGI logs from inside the container below.

*** Starting uWSGI 2.0.18 (64bit) on [Mon Mar  4 03:56:36 2019] ***
compiled with version: 5.4.0 20160609 on 04 March 2019 01:00:36
os: Linux-4.4.0-1057-aws #66-Ubuntu SMP Thu May 3 12:49:47 UTC 2018
nodename: e56d42de8c73
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 8
current working directory: /home/ntdl/code
writing pidfile to /tmp/
detected binary path: /usr/local/bin/uwsgi
setgid() to 33
setuid() to 33
chdir() to /home/ntdl/code
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/ntdl.sock fd 8
uwsgi socket 1 inherited UNIX address @ fd 0
inherit fd0: chmod(): No such file or directory [core/socket.c line 1797]
Python version: 3.6.2 (default, Jul 17 2017, 23:14:31)  [GCC 5.4.0 20160609]
Python main interpreter initialized at 0x971510
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 543168 bytes (530 KB) for 20 cores
*** Operational MODE: threaded ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x971510 pid: 24 (default app)
*** uWSGI is running in 

Hi Experts,

I get the following error when I start the Docker container:

root@ip-10-252-14-11:/home/ubuntu/workarea/sourcecode/NTDL-TEST/Harvest-Trove-Pictures# sudo docker start trove_pull
Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"start\\\": executable file not found in $PATH\"\n"
Error: failed to start containers: trove_pull

The Dockerfile contents are as follows:

FROM ubuntu:16.04


RUN apt-get update -y
RUN apt-get install -y software-properties-common python-software-properties curl
RUN add-apt-repository -y ppa:fkrull/deadsnakes

RUN apt-get update -y && apt-get install -y curl
RUN apt-get update -y && apt-get install -y \
        git \
        python3.6 \
        python3.6-dev \
        nginx \
        sqlite3 \
        nodejs \
        build-essential \
        libmagickwand-dev \
        cron \
        nginx

RUN rm -f /usr/bin/python3
RUN ln -s /usr/bin/python3.6 /usr/bin/python3
#RUN curl | python3

RUN mkdir -p /home/trove/trove
WORKDIR /home/trove

COPY . .
COPY ./ ./
RUN chmod -R 755 /home/trove
RUN chown -R www-data:www-data /home/trove
COPY . .

COPY build/docker/uwsgi_params .
COPY build/docker/uwsgi.ini .
COPY trove-variables.env .
RUN pip3 install --no-cache-dir uwsgi
RUN pip3 install --no-cache-dir -r requirements.txt

COPY build/docker/start 
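The "exec: \"start\": executable file not found in $PATH" error above means the image's startup command refers to `start`, but the file either was not copied to a directory on $PATH or is not executable. A sketch of the usual fix (the destination path is an assumption, since the last COPY line is truncated):

```
# Dockerfile fragment: copy the script to a known path, make it executable,
# and reference it by absolute path in CMD.
COPY build/docker/start /home/trove/start
RUN chmod +x /home/trove/start
CMD ["/home/trove/start"]
```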

Hi Experts,

I get the following error for a Python application:

django.db.utils.OperationalError: FATAL:  remaining connection slots are reserved for non-replication superuser connections

I use … for configuring the database. I am using Postgres as the database.

I had set connection_max_age to 0 in the config.

Still, I get this error.

I checked the entire source code using grep; there is no other connection setting in the source code.

I am getting this error from inside the Docker container; the database is outside the container.

the connection string I use to connect is DATABASE=postgres://test:*****@ where is docker gateway.

max_connections = 100 in postgresql.conf, which is located in /etc/postgresql/9.5/main/.

Any help is greatly appreciated.
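Since the error says the remaining slots are reserved for superusers, it is worth confirming how many connections are actually open and who holds them before tuning the Django connection settings further. From the database host:

```shell
# Count open connections per client address and show the configured ceiling.
psql -U postgres -c "SELECT client_addr, count(*) FROM pg_stat_activity GROUP BY client_addr;"
psql -U postgres -c "SHOW max_connections;"
```

If the container's address dominates the count, the application is holding connections open faster than it releases them, e.g. one per worker thread.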
Hi Experts,

I get a Bad Gateway error when running the web application from Docker.

I installed nginx inside the Docker container.

For example:

docker gateway is

public ip is

I created the environment variables for the Docker container:

declare -a ntdl_environment=( -e ES_CONNECTION= -e DATABASE=postgres://user:****@ -e LOCAL_URL_PREFIX== -e IIIF_SERVER= -e FACEBOOK_APP_ID=000000 -e CLOUD_WATCH=true -e AWS_ACCESS_KEY_ID=akz44 -e AWS_SECRET_ACCESS_KEY=******** -e AWS_DEFAULT_REGION=ap-abct-2 -e S3_BUCKET=abc-test-dev -e GENERIC_SERVER_NAME= -e SEARCH_PATH=/ -e SEARCH_DOMAIN= -e HANDLE_SITEMAP_PATH= -e DEBUG=true)

I ran the following command to start the Docker container:

docker run --name ntdl -d --restart always  ${ntdl_environment[@]} f27af16c9ed6

To get a bash shell inside the container, I ran the following from the command prompt:

docker exec -it ntdl /bin/bash

When I curl the gateway IP for Elasticsearch from inside the container, it works fine; I get the following result:

root@16a1d7df5399:/home/ntdl/code# curl -XGET
{
  "name" : "C****D",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "FG4Sgll3Rau***6nQ",
  "version" : {
    "number" : "5.6.4",
    "build_hash" : "8bbedf5",
    "build_date" : "2017-10-31T18:55:38.105Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

I checked the nginx status inside the container; it is running:

root@16a1d7df5399:/home/ntdl/code# service nginx status
 * nginx is running

Running curl -XGET 'localhost' inside the container gives the error below.
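A 502 means nginx itself is up but its upstream is not answering, so the nginx error log inside the container usually names the failing upstream address:

```shell
# The last error-log entries typically show "connect() failed ... upstream: ..."
docker exec ntdl tail -n 20 /var/log/nginx/error.log
```

Compare the upstream address in those entries with where the uWSGI socket actually listens; a mismatched socket path or port is the most common cause.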


I have installed Docker EE on a new copy of Windows 2016 using the following PowerShell commands as recommended in the Docker page (click here):
Install-Module DockerMsftProvider -Force
Install-Package Docker -ProviderName DockerMsftProvider -Force


Docker appears to have installed correctly (the 'Program Files' folder has been created, the service has been created and the basic 'Hello World' test was successful) but the service will not automatically start with Windows.

Keep in mind:
  • The service startup type is definitely set to 'Automatic'.
  • There is no error recorded in the Windows Event Logs.
  • Once started, the service stays running without generating any errors in the event log.


Has anyone had the same issue?
Can anyone suggest what the issue is? I can find nothing via the web.
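A common workaround when a service is set to Automatic but silently loses the race at boot is delayed automatic start, which launches it shortly after boot completes. A sketch using the built-in service controller:

```shell
# Switch the Docker service to delayed automatic start (note the space after "start=").
sc.exe config docker start= delayed-auto
```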
I have a Java backend application which is containerised using Docker. The front end for this application is developed using npm, and the static files are deployed in nginx; it works perfectly. The API calls to the backend are proxy-passed to the Docker host using an nginx reverse proxy.

Now I need to set up Docker Swarm for this backend application using manager and worker nodes.

So these are the questions which I need to ask you experts,

To which host should I configure the proxy_pass in nginx while using Docker Swarm?

Thanks in advance :)
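With Swarm's ingress routing mesh, a port published by a service is reachable on every node in the cluster, so nginx can proxy_pass to any node's address (or to a load balancer in front of several nodes, for availability). A sketch with assumed names:

```shell
# Publish the backend on port 8080 across the whole swarm; the routing mesh
# forwards requests arriving at any node to a healthy replica.
docker service create --name backend --replicas 3 --publish 8080:8080 my-backend:latest

# nginx can then use:  proxy_pass http://<any-swarm-node>:8080;
```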
The following command is not returning any IP address; it just returns an empty string:

docker inspect --format '{{.NetworkSettings.Networks.nat.IPAddress}}' sql2
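That template only prints something if the container is attached to a network literally named "nat" (the Windows default); on Linux the default network is "bridge", so the lookup silently yields an empty string. Ranging over all networks avoids hard-coding the name:

```shell
# Print the container's IP on whichever network(s) it is attached to.
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' sql2
```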
What is the location of the PostgreSQL binary and library directories installed on Docker on Linux SUSE 12 SP2?
We have this Postgres DB installation working OK.
I have access to the database using pgAdmin and DBeaver.
But we don't know who did this installation, and I need to know the location of the PostgreSQL binary and library directories in order to run pg_ctl and psql.
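Since a client connection already works, the server itself can report where it lives: the running postmaster's command line shows the binary directory, and SHOW reveals the data and config locations. A sketch, run where the Postgres process is running (inside the container, if that is where it lives):

```shell
# The postmaster's command line contains the full path to the server binaries.
ps -C postgres -o cmd | head -n 2

# Ask the server for its data directory and configuration file locations.
psql -U postgres -Atc "SHOW data_directory;"
psql -U postgres -Atc "SHOW config_file;"
```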
Are there any limitations in Linux containers (LXC/LXD/Docker) with regard to supported software, OS, etc., versus full virtualization software or bare metal?

