Docker

Docker is a computer program used to run software packages called containers in an operating-system-level virtualization process called containerization. It’s developed by Docker, Inc. and was first released in 2013.

What is the location of the PostgreSQL binary and library directories installed with Docker on SUSE Linux 12 SP2?
We have a PostgreSQL database installation that works fine.
I have access to the database using pgAdmin and DBeaver.
But we don't know who performed this installation, and I need to know the location of the PostgreSQL binary and library directories in order to run pg_ctl and psql.
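A sketch of how the directories might be located; whether PostgreSQL runs directly on the host or inside a container isn't stated in the question, so both cases are shown, and `<container>` is a placeholder for the real container name or ID:

```shell
# If PostgreSQL runs directly on the host, search the filesystem:
find / -name pg_ctl -o -name psql 2>/dev/null

# If it runs inside a container, list the containers first:
docker ps

# Then ask pg_config inside the container for the binary and library
# directories, falling back to a filesystem search if pg_config is absent:
docker exec <container> sh -c 'pg_config --bindir --libdir || find / -name pg_ctl 2>/dev/null'
```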

Are there any limitations in Linux containers (LXC/LXD/Docker) with regard to supported software, operating systems, etc., compared with full virtualization software or bare metal?
Hi Experts,

I get the following errors with docker-compose up:

root@ip-10-252-14-11:/home/ubuntu/workarea/sourcecode/harvest-trove# docker-compose up
Recreating harvest-trove_harvest-trove_1 ... done
Creating harvest-trove_trove_review_1    ...
Creating harvest-trove_trove_pull_1      ... error
Creating harvest-trove_trove_push_1      ...
Creating harvest-trove_trove_process_1   ...

ERROR: for harvest-trove_trove_pull_1  Cannot start service trove_pull: invalid header field value "oci runtime error: container_linux.go:247: starting container procesCreating harvest-trove_trove_process_1   ... error

Creating harvest-trove_trove_push_1      ... error
Creating harvest-trove_trove_review_1    ... error

ERROR: for harvest-trove_trove_push_1  Cannot start service trove_push: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"harvest-trove:1.0.3\\\": executable file not found in $PATH\"\n"

ERROR: for harvest-trove_trove_review_1  Cannot start service trove_review: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"harvest-trove:1.0.3\\\": executable file not found in $PATH\"\n"

ERROR: for trove_pull  Cannot start service trove_pull: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"harvest-trove:1.0.3\\\": executable file not found in $PATH\"\n"

ERROR: for trove_process 

How do I access an S3 bucket from within a Dockerfile?

It expects `aws configure` to have been run; I exported the keys, but that does not help.
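Variables exported in the host shell are not inherited by a container, so a common workaround, sketched here with placeholder image and bucket names, is to pass the credentials explicitly at run time:

```shell
# Pass the AWS credentials from the host environment into the container.
# "myimage" and "my-bucket" are placeholders; the region is an assumption.
docker run \
  -e AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" \
  -e AWS_SECRET_ACCESS_KEY="$AWS_SECRET_ACCESS_KEY" \
  -e AWS_DEFAULT_REGION=ap-southeast-2 \
  myimage aws s3 ls s3://my-bucket/
```

The awscli honors these environment variables without `aws configure` having been run inside the container.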
Hi Experts,

I built and ran a docker image like this.

docker build -t harvest-trove:1.0.3 .
docker run --name trove_pull ${trove_environment[@]} -d --restart always harvest-trove:1.0.3 start pull



I want to use docker-compose for the above two commands.

I created the docker-compose file like this.

version: '3'
services:
  harvest-trove:
    build: .
    image: harvest-trove:1.0.3
    volumes:
      - .:/home/trove
    env_file:
      - web-variables.env
    command: python3 manage.py migrate
  trove-pull:
    container_name: trove-pull
    image: harvest-trove
    env_file:
      - web-variables.env
    depends_on:
      - harvest-trove
    command: harvest-trove:1.0.3 start pull
    restart: always



When I run docker-compose up I get the following error:

[screenshot: error with docker-compose up]

Please help me fix this issue.

With Many thanks,
Bharath AK
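A possible cause, offered as a sketch rather than a definitive diagnosis: in the compose file above, `command: harvest-trove:1.0.3 start pull` tells Docker to execute a binary literally named `harvest-trove:1.0.3`, which would produce exactly an "executable file not found in $PATH" error. One way the service might be reworked (the bare `start pull` arguments are assumed to be what the image's entrypoint expects, as in the original `docker run` command):

```yaml
version: '3'
services:
  harvest-trove:
    build: .
    image: harvest-trove:1.0.3
    volumes:
      - .:/home/trove
    env_file:
      - web-variables.env
    command: python3 manage.py migrate

  trove-pull:
    container_name: trove-pull
    image: harvest-trove:1.0.3   # full image:tag, matching the built image
    env_file:
      - web-variables.env
    depends_on:
      - harvest-trove
    command: start pull          # image name removed; only the arguments remain
    restart: always
```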
How do I download an awscli docker image? (I tried `docker run awscli`; it asks for a password. Is it possible without one?)

How do I download the mariaDB docker image? (I tried `docker run mariaDB`; it asks for a password. Is it possible without one?)
How do I read a CSV file that is in an AWS S3 bucket?

The `aws s3 cp` command will copy the file, but I need to validate it before copying. Is that possible?
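One way to validate that the object exists before copying, sketched with placeholder bucket and key names; `aws s3api head-object` exits non-zero when the object is missing or inaccessible:

```shell
# Check the object's existence and metadata before copying it.
# "my-bucket" and "data/input.csv" are placeholders.
if aws s3api head-object --bucket my-bucket --key data/input.csv >/dev/null 2>&1; then
  aws s3 cp s3://my-bucket/data/input.csv /tmp/input.csv
else
  echo "object missing or not accessible" >&2
fi
```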
Hi Experts,

The docker container is not picking up recent changes to the source code. This forces me to rebuild the docker image every time to see the latest changes.

Please find below the contents of the Dockerfile

FROM ubuntu:16.04

MAINTAINER *****

RUN apt-get update -y
RUN apt-get install -y software-properties-common python-software-properties curl
RUN add-apt-repository -y ppa:fkrull/deadsnakes

RUN apt-get update -y && apt-get install -y curl
RUN apt-get update -y && apt-get install -y \
	git \
	python3.6 \
	python3.6-dev \
	nginx \
	sqlite3 \
	nodejs \
	build-essential \
	libmagickwand-dev \
	cron \
	nginx

RUN rm -f /usr/bin/python3
RUN ln -s /usr/bin/python3.6 /usr/bin/python3
RUN curl https://bootstrap.pypa.io/get-pip.py | python3

WORKDIR /home/trove
COPY . .

COPY build/docker/uwsgi_params .
COPY build/docker/uwsgi.ini .
RUN pip3 install --no-cache-dir uwsgi
RUN pip3 install --no-cache-dir -r requirements.txt

COPY build/docker/start /usr/bin/
COPY build/docker/crontab /etc/cron.d/harvest-cron
RUN chmod 0644 /etc/cron.d/harvest-cron
RUN touch /var/log/harvest.log

RUN echo "daemon off;" >> /etc/nginx/nginx.conf
COPY build/docker/nginx-app.conf /etc/nginx/sites-available/default
COPY build/docker/start /usr/bin/
RUN mkdir /var/log/harvest/
RUN python3 manage.py collectstatic --noinput

WORKDIR /home/trove/
RUN chmod 755 /home/trove
RUN chown -R www-data:www-data /home/trove

EXPOSE 80
CMD ["start"]



Please find below the contents of the crontab.

SHELL=/bin/bash
* * * * * root ( source /tmp/environment.sh && /usr/bin/python3 /home/trove/run.py $(cat /tmp/method) ) >> /dev/null 2>/var/log/harvest/ts_errors.log

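Since `COPY . .` bakes the source into the image at build time, every code change requires a rebuild. A common development-time alternative, sketched here (image name and command are taken from the related questions and may not match this setup exactly), is to bind-mount the working copy over the same path:

```shell
# Mount the host working copy over /home/trove, the path the Dockerfile
# copied into, so edits on the host are visible in the container
# immediately, without rebuilding the image.
docker run -d --name trove-dev -v "$(pwd)":/home/trove harvest-trove:1.0.3 start pull
```

Note that the mount shadows everything COPY placed there, including the uwsgi files copied into `/home/trove`, so those may need to live in the working copy too.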

What packages are needed to access Docker from Lambda?


The Lambda will call an S3 bucket.

I need an automated script (the script will run daily at midnight).
How can we read S3 bucket files from a docker container?
What packages are needed, and what is the command?

How can we put data into MariaDB from a docker container?
What packages are needed, and what is the command?

Hi Experts,

I have a server that runs applications with Docker and AWS, operated from the console (PuTTY). The command to run is as follows:

docker run --name ts_pull ${ts_environment[@]}  -d  --restart always 1234656458.dkr.ecr.ap-southeast-2.amazonaws.com/harvest-ts:1.0.2 start pull



I understand that 1234656458.dkr.ecr.ap-southeast-2.amazonaws.com/harvest-ts:1.0.2 is from AWS. Can anyone tell me how this container application is pushed to AWS?

From the documentation I found the Dockerfile from which the docker image is built. I don't understand how to push this to AWS and run it with docker.

Please shed some light on it.

with many thanks,

Bharath AK
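An illustrative sketch of the usual push flow with the awscli of this Docker era, using the account ID, region, and image name from the question; it assumes the ECR repository already exists (it can be created with `aws ecr create-repository --repository-name harvest-ts`):

```shell
# Authenticate the local Docker client against ECR
# (older awscli v1 syntax; the command prints a docker login to run).
$(aws ecr get-login --no-include-email --region ap-southeast-2)

# Build the image from the Dockerfile, tag it with the full repository
# URI, and push it to ECR.
docker build -t harvest-ts:1.0.2 .
docker tag harvest-ts:1.0.2 1234656458.dkr.ecr.ap-southeast-2.amazonaws.com/harvest-ts:1.0.2
docker push 1234656458.dkr.ecr.ap-southeast-2.amazonaws.com/harvest-ts:1.0.2
```

Once pushed, `docker run` with the full URI (as in the command above) pulls the image from ECR on any authenticated host.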
Hi Experts,

How do I delete the existing elasticsearch cluster and create a new elasticsearch cluster?  Please share some best practices.

With Many thanks,

Bharath AK
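If the cluster runs in Docker (the question doesn't say), one minimal sketch of tearing it down and starting fresh; the container name, volume name, and image tag are all assumptions:

```shell
# Remove the old cluster's container and its data volume
# ("elasticsearch" and "esdata" are placeholder names).
docker rm -f elasticsearch
docker volume rm esdata

# Start a fresh single-node cluster (image tag is an assumption).
docker run -d --name elasticsearch -p 9200:9200 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.2.4

# Verify the new cluster is up.
curl -s http://localhost:9200/_cluster/health
```

A common best practice either way: snapshot any indices you care about before deleting, since removing the data volume is irreversible.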
Hi Experts,

I have two containers running on Docker.

root@ip-10-252-14-11:/home/ubuntu/workarea/sourcecode/ntdl# docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                  NAMES
596874f0eedb        dcf3be75c970        "start"             8 days ago          Up 8 days           0.0.0.0:8009->80/tcp   iiif
91c61a7ea455        8a38b977270d        "start"             8 days ago          Up 8 days           0.0.0.0:8008->80/tcp   ntdl



The wagtail (Django) application (ntdl) runs on port 8008, and an IIIF image server runs independently on port 8009.

The wagtail (ntdl) application is not talking to the IIIF image server: images display without zoom, and the browser console shows net::ERR_CONNECTION_REFUSED when accessing the iiif image server.

nginx is installed with the wagtail ntdl application.

Please help me resolve this issue.



With many thanks,
Bharath AK
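One hedged sketch, assuming the zoom viewer calls the image server from inside the wagtail container: attach both containers to a user-defined network so they can resolve each other by name. (If the ERR_CONNECTION_REFUSED comes from the browser itself, the page must instead point at an address reachable from the client, such as the host's IP with port 8009.)

```shell
# Create a shared user-defined network and attach both containers
# (names taken from the `docker container ls` output above).
docker network create ntdl-net
docker network connect ntdl-net ntdl
docker network connect ntdl-net iiif

# From inside the wagtail container the image server is then reachable
# by container name on its internal port, rather than via the host-mapped
# port (assumes curl is installed in the ntdl image).
docker exec ntdl curl -s http://iiif/
```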
How is Docker different from a virtual machine?  How does it manage to provide a full filesystem, isolated networking environment, etc. without being as heavy?
Hi experts,

I get a bad gateway error.  I am using nginx and uWSGI with a docker wagtail application.  Please find attached nginx.ini and uwsgi.ini.  I am running everything inside docker containers.

nginx settings

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
        worker_connections 768;
        # multi_accept on;
}

http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;
        gzip_disable "msie6";

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include 


I'm not too familiar with Docker or networking in general, so I'm getting confused as to what's going on in my Docker Compose file below.

1. Would each of the services be available to the host machine at something like http://localhost:4200 ?

2. If the answer is yes above, how would I ensure that the correct ports are exposed between containers but not accessible to the host machine? Does it involve defining a network instead of relying on the default one?

version: '3' # specify docker-compose version

# Define the services/containers to be run
services:
  angular: # name of the first service
    build: ./Coding/DeepLearning/GardenApp/client-app-dev # specify the directory of the Dockerfile
    ports:
      - "4200:4200" 
    volumes:
      - ./Coding/DeepLearning/GardenApp/client-app:/usr/src/garden-app-dev

  express-ts: #name of the second service
    build: express-ts-server # specify the directory of the Dockerfile
    ports:
      - "3000:3000" 
    links:
      - database # link this service to the database service
    volumes:
      - ./Coding/DeepLearning/GardenApp/express-ts-server:/usr/src/express-ts-app

  database: # name of the third service
    image: mongo # specify image to build container from
    ports:
      - "27017:27017" 
    volumes: 
      - ./Coding/DeepLearning/GardenApp/database:/data/db

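To the two questions above, a hedged sketch: a service is reachable from the host only when its ports are published with `ports:` (so yes, `localhost:4200` works as written), while services on the same compose network already reach each other by service name on the container port. Swapping `ports:` for `expose:` keeps a port container-to-container only, shown here for the mongo service from the file above:

```yaml
services:
  express-ts:
    build: express-ts-server
    ports:
      - "3000:3000"    # published: reachable from the host at localhost:3000

  database:
    image: mongo
    expose:
      - "27017"        # container-to-container only; not reachable from the host
```

With this change, `express-ts` still reaches mongo at `database:27017` over the default network, but nothing on the host can connect to it directly. A user-defined network is only needed when you want to partition which services can see each other.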

Hi all,

I have been learning AWS and am currently doing a course on scaling Docker to AWS.  I'm looking for some real-world feedback on good architecture.  Amazon EKS is in preview, so I will look there if it goes mainstream.  For now I am trying to understand what is best to use: set up an ECS cluster and task definitions, and should I also use an Application Load Balancer or Elastic Load Balancers, etc., to create a solid setup for failover?  My goal is to take one front-end app, for example, and spread it across 3 AZs in docker containers (learning docker).  I will also have a web API in a container; I'm not sure if it makes sense for it to have the same setup across 3 AZs.  If you had a front end and an API, how would you best set them up with docker and instances in the real world?  Thanks all.
My LXD host is running Ubuntu 17.04.  I created a bridged interface and set that interface as the default network for all LXD containers, which results in my containers being on my LAN.  This works just fine; however, I'm now trying to run Docker inside an LXD container.  My goal is to place all Docker containers on the same LAN.  How can I create a Docker network that is on the same LAN while nested in an LXD container?
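One possible approach, sketched under stated assumptions: a macvlan network bound to the bridged interface, so Docker containers take addresses directly on the LAN. The interface name, subnet, gateway, and IP below are placeholders for your network:

```shell
# Create a macvlan network whose parent is the LAN-facing interface
# ("eth0" is a placeholder for the interface inside the LXD container).
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan-net

# Run a container with a LAN address on that network.
docker run -d --network=lan-net --ip=192.168.1.200 nginx
```

This is a sketch, not a verified recipe: macvlan needs the parent interface visible inside the LXD container (e.g. a container with nesting enabled), and macvlan endpoints typically cannot reach their own host directly.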
I created two Hyper-V IIS containers via the commands noted below.  Each has its own static IP address.  I can ping each of these from the Windows Container / Hyper-V host but not from any other hosts on the same LAN.  I think it's due to my vSwitch setup.  Any ideas?

docker network create -d transparent --subnet=192.168.1.0/24 --gateway=192.168.1.1 TransparentNet3
docker run -d --name myIIS --network=TransparentNet3 --ip 192.168.1.246 --isolation=hyperv microsoft/iis
docker run -d --name myIIS2 --network=TransparentNet3 --ip 192.168.1.251 --isolation=hyperv microsoft/iis

So, I understand how to install each of these as separate containers.  This means I can start and stop each container; however, I would prefer all three of these in one container.  Is that possible?  I plan to spin up a custom .php web app using its dedicated MySQL instance.  I'll end up with 3 or 4 of these on the same Docker host, each having different PHP and MySQL versions.  How can I do this?
My goal is to install 4+ MySQL instances on one Docker host.  Each SQL instance is a different version.  Can I assign dedicated IPs to each instance?  I know I can do port translation, but I prefer dedicated IPs.
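A minimal sketch of dedicated per-container IPs on a user-defined network; the subnet, addresses, and MySQL versions are placeholders:

```shell
# A user-defined network with a fixed subnet lets each container take a
# static IP assigned at run time.
docker network create --subnet=172.20.0.0/16 mysql-net

# Each instance gets its own address and version (MYSQL_ROOT_PASSWORD is
# required by the official images; the value here is a placeholder).
docker run -d --name mysql55 --network=mysql-net --ip=172.20.0.10 \
  -e MYSQL_ROOT_PASSWORD=changeme mysql:5.5
docker run -d --name mysql56 --network=mysql-net --ip=172.20.0.11 \
  -e MYSQL_ROOT_PASSWORD=changeme mysql:5.6
docker run -d --name mysql57 --network=mysql-net --ip=172.20.0.12 \
  -e MYSQL_ROOT_PASSWORD=changeme mysql:5.7
```

Note these addresses are only reachable from the Docker host itself; for LAN-visible dedicated IPs you would need a macvlan (Linux) or transparent (Windows) network instead.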
I followed the steps here https://hub.docker.com/r/nanoserver/mysql/ to install MySQL and PHP; however, I need the ability to access the MySQL instance on port 3306 via MySQL Workbench.  To do this, I imagine I need to allow remote access via the mysql command.  But when I type 'mysql -u root -p' in the folder where mysql is running (connected to the session via Enter-PSSession <type-here-containerID> -RunAsAdministrator), it just hangs and nothing happens.  Any ideas?
How to: Docker Swarm mixed-mode cluster (Windows and Linux containers) on Azure?

 

I am just looking for a link to the best set of instructions I can find on how to do this.

I have a very slow internet connection and cannot search very well. Help!


Thanks!!!
domain controller on centos <-- host machine
host controller on centos  <-- docker

I installed JBoss on a CentOS docker image and tried to start the standalone server; it starts without any issues. But if I want to start a slave and sync it to the domain, I get the error below.

I started docker, exposing ports as below:
docker run -p 18080:808 -p 19990:9990 -it jboss_test

15:57:18,045 DEBUG [org.jboss.modules] (Controller Boot Thread) Module org.jboss.as.cli:main defined by local module loader @26538d04 (finder: local module finder @374f1544 (roots: /opt/jboss-eap-6.2/modules,/opt/jboss-eap-6.2/modules/system/layers/base))
15:57:23,477 DEBUG [org.jboss.as.host.controller] (Controller Boot Thread) failed to connect to 10.10.10.10:9999: java.net.ConnectException: JBAS012144: Could not connect to remote://10.10.10.10:9999. The connection timed out
      at org.jboss.as.protocol.ProtocolConnectionUtils.connectSync(ProtocolConnectionUtils.java:131) [jboss-as-protocol-7.3.0.Final-redhat-14.jar:7.3.0.Final-redhat-14]
      at org.jboss.as.host.controller.RemoteDomainConnection.openConnection(RemoteDomainConnection.java:198) [jboss-as-host-controller-7.3.0.Final-redhat-14.jar:7.3.0.Final-redhat-14]
      at org.jboss.as.host.controller.RemoteDomainConnection$InitialConnectTask.connect(RemoteDomainConnection.java:560) [jboss-as-host-controller-7.3.0.Final-redhat-14.jar:7.3.0.Final-redhat-14]
      at org.jboss.as.protocol.ProtocolConnectionManager.connect(ProtocolConnectionManager.java:70) …
The docker version output below shows the same API version on client and server, but when I run a command it throws an error.

# docker version
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-48.git0fdc778.el7.x86_64
 Go version:      go1.8.3
 Git commit:      0fdc778/1.12.6
 Built:           Thu Jul 20 00:06:39 2017
 OS/Arch:         linux/amd64

Server:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-48.git0fdc778.el7.x86_64
 Go version:      go1.8.3
 Git commit:      0fdc778/1.12.6
 Built:           Thu Jul 20 00:06:39 2017
 OS/Arch:         linux/amd64
#




But I get the error below...

# python
Python 2.7.5 (default, Aug 29 2016, 10:12:21)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import docker
>>> client = docker.APIClient(base_url='unix://var/run/docker.sock')
>>> print client.version()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/docker/api/daemon.py", line 177, in version
    return self._result(self._get(url), json=True)
  File "/usr/lib/python2.7/site-packages/docker/api/client.py", line 226, in _result
    self._raise_for_status(response)
  File "/usr/lib/python2.7/site-packages/docker/api/client.py", line 222, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/lib/python2.7/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 400 Client Error: Bad Request ("client is newer than server (client API version: 1.30, server API version: 1.24)")
>>>

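The client and daemon binaries agree (API 1.24), but the Python SDK defaults to its own, newer API version (1.30 here), independent of the installed CLI. One sketch of a fix is pinning the SDK's version to the daemon's:

```shell
# Pin the docker-py client to the daemon's API version (1.24) so the
# request is accepted instead of rejected as "client is newer than server".
python - <<'EOF'
import docker

# Passing version= explicitly stops the SDK from using a newer default
# API version than the 1.12.6 daemon supports.
client = docker.APIClient(base_url='unix://var/run/docker.sock', version='1.24')
print(client.version())
EOF
```

Alternatively, downgrading the `docker` pip package to one contemporary with the 1.12.x daemon achieves the same effect without code changes.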
