Ubuntu is an open source software operating system that runs from the desktop, to the cloud, to all your internet connected things.


Hi Experts,

I am getting the following error from Wagtail (a Django application) running inside a Docker container. The uWSGI logs from inside the container are below.

*** Starting uWSGI 2.0.18 (64bit) on [Mon Mar  4 03:56:36 2019] ***
compiled with version: 5.4.0 20160609 on 04 March 2019 01:00:36
os: Linux-4.4.0-1057-aws #66-Ubuntu SMP Thu May 3 12:49:47 UTC 2018
nodename: e56d42de8c73
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 8
current working directory: /home/ntdl/code
writing pidfile to /tmp/ntdl.pid
detected binary path: /usr/local/bin/uwsgi
setgid() to 33
setuid() to 33
chdir() to /home/ntdl/code
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/ntdl.sock fd 8
uwsgi socket 1 inherited UNIX address @ fd 0
inherit fd0: chmod(): No such file or directory [core/socket.c line 1797]
Python version: 3.6.2 (default, Jul 17 2017, 23:14:31)  [GCC 5.4.0 20160609]
Python main interpreter initialized at 0x971510
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 543168 bytes (530 KB) for 20 cores
*** Operational MODE: threaded ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x971510 pid: 24 (default app)
*** uWSGI is running in 



I am running nginx on Ubuntu Server 16.04 LTS.

I am trying to configure partkeepr.org.

During the install, this error appears when I use this config:

server {
    # Listening port and host address
    listen 80;
    server_name inventory.techpress.internal;

    # Default index pages
    index app.php index.html

    # Default character set
    charset utf-8;

    # Turn off access.log writes
    access_log off;
    log_not_found off;

    # Send file is an optimization, but does not work
    # across unix sockets which I use for php fpm so is best
    # used for local static content only
    sendfile off;

    # Root for / project
    root /var/www/html/inventory.techpress.internal/web/;

    # Setup rewrite helper
    rewrite ^/setup/webserver-test$ /setup/tests/webservercheck.json;

    # Handle main / location to symfony app.php controller
    location / {
        try_files $uri $uri/ /app.php?$query_string;

    # Handle /setup location to symfony setup.php controller
    location /setup {
        try_files $uri $uri/ /setup.php?$query_string;

    # Handle all locations *.php files via PHP-FPM unix socket
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        #fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_pass unix://var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME 

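For what it's worth, the pasted config has a few syntax problems that would make nginx reject it on their own: the `index` directive is missing its trailing semicolon, and the `location /` and `location /setup` blocks are never closed. A corrected sketch of the same server block follows; the `SCRIPT_FILENAME` value is an assumption, since the original paste is cut off at that line, and the socket path is written with the usual `unix:/path` form:

```nginx
server {
    listen 80;
    server_name inventory.techpress.internal;

    index app.php index.html;   # note the semicolon, missing in the paste
    charset utf-8;

    access_log off;
    log_not_found off;
    sendfile off;

    root /var/www/html/inventory.techpress.internal/web/;

    rewrite ^/setup/webserver-test$ /setup/tests/webservercheck.json;

    location / {
        try_files $uri $uri/ /app.php?$query_string;
    }   # this closing brace was missing

    location /setup {
        try_files $uri $uri/ /setup.php?$query_string;
    }   # and this one

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        # assumed value; the original paste was truncated here:
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

Running `nginx -t` after each change reports exactly which line nginx objects to, which helps separate syntax errors from PartKeepr-specific issues.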

I am trying to write a microservice that writes to the SD card on a Dell 3001 edge box, so the data can be saved directly to the SD card.
Any ideas how I can achieve this?
Thanks in advance.
I'm using Debian, Ubuntu, Linux Mint, and ext4, if that specific info helps.
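Once the card is mounted as an ordinary filesystem, the service only needs to append to a file under the mount point. A minimal shell sketch of that filesystem side; the device name `/dev/mmcblk0p1` and mount point `/media/sdcard` are assumptions (check yours with `lsblk`):

```shell
# append_reading DIR VALUE - append one timestamped reading to DIR/readings.csv
append_reading() {
    mkdir -p "$1"
    printf '%s,%s\n' "$(date -Is)" "$2" >> "$1/readings.csv"
}

# On the box itself (requires root; device and mount point are assumptions):
#   sudo mount /dev/mmcblk0p1 /media/sdcard
#   append_reading /media/sdcard 23.5
```

A long-running service would call `append_reading` from its main loop; anything written under the mount point lands directly on the card.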

When saving a file from within the Notepad app, or from within the LibreOffice word processor, the exact same message box appears when saving to a file name that already exists: it warns that the file already exists and asks if one wishes to cancel or overwrite. Does this identical message box originate from the application or the OS? And is the fact that the file already exists determined within the app, or is the message box generated by info passed between the app and the OS?

rename fails when renaming to a filename that already exists, but mv and cp overwrite with no warnings.
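The OS itself never prompts: opening a file for writing simply truncates whatever is there. The dialog comes from the application, which first asks the OS whether the name exists (a stat()-style check) and then decides to warn. A small shell demonstration of both halves:

```shell
tmp=$(mktemp -d)

printf 'old\n' > "$tmp/demo.txt"
printf 'new\n' > "$tmp/demo.txt"    # the OS overwrites silently - no prompt
cat "$tmp/demo.txt"                  # shows the new contents

# An app that warns does its own existence check first, then shows its dialog:
if test -e "$tmp/demo.txt"; then
    echo "file exists - this is where an app would ask cancel/overwrite"
fi

# cp and mv overwrite silently by default; -i makes them prompt, like the apps:
printf 'other\n' > "$tmp/src.txt"
cp "$tmp/src.txt" "$tmp/demo.txt"    # no warning without -i
```

So both Notepad-style editors show the same dialog because each implements the same check-then-ask pattern, not because the OS supplies the message box.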
New Ubuntu 18.04 server build, running ZFS for backup data storage. I am receiving errors about "ata2.00 failed command: WRITE FPDMA QUEUED".

Please see attached screen shot.

What does this mean? Are my drives going bad? or is it a motherboard issue?

Any guidance would be appreciated.
I am trying to read a stored text file on an SD card; the card is in a Dell 3001 edge box running Ubuntu Core 16.04. I have tried to mount the SD card, but I cannot see the text file to read or edit its contents.
Thanks in advance.
Where are the log files stored for a snap that runs on edge boxes with Ubuntu Core 16.04?
Thanks in advance.
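For reference, snapd gives each snap a couple of writable directories, and a snap daemon's stdout/stderr goes to the journal rather than a file. A hedged sketch ("mysnap" is a placeholder name; the fallback values show where those variables normally point on a snap-enabled system):

```shell
# Inside a snap's runtime snapd sets these; outside, fall back to the usual paths:
SNAP_DATA=${SNAP_DATA:-/var/snap/mysnap/current}      # per-revision writable dir
SNAP_COMMON=${SNAP_COMMON:-/var/snap/mysnap/common}   # shared across revisions
echo "files the app writes itself end up under: $SNAP_DATA or $SNAP_COMMON"

# Daemon output goes to the journal instead of a file:
#   snap logs mysnap
#   journalctl -u snap.mysnap.mysnap
```

So a log file written with a relative path ends up somewhere under the snap's own writable area, not in the project directory you built the snap from.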
Hello Experts,

I am a novice in Linux, trying to enable SSO (Single Sign-On) for a website built on Ubuntu using the apache2 web server. We basically want authentication in place for anybody connecting to that website; I enabled LDAP auth and it's working fine. But the client needs single sign-on, so that staff need not manually authenticate with their AD credentials when accessing the website. I have gone through online documentation, and it's confusing what I need to change in the existing config and what new stuff to install. If anyone can help/guide me through the right process, it would be of great help.

Thanks a lot in advance.
I am running a Node.js app that checks for connectivity and reboots the box if a timer reaches 0, logging the status to a log file. But when I run this code as a snap on Ubuntu Core, I cannot find where the log files are within the snap.
here is my code:
#!/usr/bin/env node
const moment = require('moment');
const shutdown = require('electron-shutdown-command');
const electron = require('electron');
const fs = require('fs');

let counter = 0;

// Resolve a well-known hostname; treat ENOTFOUND as "no connectivity".
function checkInternet(cb) {
    require('dns').lookup('google.com', function (err) {
        if (err && err.code === "ENOTFOUND") {
            cb(false);
        } else {
            cb(true);
        }
    });
}

function rebootEdgeBox() {
    console.log('Shutting down now');
    // (the actual shutdown/reboot call was cut off in the original paste)
}

// Poll connectivity every 30 seconds; after 10 consecutive failures, reboot.
setInterval(function () {
    checkInternet(function (isConnected) {
        if (isConnected) {
            counter = 0;
        } else {
            const timeStamp = moment().format('YYYY-MM-DD, HH:mm:ss');
            const status = 'Disconnected at ' + timeStamp;
            vortexLog(status);
            counter += 1;
            if (counter === 10) {
                counter = 0;
                rebootEdgeBox();
            }
        }
    });
}, 30000);

// Note: this relative path resolves against the process's working directory,
// which inside a confined snap is not the project folder.
function vortexLog(string) {
    fs.appendFile('./Logs/connectionLog', string + "\r\n", function (err) {
        if (err) throw …
I have a newly configured, headless, Ubuntu 18.04 dedicated server on a hosting company, so it's not a local server.

The problem is that while I am connected and doing something on the server over SSH (running top, transferring a file, etc.), I get disconnected and cannot connect again. I have to reboot the server to be able to reconnect.

The hosting company says the server is configured to have power saving mode turned on, and it goes into power saving mode.

I find it weird because:
1. I have never encountered a server OS with power saving mode turned on, because, well, it's a server.
2. Even if it was turned on, it should not kick in while the server is in use. At least that's how I understand it should work.
3. The server is almost 2 months old, and we have only encountered this issue twice. Had power saving been on all the time, it would have been a constant problem since day 1.

So my question is, how do I check that power saving mode is on, and how do I turn it off? I've read some articles in other sites, but I'd rather have the answers here.

Thanks in advance!

Hi Experts,

I get a bad gateway error when running the web application from Docker.

I have installed nginx inside the Docker container.

For example:

docker gateway is

public ip is

I create the environment for the docker container

declare -a ntdl_environment=( -e ES_CONNECTION= -e DATABASE=postgres://user:****@ -e LOCAL_URL_PREFIX==https://ntest-dev.s3.amazonaws.com -e IIIF_SERVER= -e FACEBOOK_APP_ID=000000 -e CLOUD_WATCH=true -e AWS_ACCESS_KEY_ID=akz44 -e AWS_SECRET_ACCESS_KEY=******** -e AWS_DEFAULT_REGION=ap-abct-2 -e S3_BUCKET=abc-test-dev -e GENERIC_SERVER_NAME= -e SEARCH_PATH=/ -e SEARCH_DOMAIN= -e HANDLE_SITEMAP_PATH=https://s3-test.amazonaws.com/ntest/sitemap/sitemap.txt.gz -e DEBUG=true)


I ran the following command to start the Docker container:

docker run --name ntdl -d --restart always  ${ntdl_environment[@]} f27af16c9ed6

To get a bash shell inside the Docker container, I ran the following from the command prompt:

docker exec -it ntdl /bin/bash

When I query the gateway IP for Elasticsearch from inside the container, it works fine; I get the following results:

root@16a1d7df5399:/home/ntdl/code# curl -XGET
  "name" : "C****D",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "FG4Sgll3Rau***6nQ",
  "version" : {
    "number" : "5.6.4",
    "build_hash" : "8bbedf5",
    "build_date" : "2017-10-31T18:55:38.105Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  "tagline" : "You Know, for Search"

I checked the nginx status inside the container; it is running fine.

root@16a1d7df5399:/home/ntdl/code# service nginx status
 * nginx is running

Running curl -XGET 'localhost' inside the container returns the following error:


I am using WAMP on my Windows 10 machine.

I am having the same issue on my Ubuntu machine.

When I add too many columns to my database I get an error:

Rebuild database fault: PDOException: SQLSTATE[42000]: Syntax error or access violation: 1118 Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535.


I have gone to the my.ini file and added:

I have also tried increasing my log file size to 512 and the buffer size to 2048 just to see. Still no change.

Any time I try to add another column in MySQL, I get the error.

I have a security-related question on Ubuntu Linux.
Can we remove sudo access to the root console for power users, but still allow admins to SSH to the servers with the root password?
As a security measure, is it a good idea to disable remote root access for everyone connecting to the servers remotely?
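As a sketch of one common arrangement (not a recommendation tailored to any particular environment): take the power users out of the sudo group, and control remote root logins in sshd_config. Most hardening guides advise disabling remote root logins entirely and having admins log in as themselves and escalate; password-based root over SSH is generally discouraged. Illustrative fragments ("alice" is a placeholder):

```
# Remove a power user from the sudo group:
#   sudo deluser alice sudo

# /etc/ssh/sshd_config
PermitRootLogin no                   # safest: no remote root logins at all
# PermitRootLogin prohibit-password  # alternative: root with SSH keys only
# PermitRootLogin yes                # what password-based root SSH requires (discouraged)
```

After editing sshd_config, reload the daemon (on Ubuntu, `sudo systemctl reload ssh`) for the change to take effect.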

I have a VirtualBox guest on Windows 10 running a 10 GB Xubuntu client, and I want to increase it to 20 GB. After a lot of work I got the VMDK to 20 GB, but the "actual size" in VirtualBox is still only 10 GB. The disk is a VMDK file (virtual size 20.00 GB, actual size 9.07 GB).

No tool I can find can see the file as a partition and increase it from Windows; from inside Xubuntu I have the same problem. How can I get the actual size up to 20 GB?

(No, it does NOT auto-increase; it just gets full.)
I managed to run Docker and install WordPress on Ubuntu Linux, but I can't seem to get the hang of how to edit the files within the container, as I get permission issues.
I think I am looking at it the wrong way.
Could someone get me thinking the right way? I love the performance for local development :).
(PHP, WordPress, MySQL on nginx)

I have an NFS server installed on Ubuntu 16.04 LTS. The client machine runs Windows Home edition and connects to the server with OpenVPN, which is on another subnet.
I am unable to mount the NFS share from the client machine.
Kindly guide me on how to connect to the NFS share on Ubuntu.


Does anyone have any idea how to authenticate a user against two different OUs on the same AD server?

I am using Apache 2.4 on Ubuntu 18.04.
Server version: Apache/2.4.18 (Ubuntu)
Server built:   2018-06-07T19:43:03

The user could be in "ABC User" or "XYZ user".
AD OUs are:
AuthLDAPURL "ldap://adx.ABC.org:389/OU=ABC Users,DC=ABC,DC=org?sAMAccountName?sub?(objectClass=*)"
AuthLDAPURL "ldap://adx.ABC.org:389/OU=XYZ Users,DC=ABC,DC=org?sAMAccountName?sub?(objectClass=*)"

Part of the current conf file:
<Location />
      AuthName "ABC Intranet"
        AuthBasicProvider ldap

        AuthType Basic
        AuthLDAPURL "ldap://adx.ABC.org:389/OU=ABC Users,DC=ABC,DC=org?sAMAccountName?sub?(objectClass=*)"

      # login to AD
      AuthLDAPBindDN "CN=ldap_ABCweb,OU=ABC Service Accounts,DC=ABC,DC=org"
        AuthLDAPGroupAttributeIsDN off
        AuthLDAPGroupAttribute memberUid

# tried this and failed
#      Require ldap-filter (&(memberOf='OU=XYZ Users,DC=ABC,DC=org?sAMAccountName?sub?(objectClass=*')|(memberOf='OU=ABC Users,DC=ABC,DC=org?sAMAccountName?sub?(objectClass=*'))

# tried this and failed                                     
#      <RequireAny>
#        Require ldap-filter (&(memberOf='OU=ABC Users,DC=ABC,DC=org?sAMAccountName?sub?(objectClass=*'))
#        Require ldap-filter (&(memberOf='OU=XYZ Users,DC=ABC,DC=org?sAMAccountName?sub?(objectClass=*'))
#       </RequireAny>

      # require any is implied
      require any
      Require valid-user
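One pattern I believe works for this (via mod_authn_core's provider aliases): define one aliased LDAP provider per OU and list both in AuthBasicProvider, so Apache tries each in turn instead of cramming both OUs into one filter. A sketch reusing the values from the config above; the AuthLDAPBindPassword lines are omitted here, as in the original:

```apache
# Requires mod_authn_core (AuthnProviderAlias) and mod_authnz_ldap.
<AuthnProviderAlias ldap ldap-abc>
    AuthLDAPURL "ldap://adx.ABC.org:389/OU=ABC Users,DC=ABC,DC=org?sAMAccountName?sub?(objectClass=*)"
    AuthLDAPBindDN "CN=ldap_ABCweb,OU=ABC Service Accounts,DC=ABC,DC=org"
</AuthnProviderAlias>

<AuthnProviderAlias ldap ldap-xyz>
    AuthLDAPURL "ldap://adx.ABC.org:389/OU=XYZ Users,DC=ABC,DC=org?sAMAccountName?sub?(objectClass=*)"
    AuthLDAPBindDN "CN=ldap_ABCweb,OU=ABC Service Accounts,DC=ABC,DC=org"
</AuthnProviderAlias>

<Location />
    AuthName "ABC Intranet"
    AuthType Basic
    AuthBasicProvider ldap-abc ldap-xyz
    Require valid-user
</Location>
```

With two providers listed, a user who fails the ABC Users search base is then tried against XYZ Users, which matches the "user could be in either OU" requirement.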
Hi Experts,

I'm looking for a simple script to collect the local time drift on Ubuntu Bionic and send it to a specific TCP port on the local system.

Any advice would be highly appreciated.

Thanks & Regards,
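A minimal sketch, assuming chrony is the time daemon (Bionic may instead run systemd-timesyncd, which would need a different source) and that 7000 is a placeholder TCP port: extract the offset from `chronyc tracking` and pipe it to `nc`.

```shell
# Print the signed clock offset in seconds, reading `chronyc tracking`-style
# output on stdin ("fast" of NTP time = positive, "slow" = negative):
parse_offset() {
    awk -F':' '/^System time/ {
        split($2, f, " ")
        s = f[1] + 0
        if ($2 ~ /slow/) s = -s
        printf "%+.9f\n", s
    }'
}

# On a real system (chronyc and the listening port are assumptions):
#   chronyc tracking | parse_offset | nc -q1 127.0.0.1 7000
```

Run the pipeline from cron or a systemd timer to sample the drift periodically.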
I have an OpenVPN server and all clients work fine.
I have also set up a new OpenVPN client config on it, and it connects to the other servers.
I want requests for a specific IP to be routed through the VPN client connection that is established on the server.
What do I have to do in this case?

Hi All,

I am looking to semi-automate the setup of Ubuntu machines.  I don't do that many, so this is more of an 'out of interest' thing than any real productivity issue.

For the avoidance of doubt, I am not looking for a full automated / scripted setup at this point, just to move some setup tasks from being a manual / GUI action to command line.

The first and simple thing is that, when I have setup a new Ubuntu install (16.04 or 18.04), I remove the 'Amazon' icon from the 'favourites' bar.

I do that by right-clicking, and selecting 'remove'.

My understanding is that I could uninstall Unity Web Apps entirely with this:

sudo apt-get remove unity-webapps-common


which would mean the Amazon icon goes too, but I don't want to remove anything else at this point (unless you think I should?)

I also found a reference that I could run:

sudo apt-get remove ubuntu-web-launchers


to remove the Amazon icon.

My concern is that often there is no explicit mention of what else might be removed or impacted, if anything, which makes me reluctant.

So, how can I just delete the icon from the favourites bar via the command line?
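For the CLI-only approach: the favourites bar contents are just a gsettings list, so nothing needs to be uninstalled; rewrite the list without the Amazon entry. A sketch with a small helper that strips one entry; the desktop-file name `ubuntu-amazon-default.desktop` and the Unity key are my assumptions, so check the `get` output first:

```shell
# strip_fav NAME - remove one .desktop entry from a gsettings favourites list
strip_fav() {
    sed -e "s/, '$1'//" -e "s/'$1', //"
}

# 18.04 (GNOME Shell):
#   gsettings get org.gnome.shell favorite-apps
#   gsettings set org.gnome.shell favorite-apps \
#     "$(gsettings get org.gnome.shell favorite-apps | strip_fav ubuntu-amazon-default.desktop)"
#
# 16.04 (Unity) keeps the equivalent list, I believe, under:
#   gsettings get com.canonical.Unity.Launcher favorites
```

Because this only edits the per-user favourites setting, it removes the icon without touching any packages.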


This is a PHP script which basically sends mail via the terminal shell. I am studying it, and the curious thing is that in my tests it worked only on Ubuntu 14.04 with PHP 5 and Postfix. I could not get it to work on Ubuntu 18.04 or Debian 9. I would like to ask for help changing the code so it works on those versions. Thank you.


With both of these versions of Ubuntu, is there a way I can make the favourites bar icons larger? I have found a way to make the other icons larger, but not the favourites bar ones.
Happy if there is a different way for each version.

Ubuntu 18.04.1 LTS and Ubuntu 18.10


Home LAN ----> Cisco 891 ----> Internet (ISP) ----> Ubuntu server [another country/city] (IPsec or something) ----> Internet (ISP)

Can I configure my home Cisco router to connect to another VPN server, and give my home computers access to the internet from that server?
How do I find an unauthorized connection in a Samba domain?
I have a Samba 4.x domain on Ubuntu 16.04. Is there software for intrusion detection?
Ubuntu graphics crash with Visiontek / AMD Radeon 7750

I have a frustrating and consistent issue with gpu crashes on a pair of brand new systems I just built, both used as Ubuntu-based securities trading and data analysis systems.  Any help or ideas in solving this is greatly appreciated.  In all cases, the system boots perfectly fine and performs as expected until a gpu hang, anywhere from a few minutes to around 24 hours later with an average time to gpu hang of around 8 hours.  Degree of system usage from zero to maximum does not seem to affect time to gpu hang.  Upon gpu hang, the screen freezes, but the mouse can still be moved and non-graphics functionality generally continues, such as music playing.  In all cases, a gpu hang is noted in the sys logs, as shown below.

Each of the two systems has the following brand new hardware:

Intel i7-7820X 8 Core Processor
128GB GSkill Ram
Asus Prime X299A Motherboard
Mushkin 2TB SSD
Corsair RM1000x 1000-watt power supply
Corsair Hydro H115i liquid cooler
VisionTek Radeon 7750 2GB 6x-MiniDP Video Card (#900614)  
   (one video card is brand new, the other is two years old, both identical)
No overclocking used
1 to 3 monitors attached

I have run each of these systems with the following Ubuntu OS / configuration combinations as I worked through trying to solve this Radeon crash issue.  Each produces the same consistent gpu hang / crash as described above:

1.  Ubuntu 18.04.1, default configuration, open …





