Linux is a UNIX-like open source operating system with hundreds of distinct distributions, including: Fedora, openSUSE, Ubuntu, Debian, Slackware, Gentoo, CentOS, and Arch Linux. Linux is generally associated with web and database servers, but has become popular in many niche industries and applications.


I've got a cron job that runs 4 times a day, using a simple shell script to run mysqldump, tar the output file (DB_BACKUP-2017-11-18.55.25.tgz), and place it in a directory.


#!/bin/sh

# These two variables are used below but weren't shown in the snippet;
# values here are assumed from the rest of the script -- adjust to yours.
backup_dir="/share/CACHEDEV1_DATA/Web/SQL"
backup_file="/share/CACHEDEV1_DATA/Web/SQL/vcal.sql"

tar_name=`date +%Y-%m-%d.%M.%S`

# Dump MySQL Database
/usr/local/mysql/bin/mysqldump --user="user" --opt vcal > "$backup_file"

# Backup MySQL Dump File
/bin/tar -czPf "$backup_dir/DB_BACKUP-$tar_name.tgz" "$backup_file"

exit 0


I'm trying to figure out a way to have the script delete backups that are older than 4 days.
Ideally, I'd be left with 16 files (4 are created per day) at any given time.

Any suggestions?

Thank you for looking :)
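
One common approach is a `find` call at the end of the script, matched to the backup naming pattern. A minimal sketch, assuming the backups live in a single directory (the default path below is a placeholder, not from the original script):

```shell
#!/bin/sh
# Prune backups older than 4 days; at 4 dumps/day this keeps ~16 files.
# NOTE: backup_dir is a placeholder -- point it at your real backup directory.
backup_dir="${backup_dir:-/tmp/sql_backups}"
# mkdir -p keeps the script safe to run before the first backup exists
mkdir -p "$backup_dir"

# -mtime +3 matches files last modified more than 3 full 24-hour periods
# ago, i.e. anything in its 5th day or older.
find "$backup_dir" -name 'DB_BACKUP-*.tgz' -type f -mtime +3 -delete
```

The name pattern (`-name 'DB_BACKUP-*.tgz'`) and `-type f` guard against deleting anything else that lands in the directory; test first by replacing `-delete` with `-print`.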


I need two passive fingerprinting programs.

I've tried p0f, but the results I'm getting are inconsistent, maybe just wrong.

I load up p0f with the line: p0f -f p0f.fp log1.log

Then I use either netcat or a browser to open a connection, but I get all sorts of results and never the IP of the target I'm after.

I need to identify the target's (Yahoo, Google, Microsoft, whatever) operating system with a passive program.

So I dig the domain name or ping the site to retrieve an IP address, then wait for it to show up in p0f, but I never actually see a packet come back from that site identifying an OS.

This is for an ethical hacking class.
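
Two things worth checking: the trailing argument to p0f is a BPF filter expression, not a log file, so `p0f -f p0f.fp log1.log` asks p0f to use "log1.log" as a capture filter; and p0f only fingerprints packets it actually captures, so you need traffic to the target flowing while it listens. A sketch of what I'd try (interface name and target are examples; needs root):

```shell
# Resolve the target first (yahoo.com is just an example)
TARGET=$(dig +short yahoo.com | head -n1)

# Listen on eth0, log to a file, and restrict capture to the target
# with a BPF filter. The server's SYN+ACK packets carry the
# signatures that identify the remote OS.
sudo p0f -i eth0 -o /tmp/p0f.log "host $TARGET"

# In another terminal, generate traffic so p0f has packets to inspect:
curl -s "http://$TARGET/" > /dev/null
```

If nothing for the target appears even then, the capture interface may be wrong (e.g. traffic leaving via a different NIC), which would also explain seeing "all sorts of results" from unrelated hosts.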


I am confused about when and why to run many containers in a single task, versus one task with one container. What is the logic for this?

If I have a multi-node ECS cluster up, it seems to imply that I can create a service and scale it across nodes. Do you have a GitHub example of something I can demo this with?

Do I have to set up a load balancer for this to work?

What do I have to do to get Swarm-like behavior?
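
The usual rule of thumb: one main container per task, and multiple containers in one task only when they are tightly coupled sidecars (log shipper, proxy) that must be placed and scaled together. A service then runs N copies of a task spread across the cluster's instances, which is the Swarm-like behavior; a load balancer is only needed to distribute inbound traffic, not for the scaling itself. A minimal CLI sketch (cluster, family, and image names are all placeholders):

```shell
# Register a task definition with a single nginx container
aws ecs register-task-definition \
  --family web \
  --container-definitions '[{"name":"web","image":"nginx","memory":256,"portMappings":[{"containerPort":80,"hostPort":0}]}]'

# Run it as a service with 4 copies; ECS spreads the tasks
# across the container instances in the cluster
aws ecs create-service \
  --cluster my-cluster \
  --service-name web \
  --task-definition web \
  --desired-count 4
```

`hostPort: 0` asks for dynamic host ports, which is what you'd pair with an ALB target group if you later put a load balancer in front.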

Hello experts,
I am new to Linux. While trying to start the IPsec service I got the error below and it failed. Could you guide me on how to identify and fix the problem?
Thank you.

[root@izj6cj3u8v3v07l4w3162fz ~]# systemctl start ipsce
Failed to start ipsce.service: Unit not found.
[root@izj6cj3u8v3v07l4w3162fz ~]# systemctl start ipsec
Job for ipsec.service failed because the control process exited with error code. See "systemctl status ipsec.service" and "journalctl -xe" for details.
[root@izj6cj3u8v3v07l4w3162fz ~]# systemctl status ipsec.service
● ipsec.service - Internet Key Exchange (IKE) Protocol Daemon for IPsec
   Loaded: loaded (/usr/lib/systemd/system/ipsec.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Fri 2017-11-17 14:42:55 CST; 7s ago
     Docs: man:ipsec(8)
  Process: 3074 ExecStartPre=/usr/libexec/ipsec/addconn --config /etc/ipsec.conf --checkconfig (code=exited, status=3)

Nov 17 14:42:55 izj6cj3u8v3v07l4w3162fz systemd[1]: ipsec.service: control process exited, code=exited status=3
Nov 17 14:42:55 izj6cj3u8v3v07l4w3162fz systemd[1]: Failed to start Internet Key Exchange (IKE) Protocol Daemon for IPsec.
Nov 17 14:42:55 izj6cj3u8v3v07l4w3162fz systemd[1]: Unit ipsec.service entered failed state.
Nov 17 14:42:55 izj6cj3u8v3v07l4w3162fz systemd[1]: ipsec.service failed.
Nov 17 14:42:55 izj6cj3u8v3v07l4w3162fz systemd[1]: ipsec.service holdoff …
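
The status output above already points at the cause: the ExecStartPre step (`addconn --checkconfig`) exited with status 3, meaning /etc/ipsec.conf failed validation before the daemon ever started. Re-running that same check by hand usually prints the offending config line:

```shell
# Re-run the failing pre-start check shown in the status output
sudo /usr/libexec/ipsec/addconn --config /etc/ipsec.conf --checkconfig

# Then inspect the service's recent log lines for detail
sudo journalctl -u ipsec.service --no-pager -n 50
```

Fix whatever `--checkconfig` complains about, then `systemctl start ipsec` again.
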
How can I set things up so that one container I am running as a task and a service inside AWS ECS can communicate with another?

Do I need to do something with the load balancer or the network?
I have never set up a DNS server before. Right now we have an SBS server that is our DNS server, but we are probably doing away with Windows, or at least only using the domain for a small number of PCs.

Other than public internet DNS, there are just a handful of DNS entries that I need for a handful of internal servers.

Before I look into this too much, I'm curious how hard people think this is to do. I'm comfortable at the Linux command line, and I know how to point my DHCP server at a new DNS server address. Really I am just curious how difficult it is to set up a DNS server on Red Hat/CentOS that, besides passing through public DNS records, serves just a handful of internal A records.
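
For that use case (a handful of internal A records plus forwarding everything else out), dnsmasq is much simpler than a full BIND setup. A minimal sketch of /etc/dnsmasq.conf (hostnames and addresses are examples):

```
# /etc/dnsmasq.conf -- minimal internal DNS (example names/addresses)
# dnsmasq also answers from /etc/hosts, so entries there become A records.
address=/fileserver.internal.lan/192.168.1.10
address=/mail.internal.lan/192.168.1.11

# Forward everything else to public resolvers
server=8.8.8.8
server=1.1.1.1
```

Install and enable with `sudo yum install dnsmasq && sudo systemctl enable --now dnsmasq`, then point DHCP at this box. This is roughly a half-hour job if you're already comfortable at the command line.
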

I am trying to install OpenMS following the instructions at http://ftp.mi.fu-berlin.de/pub/OpenMS/release-documentation/html/install_linux.html.

I have run the following commands:
sudo apt-get install build-essential cmake autoconf patch libtool automake
sudo apt-get install qt4-default libqtwebkit-dev
sudo apt-get install libeigen3-dev libwildmagic-dev \
  libxerces-c-dev libboost-all-dev libsvn-dev libgsl-dev libbz2-dev
# use from contrib for compatibility: SEQAN, COINOR, ZLIB, WILDMAGIC

# Assuming you are in ~/Development
git clone  https://github.com/OpenMS/contrib.git
mkdir contrib-build
cd contrib-build
cmake -DBUILD_TYPE=LIST ../contrib

cmake -DBUILD_TYPE=SEQAN ../contrib
cmake -DBUILD_TYPE=WILDMAGIC ../contrib
cmake -DBUILD_TYPE=EIGEN ../contrib

All these commands work fine, but the next one does not work:
cmake -DBUILD_TYPE=ALL -DNUMBER_OF_JOBS=4 ../contrib

I get the error: -- Configuring incomplete, errors occurred!
See also "/home/gcefalu/Development/contrib-build/CMakeFiles/CMakeOutput.log".

Is it possible that I'm missing some Linux library?

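
"Configuring incomplete, errors occurred!" is generic; the actual failure reason usually lands in CMakeError.log rather than the CMakeOutput.log the message points at. A couple of commands to narrow it down (the path matches the log location in the error above):

```shell
# The real failure reason is usually in CMakeError.log
tail -n 60 ~/Development/contrib-build/CMakeFiles/CMakeError.log

# Re-run one contrib target at a time to isolate which dependency
# fails under BUILD_TYPE=ALL (COINOR is just an example from the
# compatibility comment in the install notes)
cmake -DBUILD_TYPE=COINOR ../contrib
```

Once the log names a missing header or library, the corresponding `-dev` package is usually the fix.
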
I'm trying to install an IRC client on my Debian GNOME desktop.

Every IRC client comes up with a plethora of unresolved dependencies.

I've tried apt-get, and the GUI package manager when apt-get didn't want to auto-resolve the dependencies.

The GUI package manager lists the app I want to install; I click the checkbox to install it, click apply changes, the progress bar jumps from 0 to 100 almost instantly, and then it shows the package as installed...

But when I go to run the app, it is missing, and when I go back into the package manager it shows as not installed.

I haven't had this problem before; both apt-get and the package manager have been good to me.

Thanks for suggestions.
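
Symptoms like that (instant "install", nothing on disk, state flipping back) often mean stale package lists or half-configured packages rather than a real dependency problem. A repair sequence worth trying first (hexchat is just an example client):

```shell
# Refresh package lists, then repair any broken/half-configured packages
sudo apt-get update
sudo apt-get -f install
sudo dpkg --configure -a

# Try the install again from the command line to see real errors
sudo apt-get install hexchat

# Verify the package actually landed on disk
dpkg -L hexchat | head
which hexchat
```

If `apt-get update` itself errors, fix the sources it complains about in /etc/apt/sources.list first; mismatched or partial sources are a common cause of "plethora of unresolved dependencies" on Debian.
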

We have 4 Ubuntu 14.0 Linux guest VMs on Citrix XenServer 6.0. We are planning to migrate these VMs to a VMware ESXi 6.5 cluster environment. How do we convert them? Please suggest the best solution.

So we have a Linux 6.5 server, and whenever I SSH into it from a workstation to view the audit logs, my UID is attached to someone else's AUID who is also logged in over SSH.

I understand that the AUID is the audited user ID and the UID is the user ID. So if I log in as MikeP, my AUID and UID should both be MikeP, and if from that login I SSH to a different machine using different credentials, the AUID stays the same but the UID changes to the new credentials.

So if MikeP SSHes to another workstation as Mike-local, the AUID should stay MikeP and the UID becomes Mike-local.

However, for us that is not happening. The AUID comes up as whoever is attached to the system when we SSH or log in locally.

Has anyone experienced this before, and what could be causing it?
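
One thing worth checking: the auid is stamped once at login by pam_loginuid and is immutable for the life of the session. If sshd's PAM stack is missing pam_loginuid (or auditd came up after the sessions did), audit records can carry a stale or inherited auid, which would look exactly like your symptom. A quick sketch of what to inspect (adjust to your setup):

```shell
# pam_loginuid must appear in sshd's PAM stack for auid to be
# set correctly at SSH login time
grep pam_loginuid /etc/pam.d/sshd

# Compare auid vs uid on recent login events, names resolved (-i)
sudo ausearch -m USER_LOGIN --start recent -i | grep auid
```

If pam_loginuid is absent there (or in /etc/pam.d/login for local logins), adding `session required pam_loginuid.so` and restarting sshd is the usual fix.
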

Hi All,

I have a web server that needs to host 2 SSL certs that will use 1 public IP address

I have added the certs to the server and added a new entry to the ssl.conf file

<VirtualHost *:443>
 #ServerName www.XXXXXXXX.com
 #DocumentRoot /var/www/site2
 SSLEngine on
 SSLCertificateFile /etc/httpd/conf/ssl.crt/XXXXXXXXX.crt
 SSLCertificateKeyFile /etc/httpd/conf/ssl.key/XXXXXXXXkey
 SSLCACertificateFile /etc/httpd/conf/ssl.crt/XXXXXXXX.crt

When I restart httpd I get the following message.

Starting httpd: [Wed Nov 15 09:25:05 2017] [warn] _default_ VirtualHost overlap on port 443, the first has precedence

Obviously, it is looking at both certs and as both use Port 443 it goes with the first cert it sees and not the second. What am I missing?

CentOS 6.9
Apache with mod_ssl installed
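
Two things stand out in the config above: ServerName is commented out, so Apache has no name to match the vhost on, and on CentOS 6.9 (httpd 2.2) name-based vhosts on 443 also need a NameVirtualHost directive. A sketch of the shape that works with SNI, with placeholder names standing in for the redacted ones:

```apache
# httpd 2.2 needs this once; 2.4 does name-based matching automatically
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName www.site1.com          # must be uncommented -- it's the match key
    DocumentRoot /var/www/site1
    SSLEngine on
    SSLCertificateFile    /etc/httpd/conf/ssl.crt/site1.crt
    SSLCertificateKeyFile /etc/httpd/conf/ssl.key/site1.key
</VirtualHost>

<VirtualHost *:443>
    ServerName www.site2.com
    DocumentRoot /var/www/site2
    SSLEngine on
    SSLCertificateFile    /etc/httpd/conf/ssl.crt/site2.crt
    SSLCertificateKeyFile /etc/httpd/conf/ssl.key/site2.key
</VirtualHost>
```

With ServerName set in both blocks, the "_default_ VirtualHost overlap" warning goes away and Apache selects the cert by the hostname the client sends via SNI (any client from roughly the last decade supports it).
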
How do I code and call an application in Ubuntu Linux that types क into the active window of another application when the user presses the key 'k'?
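
One way to sketch this under X11 is xbindkeys to grab the key plus xdotool to synthesize the text; note this is a hedged sketch, and a proper input method (e.g. ibus with a Hindi layout) is the more robust route for typing Devanagari:

```shell
# Install the tools
sudo apt-get install xdotool xbindkeys

# Bind 'k' to an xdotool command that types क into the focused window.
# Warning: grabbing plain 'k' swallows every normal 'k' keystroke
# system-wide, so in practice you'd pick a modifier combo instead.
cat >> ~/.xbindkeysrc <<'EOF'
"xdotool type --clearmodifiers क"
    k
EOF

# Start (or restart) the key binder
xbindkeys
```

This will not work under Wayland sessions, where applications cannot inject input into other windows this way.
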
Hi Andrew,

I have some serious trouble when trying to clone a physical Linux machine.
It seems that I can't get past the "unable to query the live Linux machine" error, even though I followed the VMware KB on this issue and checked each point.
The log seems to stop at the following entries:

2017-11-15T00:36:55.322+02:00 error converter-gui[10444] [Originator@6876 sub=wizardController] Cannot query source HW info: converter.fault.SysinfoQueryBadThumbprintFault
2017-11-15T00:38:57.136+02:00 error converter-gui[10444] [Originator@6876 sub=wizardController] Cannot query source HW info: converter.fault.SysinfoQueryLinuxFault

Do you have any other tips besides the VMware KB?
Any information from your side will be highly appreciated.

Best regards,

We are running Red Hat 7.2 in a cluster. How do we find out if a node has been fenced? Is there a command?
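
Assuming this is a Pacemaker/Corosync cluster (the usual RHEL 7 stack), a few commands will show it (node name is a placeholder):

```shell
# Overall cluster view -- a fenced node shows up as OFFLINE
pcs status

# Fencing (stonith) action history for a specific node
stonith_admin --history node1.example.com

# Corosync membership as a cross-check
corosync-quorumtool -l
```

`journalctl -u pacemaker` on a surviving node will also log the fencing event itself ("stonith ... succeeded").
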
I’m having trouble setting up LUKS on a Red Hat test server. I decided not to have the device prompt for a passphrase at boot but to use manual decryption instead, which requires running the cryptsetup commands and mounting by hand. I attempted to set it up on a blank second disk I recently installed. Here’s the session…


sudo cryptsetup luksOpen /dev/sdb crypt-sdb
# enter /dev/sdb password

sudo cryptsetup luksClose /dev/sdb crypt-sdb


This then caused RHEL to freeze and force a cold reboot. I then used yum to run updates. I try again…


sudo cryptsetup luksOpen /dev/sdb crypt-sdb
# enter /dev/sdb password
#[<username>@localhost dev]$ sudo cryptsetup luksClose /dev/sdb crypt-sdb
Device sdb not found

# [<username>@localhost dev]$ sudo mount /dev/sdb
mount: can't find /dev/sdb in /etc/fstab


So I attempt to add /dev/sdb to /etc/fstab, but am unsuccessful since it’s read-only. I try this.


[<username>@localhost etc]$ sudo cryptsetup luksFormat /dev/sdb
[sudo] password for <username>:

This will overwrite data on /dev/sdb irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase:
Verify passphrase:
Cannot format device /dev/sdb which is still in use.
[<username>@localhost etc]$ sudo umount /dev/sdb
umount: /dev/sdb: not mounted
[<username>@localhost …
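
For what it's worth, the session above mixes up the two device names involved. After `luksOpen`, the usable decrypted block device is `/dev/mapper/crypt-sdb`; `/dev/sdb` itself stays "in use" as the raw LUKS container (hence the "still in use" error), and `luksClose` takes only the mapping name, not the raw device. A sketch of the full manual cycle, using the device names from your session (this destroys data on /dev/sdb):

```shell
# One-time: format the raw disk as a LUKS container (DESTROYS DATA)
sudo cryptsetup luksFormat /dev/sdb

# Each use: open the container -> /dev/mapper/crypt-sdb appears
sudo cryptsetup luksOpen /dev/sdb crypt-sdb

# First time only: the filesystem goes INSIDE the mapping, not on /dev/sdb
sudo mkfs.ext4 /dev/mapper/crypt-sdb

# Mount the mapping, use it, then unmount and close in that order
sudo mkdir -p /mnt/secure
sudo mount /dev/mapper/crypt-sdb /mnt/secure
# ... read/write files under /mnt/secure ...
sudo umount /mnt/secure
sudo cryptsetup luksClose crypt-sdb    # mapping name only, no /dev/sdb
```

Mounting `/dev/sdb` directly, or passing `/dev/sdb` to `luksClose`, will always fail; everything after `luksOpen` should reference `/dev/mapper/crypt-sdb`.
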

We have a few Linux servers connected to 2-3 VLANs, with NFS file systems mounted from a different server. For example, the NFS server is registered as 192.168.10.x, which is where all our production payload traffic goes. We would like to move the NFS traffic to a different VLAN, like 172.10.10.x. Since many machines use these NFS servers, we don't want to change the DNS name; instead I was planning to add the new IP and hostname to /etc/hosts, so resolution uses the local /etc/hosts entry rather than going through DNS.

Basically, I want all the NFS traffic to go through 172.10.10.x.
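
That approach works as long as the resolver checks files before DNS (the usual default). A sketch of the client-side entry (hostname and address are examples based on your description):

```
# /etc/hosts on each NFS client -- local entries win over DNS
# (requires "hosts: files dns" order in /etc/nsswitch.conf)
172.10.10.5   nfsserver.example.com   nfsserver
```

Verify with `getent hosts nfsserver.example.com`, which returns what the mount will actually use. Note that already-mounted NFS filesystems keep their established IP; each mount needs an unmount/remount (or a maintenance-window remount) before traffic moves to the new VLAN.
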
Hi, just wondering if someone is able to help me rewrite a URL in the following pattern...

https://url.domain.com/url/ --> https://www.domain.com/url/

So if you load the first URL it shows the content from the second. It's basically creating a sub-domain that points to a folder, but still shows the folder as part of the URL (the /url/ at the end).

Thanking you in advance!!
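
One way to sketch this in Apache is a dedicated vhost for the subdomain that internally proxies to the main site, so the browser's address bar keeps showing url.domain.com while the content comes from www.domain.com (requires mod_rewrite and mod_proxy, and a cert covering url.domain.com; names are from your example):

```apache
<VirtualHost *:443>
    ServerName url.domain.com
    SSLEngine on
    # ... SSLCertificateFile / SSLCertificateKeyFile for url.domain.com ...

    SSLProxyEngine on
    RewriteEngine On
    # Fetch the same path from www.domain.com and serve it here,
    # without issuing a redirect -- [P] makes it an internal proxy
    RewriteRule ^/(.*)$ https://www.domain.com/$1 [P,L]
</VirtualHost>
```

If both names are served from the same box, an even simpler option is giving the url.domain.com vhost the same DocumentRoot as www, with no proxying at all.
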
I am configuring the Mattermost software; when you log in to Mattermost, why doesn't it ask you to sign up?

This is using Red Hat Enterprise Linux 6.6 64-bit. My vendor wanted to install some software but was told that a file named libXss.so.1 is missing. We confirmed that libXScrnSaver-1.2.2-2.el6.i686 is installed by running "yum install libXScrnSaver-1.2.2-2.el6.i686.rpm".

However, we find it very strange that the system still prompts with the error message that libXss.so.1 is missing. How can we solve this problem? Should we download the libXScrnSaver rpm package and reinstall it?

Thanks in advance.
Hi Experts,
I need to disable the Ubuntu desktop 17.10 GUI and start up with the command line / tty only.
I googled this and found that I need to run this command:
sudo systemctl stop lightdm.service
but I got an error that lightdm is not available.
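
That error is expected: Ubuntu 17.10 switched its display manager from LightDM to GDM, so there is no lightdm unit to stop. A sketch of the usual commands (the generic display-manager alias avoids guessing the unit name):

```shell
# Stop the GUI for the current session -- display-manager is an
# alias that points at whichever DM is active (gdm3 on 17.10)
sudo systemctl stop display-manager

# Make text mode (tty login) the default at every boot
sudo systemctl set-default multi-user.target

# To restore the graphical default later:
sudo systemctl set-default graphical.target
```

After `set-default multi-user.target` and a reboot you land on a tty; `sudo systemctl start display-manager` brings the GUI up on demand.
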
I am using Rubinius 3.86 on CentOS 7 (x86_64).
The installation directory is /var/home/ap/rubinius.

I attempted to install nio4r using the gem command and it failed.
The log from that attempt is as follows.

It mentions "method_missing", but what does that mean?

cat /var/home/ap/rubinius/gems/extensions/x86_64-linux/2.3/nio4r-2.1.0/gem_make.out

current directory: /var/home/ap/rubinius/gems/gems/nio4r-2.1.0/ext/nio4r
/var/home/ap/rubinius/bin/rbx -r ./siteconf20171112-15122-n0ryc2.rb extconf.rb --with-ldflags=-L/var/home/lib/gcc5/lib64
checking for unistd.h... yes
checking for sys/select.h... yes
checking for poll.h... yes
checking for sys/epoll.h... yes
checking for sys/event.h... no
checking for port.h... no
checking for sys/resource.h... yes
                  main # Rubinius::Loader at core/loader.rb:861
                script # Rubinius::Loader at core/loader.rb:679
           load_script . Rubinius::CodeLoader at core/code_loader.rb:590
           load_script # Rubinius::CodeLoader at core/code_loader.rb:505
            __script__ # Object at extconf.rb:21
   << (method_missing) # Kernel (NilClass) at core/zed.rb:1413

undefined method `<<' on nil:NilClass. (NoMethodError)

An exception occurred running extconf.rb
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for …
My FileBot script used to work on my ReadyNAS: it would automatically take my downloaded content, extract it, and move it to the right directory. Now it is not doing that, and I can't figure it out.

I get the following error:

(process:12005): GLib-GIO-ERROR **: Settings schema 'org.gnome.system.proxy' is not installed

I ran this:

filebot -script fn:amc --output "/data/Videos" --action copy --conflict override -non-strict --log-file amc.log --def clean=y artwork=y excludeList=amc.txt "/data/Torrents/Completed"
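
That error is GLib (via FileBot's Java runtime) looking up GNOME's proxy settings schema, which a headless NAS typically doesn't have installed. Since ReadyNAS is Debian-based, installing the schemas package and rebuilding the schema cache may clear it; this is a hedged guess at the cause, not a confirmed fix:

```shell
# The org.gnome.system.proxy schema ships with this package on Debian
sudo apt-get install gsettings-desktop-schemas

# Recompile the schema cache, then retry the filebot command
sudo glib-compile-schemas /usr/share/glib-2.0/schemas
```

If the package was already present, a recent update may have left the schema cache stale, in which case the `glib-compile-schemas` step alone is worth trying.
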
I wrote a simple Perl script, and when trying to run it I get the error below:


sh-4.4$ vi hi.pl
sh-4.4$ pwd
sh-4.4$ /home/cg/root/hi.pl
sh: /home/cg/root/hi.pl: Permission denied

All I wrote in hi.pl is:
print "hii";

From the Windows command prompt it ran fine once I installed ActivePerl:

C:\Users\ss\perl\code>perl hello.pl

Please advise.
I have tunneled mixed linux and windows clients to a 'within-firewall client' (that could access the share's host) before, so I know that that, at least, is possible.

But what about tunneling directly from the client to the host of the network share?

Can I have samba listen on port, say, 5559 (just an example), and only accept connections from localhost, and tunnel a client's 5559 to that host - so that the client appears to be connecting from host's localhost? I can't figure out how to set it up. So far, I have samba configured:

hosts allow = ::1 lo
interfaces = lo
bind interfaces only = yes
And I'm tunneling from the host:

ssh -R 5559:localhost:5559 shrusr@shrhost -Nf
However, if samba is already running, then TCP forwarding fails. If the tunnel is already running, then samba cannot start. Is what I'm trying to accomplish possible? Is there some other way to do it?

It seems like it should work - I can even netcat myself files across that ssh tunnel. So, netcat has no problem listening to the same port as ssh. Only smbd refuses, and also blocks ssh from that port if started first.

Any advice would be appreciated.
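
The conflict is that `ssh -R 5559:localhost:5559` asks sshd to listen on port 5559 of the very machine where smbd is (or wants to be) listening on 5559, so whichever starts first wins the port. One way around it is to reverse the direction: leave smbd on its standard port bound to loopback, and open the tunnel from the client with a local forward. A sketch reusing the names from your setup:

```shell
# On the share's host: smbd stays on its normal port, loopback only
#   (smb.conf)  interfaces = lo
#               bind interfaces only = yes
#               hosts allow = ::1 lo

# On the client: forward local 5559 to the host's loopback 445
ssh -L 5559:localhost:445 shrusr@shrhost -Nf

# The client connects to its own port 5559; smbd sees the connection
# arrive from the host's localhost, satisfying "hosts allow"
smbclient -p 5559 //localhost/share -U shrusr
```

With `-L` there is no listener competing on the host, so smbd and the tunnel can coexist, and the client still appears to connect from the host's loopback exactly as you intended.
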