Linux

Linux is a UNIX-like open source operating system with hundreds of distinct distributions, including: Fedora, openSUSE, Ubuntu, Debian, Slackware, Gentoo, CentOS, and Arch Linux. Linux is generally associated with web and database servers, but has become popular in many niche industries and applications.


This is Apache 2.2.17, and it was compiled into its own directory.
The OpenSSL version on the server was 1.0.0; I installed a newer version, 1.0.1g.

I configured the new version to be used by the OS: 'openssl version' and 'which openssl' both show the new version.

However, when I try to add the new security from OpenSSL in the httpd.conf I get this error:

SSLProtocol: Illegal protocol 'TLSv1.2'

...showing that Apache is still not using the updated OpenSSL.
Per Red Hat, httpd 2.2.17 should support this:

https://access.redhat.com/solutions/65030
RHEL 6: TLS v1, v1.1, & v1.2 support

You must have at least openssl-1.0.1e-15.el6, httpd-2.2.15-39, and mod_ssl-2.2.15-39 to have support for TLSv1, v1.1, & v1.2.
TLS v1.1 & v1.2 support added to OpenSSL with release of openssl-1.0.1e-15.el6 from RHBA-2013:1585, first shipped in RHEL 6.5.
The ability to specify TLSv1.1 & v1.2 in Apache with SSLProtocol was included in httpd-2.2.15-39, released in RHBA-2014:1386-1.

What needs to be done to enable this, other than recompiling Apache?
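For reference, one way to double-check which OpenSSL library mod_ssl was actually built against (the module path below is a guess based on my custom prefix):

ldd /usr/local/apache2/modules/mod_ssl.so | grep -i 'libssl\|libcrypto'
strings /usr/local/apache2/modules/mod_ssl.so | grep 'OpenSSL 1'

If that still reports 1.0.0, the module itself was linked against the old library.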
0

I have a QNAP TS-459 Pro II NAS, which runs Linux (QTS).
It publishes shared folders with Samba to the network and my clients are all Windows.

Now, after upgrading to the latest firmware, some users complain they can't access the shared folders, while others still can.

Other owners of the NAS report the same problem, though there is no recommended solution from QNAP yet. Those reports say basic troubleshooting like rebooting and removing and resetting permissions failed to fix the problem.

So: there is some bug causing random users not to be able to access a shared folder.
I need to enumerate my users and check whether they can access the folder. Since it's a bug, I'm not confident that just reviewing permissions will work.

Question: How can I test/check access to a network shared folder for several users from Windows 7 / Server 2012?
 
(I tried a bit with accesschk and AccessEnum from Sysinternals, but they don't seem well suited to this.)
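Failing a Windows-native tool, a rough sketch of what I could run from any Linux box (or the NAS itself, if smbclient is installed); the share name, user list and password handling are placeholders:

for u in alice bob carol; do
    echo "=== $u ==="
    # add %password after the user name, otherwise smbclient will prompt
    smbclient //nas/shared -U "$u" -c 'ls' && echo "$u: OK" || echo "$u: FAILED"
done

That would at least tell me which accounts the Samba service itself rejects.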
0
How do you format a CD in Ubuntu? I've done it before, but I forget the names of the tools. I know I'm preaching to the choir, but there are SO MANY names...
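For a rewritable disc, I believe the command-line route is something like this (the drive path /dev/sr0 is just an assumption):

wodim -v blank=fast dev=/dev/sr0

followed by burning a new session with wodim or one of the GUI tools (Brasero, K3b).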
0
I need to clear the logs on my XenServer host running version 6.5. Not being a Linux guru, how do I get this done? Thanks.
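My understanding is that on the underlying Linux host this mostly means rotating or truncating files under /var/log, e.g. (the file name is just an example):

logrotate -f /etc/logrotate.conf
> /var/log/messages

but I'd like to know if XenServer has a supported way of doing it.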
0
Hi,

I have a strange problem with my notebook. I installed Ubuntu, then I removed every partition
on the disk.

Diskpart - Select disk 0 - clean

Then I re-installed Windows 10 from scratch.

And I still had these entries here. The other, non-Ubuntu entries are valid. See the attached graphic.

So I am trying to work out how to remove them.

Any suggestions?
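If it matters: I gather these could be leftover UEFI firmware boot entries, in which case something like this from a live Linux USB might list and remove them (the entry number is only a placeholder):

efibootmgr -v
efibootmgr -b 0003 -B

but I'd prefer a way to do it from within Windows.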

Thanks,

Ward
IMG_2458.JPG
0
I am working on the PWM tool, which is integrated with 389 Directory Server. I was able to integrate it with DirSrv successfully: users can log in to the tool and change their password from it.
I am facing an issue when a user uses the forgot-password option.
Once the user presses the forgot-password link, it asks the security question; once he gives the right answer, it should allow him to reset his password. But I am getting error "PWM 5046":

An error occurred while unlocking your account. Please contact your administrator. { 5046 ERROR_UNLOCK_FAILURE (unable to unlock user uid=infosec,ou=People,dc=tetrasoft,dc=in error: [LDAP: error code 16 - No Such Attribute]) }

Can anyone help me in this regard?
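For reference, a way to see which lockout-related attributes actually exist on that entry (the bind DN is just an example):

ldapsearch -x -H ldap://localhost -D "cn=Directory Manager" -W -b "uid=infosec,ou=People,dc=tetrasoft,dc=in" passwordRetryCount retryCountResetTime accountUnlockTime

If none of those attributes are present, that would be consistent with the "No Such Attribute" the unlock operation reports.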
0
I’ve installed the GNOME 3 desktop on an Oracle Linux 7.3 instance on Amazon AWS (AMI ID OL7.3-x86_64-HVM-2016-11-09). The desktop seems a bit off, however. As seen in https://imgur.com/a/EgAON, the resolution is poor and, more importantly, the drop-down applications menu is missing. I’m using TigerVNC (VNC Viewer 6.17.731) to connect to the server. The desktop was installed with:

yum groupinstall -y "Server with GUI"
 
Any insights would be welcome.
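For what it's worth, I understand a GNOME session under TigerVNC is normally launched from ~/.vnc/xstartup and the geometry is set when the server starts; the lines below are a typical example, not what is currently on the box:

# ~/.vnc/xstartup
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
exec gnome-session

vncserver :1 -geometry 1920x1080 -depth 24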
0
Hello,

When we create datanodes, should we use local disks or SAN disks for the data disks? Most recommendations are for local disks. Why do we need local disks?
0
Experts - I’d like to create a Linux/Unix read-only-root role for Auditors, InfoSec and Tech Ops, so they can examine a system without risk of breaking anything.
-      Using sudo or Centrify, we can grant the privileges to run some commands as root, e.g. ls, cat, cksum and tail -f
-      I don’t want to allow root privileges for e.g. find, view or more/less, as they can be used to modify a system

Creating the role is easy; making it easy to use is harder.
-      `sudo cat filename |less` would work fine – the `cat` is run as root, the `less` as the unprivileged user. I can create a little script utility called something like “Auditors_less” to remove the need to remember the syntax.
-      `dzdo cat filename > ~/my_copy_of_filename` would work for the same reason, and give them a local copy to work with. Call it “Auditors_cp” or just “Acp”
(`dzdo` is the Centrify equivalent to sudo)

Replacing the functionality of `find` is the part I can’t figure out. The output of `find` gives the full path to a file. `find` also allows you to select on ownership, permissions etc., but that part could be replaced by
`dzdo ls -l |grep {pattern}`

So a scriptlet that takes a starting directory as input and produces output in the form
/path/to/file : ls -l output of file
would be great, as grep can filter the output, e.g. for globally writable files/directories (rough sketch of what I mean below).

I’ve found similar questions on formatting `ls -lR` output on stackoverflow.com, but no usable answers – general opinion seems to be…
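A rough sketch of the scriptlet I have in mind (calling it Auditors_find is my own placeholder; the script itself would be the thing granted via sudo/dzdo):

#!/bin/bash
# Auditors_find: read-only recursive listing for the auditor role.
# Usage: dzdo Auditors_find /starting/directory
# Prints "/full/path : <ls -ld output>" so the result can be piped to grep.
start="${1:-.}"
find "$start" -exec sh -c 'printf "%s : %s\n" "$1" "$(ls -ld "$1")"' _ {} \;

Grep can then filter on the permission field, e.g. for world-writable entries.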
0
What is the latest version of EMC PowerPath, and how do I install EMC PowerPath on Red Hat Linux 7?
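If it follows the usual pattern, my understanding is that the install is an rpm from the Dell EMC support downloads plus license registration, roughly (the package file name and license key are placeholders):

rpm -ivh EMCPower.LINUX-<version>.x86_64.rpm
emcpreg -add <license-key>
powermt config
powermt display dev=all

but I'd like confirmation of the supported procedure for RHEL 7.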
0

So let me start this off by saying I have no control over how we do things. The systems are configured to be as functional and secure as possible, so things that would work on a home system may not work here, such as SSH keys.

So here is my question:

I have 831 systems that I audit on a weekly basis. These systems are broken down into networks, but they can all be reached via a NetApps/security server.

I have, over the past few months, been able to write a main menu and many sub-menus to achieve all my goals for automating the audits, with the exception of one network that is a bit more complex: it doesn't have a direct path to the NetApps/security server, so it has to hop from one server to the next to reach it.

The path looks something like this:
Security --> Network 1 --> Network 2 -->Host

User1 is an Active Directory account
User2 is an LDAP Account

I am writing the menu option to do the audits and move the audit findings to the security server.

So the current way I do it is to run a single script each time. For this particular network/host it looks like this:
sshpass -p $pw ssh -q -t $user1@Network1 "ssh -q -t $user1@Network2 'ssh -q -t $user2@Host \"sudo su -; ./audit.sh\"'"

Then I have to do this:
sshpass -p $pw ssh -q -t $user1@Network1 "ssh -q -t $user1@Network1 'sudo chmod 664 /tmp/audit-backup*;  sudo scp -q /tmp/audit-backup* $user1@Network1:/tmp; sudo rm -f /tmp/audit-backup*'"

And lastly I need to do this:
sshpass -p …
0
A Linux machine was cloned, but now the drive shows as read-only. I am not strong with Linux; is there a way to change that?
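If it's just the root filesystem that has dropped to read-only, my understanding is the first checks are along these lines (the device name is only an example):

mount | grep ' / '
mount -o remount,rw /
dmesg | grep -i 'read-only\|error'

with an fsck from rescue media if the remount fails because of filesystem errors.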
0
I had this question after viewing "Solaris LDAP Client failure".

Team - I am experiencing the same issue, but I don't know how to allow anonymous access to the directory server as per the solution from user jw124210.

Please help!
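For reference, my understanding is that anonymous access in 389 DS is controlled by the nsslapd-allow-anonymous-access setting on cn=config, which can be changed with something like (the bind DN is the usual default):

ldapmodify -x -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-allow-anonymous-access
nsslapd-allow-anonymous-access: on
EOF

Is that the right knob, or is there more to it?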
0
Experts - I am in the process of configuring the Solaris LDAP client for 389 DS. I have created a profile as below:

dn: cn=shades, ou=profile,dc=my,dc=domain,dc=com
credentialLevel: proxy
serviceAuthenticationMethod: pam_ldap:tls:simple
defaultServerList: ldap.my.domain.com ldap2.my.domain.com
authenticationMethod: tls:simple
defaultSearchBase: dc=my,dc=domain,dc=com
objectClass: top
objectClass: DUAConfigProfile
cn: shades
serviceSearchDescriptor: passwd:ou=People,dc=my,dc=domain,dc=com?sub
serviceSearchDescriptor: shadow:ou=People,dc=my,dc=domain,dc=com?sub
serviceSearchDescriptor: user_attr:ou=People,dc=my,dc=domain,dc=com?sub
serviceSearchDescriptor: audit_user:ou=People,dc=my,dc=domain,dc=com?sub
serviceSearchDescriptor: group:ou=Group,dc=my,dc=domain,dc=com?sub

When I run the command below to initialize the client:

#ldapclient init -v -a profileName=shades -a domainName=example.com -a proxyDN="cn=proxyagent,ou=profile,dc=example,dc=com" -a proxyPassword="password" ldap.my.domain.com
Parsing profileName=shades
Parsing domainName=example.com
Parsing proxyDN=cn=proxyagent,ou=profile,dc=example,dc=com
Parsing proxyPassword=<password>
Arguments parsed:
        domainName: example.com
        proxyDN: cn=proxyagent,ou=profile,dc=example,dc=com
        profileName: shades
        proxyPassword: <password>
        defaultServerList: <ldap.my.domain.com>
Handling init option
About to configure machine by downloading a profile
Can not find the shades …
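For reference, a way to confirm from a box with the OpenLDAP client tools that the profile entry is actually visible on the server (host and base DN as in the profile above):

ldapsearch -x -H ldap://ldap.my.domain.com -b "ou=profile,dc=my,dc=domain,dc=com" "(cn=shades)"

My understanding is that ldapclient downloads the profile anonymously, so if an anonymous search can't see it, the init will fail the same way.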
0
Hi EE,

I am using Veeam Backup and Replication to perform backups of some Linux-based VMs. Recently I started getting the error below:

Getting VM info from vSphere
Error: DiskLib error: [16].No file exists for given path -- File open failed: Could not find the file --tr:Failed to start file downloading.
VMFS path: [LOCALSTORE] /vmfs/volumes/54c79103-71bdf1a2-4420-001f295aced9/Generic/generic.vmx].
--tr:NFC storage connection is unavailable. Storage: [stg:54c79103-71bdf1a2-4420-001f295aced9,nfchost:ha-host,conn:10.10.1.11]. Storage display name: [LOCALSTORE]. Failed to create NFC download stream. NFC path: [nfc://conn:10.10.1.11,nfcho

So far I have deleted the VMs from the backup job and re-added them, and created new backup jobs, but I still get the same error message.
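For reference, one thing I can do is SSH to the ESXi host and check whether the path from the error actually exists:

ls -l /vmfs/volumes/54c79103-71bdf1a2-4420-001f295aced9/Generic/generic.vmx

to rule out a stale .vmx path being referenced by the job.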
0
Hello guys,

I'm trying to get a couple of values from my QNAP over SNMP, but I keep getting a timeout error...
I've already disabled DDoS detection and increased the MaxPacketPerSecond rate to 1000, but I still get timeouts.
The NAS is fully upgraded to the newest firmware, version 4.3.3.0154.

I'm using snmpwalk from another Linux machine.
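For reference, the kind of call I'm making looks roughly like this (the community string and the OID are placeholders):

snmpwalk -v 2c -c public -t 10 -r 3 <nas-ip> system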

Thanks for any help

Regards

Jiri
0
Hi to all of you,
I have a server running Red Hat Linux 7 with Nessus Professional installed. I need to move Nessus to a new server and migrate all the activity from the last few years (100s of network scans).

Is there a procedure to perform a backup and restore of Nessus to a different server, maintaining all the jobs and reports?
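From what I've read, the usual approach is to stop the service and copy the Nessus data directory across (the paths assume a default install; I'd welcome confirmation):

systemctl stop nessusd
tar czf nessus-data.tgz /opt/nessus/var/nessus
# copy the archive to the new server, install the same Nessus version there,
# stop nessusd, unpack over /opt/nessus/var/nessus, then start nessusd again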
Thank you
Carlettus
0
I am weary of dealing with web host companies that have creative pricing plans that drive me up a wall. I  would like to know the name of a company that charges ONE rate from the word GO all the way through at least 5 more years of renewals. I am a VERY low user of data. I simply want a reliable host with 24/7 technical help available for photos and emails. Can anyone recommend one or two? I am with one now that is demanding more than DOUBLE what I paid for the first few years! It isn't going to happen.
0
Hello, I'm a new employee taking over development efforts at my company. Currently we're trying to update our live site with minor changes. The GitHub repo has been updated, and I've made my way to the system that needs updating. However, now I'm having issues gaining root access to the machine. It is possible with DigitalOcean to reset the root password, but the machine has SSH restricted to key-based access only. I'm not sure where to go from here due to inexperience, so any help will be appreciated.
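If I understand the options, one way forward might be to reset the root password and log in through the DigitalOcean web console (which doesn't go over SSH), then add my own key, roughly:

# on my workstation
ssh-keygen -t ed25519
# in the droplet's web console, logged in as root (the public key is a placeholder)
mkdir -p /root/.ssh && echo 'ssh-ed25519 AAAA... me@workstation' >> /root/.ssh/authorized_keys

Does that sound right, or is there a better way to recover access?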
0

When I unzip and upload restored files from backups, the owner ID & group ID are set to 0.

Why are they not taken from my backup?

Is there a way to get or set the owner ID to 512 & group ID to 99 for all folders, subfolders and files?
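For the second part, I assume a recursive chown would do it (the path is a placeholder):

chown -R 512:99 /path/to/restored

And for the first part, from what I recall unzip only restores UID/GID when run as root with the -X flag, and only if the archive recorded them in the first place.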
0
I recently extended a virtual disk that is used by a Red Hat server in ESXi 6. What do I need to do so the Red Hat OS recognizes the new storage?
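A rough outline of what I believe is involved (device, volume group and mount point names are placeholders, and the exact steps depend on whether LVM is in use):

echo 1 > /sys/class/block/sdb/device/rescan    # make the kernel see the new size
# grow the partition (growpart/fdisk), then if LVM:
pvresize /dev/sdb1
lvextend -l +100%FREE /dev/vg00/lv_data
xfs_growfs /mountpoint                         # or resize2fs for ext4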
0
Hi Team,

We have a folder called /opt/app.
Inside /opt/app we are running applications, Elasticsearch and Kibana, in the respective folders below:

/opt/app/elasticsearch
/opt/app/kibana

and processes are running in those folders as well.

Due to a space issue, I now want to mount a new filesystem at /opt/app on the fly, without any data loss.
Is that possible or not? If it is, could you please suggest the best way to proceed?
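A sketch of the general approach I've seen suggested (the device name, temporary mount point and service unit names are placeholders, and the services do need a brief stop):

mkfs.xfs /dev/sdb1
mount /dev/sdb1 /mnt/newapp
systemctl stop elasticsearch kibana
rsync -aHAX /opt/app/ /mnt/newapp/
umount /mnt/newapp
mount /dev/sdb1 /opt/app      # old copy stays on the root filesystem underneath; add the mount to /etc/fstab
systemctl start elasticsearch kibana

Is there a way to avoid even that short outage, or is this the safest path?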
0
I have an older XAMPP installation hosting one website that went down. I have backups that I created using mysqldump before the crash, and I was wondering: if I bring up a new Linux, Apache, MySQL, PHP installation, would it be possible to do a restore? What should I be looking out for before rebuilding this server? If the versions of the various components are different, will that affect the recovery? Thanks.
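For reference, my understanding is that restoring the dump itself is just (database name and file name are placeholders):

mysql -u root -p mydatabase < backup.sql

creating the database first if the dump doesn't include CREATE DATABASE statements.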
0
Can I add RHEL 6 and RHEL 7 repos to Spacewalk?

I have an account on Red Hat where I can download all the RHEL 6/7 packages, but I'm not sure how to add them to Spacewalk so I can use Spacewalk for a lot of operations.
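From what I've read, the general pattern is to sync the content into a local repo using the Red Hat entitlement and then point a Spacewalk channel at it, roughly (the channel label and paths are placeholders):

reposync --repoid=rhel-7-server-rpms -p /var/spacewalk/repos/
spacewalk-repo-sync -c rhel7-x86_64-base -u file:///var/spacewalk/repos/rhel-7-server-rpms

Is that the recommended way, or is there a cleaner integration?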
0
Here is a short snippet of a CentOS web server log. Clients are connecting every second or so.

What I need is a way to search these logs from the Linux command line looking for gaps, meaning periods where clients didn't connect, with a configurable threshold (seconds, minutes, hours) to search for. (Rough sketch of what I mean after the sample below.)

x.x.x.x - - [12/Sep/2017:03:40:05 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:05 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:05 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:06 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:06 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:06 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:06 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:07 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:07 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:07 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:07 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:08 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:08 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - - [12/Sep/2017:03:40:08 -0400] "HEAD / HTTP/1.1" 200 0 "-" "otm/1.0.0" "-"
x.x.x.x - …
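The kind of thing I have in mind, using GNU awk to convert the bracketed timestamps to epoch seconds and report any gap larger than a threshold (the log file name and the 60-second threshold are placeholders):

awk -v gap=60 '
{
    # field 4 looks like [12/Sep/2017:03:40:05 ; split it into date/time parts
    split($4, a, /[\[\/:]/)
    mon = (index("JanFebMarAprMayJunJulAugSepOctNovDec", a[3]) + 2) / 3
    t = mktime(a[4] " " mon " " a[2] " " a[5] " " a[6] " " a[7])
    if (prev && t - prev > gap)
        printf "%d second gap before: %s\n", t - prev, $0
    prev = t
}' access_log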
0
