

Unix OS

32K Solutions · 18K Contributors

Unix is a multitasking, multi-user computer operating system originally developed in 1969 at Bell Labs. Today it lives on in many flavors and licensees, both commercial and open-source, including FreeBSD, Hewlett-Packard's HP-UX, IBM AIX and Apple macOS. Apart from its command-line interface, most UNIX variants support the standardized X Window System for GUIs, with the exception of macOS, which uses a proprietary windowing system.


I'll need a shell (Bash) script (or rather an exact command) that outputs:

a) file names on the Solaris system, one file per line
b) files that were modified/created in the last 1470 minutes
c) excluding FIFO files, symbolic links and sockets (i.e. *.sock)
d) only files between 1 byte and 20 MB in size
e) files in /dev, /devices, /kernel, /cdrom, /platform, /proc, /net
f) files mounted on NFS

I have about 1 million files, so I'm hoping the command/script that produces
the output file can complete within 30 minutes, so it may need efficient coding.
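A minimal sketch of such a find command, assuming GNU find (often installed as gfind from OpenCSW, since the native Solaris 10 find has no -mmin) and reading (e) and (f) as exclusions; drop the matching -prune clauses if those files should be included instead. The output file name is an assumption:

#!/bin/bash
# Sketch only; the paths, the 1470-minute window and the 20 MB ceiling come from
# the question.  -type f keeps only regular files, which already excludes FIFOs,
# symbolic links and sockets.
OUT=/var/tmp/recent_files.txt
gfind / \
    \( -path /dev -o -path /devices -o -path /kernel -o -path /cdrom \
       -o -path /platform -o -path /proc -o -path /net \) -prune -o \
    -fstype nfs -prune -o \
    -type f -mmin -1470 -size +0c ! -size +20971520c -print > "$OUT"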


I need to amend the following script to read (i.e. for the AV to scan) the above output file:

#!/bin/bash
LOGFILE="/var/log/clamav/$(hostname)-$(date +'%Y-%m-%d').log"
## suggestion: change the dirs below to / (root) but exclude databases
DIRTOSCAN="/var /opt /home /etc /tmp /export"

for S in ${DIRTOSCAN}; do
    ## add further "grep -v" filters for any other paths to exclude
    DIRSIZE=$(du -sh "$S" 2>/dev/null | grep -v "/proc" | grep -v "/dev" | cut -f1)

    echo "Starting a daily scan of the $S directory.
Amount of data to be scanned is $DIRSIZE."

    clamscan -ri "$S" >> "$LOGFILE"
done
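For the amendment itself, a hedged sketch: clamscan can read its targets from a file list via --file-list (-f), so the per-directory loop could be replaced with something like the following (the list path is an assumption matching the find sketch above):

LOGFILE="/var/log/clamav/$(hostname)-$(date +'%Y-%m-%d').log"
clamscan -ri --file-list=/var/tmp/recent_files.txt >> "$LOGFILE"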
0

I'm looking for ways (most likely auditctl or audit) to monitor Solaris files
(/etc/group, sudoers, root's cron.*) and, if possible, email out a notification
once the content of the file(s) is modified.

I will need exact/detailed steps.

I'm on Solaris 10 x86.

File integrity monitoring tools (like Tripwire) are not an option, as we just
want to use built-in Solaris tools.
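A hedged sketch using the built-in Solaris 10 BSM auditing (auditctl is Linux-only); the audit classes, watched file and mail recipient below are illustrative assumptions:

# Enable BSM auditing (requires a reboot):
/etc/security/bsmconv

# In /etc/security/audit_control, audit file-write and file-attribute-modify events:
#   flags:fw,fm
#   naflags:lo

# Pull modifications to a watched file out of the audit trail and mail them:
auditreduce -o file=/etc/group /var/audit/* | praudit | \
    mailx -s "/etc/group was modified" admin@example.com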
0
I'll need to monitor several "privilege escalation related" Solaris 10 & RHEL 6 files using
ACLs (Access Control Lists):

a) /etc/group, /etc/sudoers, /etc/cron.daily (or .weekly, or any crons owned by root):
    the ACL should send to syslog (so that we can pipe to SIEM) when permissions, ownership
    or contents of the above files are changed

b) the visudo, sudo, usermod, useradd command binaries:
    when these are executed/run, the ACL should send to syslog (who ran them and when)

I'd appreciate exact setfacl commands (or the actual commands/settings) for RHEL 6 and
Solaris 10 x86 as samples.
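A hedged sketch for the RHEL 6 side, using auditd watches rather than ACLs (audit events can be forwarded to syslog through the audispd syslog plugin); the rule keys are arbitrary labels:

# Watch for writes/attribute changes on the sensitive files:
auditctl -w /etc/group      -p wa -k group_change
auditctl -w /etc/sudoers    -p wa -k sudoers_change
auditctl -w /etc/cron.daily -p wa -k cron_change

# Record execution of the privilege-related binaries (who and when):
auditctl -w /usr/sbin/visudo  -p x -k priv_exec
auditctl -w /usr/sbin/usermod -p x -k priv_exec
auditctl -w /usr/sbin/useradd -p x -k priv_exec

# Query the trail later:
ausearch -k sudoers_change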
0
I am trying to get a specific pid and ONLY that pid, not others that might have that pid embedded in them, e.g.

345
1345
5345

I only want to get 345.  If I could use Perl it would be easy, but I have to use ksh.

Any ideas?

Thanks!
David
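A hedged, ksh-compatible sketch: match the pid as a whole field instead of a substring, so 345 does not also match 1345 or 5345 (use nawk instead of awk on Solaris; ps options may vary slightly by platform):

ps -e -o pid= | awk -v pid=345 '$1 == pid { print $1 }'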
0
I need to harden a Solaris 10 box that connects to the Internet from a DMZ.

Does anyone have a Solaris 10 hardening script that, once run, will harden for
a) the Level 2 Profile
b) "Scored" items?

The attached script, which I got from GitHub, doesn't seem to quite fit what's needed;
with all the "printf ..." statements it's more of an audit listing than actual hardening.


From CIS benchmark:

Scoring Information
================
A scoring status indicates whether compliance with the given recommendation impacts the assessed target's benchmark score. The following scoring statuses are used in this benchmark:
Scored  <==
Failure to comply with "Scored" recommendations will decrease the final benchmark score. Compliance with "Scored" recommendations will increase the final benchmark score.
Not Scored
Failure to comply with "Not Scored" recommendations will not decrease the final benchmark score. Compliance with "Not Scored" recommendations will not increase the final benchmark score.



Profile
=====

 Level 1
Items in this profile intend to:
o be practical and prudent;
o provide a clear security benefit; and
o not inhibit the utility of the technology beyond acceptable means.
 Level 2  <==
This profile extends the "Level 1" profile. Items in this profile exhibit one or more of the following characteristics:
o are intended for environments or use cases where security is paramount
o act as a defense-in-depth measure
o may negatively inhibit the utility or performance of the …
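For illustration, the difference between an audit-style script and actual hardening, using one example Solaris 10 service (the FMRI is just an example of a CIS-style item):

# An audit script only reports state:
printf "telnet service: "
svcs -Ho state svc:/network/telnet:default

# A hardening (remediation) script actually changes it:
svcadm disable svc:/network/telnet:default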
0
How can I use curl to download all artifacts from an Artifactory folder?
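A hedged sketch using Artifactory's storage REST API to list a folder and fetch each child artifact (server URL, repo, path and credentials are placeholders; assumes the jq utility is available for JSON parsing):

BASE="https://artifactory.example.com/artifactory"
REPO="my-repo"
FOLDER="path/to/folder"
curl -s -u user:password "$BASE/api/storage/$REPO/$FOLDER" \
  | jq -r '.children[] | select(.folder == false) | .uri' \
  | while read -r f; do
        curl -s -u user:password -O "$BASE/$REPO/$FOLDER$f"
    done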
0
For ClamAV's dependent packages, as indicated by
  https://www.opencsw.org/packages/CSWclamav/ ,

I can't get 2 packages for Solaris 10 (Update 9) x86:

1. common: I can only locate the i386 package for SunOS 5.8 in the URL below
  http://rsync.opencsw.org/opencsw/testing/i386/5.10/

Likewise for
2. libbz2_1_0: I can only locate the one for SunOS 5.9.


If anyone has access to an Oracle subscription, could you assist by downloading the above
packages and attaching them here?


For the 10 dependent packages, what's given is for i386, so if you can help
provide versions for Solaris 10 x86, that would be appreciated:
https://www.opencsw.org/packages/CSWclamav/
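A hedged alternative: OpenCSW's pkgutil normally resolves and downloads dependencies itself, which avoids hunting for individual packages (one-off bootstrap shown first):

pkgadd -d http://get.opencsw.org/now      # bootstrap pkgutil
/opt/csw/bin/pkgutil -U                   # refresh the catalog
/opt/csw/bin/pkgutil -i clamav            # installs CSWclamav plus its dependencies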
0
https://www.manageengine.com/products/eventlog/system_requirement.html

We're trying to quickly set up ManageEngine EventLog Analyzer (SIEM) for our
Solaris 10 x86 and RHEL 6 servers; all are 64-bit OS.

Somehow I can't locate anything for Solaris 10 x86: I need the agent installers.
Still looking for RHEL 6 as well. I'm not too good at navigating the site.

Can anyone help locate and give the exact links?
0
What is the difference between these two commands?

info_file_name=`echo $i | cut -d "/" -f 7`
 
info_file_name=`echo $i | cut -d "/" -f 6`
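For illustration: -f selects the Nth "/"-separated field, and a leading "/" makes field 1 empty, so for a hypothetical path:

i=/a/b/c/d/e/file.txt
echo "$i" | cut -d "/" -f 6    # prints: e        (the 6th field)
echo "$i" | cut -d "/" -f 7    # prints: file.txt (the 7th field)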
1
How do I add a wildcard (*) DNS entry to the /etc/hosts file in CentOS? This is to allow S3 calls to a Cloudian instance.
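A hedged note: /etc/hosts itself does not support wildcard entries. A common workaround on CentOS is a small local resolver such as dnsmasq; the domain and IP below are placeholders:

yum install -y dnsmasq
echo 'address=/.s3.cloudian.local/10.0.0.50' >> /etc/dnsmasq.conf   # wildcard domain -> IP
service dnsmasq start
# then make 127.0.0.1 the first nameserver in /etc/resolv.conf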
0

Actually the file descriptor table is not a real table; it's just an array of pointers to the "open file table" (struct file). But let's say we view it as a table. What are the columns? For example:

FD   | Pointer to "open file table"
----------------------------------
...  | ...

In short, that's the question. I see a lot of figures on the internet, but they are all different. For example, see:

http://faculty.winthrop.edu/dannellys/csci325/10_shared.htm
There they have a column "fd flags" (read/write), but I would think that this column is part of the "open file table" and not part of the "file descriptor table". See for example: http://man7.org/linux/man-pages/man2/open.2.html


       A call to open() creates a new open file description, an entry in the
       system-wide table of open files.  The open file description records
       the file offset and the file status flags (see below).  A file
       descriptor is a reference to an open file description; this reference
       is unaffected if pathname is subsequently removed or modified to
       refer to a different file.  For further details on open file
       descriptions, see NOTES.

       The argument flags must include one of the following access modes:
       O_RDONLY, O_WRONLY, or O_RDWR.  These request opening the file read-
       only, write-only, or read/write, respectively.
0
How can I set up 2 subnets in AWS and be able to route between them?


NACLs? Subnet config, etc.?

I have never done this before and am very, very new to AWS.
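A hedged sketch with the AWS CLI (the VPC ID, CIDRs and zones are placeholders). Two subnets in the same VPC can already reach each other through the VPC's implicit "local" route, so no extra routes or NACL changes are needed unless the defaults have been restricted:

aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.2.0/24 --availability-zone us-east-1b
# Verify the route table contains the local route covering the VPC CIDR:
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0123456789abcdef0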
0
Can anyone tell me why I'm getting this error on AIX?  It does the same thing from smitty too.  The useradd created the user fine; I verified he's in there through smitty and I can su - to it.

aixutil -[root]/root>useradd -m -g staff -s /bin/ksh -c "Scott Field - BMC" sfield8
3004-689 User "sfield8" exists.
aixutil -[root]/root>echo "sfield8:password" | chpasswd
3004-687 User "sfield8" does not exist.


Thanks!
David
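A couple of hedged checks on AIX: confirm which registry the user landed in, and make sure the name reaches chpasswd exactly as typed (no stray whitespace from the echo):

lsuser -a registry SYSTEM sfield8          # which user registry holds the account
printf 'sfield8:password\n' | chpasswd     # printf avoids shell echo quirks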
0
Let's start with a useless example of input redirection:

less 1< /test.txt



The result is:

Missing filename ("less --help" for help)

This I understand, because:

LESS-PROCESS:
FD 0 <- terminal file (keyboard)
FD 1 <- /test.txt
FD 2 -> terminal file (monitor)

FD 0 needs to get some content from a file, but there is no file in this case. There is /test.txt but it points to the wrong fd. Now let's take a look at a useless example of output redirection:

less 0> /test.txt



LESS-PROCESS:
FD 0 -> /test.txt
FD 1 -> terminal file (monitor)
FD 2 -> terminal file (monitor)

The program doesn't write any output to file descriptor 0, so "nothing" will be written to /test.txt. That's why you will always end up with an empty /test.txt file. File descriptor 0 opens /test.txt for writing and not for reading, so the less process doesn't get any file to read from. Then why is the result not:

Missing filename ("less --help" for help)

Instead, less is acting as if it got an empty file as input. The file /test.txt is empty in the end, but this is about output redirection and not about input redirection, so there is no input. That's the reason why I would expect "Missing filename". Why is this not the case?
0
See: https://stackoverflow.com/questions/6170598/can-anyone-explain-to-me-what-the-purpose-of-dev-tty


You can start with the POSIX spec. From there, read about the "controlling terminal" of a process.

But just for example... /dev/tty is how a command like "ssh" can read your password even if its standard input comes from somewhere else:

tar cf - . | ssh dest 'tar xf -'



If ssh decides to prompt you for a password, it will read it from /dev/tty instead of stdin.

Conceptually, /dev/tty is "the keyboard and text terminal". More or less.

Let's say the "terminal file" of my current session is /dev/pts/1. In such a case, what's the difference between "/dev/pts/1" and "/dev/tty"? And if they are basically the same, why is "/dev/tty" used instead of "/dev/pts/1"?

And:

/dev/tty is how a command like "ssh" can read your password even if its standard input comes from somewhere else

Let's say the standard input comes from somewhere else, so let's say we have:

FD 0 <- file
FD 1 -> /dev/pts/1
FD 2 -> /dev/pts/1

The way I see it, the fact that the standard input comes from somewhere else doesn't mean that /dev/pts/1 cannot be read. The password comes from the keyboard, and /dev/pts/1 represents, among other things, the keyboard, right? So I still don't see what exactly the purpose of /dev/tty is.

@noci: I know you know the answer, but I don't understand your explanation so I've made this post so maybe other people can explain it to me in a way that I understand it.
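A hedged demo in bash: stdin of the command group comes from a pipe, yet reading /dev/tty still reaches the keyboard of the controlling terminal, which is exactly what ssh does for password prompts:

echo "this is stdin, not the keyboard" | { read -r answer < /dev/tty; echo "you typed: $answer"; }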
0
On a Red Hat Linux system, in a bash shell script, I need some help with an if-then statement that has more than 2 conditions. I basically want to check for this:
A AND B or C
A AND B or D
A AND B or E

Something along these lines, but it doesn't work, and I wondered if I have the correct usage of brackets. It's not what's contained in the evaluation that's the issue; it's the syntax of the AND and OR where there are more than two conditions that I'm struggling with.

if [[ $(find /opt/app -name httptd*.conf | grep -v grep | grep -c http) -eq 0 ] && [ ! -f /etc/init.d/apache ] ||  [ $(find /app -name http*.conf | grep -v grep | grep -c http) -eq 0 ]] || \
[[ $(find /opt/app -name httptd*.conf | grep -v grep | grep -c http) -eq 0 ] && [ ! -f /etc/init.d/apache ] ||  [ $(find /application -name http*.conf | grep -v grep | grep -c http) -eq 0 ]] || \
[[ $(find /opt/app -name httptd*.conf | grep -v grep | grep -c http) -eq 0 ] && [ ! -f /etc/init.d/apache ] ||  [ $(find /application -name manifest* | grep -v grep | grep -c http) -eq 0 ]] ; then
.....
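A hedged sketch of the bracket syntax only: keep each compound test inside a single [[ ... ]], where && and || work directly and parentheses group sub-expressions. The variables below are stand-ins for the real find/grep checks (note also that "grep -v grep" is unnecessary when filtering find output):

a=0                       # stand-in for: $(find ... | grep -c http)
b=/etc/init.d/apache      # stand-in for the file that must not exist
c=1                       # stand-in for the alternative check
if [[ ( $a -eq 0 && ! -f $b ) || $c -eq 0 ]]; then
    echo "A AND B, or C"
fi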
0
I'm reading about "redirection of input" on the internet. I understand what's behind it. For example:

command < file.ext



This is equivalent to:

command 0< file.ext



In general, if you have:

command n< file.ext



then the contents of file.ext go to file descriptor "n" as input. I've checked different websites explaining "input redirection". However, the problem is that I didn't see any good example. I'll discuss some examples I saw:

cat < file.txt



Then I'm thinking, "cat file.txt" does the same, so why do we need it? Another example:

sort < file_list.txt > sorted_file_list.txt



Then I'm thinking, "sort file_list.txt > sorted_file_list.txt" does the same, so why do we need it? Another example:

more < /etc/passwd



Then I'm thinking, "more /etc/passwd" does the same, so why do we need it? That's why these are not really good examples in my opinion. What is a good example to explain the purpose of input redirection in a terminal-window?

Probably internally something like "cat file.txt" is being treated as "cat 0< file.txt" (input redirection), but in a terminal window ... when does it really make sense to use an input redirection in a terminal window? Does someone have a good example?
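Some hedged examples where input redirection genuinely matters, because the command reads only from stdin and takes no filename argument, or behaves differently with one:

tr 'a-z' 'A-Z' < file.txt            # tr has no file operand at all
wc -l < file.txt                     # prints just the count, with no filename attached
while read -r line; do
    echo "got: $line"
done < file.txt                      # the loop reads the file instead of the terminal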
0
First I create a regular file with some contents (manual page of find command):

man find > test.txt



Then I use the less command to display some of these contents:

less test.txt



Now I press CTRL-Z to suspend the process. The process still exists, so now I can execute this command:

lsof | grep 'less'



By doing this, I get an idea of which files are open with respect to the less process. My result:

COMMAND  PID    USER  FD   TYPE  DEVICE  SIZE/OFF  NODE       NAME
less     24565  root  cwd  DIR   0,38    4096      21473055   /
less     24565  root  rtd  DIR   0,38    4096      21473055   /
less     24565  root  txt  REG   0,38    149944    22143102   /usr/bin/less
less     24565  root  mem  REG   9,1               22143102   /usr/bin/less (path dev=0,38)
less     24565  root  mem  REG   9,1               22135172   /usr/lib/locale/locale-archive-rpm (path dev=0,38)
less     24565  root  mem  REG   9,1               21741879   /lib64/libc-2.12.so (path dev=0,38)
less     24565  root  mem  REG   9,1               22265955   /usr/local/lib/libpcre.so.0.0.1 (path dev=0,38)
less     24565  root  mem  REG   9,1               21741743   /lib64/libtinfo.so.5.7 (path dev=0,38)
less     24565  root  mem  REG   9,1               21741946   /lib64/ld-2.12.so (path dev=0,38)
less     24565  root  0u   CHR   136,1   0t0       4          /dev/pts/1
less     24565  root  1u   CHR   136,1   0t0       4          /dev/pts/1
less     


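For reference, a hedged reading of the FD column codes in that lsof output (per the lsof man page):

#   cwd  current working directory      txt  program text (the binary itself)
#   rtd  root directory                 mem  memory-mapped file
#   0u, 1u, 2u  file descriptors 0, 1, 2 opened for both read and write ("u")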
0
Let's say I type the following "in a terminal":

echo 'bla'



In my case, the shell is bash, so I assume the shell/bash-process receives "echo 'bla'" as standard input? Then it sees "echo", so a child process will be started. So then we will have at least:

ECHO PROCESS:
fd 0 (standard input)   <- terminal-file (keyboard)
fd 1 (standard output)  -> terminal-file (monitor)
fd 2 (standard error)   -> terminal-file (monitor)



I thought that for this process, only "bla" is the standard input. And then the output is also "bla", so I'll see "bla" on my monitor.

I was just playing a bit with input redirections and I noticed that the following does not work:

echo < bla-file.txt



After some Google searches, I found out that "echo" does not read from stdin. However, it prints all of its arguments. So it works differently from normal commands. So how do I have to see/change this:

ECHO PROCESS:
fd 0 (standard input)   <- terminal-file (keyboard)
fd 1 (standard output)  -> terminal-file (monitor)
fd 2 (standard error)   -> terminal-file (monitor)



I thought every process has fds 0, 1 and 2 by default? But if fd 0 were something like this:

fd 0 (standard input)   <- nothing



Then it should still be possible to redirect input to something. So this means I cannot see it like that. Does this mean that the echo process doesn't have an fd 0 at all? Or must I not see "echo" as a process with an fd table et cetera?

But the echo command displays something on my monitor, so at least this should be there:

fd 1 (standard output)  -> terminal-file (monitor)


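A hedged demo on Linux (assuming bla-file.txt exists): the redirection is still set up even though echo never reads it, so the fd 0 entry is there, it just goes unused. Using an external echo so it really is a separate process:

/bin/echo 'bla' 0< bla-file.txt        # prints "bla"; its fd 0 is bla-file.txt, unread
ls -l /proc/self/fd 0< bla-file.txt    # shows fd 0 -> bla-file.txt for that process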
1

For this question, let's forget about v-nodes/vnodes. So let's say the contents of a file are located in data block(s) on a real physical disc.

See for example: https://www.usna.edu/Users/cs/aviv/classes/ic221/s16/lec/21/lec.html#orgheadline4

2.3 V-node and I-node Tables

There they explain the inode table. Actually the inode table just leads you to the contents of a file. But I think they forgot to mention something important. Let's say I'm requesting a regular file in a filesystem. In such a case, what do I need the inode table for? I just see it like this:

dentry (possibly more than 1) -> inode -> data block(s)

The inode contains the pointers to the data block(s). So why do we need an inode table? Or is the inode from above actually just an entry in the inode table? If that's true, it's weird, because the inode table is stored in memory, so when restarting the computer all the inodes would be gone. Furthermore, the inode table probably only contains information about open files.

Or are the inodes of open files just cached in memory (in the inode table) to speed things up? Then the purpose of the inode table is, among other things, caching?

Anyway, I'm surprised that they don't say anything about this. I think understanding the inode table starts with the question of why there is an inode table.
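A hedged illustration on Linux of the two layers: inodes live on disk (every file has one), while the kernel keeps an in-memory cache of inodes and dentries for files in use (reading slabinfo may require root):

ls -i /etc/passwd                              # on-disk inode number of the file
df -i /                                        # inode usage of the whole filesystem
grep -E 'inode_cache|dentry' /proc/slabinfo    # in-memory (cached) inode/dentry objects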
0
Hello,

I have a bash script and I want to output its variables into a .csv file.  Most of the variables I can output on one line, but I'm having a problem with variables that are multiline. The two variables are built from the process id of a process, and the process listing contains newline characters. I thought that if I stripped out the newline characters I could assign the result to the variable, but it complains "No such file or directory" in the variable substitution.  The other variable is just netstat -an | grep <port>.  I don't know how to use arrays in bash, but ideally I'd like to be able to keep the newlines in the csv file, if that's possible, since I'm sticking with comma as the separator. Any help would be really appreciated. Hitting my head off the wall!

VAR1="$(ps -ef | grep 27656 | tr '\n' ' ' | sed -e 's/  */ /g')"
printf '%s\n' "$VAR1"
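A hedged sketch for the CSV side: a CSV field can keep embedded newlines if it is wrapped in double quotes (per RFC 4180); the pid, port and output file are placeholders:

pid=27656
ps_out=$(ps -ef | grep "[2]7656")       # the [2] trick avoids matching the grep itself
net_out=$(netstat -an | grep 8080)
printf '"%s","%s"\n' "$ps_out" "$net_out" >> output.csv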
0
File descriptor table:      Open file table
FD 0 (stdin)                ?
FD 1 (stdout)               ?
FD 2 (stderr)               ?



By default file descriptors 0, 1 and 2 are associated with the terminal. The keyboard input is associated with the standard input. The monitor is associated with the standard output and standard error.

The question is: Do fd 0,1,2 all refer to the same entry in the "open file table"? Or do they refer to two entries?

 
FD 0  -> entry A
FD 1  \
        > entry B
FD 2  /



Or do they refer to three entries?

 
FD 0  -> entry A
FD 1  -> entry B
FD 2  -> entry C



This seems a pretty basic question, but I'm reading different things about it on the internet.

See: https://www.usna.edu/Users/cs/aviv/classes/ic221/s16/lec/21/lec.html#orgheadline6

If fd 1 and fd 2 refer to different entries in the open file table, then this should also be the case for fd 0. So according to this website, they refer to three different entries in the open file table.
 
But now see: https://www.enseignement.polytechnique.fr/informatique/INF422/INF422_8.pdf#page=160    (page 160, example of no redirection)

There, it's like:

 
FD 0  -> entry A
FD 1  \
        > entry B
FD 2  /



So according to that website, they refer to two different entries in the open file table.

And see: https://www.experts-exchange.com/questions/29119936/How-the-open-file-table-entries-look-like-for-stdin-stdout-stderr.html#a42694025


the fd[0] , fd[1] & fd[2] should all point to the same central entry

According to this, they all refer to the same entry in the open file table.

I can execute the following command:

lsof | grep 'bash'



This prints, among other things:



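A hedged way to probe this on Linux: fds that were dup'd from one another share a single open file description (and therefore one file offset), while a separate open() gets its own. The same reasoning applies to fds 0, 1 and 2, depending on how the terminal was opened and dup'd:

exec 7> /tmp/ofd-test
exec 8>&7                                        # dup'd fd: same open file description
echo hello >&7
grep pos /proc/$$/fdinfo/7 /proc/$$/fdinfo/8     # same offset through both fds
exec 9> /tmp/ofd-test                            # separate open(): its own description
grep pos /proc/$$/fdinfo/9                       # offset 0
exec 7>&- 8>&- 9>&-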
0
If my rsyslog.conf is configured to write *.info, *.warn, kern.* and some others to /var/log/messages, is there any way to identify the local6.info messages apart from the kern.* and *.warn and the others in /var/log/messages? I've noticed that the messages sometimes contain kern and warn entries, but I'm not sure which ones are *.info and whether there's an easy way to identify them.
I'd rather not have to configure /etc/rsyslog.conf with another log file just for *.info if it can be avoided. If there's no other way then I might just have to do it, but I'm curious what the local6 informational messages actually are.
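A hedged sketch for /etc/rsyslog.conf: a custom template can stamp each line written to /var/log/messages with its facility and severity, so local6.info entries become identifiable in place. The selector below mirrors the common default; adjust it to match the existing line:

$template VerboseFmt,"%timegenerated% %HOSTNAME% %syslogfacility-text%.%syslogseverity-text% %syslogtag%%msg%\n"
*.info;mail.none;authpriv.none;cron.none    /var/log/messages;VerboseFmt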
0
How do you append the output of a command that is run, plus the value of other variables, to a log file in a bash script?
This is what I have just now:

#!/bin/bash
host=$(hostname)
date=$(date '+%d%m%Y:%H%M')
log="installlogfile.txt"
runbinary 2>&1 | tee -a "${log}"

Instead of this I want to be able to also append ${date} and ${host} to installlogfile.txt, but tee -a with multiple variables doesn't work. Any ideas on how to do this in bash would be much appreciated. If I echo or print the variables before I run the command there are newline characters, and I would like the output in the log file to be: $(hostname) ${date} output-from-the-command.
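A hedged sketch: write the host and date first without a trailing newline, then let tee append the command's output to the same log, so the first logged line starts with hostname and date (names taken from the script above):

log="installlogfile.txt"
printf '%s %s ' "$(hostname)" "$(date '+%d%m%Y:%H%M')" >> "$log"
runbinary 2>&1 | tee -a "$log"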
0
I have several Solaris systems at work all running SunOs 5.10, also known as Solaris 10.  My hardware team recently updated all of our Solaris-10 OS boxes, primarily to apply security updates.  We have a ton of bash shell scripts and now some scripts work on some Solaris machines, and some don't work on other Solaris machines.  Hardware team believes they applied the same updates to all of our Solaris boxes.  One of my smart team mates dug a little bit further and discovered different versions of bash running on the different machines.

How do I continue to run this problem down, i.e. the differing versions of bash running on the various Solaris machines?  I need to have a lot of facts, details, etc. when I take this to my management team.  Our application is getting ready to go through major testing and the hope was to have multiple machines to test on.  Needless to say, we need to get this fixed ASAP, so that all of our bash scripts work as they did prior to the Solaris update.

I understand there is:  $ echo $BASH_VERSION.  Need more than this.  Appreciate the help -thanks in advance.

Fast forward
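A hedged sketch for collecting the facts: record the bash binary, its version, the owning Solaris package and any bash-related patches on each host (the host list is a placeholder):

for h in host1 host2 host3; do
    ssh "$h" 'echo "== $(hostname)";
              which bash; bash --version | head -1;
              pkginfo -l SUNWbash 2>/dev/null | egrep "PKGINST|VERSION";
              showrev -p | grep -i bash'
done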
0
