Wednesday, November 1, 2017

In this tutorial we are going to set up an LDAP server using 389 Directory Server. 389 Directory Server is an enterprise-class open source LDAP server developed by the Red Hat community.
Features
– Multi-Master Replication, to provide fault tolerance and high write performance.
– Scalability: thousands of operations per second, tens of thousands of concurrent users, tens of millions of entries, hundreds of gigabytes of data.
– Active Directory user and group synchronization.
– Secure authentication and transport (SSLv3, TLSv1, and SASL).
– Support for LDAPv3.
– On-line, zero downtime, LDAP-based update of schema, configuration, management and in-tree Access Control Information (ACIs).
– Graphical console for all facets of user, group, and server management.
Prerequisites
– The LDAP server should have a valid FQDN. Add the LDAP server's details to your DNS server.
– Adjust the firewall to allow ldap ports.
– Enable the EPEL and REMI repositories to avoid any dependency problems.
Follow the links below to add the EPEL and REMI repositories.
In this how-to, my LDAP server details are as follows.
Operating System : CentOS 6.5 server
Host name        : server.unixmen.local
IP Address       : 192.168.1.101/24.
Set your server's fully qualified domain name in the /etc/hosts file.
Edit file /etc/hosts,
# vi /etc/hosts
Add your hostname as shown below.
[...]
192.168.1.101   server.unixmen.local    server
Change the values as per your requirements. This tutorial is applicable to all RHEL/CentOS/SL 6.x series.
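A quick way to confirm the entry is picked up is getent, which resolves names through NSS (and therefore /etc/hosts). The sketch below uses localhost only because it is guaranteed to resolve on any system; in practice substitute your own FQDN, e.g. server.unixmen.local:

```shell
# Check that a name resolves through /etc/hosts (the NSS "files" source).
# localhost is used here because it resolves on any system; replace it
# with your server's FQDN (e.g. server.unixmen.local) on a real setup.
getent hosts localhost
```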
Firewall Configuration
Add the following ldap ports to your iptables. To do that, edit file “/etc/sysconfig/iptables”,
# vi /etc/sysconfig/iptables
Add the following lines.
[...]
-A INPUT -m state --state NEW -m tcp -p tcp --dport 389 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 636 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9830 -j ACCEPT
[...]
Restart the firewall.
# service iptables restart
Performance and Security tuning for LDAP server
Before installing LDAP server, we have to adjust some files for performance and security.
Edit file “/etc/sysctl.conf”,
# vi /etc/sysctl.conf
Add the following lines at the end.
[...]
net.ipv4.tcp_keepalive_time = 300
net.ipv4.ip_local_port_range = 1024 65000
fs.file-max = 64000
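A sketch of applying and checking these kernel settings: `sysctl -p` needs root, but the current values can always be read back from /proc by any user.

```shell
# Apply the new settings without a reboot (requires root):
#   sysctl -p
# The values currently in effect can be read from /proc by any user:
cat /proc/sys/net/ipv4/tcp_keepalive_time
cat /proc/sys/fs/file-max
```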
Edit file “/etc/security/limits.conf”,
# vi /etc/security/limits.conf
Add the following lines at the bottom.
[...]
*               soft     nofile          8192   
*               hard     nofile          8192
Edit file “/etc/profile”,
# vi /etc/profile
Add the line at the end.
[...]
ulimit -n 8192
Edit file “/etc/pam.d/login”,
# vi /etc/pam.d/login
Add the line at the end.
[...]
session    required     /lib/security/pam_limits.so
Now restart the server.
Install 389 Directory Server
Create an LDAP user account.
# useradd ldapadmin
# passwd ldapadmin
Now install 389 Directory Server using the command:
# yum install -y 389-ds openldap-clients
Configure LDAP server
Now it's time to configure the LDAP server. It is quite a lengthy process. Run the following command to configure 389 Directory Server.
# setup-ds-admin.pl
You will be asked a number of questions. Please read the instructions carefully and answer them accordingly.
If you make a mistake and want to go back to the previous screen, press CTRL+B and Enter. To cancel the setup, press CTRL+C.
==============================================================================
This program will set up the 389 Directory and Administration Servers.

It is recommended that you have "root" privilege to set up the software.
Tips for using this program:
  - Press "Enter" to choose the default and go to the next screen
  - Type "Control-B" then "Enter" to go back to the previous screen
  - Type "Control-C" to cancel the setup program

Would you like to continue with set up? [yes]:   ## Press Enter ## 

==============================================================================
Your system has been scanned for potential problems, missing patches,
etc.  The following output is a report of the items found that need to
be addressed before running this software in a production
environment.

389 Directory Server system tuning analysis version 23-FEBRUARY-2012.

NOTICE : System is i686-unknown-linux2.6.32-431.el6.i686 (1 processor).

WARNING: 622MB of physical memory is available on the system. 1024MB is recommended for best performance on large production system.

WARNING  : The warning messages above should be reviewed before proceeding.

Would you like to continue? [no]: yes  ## Type Yes and Press Enter ##

==============================================================================
Choose a setup type:
   1. Express
       Allows you to quickly set up the servers using the most
       common options and pre-defined defaults. Useful for quick
       evaluation of the products.
   2. Typical
       Allows you to specify common defaults and options.
   3. Custom
       Allows you to specify more advanced options. This is 
       recommended for experienced server administrators only.
To accept the default shown in brackets, press the Enter key.

Choose a setup type [2]:  ## Press Enter ##

==============================================================================
Enter the fully qualified domain name of the computer
on which you're setting up server software. Using the form
<hostname>.<domainname>
Example: eros.example.com.

To accept the default shown in brackets, press the Enter key.

Warning: This step may take a few minutes if your DNS servers
can not be reached or if DNS is not configured correctly.  If
you would rather not wait, hit Ctrl-C and run this program again
with the following command line option to specify the hostname:

    General.FullMachineName=your.hostname.domain.name

Computer name [server.unixmen.local]:     ## Press Enter ##

==============================================================================
The servers must run as a specific user in a specific group.
It is strongly recommended that this user should have no privileges
on the computer (i.e. a non-root user).  The setup procedure
will give this user/group some permissions in specific paths/files
to perform server-specific operations.

If you have not yet created a user and group for the servers,
create this user and group using your native operating
system utilities.

System User [nobody]: ldapadmin  ## Enter LDAP user name created above #
System Group [nobody]: ldapadmin

==============================================================================
Server information is stored in the configuration directory server.
This information is used by the console and administration server to
configure and manage your servers.  If you have already set up a
configuration directory server, you should register any servers you
set up or create with the configuration server.  To do so, the
following information about the configuration server is required: the
fully qualified host name of the form
<hostname>.<domainname> (e.g. hostname.example.com), the port number
(default 389), the suffix, the DN and password of a user having
permission to write the configuration information, usually the
configuration directory administrator, and if you are using security
(TLS/SSL).  If you are using TLS/SSL, specify the TLS/SSL (LDAPS) port
number (default 636) instead of the regular LDAP port number, and
provide the CA certificate (in PEM/ASCII format).

If you do not yet have a configuration directory server, enter 'No' to
be prompted to set up one.
Do you want to register this software with an existing
configuration directory server? [no]:   ## Press Enter ##

==============================================================================
Please enter the administrator ID for the configuration directory
server.  This is the ID typically used to log in to the console.  You
will also be prompted for the password.
Configuration directory server
administrator ID [admin]:   ## Press Enter ##
Password:    ## create password ##
Password (confirm):    ## re-type password ##

==============================================================================
The information stored in the configuration directory server can be
separated into different Administration Domains.  If you are managing
multiple software releases at the same time, or managing information
about multiple domains, you may use the Administration Domain to keep
them separate.

If you are not using administrative domains, press Enter to select the
default.  Otherwise, enter some descriptive, unique name for the
administration domain, such as the name of the organization
responsible for managing the domain.

Administration Domain [unixmen.local]:   ## Press Enter ##

==============================================================================
The standard directory server network port number is 389.  However, if
you are not logged as the superuser, or port 389 is in use, the
default value will be a random unused port number greater than 1024.
If you want to use port 389, make sure that you are logged in as the
superuser, that port 389 is not in use.
Directory server network port [389]:   ## Press Enter ##

==============================================================================
Each instance of a directory server requires a unique identifier.
This identifier is used to name the various
instance specific files and directories in the file system,
as well as for other uses as a server instance identifier.

Directory server identifier [server]:  ## Press Enter ##

==============================================================================
The suffix is the root of your directory tree.  The suffix must be a valid DN.
It is recommended that you use the dc=domaincomponent suffix convention.
For example, if your domain is example.com,
you should use dc=example,dc=com for your suffix.
Setup will create this initial suffix for you,
but you may have more than one suffix.
Use the directory server utilities to create additional suffixes.

Suffix [dc=unixmen, dc=local]:   ## Press Enter ##

=============================================================================

Certain directory server operations require an administrative user.
This user is referred to as the Directory Manager and typically has a
bind Distinguished Name (DN) of cn=Directory Manager.
You will also be prompted for the password for this user.  The password must
be at least 8 characters long, and contain no spaces.
Press Control-B or type the word "back", then Enter to back up and start over.
Directory Manager DN [cn=Directory Manager]:   ## Press Enter ##
Password:               ## Enter the password ##
Password (confirm): 

==============================================================================
The Administration Server is separate from any of your web or application
servers since it listens to a different port and access to it is
restricted.

Pick a port number between 1024 and 65535 to run your Administration
Server on. You should NOT use a port number which you plan to
run a web or application server on, rather, select a number which you
will remember and which will not be used for anything else.
Administration port [9830]:   ## Press Enter ##

==============================================================================
The interactive phase is complete.  The script will now set up your
servers.  Enter No or go Back if you want to change something.

Are you ready to set up your servers? [yes]:  ## Press Enter ##
Creating directory server . . .
Your new DS instance 'server' was successfully created.
Creating the configuration directory server . . .
Beginning Admin Server creation . . .
Creating Admin Server files and directories . . .
Updating adm.conf . . .
Updating admpw . . .
Registering admin server with the configuration directory server . . .
Updating adm.conf with information from configuration directory server . . .
Updating the configuration for the httpd engine . . .
Starting admin server . . .
output: Starting dirsrv-admin: 
output:                                                    [  OK  ]
The admin server was successfully started.
Admin server was successfully created, configured, and started.
Exiting . . .
Log file is '/tmp/setupo1AlDy.log'
Make the LDAP server daemons start automatically on every reboot.
# chkconfig dirsrv on
# chkconfig dirsrv-admin on
Test LDAP Server
Now let us test our LDAP server for any errors using the following command.
# ldapsearch -x -b "dc=unixmen,dc=local"
Sample output:
# extended LDIF
#
# LDAPv3
# base  with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# unixmen.local
dn: dc=unixmen,dc=local
objectClass: top
objectClass: domain
dc: unixmen

# Directory Administrators, unixmen.local
dn: cn=Directory Administrators,dc=unixmen,dc=local
objectClass: top
objectClass: groupofuniquenames
cn: Directory Administrators
uniqueMember: cn=Directory Manager

# Groups, unixmen.local
dn: ou=Groups,dc=unixmen,dc=local
objectClass: top
objectClass: organizationalunit
ou: Groups

# People, unixmen.local
dn: ou=People,dc=unixmen,dc=local
objectClass: top
objectClass: organizationalunit
ou: People

# Special Users, unixmen.local
dn: ou=Special Users,dc=unixmen,dc=local
objectClass: top
objectClass: organizationalUnit
ou: Special Users
description: Special Administrative Accounts

# Accounting Managers, Groups, unixmen.local
dn: cn=Accounting Managers,ou=Groups,dc=unixmen,dc=local
objectClass: top
objectClass: groupOfUniqueNames
cn: Accounting Managers
ou: groups
description: People who can manage accounting entries
uniqueMember: cn=Directory Manager

# HR Managers, Groups, unixmen.local
dn: cn=HR Managers,ou=Groups,dc=unixmen,dc=local
objectClass: top
objectClass: groupOfUniqueNames
cn: HR Managers
ou: groups
description: People who can manage HR entries
uniqueMember: cn=Directory Manager

# QA Managers, Groups, unixmen.local
dn: cn=QA Managers,ou=Groups,dc=unixmen,dc=local
objectClass: top
objectClass: groupOfUniqueNames
cn: QA Managers
ou: groups
description: People who can manage QA entries
uniqueMember: cn=Directory Manager

# PD Managers, Groups, unixmen.local
dn: cn=PD Managers,ou=Groups,dc=unixmen,dc=local
objectClass: top
objectClass: groupOfUniqueNames
cn: PD Managers
ou: groups
description: People who can manage engineer entries
uniqueMember: cn=Directory Manager

# search result
search: 2
result: 0 Success

# numResponses: 10
# numEntries: 9
The output will look something like the above. If your output ends with "result: 0 Success" as shown above, you're done. Our LDAP server is now ready to use.
Manage 389 ds with Admin Server Console
Please be mindful that if you want to manage your 389 DS server graphically, your server must have a GUI environment installed. If you did a minimal installation, you can't access the admin server console.
As I have a minimal server, I am going to install the XFCE desktop on my server.
# yum groupinstall Xfce
Reboot your server.
# reboot
Log in to server.
Now you can access the 389 ds admin console either locally or remotely.
To access the 389 DS admin console locally, type 389-console.
To access the 389 DS admin console from a remote system, enter the following command in a terminal.
$ ssh -X root@192.168.1.101 /usr/bin/389-console -a http://192.168.1.101:9830
Now you'll be asked to enter your LDAP server administrative login details. In my case, my LDAP admin name is admin and the password is centos.
This is how my admin server console looks.
From here you can create, delete, or edit LDAP organizational units, groups, and users graphically.
The 389 DS admin server console has two groups.
– Administration Server
– Directory Server
You can use either of them.
1. Administration Server
To access the Administration Server interface, click on your LDAP domain name to expand it. Go to Server Group -> Administration Server and click Open on the right side. Refer to the following screenshot.
Configuration tab:
In the Configuration tab, you can change your admin server's IP address, default port, LDAP admin password, and default user directory. You can also define which host names and IP addresses are allowed to access your LDAP server.
Tasks tab:
In the Tasks section, you can Stop/Restart/Configure your server.
2. Directory Server
To access the Directory Server interface, click on your LDAP domain name to expand it. Go to Server Group -> Directory Server and click Open on the right side. Refer to the following screenshot.
In the Directory Server section, you can do all the necessary configuration for your LDAP server. You can change the default port and create users, groups, organizational units, etc.
There are a lot of options available in the Directory Server section. Go through each section and configure it as per your requirements.
Create Organization units, Groups And Users
Create organizational unit:
Go to your Directory Server from the main console. In the Directory tab, right-click on your domain name (ex. Unixmen). Select New -> Organizational Unit. Refer to the following screen.
Enter your OU name (ex. Support Division) and click OK.
The new OU (ex. Support Division) will be created under the Unixmen domain.
Create a Group:
Now navigate to Support Division OU and create a new group (ex. support_group).
Enter the group name and click OK.
The new group will be created under Unixmen/Support Division.
Create User:
Right-click on support_group, and click New -> User.
Enter the user details such as first name, last name, user ID, mail ID, etc., and click OK.
Verify the organizational unit, group, and user with the following command on our server.
# ldapsearch -x -b "dc=unixmen,dc=local"
Sample output:
# extended LDIF
#
# LDAPv3
# base  with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# unixmen.local
dn: dc=unixmen,dc=local
objectClass: top
objectClass: domain
dc: unixmen

# Directory Administrators, unixmen.local
dn: cn=Directory Administrators,dc=unixmen,dc=local
objectClass: top
objectClass: groupofuniquenames
cn: Directory Administrators
uniqueMember: cn=Directory Manager

# Groups, unixmen.local
dn: ou=Groups,dc=unixmen,dc=local
objectClass: top
objectClass: organizationalunit
ou: Groups

# People, unixmen.local
dn: ou=People,dc=unixmen,dc=local
objectClass: top
objectClass: organizationalunit
ou: People

# Special Users, unixmen.local
dn: ou=Special Users,dc=unixmen,dc=local
objectClass: top
objectClass: organizationalUnit
ou: Special Users
description: Special Administrative Accounts

# Accounting Managers, Groups, unixmen.local
dn: cn=Accounting Managers,ou=Groups,dc=unixmen,dc=local
objectClass: top
objectClass: groupOfUniqueNames
cn: Accounting Managers
ou: groups
description: People who can manage accounting entries
uniqueMember: cn=Directory Manager

# HR Managers, Groups, unixmen.local
dn: cn=HR Managers,ou=Groups,dc=unixmen,dc=local
objectClass: top
objectClass: groupOfUniqueNames
cn: HR Managers
ou: groups
description: People who can manage HR entries
uniqueMember: cn=Directory Manager

# QA Managers, Groups, unixmen.local
dn: cn=QA Managers,ou=Groups,dc=unixmen,dc=local
objectClass: top
objectClass: groupOfUniqueNames
cn: QA Managers
ou: groups
description: People who can manage QA entries
uniqueMember: cn=Directory Manager

# PD Managers, Groups, unixmen.local
dn: cn=PD Managers,ou=Groups,dc=unixmen,dc=local
objectClass: top
objectClass: groupOfUniqueNames
cn: PD Managers
ou: groups
description: People who can manage engineer entries
uniqueMember: cn=Directory Manager

# Support Division, unixmen.local
dn: ou=Support Division,dc=unixmen,dc=local
ou: Support Division
objectClass: top
objectClass: organizationalunit

# support_group, Support Division, unixmen.local
dn: cn=support_group,ou=Support Division,dc=unixmen,dc=local
objectClass: top
objectClass: groupofuniquenames
cn: support_group

# skumar, support_group, Support Division, unixmen.local
dn: uid=skumar,cn=support_group,ou=Support Division,dc=unixmen,dc=local
mail: sk@unixmen.com
uid: skumar
givenName: senthil
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetorgperson
sn: kumar
cn: senthil kumar

# search result
search: 2
result: 0 Success

# numResponses: 13
# numEntries: 12
As you can see in the above output, a new OU called Support Division, a new group called support_group, and a new user called skumar have been created. I have covered only the installation and basic configuration. There is a lot to learn about 389 DS. Refer to the link provided at the bottom to know more about 389 DS.
In my personal experience, 389 DS is much easier than OpenLDAP in terms of installation and configuration. Let us see how to configure client systems to authenticate against the LDAP server in our next article.

Wednesday, December 23, 2015

Some Useful Technical Information and issues.



At times, you may need to find out which network switch and switch port are connected to which NIC of the server.
In these scenarios, you can use the "tcpdump" command in your Linux/UNIX shell to find out which network switch and switch port are connected to a NIC.

Note: The server should have tcpdump installed to use this.

Here is the syntax of the command:

tcpdump -nn -v -i <interface> -s 1500 -c 1 'ether[20:2] == 0x2000'

Example:

testserver:~ # tcpdump -nn -v -i eth3 -s 1500 -c 1 'ether[20:2] == 0x2000'
tcpdump: listening on eth3, link-type EN10MB (Ethernet), capture size 1500 bytes
03:25:22.146564 CDPv2, ttl: 180s, checksum: 692 (unverified), length 370
Device-ID (0x01), length: 11 bytes: 'ch-bx48-sw13'
Address (0x02), length: 13 bytes: IPv4 (1) 192.168.1.15
Port-ID (0x03), length: 15 bytes: 'FastEthernet0/7'
Capability (0x04), length: 4 bytes: (0x00000028): L2 Switch, IGMP snooping
Version String (0x05), length: 220 bytes:
Cisco Internetwork Operating System Software
IOS (tm) C2950 Software (C2950-I6Q4L2-M), Version 12.1(14)EA1a, RELEASE SOFTWARE
(fc1)
Copyright (c) 1986-2003 by cisco Systems, Inc.
Compiled Tue 02-Sep-03 03:33 by antonino
Platform (0x06), length: 18 bytes: 'cisco WS-C2950T-24'
Protocol-Hello option (0x08), length: 32 bytes:
VTP Management Domain (0x09), length: 6 bytes: 'ecomrd'
Duplex (0x0b), length: 1 byte: full
AVVID trust bitmap (0x12), length: 1 byte: 0x00
AVVID untrusted ports CoS (0x13), length: 1 byte: 0x00
1 packets captured
2 packets received by filter
0 packets dropped by kernel
testserver:~ #

In the above example, the network switch name and the connected port appear in the Device-ID and Port-ID fields.
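In practice you usually only need those two fields. A small sketch that filters the switch name and port out of a saved capture; the sample lines below are copied from the output above into a temporary file purely for illustration (normally you would pipe the real tcpdump output instead):

```shell
# Save two lines of the CDP decode to a file for illustration
# (in real use, redirect the tcpdump output to this file).
cat > /tmp/cdp.txt <<'EOF'
Device-ID (0x01), length: 11 bytes: 'ch-bx48-sw13'
Port-ID (0x03), length: 15 bytes: 'FastEthernet0/7'
EOF
# Pull out just the switch name and the switch port.
grep -E 'Device-ID|Port-ID' /tmp/cdp.txt
```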

##############################################

SSH time-lock tricks
You can also use different iptables parameters to limit connections to the SSH service for specific time periods.
You can use the /second, /minute, /hour, or /day switch in any of the following examples.

In the first example, if a user enters the wrong password, access to the SSH service is blocked for one minute, and the user gets only one login attempt per minute from that moment on:
~# iptables -A INPUT -p tcp -m state --syn --state NEW --dport 22 -m limit --limit 1/minute --limit-burst 1 -j ACCEPT
~# iptables -A INPUT -p tcp -m state --syn --state NEW --dport 22 -j DROP

In the second example, iptables is set to allow only host 192.168.10.25 to connect to the SSH service. After three failed login attempts, iptables allows the host only one login attempt per minute:
~# iptables -A INPUT -p tcp -s 192.168.10.25 -m state --syn --state NEW --dport 22 -m limit --limit 1/minute --limit-burst 3 -j ACCEPT
~# iptables -A INPUT -p tcp -s 192.168.10.25 -m state --syn --state NEW --dport 22 -j DROP

############################################################################

Cron processing fails with user authentication failure in PAM.

The following message is logged in /var/log/cron.

crond[2393]: (root) FAILED to authorize user with PAM (The return value should be
ignored by PAM dispatch)

It seems that this message is logged every time the sar processing executed by cron
starts, and that the processing fails.

It was found that this message is logged when the following configuration is
added in /etc/pam.d/password-auth (RHEL 6), and the same issue could be reproduced.

auth required pam_env.so
auth [default=1 success=ignore] pam_succeed_if.so gid = 511 <== Add
auth required pam_tally2.so deny=5 <== Add
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 500 quiet
auth required pam_deny.so

The above configuration was added for security control of specific users. Everything
except cron works without problems; only cron is affected by this configuration.

Also, if the above configuration is changed as follows, it works as expected without the
message in question.


Before change

auth [default=1 success=ignore] pam_succeed_if.so gid = 511

After change

auth [default=ignore success=1] pam_succeed_if.so gid != 511

Both variants skip the pam_tally2 line for users outside gid 511, but they express the jump differently: the first skips the next module on any non-success return (including errors), while the second skips it only when the gid != 511 test succeeds and ignores all other returns.


Note - This solution is part of Red Hat’s fast-track publication program

############################################################

postfix doesn't start on boot with error
postfix: fatal: parameter inet_interfaces: no local interface found

postfix attempts to start on boot, but shuts down with the following error:
postfix[1516]: fatal: parameter inet_interfaces: no local interface found for  (insert ip address)

Root Cause: The network is not ready when postfix starts

Workaround:

In /lib/systemd/system/postfix.service, change the "After=" line

from:

After=syslog.target network.target

to

After=syslog.target network-online.target network.target NetworkManager-wait-online.service

Enable the new settings:

# systemctl enable NetworkManager-wait-online.service
########################################################

How to save command to history immediately after you type it in bash?

- By default, if a user logs in to a system via bash, the command history is saved only after exit.
- How do you save every command into ~/.bash_history immediately after the user types it?
- How do you save all commands typed in a session manually before exiting bash?



To save all commands typed in a session manually before exiting bash, issue this command:
$ history -a

To save all typed commands immediately after you type them, add these two lines to the ~/.bashrc file:
shopt -s histappend
PROMPT_COMMAND='history -a'
After making the above changes, log out and log back in again.

To make it effective for all users on the system, append the following two lines to the /etc/bashrc file:
shopt -s histappend
PROMPT_COMMAND='history -a'
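The effect can be sketched in a non-interactive bash session; `set -o history` is needed below only because scripts have history recording disabled by default (interactive shells do not need it), and the /tmp path is arbitrary:

```shell
# Demonstrate immediate history flushing in a bash child shell (sketch).
bash -c '
  export HISTFILE=/tmp/demo_history
  : > "$HISTFILE"            # start with an empty history file
  set -o history             # scripts have history off by default
  shopt -s histappend        # append to $HISTFILE instead of overwriting
  true "a recorded command"  # this command lands in the in-memory history
  history -a                 # flush new entries to $HISTFILE immediately
'
# The command is now on disk even though the session never "exited" normally:
grep -c 'recorded command' /tmp/demo_history
```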
############################################

How to clear ARP cache

The ARP table (or ARP cache) keeps track of all devices on your network that your computer is capable of communicating with. It stores the Layer 2 data (MAC addresses) as well as the interface through which each device is reached (i.e. which interface the traffic came in on). This table can be viewed, modified, and flushed using the arp command in Linux.

View Arp Cache Entries

arp -n

The output will look something like the following.

Address                HWtype  HWaddress           Flags Mask            Iface
192.168.1.100          ether   00:21:a0:63:38:3f   C             eth0

To add an ARP entry, we simply take advantage of the options that the arp command provides. Let's add an arbitrary entry.

arp -i eth0 -s 10.11.12.13 de:ad:be:ef:fe:ed

To remove an entry:

arp -i <interface> -d <ip address>

Some example usages are given below

arp -i eth0 -d 10.11.12.13
arp -d 192.168.1.100
Clearing the ARP cache with verbose output:

ip -s -s neigh flush all
################################
Postfix - Flush the Mail Queue

Usually, you use the "sendmail -q" command to flush the mail queue under the Sendmail MTA.

Under Postfix MTA, just enter the following command to flush the mail queue:

# postfix flush

OR

# postfix -f

To see mail queue, enter:

# mailq

To remove all mail from the queue, enter:

# postsuper -d ALL

To remove all mails in the deferred queue, enter:

# postsuper -d ALL deferred

################################################################

File listing is not in sync for NFS Server/Client in RHEL

 

- From time to time, the file listing of a directory on the NFS server and on the client are not in sync. In detail, files are visible on the NFS server which don't exist anymore. Additionally, different files are seen on the NFS client (in the shared directory!) which are not visible on the server and also don't exist anymore.
- When touching this directory, e.g. by adding an empty file, the file listing instantly gets back in sync and the issue is gone for a while.

Cause:

In this case, the noac option was tried for the mount. The noac option only controls the attribute cache; it has no effect on the lookup cache. This was confirmed by doing a touch in the directory, after which the files are synced, because the NFS client then rechecks the mtime of the directory.

The reason the "out of sync" issue happens is that the client revalidates its cache at the same instant that server-side changes are happening. If a modification happens right after the client validates, the NFS client may not pick it up. The NFS client has a "lookup cache" which does positive and negative lookups of directory entries. The lookup cache is revalidated when the directory's mtime changes, which happens when a file is touched within the directory. By default, the lookup cache includes both positive and negative lookups.
Solution:
Use the option lookupcache=pos with the NFS mount. When using this option, it is recommended to remove the noac option and instead tune the actimeo option; otherwise performance will suffer.
##############################################################

What is $* and $# in Linux?

$#           Stores the number of commandline arguments that were passed to the shell program.
$?           Stores the exit value of the last command that was executed.
$0           Stores the first word of the entered command (the name of the shell program).
$*            Stores all the arguments that were entered on the command line ($1 $2 ...).
$@          Stores all the arguments that were entered on the command line, individually quoted ("$1" "$2" ...).

So basically, $# is the number of arguments given when your script was executed, and $* is a string containing all arguments. For example, $1 is the first argument, and so on.

This is useful if you want to access a specific argument in your script.

Here is a simple example. If you run the following command:

./command yes no /home/username

$# = 3
$* = yes no /home/username
$@ = array: {"yes", "no", "/home/username"}
$0 = ./command, $1 = yes etc.

These are part of POSIX standard, and should be supported by all compliant shells.

For reference, the above are the POSIX standard definitions for each special parameter.
Note that there are three additional special parameters: $-, $$ and $!
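The expansions above can be sketched with a small shell function; show_args is a made-up name used purely for illustration:

```shell
# show_args is a hypothetical helper that prints the special parameters
# for whatever arguments it is called with.
show_args() {
    echo "count: $#"          # number of arguments
    echo "joined: $*"         # all arguments joined into one string
    for a in "$@"; do         # each argument, individually quoted
        echo "arg: $a"
    done
}
show_args yes no /home/username
```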
###################################################################

crontab was not working with a dynamic date filename

To set a simple cron job which redirects the output to a log file with a timestamp,

the hourly cron entry is mentioned below:

0 * * * * ksh /root/test.sh > test_output_`date "+%Y-%m-%d_%H-%M"`.log

While running the same command at a shell prompt, it worked fine; execute permissions and the path were proper.
But it kept failing to create the proper log when scheduled through cron.


It worked after the crontab entry was changed as follows, adding a \ before each %:

0 * * * * ksh /root/test.sh > test_output_`date "+\%Y-\%m-\%d_\%H-\%M"`.log
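The backslashes matter only to cron, which treats an unescaped % as a line separator and passes everything after it to the command's standard input; at a shell prompt both forms produce the same name. A quick sketch of the filename component:

```shell
# The timestamp component cron was supposed to generate.
# At a shell prompt the unescaped form works, because only
# cron(tab) rewrites the % character.
date "+%Y-%m-%d_%H-%M"
```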



It is good to note some of the date options in Linux for finding yesterday's and tomorrow's dates, and so on:

[root@mylab ~]# date --date="1 Days Ago"
Wed Nov 18 07:50:24 UTC 2015
[root@mylab ~]# date --date="yesterday"
Thu Nov 19 00:00:00 UTC 2015
[root@mylab ~]# date --date="tomorrow"
Fri Nov 20 07:54:26 UTC 2015
[root@mylab ~]# date --date='1 Day'
Fri Nov 20 07:55:03 UTC 2015
[root@mylab ~]# date --date='10 Day'
Sun Nov 29 07:55:22 UTC 2015
[root@mylab ~]# date --date='next Day'
Fri Nov 20 07:55:35 UTC 2015
[root@mylab ~]# date --date='-1 Day'
Wed Nov 18 07:55:56 UTC 2015
[root@mylab ~]# date --date='1 Week'
Thu Nov 26 07:56:22 UTC 2015
[root@mylab ~]# date --date='10 Week'
Thu Jan 28 07:56:27 UTC 2016
[root@mylab ~]# date --date='1 Month'
Sat Dec 19 07:56:40 UTC 2015
[root@mylab ~]# date --date='1 Year'
Sat Nov 19 07:56:55 UTC 2016
[root@mylab ~]#
#############################################################

Disaster Recovery tool in RHEL 7

In RHEL 7.2 GA and later, Red Hat ships ReaR (Relax-and-Recover), which can be used for image-based backups.

In RHEL 6 and earlier, Red Hat does not ship any application that can perform a complete OS disaster recovery on its own. Third-party applications and backup solutions compatible with Red Hat Enterprise Linux (RHEL) are available.

While Red Hat does not offer a direct bare-metal recovery solution, you can use Red Hat Network Satellite to perform a kickstart, and then use a custom channel with your own application-specific RPMs to install the required applications and configuration files. You can automate this further by creating server-specific kickstart profiles that include all the applications you require. Once this is set up, bare-metal recovery consists of kickstarting a server profile and restoring the application files from backup. Alternatively, you can keep your application data on NFS so that no restore from backup is needed at all.
###########################################################################################
SAN LUNs are not detected on the server, and a reboot does not detect the LUNs from the external storage either.

Cause:

The root cause could be one of the following:

- The LUNs are not visible in the BIOS due to an improper zoning configuration.
- The HBA port status shows 'Link down' for all FC ports on the HBA. In the case observed here, re-enabling the corresponding FC ports on the switch changed the HBA port status to 'Online', after which all LUNs from the external storage were detected.


Tested solution:

- Check whether the LUN is visible in the BIOS. If it is not, check the zoning configuration.
- Re-enable the Fibre Channel ports used by this server on the FC switch.
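After the ports are back online, the host can be asked to rescan for LUNs without a reboot. A sketch; the function wrapper and its directory argument exist only so the loop can be exercised outside a real system (the default is the real sysfs path, and "- - -" means wildcard channel/target/LUN):

```shell
#!/bin/sh
# Ask every SCSI host to rescan all channels, targets and LUNs.
rescan_all_hosts() {
    sysdir=${1:-/sys/class/scsi_host}
    for scan in "$sysdir"/host*/scan; do
        # "- - -" = scan every channel, every target, every LUN
        [ -w "$scan" ] && echo "- - -" > "$scan"
    done
    return 0
}

rescan_all_hosts
```

Newly visible LUNs then show up in `dmesg` and as fresh /dev/sdX devices.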
##############################################

RHEL7: Firewalld – ZONE MANAGEMENT

Firewalld is the new userland firewall interface in RHEL 7. It replaces the iptables interface and connects to the netfilter kernel code. Its main improvement over iptables is that security rules can be changed without dropping current connections.

To know if Firewalld is running, type:

# systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
   Active: active (running) since Tue 2014-06-17 11:14:49 CEST; 5 days ago
...

or alternatively:

# firewall-cmd --state
running

Note: If Firewalld is not running, the command displays "not running".


If you’ve got several IPv4 network interfaces and want to route traffic between them, you will have to activate IP forwarding.

To do that, paste the following line into the /etc/sysctl.conf file:

net.ipv4.ip_forward=1

Then, activate the configuration:

# sysctl -p

Although Firewalld is the RHEL 7 way to deal with firewalls and provides many improvements, iptables can still be used (but both shouldn’t run at the same time).
You can also look at the iptables rules created by Firewalld with the iptables-save command.

Zone Management

Also, a new concept of zones appears: all network interfaces can be placed in the same default zone or divided into different zones according to defined levels of trust. In the latter case, traffic can be restricted based on its originating zone.

Note: Without any configuration, everything is done by default in the public zone. If you’ve got more than one network interface or use sources, you will be able to restrict traffic between zones.

To get the default zone, type:

# firewall-cmd --get-default-zone
public

To get the list of zones where you’ve got network interfaces or sources assigned to, type:

# firewall-cmd --get-active-zones
public
interfaces: eth0

To get the list of all the available zones, type:

# firewall-cmd --get-zones
block dmz drop external home internal public trusted work

To change the default zone to home permanently, type:

# firewall-cmd --set-default-zone=home
success

Note: This information is stored in the /etc/firewalld/firewalld.conf file.

Network interfaces can be assigned to a zone in a temporary (until the next reboot or reload) or in a permanent way. Either way, you don’t need to reload the firewall configuration.

To assign the eth0 network interface temporarily to the internal zone, type:

# firewall-cmd --zone=internal --change-interface=eth0
success

To assign the eth0 network interface permanently to the internal zone (a file called internal.xml is created in the /etc/firewalld/zones directory), type:

# firewall-cmd --permanent --zone=internal --change-interface=eth0
success

Note: This operation can also be done with the nmcli con mod command.

To know which zone is associated with the eth0 interface, type:

# firewall-cmd --get-zone-of-interface=eth0
internal

To get the current configuration of the public zone, type:

# firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: eth0
  sources:
  services: dhcpv6-client ssh
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:

Note: The previous command displays the current configuration, i.e. the permanent settings plus the temporary ones. To get only the permanent settings, use the --permanent option.

It is also possible to create new zones. To create a new zone (here test), type:

# firewall-cmd --permanent --new-zone=test
success
# firewall-cmd --reload
success

Note: Only permanent zones can be created.
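Zones become useful when you attach services to them. As a sketch (the internal zone and http service are only examples; the guard makes this a no-op on hosts where firewalld is not running):

```shell
#!/bin/sh
# Permanently allow the http service in the internal zone, then reload
# so the permanent change becomes active. Skipped when firewalld is down.
if firewall-cmd --state >/dev/null 2>&1; then
    firewall-cmd --permanent --zone=internal --add-service=http
    firewall-cmd --reload
fi
```

Afterwards, `firewall-cmd --zone=internal --list-all` should show http in the services line.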
#########################################################################
RHEL 7 - "WRITE SAME failed. Manually zeroing"


Each time the server is rebooted, the following messages are recorded in /var/log/messages.

Mar 19 07:53:55 serverX kernel: dm-4: WRITE SAME failed. Manually zeroing.
Mar 19 09:58:36 serverX kernel: dm-4: WRITE SAME failed. Manually zeroing.
Mar 19 10:00:52 serverX kernel: dm-4: WRITE SAME failed. Manually zeroing.

The WRITE SAME SCSI command lets the host write the same content to multiple blocks in a single operation. Red Hat Enterprise Linux uses it mostly to ensure blocks contain all zeros.
By default the kernel issues WRITE SAME for up to 0xFFFF (65535) blocks, except for certain classes of disks that are known not to implement the feature.

If the device supports WRITE SAME, use that to optimize zeroing of blocks. If the device does not support WRITE SAME or if the operation fails, fall back to writing zeroes the old-fashioned way.

To disable WRITE SAME on devices that do not support it, and to get rid of the message, set

max_write_same_blocks for the specific device in /sys to 0
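At runtime this can be done for every SCSI disk in one loop. A sketch; the function wrapper and its directory argument are added purely for illustration (the default is the real sysfs path):

```shell
#!/bin/sh
# Set max_write_same_blocks to 0 for every SCSI disk, disabling
# WRITE SAME until the next reboot.
disable_write_same() {
    sysdir=${1:-/sys/class/scsi_disk}
    for f in "$sysdir"/*/max_write_same_blocks; do
        [ -w "$f" ] && echo 0 > "$f"
    done
    return 0
}

disable_write_same
```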

As this setting vanishes on reboot, a tmpfiles.d(5) rule makes the change persistent across reboots:

$ cat /etc/tmpfiles.d/write_same.conf
w /sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/host0/target0:2:0/0:2:0:0/scsi_disk/0:2:0:0/max_write_same_blocks - - - - 0
###########################################################################
"tur checker reports path is down"

May 16 21:17:36 hostA multipathd: sde: tur checker reports path is down
May 16 21:17:36 hostA multipathd: sdf: tur checker reports path is down
May 16 21:17:36 hostA multipathd: sdg: tur checker reports path is down
May 16 21:17:36 hostA multipathd: sdh: tur checker reports path is down


This is usually due to a transient problem: a path from the storage was seen as failed/faulty, and after the underlying issue was resolved, multipathd did not take the working path back into consideration and kept reporting the stale failed status.

You have to remove the device path which is down and these messages will stop.

For Example:

1. Check which device is down by multipath command

# multipath -ll
sda: checker msg is "tur checker reports path is down"
mpath0 (16465616462656166313a3100000000000000000000000000) dm-2 IET,VIRTUAL-DISK
[size=1020M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][enabled]
\_ 1:0:0:1 sda 8:0   [failed][faulty]    <===== (1)
\_ round-robin 0 [prio=0][enabled]
\_ 2:0:0:1 sdb 8:16  [active][ready]


2. Remove the device which is down

# echo "scsi remove-single-device 1 0 0 1" > /proc/scsi/scsi


3. Verify the device is removed

# multipath -ll
mpath0 (16465616462656166313a3100000000000000000000000000) dm-2 IET,VIRTUAL-DISK
[size=1020M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 2:0:0:1 sdb 8:16  [active][ready]


If the path's "failed/faulty" status is temporary, rescanning the SCSI bus with the rescan-scsi-bus.sh command should bring the path(s) back into working condition.

Note - In the above case, with two paths to an active/active storage array, there is no impact on the LUN because the other active path(s) remain available.

Additionally, you can check with fdisk -l whether the device is present; if it is not, blacklist it in multipath.conf.
############################################################################

Red Hat Enterprise Linux reports lun has a LUN larger than allowed by the host adapter

Errors

1. The following SCSI errors are reported when the reported LUN id is greater than the lpfc module parameter lpfc_max_luns:
kernel: scsi: host X channel Y id Z lun has a LUN larger than allowed by the host adapter
This issue can also be seen if you have a LUN id greater than 255, even though you have far fewer than 255 LUNs.

2. After an HBA rescan on a physical server we are unable to add a new LUN. We see this error in the system logs:
scsi: host 0 channel 0 id 0 lun557 has a LUN larger than allowed by the host adapter
scsi: host 1 channel 0 id 0 lun557 has a LUN larger than allowed by the host adapter

3. Unable to install RHEL 6.1 on LUNs exported from a SAN when a LUN id value exceeds the FC driver's supported maximum LUN id value.
ERR kernel:lpfc 0000:0f:00.0: 1:1303 Link Up Event x1 received Data: x1 xf7 x20 x0 x0 x0 0
NOTICE kernel:scsi 3:0:0:0: RAID              EMC      SYMMETRIX        5874 PQ: 0 ANSI: 4
WARNING kernel:scsi: host 3 channel 0 id 0 lun16384 has a LUN larger than allowed by the host adapter


Probable cause:
The storage has returned a LUN id value that exceeds the current maximum LUN id value supported by the driver. Either
- the driver's maximum supported LUN id value needs to be increased, as in the case of lpfc, or
- the LUN id value provided by the storage must be changed via reconfiguration within the storage array.

$ cat /sys/class/scsi_host/hostX/lpfc_max_luns
255

The default maximum for lpfc is 256 LUNs (ids 0 through 255). A LUN id larger than 255 (or larger than the configured lpfc_max_luns) will generate this error.
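For lpfc, the limit can be raised persistently with a module option in a modprobe.d file. A sketch: the value 1024 and the file name lpfc.conf are illustrative, the CONF override exists only for testing, and the setting takes effect when the module is next loaded (if lpfc loads from the initramfs, it may also need rebuilding with dracut -f):

```shell
#!/bin/sh
# Write a modprobe option raising the lpfc maximum LUN id.
# CONF defaults to the real modprobe.d path; override it for testing.
CONF=${CONF:-/etc/modprobe.d/lpfc.conf}
mkdir -p "$(dirname "$CONF")"
echo "options lpfc lpfc_max_luns=1024" > "$CONF"
```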

For QLogic HBA cards, you can find the corresponding value in /var/log/dmesg.
##############################################################

RHEL7: How to configure I/O schedulers.

I/O schedulers are used to optimize reads/writes on disk.

There are three types of I/O schedulers (also called I/O elevators) in RHEL 7:

- CFQ (Completely Fair Queuing) promotes I/O coming from real-time processes and uses historical data to anticipate whether an application will issue more I/O requests in the near future (causing a slight tendency to idle).
- Deadline attempts to provide a guaranteed latency for requests and is particularly suitable when read operations occur more often than write operations (one queue for reads and one for writes; I/Os are dispatched based on time spent in queue).
- Noop implements a simple FIFO (first-in first-out) scheduling algorithm with minimal CPU cost.

With RHEL 7, the default I/O scheduler is now CFQ for SATA drives and Deadline for everything else, because Deadline outperforms CFQ on storage faster than SATA drives.

Configuration at boot

To define a global I/O scheduler (here cfq) at boot, type:

# grubby --update-kernel=ALL --args="elevator=cfq"


Configuration for a particular disk

To get the current configuration of a disk (here /dev/sda), type:

# more /sys/block/sda/queue/scheduler
 noop deadline [cfq]

To assign an I/O scheduler (here deadline) to a particular disk (here /dev/sda), type:

# echo deadline > /sys/block/sda/queue/scheduler

Note: This could be set permanently through the rc-local service.
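Such an rc-local snippet could look like the following sketch, which sets deadline on every sd disk (the function wrapper and its directory argument are illustrative only; the default is the real sysfs path):

```shell
#!/bin/sh
# Set the deadline elevator on every /dev/sd* disk.
set_scheduler() {
    sysdir=${1:-/sys/block}
    for sched in "$sysdir"/sd*/queue/scheduler; do
        [ -w "$sched" ] && echo deadline > "$sched"
    done
    return 0
}

set_scheduler
```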

To check the new configuration, type:

# more /sys/block/sda/queue/scheduler
   noop [deadline] cfq

#######################################################################

NTP: No association ID's returned

[root@station1 ~]# ntpq -p
No association ID's returned

Probable cause:

SELinux was preventing access to /etc/ntp.conf.

Solution:

Disable SELinux.

(or)

Restore the SELinux context of the /etc/ntp.conf file:

[root@station1 ~]# ls -lZ /etc/ntp.conf
-rw-r--r--. root root unconfined_u:object_r:admin_home_t:s0 /etc/ntp.conf

The incorrect admin_home_t context is reset to net_conf_t by restorecon:

[root@station1 ~]# restorecon -v /etc/ntp.conf
restorecon reset /etc/ntp.conf context unconfined_u:object_r:admin_home_t:s0->unconfined_u:object_r:net_conf_t:s0
##############################################################
Remove virbr0 interface in Linux

virbr0 is an Ethernet bridge.
The virbr0 bridge interface is created by libvirtd's default network configuration. libvirtd is the service which provides a basis for the host to act as a hypervisor.

root@labserver:~ # ifconfig virbr0
virbr0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:41 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:7630 (7.4 KiB)

You use brctl to manage the interface.

brctl is used to set up, maintain, and inspect the Ethernet bridge configuration in the Linux kernel

root@labserver:~ # brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.000000000000       yes

Verify if libvirtd is running

root@labserver:~ # service libvirtd status
libvirtd (pid 12529) is running…

Verify whether any guest is running. In my case there isn’t:

root@labserver:~ # virsh list
 Id    Name                 State
----------------------------------

This is the default network set-up for the virtual machines

root@labserver:~ # virsh net-list
 Name                 State      Autostart
-------------------------------------------
 default              active     yes


One could prevent libvirtd's default network from being activated on boot, or you could prevent libvirtd itself from activating on boot. The former will prevent any VM guest attached to libvirtd's default network from having network connectivity and the latter would prevent VMs from running at all.

To immediately stop libvirtd's default network (this will not persist across reboots):
           # virsh net-destroy default

To permanently disable the libvirtd default network from being created at boot:
           # virsh net-autostart default --disable

To permanently remove the libvirtd default network:
           # virsh net-undefine default

To permanently disable the libvirtd service from starting at boot on RHEL5 and RHEL6:
# chkconfig libvirtd off

To permanently disable the libvirtd service from starting at boot on RHEL7:
           # systemctl disable libvirtd.service

###########################################################
SSH connections are closed suddenly in Red Hat Enterprise Linux 6
- SSH connections are refused with the error "fatal: mm_request_send: write: Broken pipe".
- The SSH client exits with the message "Connection closed by XXX.XXX.XXX.XXX" as soon as it starts.


Probable cause:
- The sshd server may be compromised by malware.
- In this case, malware named udev-fall was found killing sshd child processes.
- Check whether a process named "udev-fall" is running on the sshd server:

$ ps aux | grep udev-fall
root     14289  0.7  0.0   1664  1192 ?        S    Nov20 143:58 /usr/bin/udev-fall -d


Solution:

1.    Stop the udev-fall service.
2.    Disable the service with chkconfig
3.    Remove /usr/bin/udev-fall and the initscript, /etc/init.d/udev-fall.
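The three steps could be scripted as below; this is a sketch, guarded so each step is a no-op when the malware is not present:

```shell
#!/bin/sh
# Stop, disable and remove the udev-fall malware if present.
if [ -x /etc/init.d/udev-fall ]; then
    service udev-fall stop
    chkconfig udev-fall off
fi
# rm -f succeeds quietly even when the files are already gone
rm -f /usr/bin/udev-fall /etc/init.d/udev-fall
```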


Red Hat never shipped this init script in any product, so it is worth investigating how it was introduced into each environment.

#################################################################
tail: cannot watch `/var/log/messages': No space left on device


When performing 'tail -f filename', error shows as follows

# tail -f /var/log/messages
tail: cannot watch `/var/log/messages': No space left on device


Increase /proc/sys/fs/inotify/max_user_watches (default: 8192), for example by doubling it:

# echo "fs.inotify.max_user_watches = 16384" >> /etc/sysctl.conf
# sysctl -p

#################################################################

XFS is now the RHEL7 default file system.

XFS brings the following benefits:
- ability to manage up to 500TB file systems with files up to 50TB in size,
- best performance for most workloads (especially with high-speed storage and larger numbers of cores),
- less CPU intensive than most other file systems (better optimizations around lock contention, etc),
- the most robust at large scale (has been run at hundred-plus TB sizes for many years),
- the most common file system in multiple key upstream communities: the most common base for Ceph, Gluster and OpenStack more broadly,
- pioneered most of the techniques now in Ext4 for performance (like delayed allocation),
- no file system check at boot time,
- CRC checksums on all metadata blocks.

XFS was already fully supported in RHEL 6, but as it was not the default file system, you may not be used to it. It’s time to rectify this.

XFS Basic Management

To create a new logical volume called lv_vol with a size of 100MB in the vg volume group, type:

# lvcreate --size 100M --name lv_vol /dev/vg

To create a new XFS file system, type:

# mkfs.xfs /dev/vg/lv_vol

To mount the new file system under /mnt, type:

# mount /dev/vg/lv_vol /mnt

To increase the file system size by 50MB, type:

# lvextend --size +50M /dev/vg/lv_vol
# xfs_growfs /mnt

Note 1: This is only possible for a mounted file system.
Note 2: You can’t shrink an XFS file system, even when it is unmounted. You have to back it up, destroy it and recreate it.

XFS Advanced Management

If a problem occurs and you want to repair the file system, according to the man pages you are supposed to mount and unmount it before typing:

# xfs_repair /dev/vg/lv_vol

Note : Try the “-L” option (“force log zeroing“) to clear the log if nothing else works.

To assign a label (up to 12 characters) to the file system, type:

# umount /dev/vg/lv_vol
# xfs_admin -L "Label" /dev/vg/lv_vol

To read the label (the file system can be mounted or not but you need to specify the partition name), type:

# xfs_admin -l /dev/vg/lv_vol

XFS Backup Management

To do a full backup of the file system and put it into the /root/dump.xfs file without specifying any dump label, type:

# xfsdump -F -f /root/dump.xfs /mnt

Note 1: You can specify the mounting point or the partition name.
Note 2: It is only possible for a mounted file system.
Note 3: You can run an incremental dump by using the “-l” option with a number between 0 and 9 (0=full dump).

If you want to specify a session label and a media label, type:

# xfsdump -L session_label -M media_label -f /root/dump.xfs /dev/vg/lv_vol

To restore the file system, type:

# xfsrestore -f /root/dump.xfs /mnt

To get the list of all the available dumps, type:

# xfsrestore -I

To defragment the file system (operation normally not needed), type:

# xfs_fsr /dev/vg/lv_vol

Note : Both mount point and partition name are valid.

To freeze the file system before taking a snapshot, type:

# xfs_freeze -f /mnt

Note : You have to specify the mount point, the partition name is not allowed.

To unfreeze the file system, type:

# xfs_freeze -u /mnt

To copy the contents of a file system (here mounted under /mnt) to another directory, type:

# xfsdump -J - /mnt | xfsrestore -J - /new

Note : The “-J” option avoids any write into the inventory.
###################################
RHEL7: Check if a system is vulnerable to a CVE.

CVE stands for Common Vulnerabilities and Exposure. It’s a dictionary of publicly known information security vulnerabilities and exposures.

CVE’s common identifiers enable data exchange between security products and provide a baseline index point for evaluating coverage of tools and services.

To check whether a RHEL 7 system is vulnerable or not to a CVE, first install the following yum plugin:

# yum install yum-plugin-security

Then, check whether the vulnerability is present (here openssl security update):

# yum updateinfo info --cve CVE-2014-0224
===============================================
  Important: openssl security update
===============================================
  Update ID : RHSA-2014:0679
    Release :
       Type : security
     Status : final
     Issued : 2014-06-10 00:00:00
       Bugs : 1087195 - CVE-2010-5298 openssl: freelist misuse causing
              a possible use-after-free
            : 1093837 - CVE-2014-0198 openssl: SSL_MODE_RELEASE_BUFFERS NULL
              pointer dereference in do_ssl3_write()
            : 1103586 - CVE-2014-0224 openssl: SSL/TLS MITM vulnerability
            : 1103593 - CVE-2014-0221 openssl: DoS when sending invalid DTLS
              handshake
            : 1103598 - CVE-2014-0195 openssl: Buffer overflow via DTLS
              invalid fragment
            : 1103600 - CVE-2014-3470 openssl: client-side denial of service
              when using anonymous ECDH
       CVEs : CVE-2014-0224
            : CVE-2014-0221
            : CVE-2014-0198
            : CVE-2014-0195
            : CVE-2010-5298
            : CVE-2014-3470
Description : OpenSSL is a toolkit that implements the Secure
              Sockets Layer

Note: In the case of a non-vulnerable system, nothing is displayed.

At any time, you can re-run the same command against a particular CVE to get more information.

All CVEs are available at the Red Hat CVE page.
##############################################################

AD users able to ssh in but AD user not able to sudo


============= /var/log/sssd/sssd_pam.log ================

The same errors are still seen:

[sssd[pam]] [sbus_remove_timeout] (0x2000): 0x8cdd30
[sssd[pam]] [sbus_dispatch] (0x4000): dbus conn: 0x8d1e90
[sssd[pam]] [sbus_dispatch] (0x4000): Dispatching.
[sssd[pam]] [sss_dp_get_reply] (0x1000): Got reply from Data Provider - DP error code: 1 errno: 11 error message: Offline
[sssd[pam]] [pam_check_user_dp_callback] (0x0040): Unable to get information from Data Provider Error: 1, 11, Offline

============== /var/log/sssd/sssd_example.com.log ===========

[acctinfo_callback] (0x0100): Request processed. Returned 1,11,Offline
[sbus_dispatch] (0x4000): dbus conn: 0x20655c0
[sbus_dispatch] (0x4000): Dispatching.


To resolve this error, set the DNS discovery domain in the [domain/example.com] section of sssd.conf:

[domain/example.com]
dns_discovery_domain = example.com

Restart the sssd service afterwards:

# service sssd restart

#########################################################

How to exclude Data Protector (omni) info messages from /var/log/messages?

/var/log/messages file is flooded with below Data Protector (omni) messages.

Oct 17 07:14:34 hostname1 xinetd[5508]: EXIT: omni status=0 pid=1362 duration=869(sec)
Oct 17 07:15:07 hostname1 xinetd[5508]: START: omni pid=1631 from=::ffff:172.26.152.220
Oct 17 03:58:59 hostname1 xinetd[5508]: EXIT: omni status=1 pid=21514 duration=1732(sec)
Oct 17 04:00:09 hostname1 xinetd[5508]: START: omni pid=22009 from=::ffff:172.26.162.103

Comment out the "log_on_failure" and "log_on_success" lines in the omni service file under /etc/xinetd.d/, then restart the xinetd service and check whether the issue persists.

log_on_success — Configures xinetd to log if the connection is successful. By default, the remote host's IP address and the process ID of the server processing the request are recorded.

log_on_failure — Configures xinetd to log if there is a connection failure or if the connection is not allowed.