Tuesday, September 24, 2013

Setup FreeRadius Authentication with OpenLDAP


Free Radius server configuration and integration with LDAP SERVER

This document describes how to set up a FreeRADIUS server. A MySQL server is used as the backend and for user accounting. OpenVPN and the radiusplugin are used together as the NAS service.

I do not guarantee anything in this howto. In my environment this setup is doing a great job, so hopefully it will do the same for you.
Required software

The installation was originally done on Ubuntu Gutsy Gibbon and is still valid on current releases (package versions may differ):

Ubuntu 12.04:
- freeradius (2.1.8+dfsg-1ubuntu1)
- freeradius-mysql (2.1.8+dfsg-1ubuntu1)
- freeradius-ldap (2.1.8+dfsg-1ubuntu1)
- mysql-server-5.1 (5.1.41-3ubuntu12.10)
- openvpn (2.1.0-1ubuntu1.1)
- radiusplugin_v2.1a_beta1.tar.gz (Please download separately)
- libgcrypt11-dev (1.4.4-5ubuntu2)

As time goes by, the versions may change, but the package names mostly exist in later releases as well.

I assume that there is an already running MySQL server.
RADIUS-Server

After successfully installing freeradius and freeradius-mysql using aptitude (apt-get), change to the directory /etc/freeradius.

radiusd.conf:

Please change the following variables under the section PROXY CONFIGURATION
proxy_requests = no

Please comment out any files entry and enable the sql entries. The changes should look similar to this:
authorize {
preprocess
chap
mschap
suffix
eap
sql
}

preacct {
preprocess
acct_unique
suffix
}

accounting {
detail
unix
radutmp
sql
}

For freeradius 2.x in file /etc/freeradius/sites-enabled/default:
authorize {
sql
}
authenticate {
}
preacct {
acct_unique
}
accounting {
sql
}
session {
sql
}
post-auth {
}
pre-proxy {
}
post-proxy {
}

As you can see, only the sql entries are required and no others. Please give feedback if you require more information on the freeradius 2.x configuration.

You do not need to change anything else in these configuration files; leave the rest as it is.

clients.conf:
client 127.0.0.1 {
secret = EinsupertollesSecret
shortname = localhost
}

The secret should be as strong as possible. It will be required again in a later configuration file below.
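If you need inspiration, a random string generator works well; for example (assuming openssl is installed):
openssl rand -base64 24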

sql.conf:
sql {
driver = "rlm_sql_mysql"
server = "127.0.0.1"
login = "radius"
password = "MySQL-passowrd-see-next-paragraph"
radius_db = "radius"
...
}
MySQL
mysql -u root -h 127.0.0.1 -p

Please insert the following schema into MySQL:
zcat /usr/share/doc/freeradius/examples/mysql.sql.gz | \
mysql -u root -prootpass radius

mysql -u root -prootpass
mysql> GRANT ALL ON radius.* to radius@'127.0.0.1' IDENTIFIED BY 'Use the same password as in sql.conf';

Next, some example entries:
mysql> select * from radcheck;
+----+------------+----------------+----+---------------+
| id | UserName | Attribute | op | Value |
+----+------------+----------------+----+---------------+
| 1 | croessner | Crypt-Password | := | XXXXXXXXXXXXX |
+----+------------+----------------+----+---------------+

You can use the MySQL ENCRYPT() function to create the passwords.
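For example, a minimal sketch of adding such a user (the name "testuser", the password and the salt passed to ENCRYPT() are purely illustrative):

mysql -u radius -p radius <<'EOF'
INSERT INTO radcheck (UserName, Attribute, op, Value)
VALUES ('testuser', 'Crypt-Password', ':=', ENCRYPT('testpass', 'ab'));
EOF
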
mysql> select * from radgroupcheck;
+----+-----------+-----------+----+-------------+
| id | GroupName | Attribute | op | Value |
+----+-----------+-----------+----+-------------+
| 1 | dynamic | Auth-Type | := | Crypt-Local |
+----+-----------+-----------+----+-------------+

mysql> select * from radgroupreply;
+----+-----------+-----------------------+----+-------------+
| id | GroupName | Attribute | op | Value |
+----+-----------+-----------------------+----+-------------+
| 1 | dynamic | Acct-Interim-Interval | = | 60 |
+----+-----------+-----------------------+----+-------------+

mysql> select * from radreply;
+----+------------+-------------------+----+-------------------------------+
| id | UserName | Attribute | op | Value |
+----+------------+-------------------+----+-------------------------------+
| 1 | croessner | Framed-IP-Address | = | 10.10.0.153 |
| 2 | croessner | Framed-Route | = | 192.168.3.0/24 10.10.0.2/32 1 |
+----+------------+-------------------+----+-------------------------------+

Short description:
After the user croessner has logged on, the IP 10.10.0.153 is assigned to his computer as a point-to-point connection with the endpoint IP 10.10.0.154. At the same time, the OpenVPN server updates its internal routing table and adds the network 192.168.3.0/24. If you wish to assign more than one route, you have to use the ‘+=’ operator for each additional data set.
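As a hedged sketch, adding a second route for the same user with the ‘+=’ operator could look like this (the extra network 192.168.4.0/24 is purely illustrative):

mysql -u radius -p radius <<'EOF'
INSERT INTO radreply (UserName, Attribute, op, Value)
VALUES ('croessner', 'Framed-Route', '+=', '192.168.4.0/24 10.10.0.2/32 1');
EOF
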
mysql> select * from usergroup;
+-----------+-----------+----------+
| UserName | GroupName | priority |
+-----------+-----------+----------+
| croessner | dynamic | 1 |
+-----------+-----------+----------+

For the tables shown here, I have to mention that the usage of the operators is not really trivial. You can find more information in /usr/share/doc/freeradius/rlm_sql.gz.

I explicitly use “Crypt-Password” entries in these examples. If this is not desired, you can use the attribute “Cleartext-Password”. But if you do so, you have to choose the value “Local” in the table “radgroupcheck”.

You can find more information in the README under http://wiki.freeradius.org/SQL_HOWTO.
OpenVPN
RadiusPlugin

As of writing this howto, the radius plugin is not available as an Ubuntu package. Therefore you have to download and compile the source code. Please install the GNU compiler “g++” and “make”, i.e. a basic set of tools for compiling C++ applications; the package “build-essential” provides this.
cd /usr/local/src
wget "http://www.nongnu.org/radiusplugin/radiusplugin_v2.1a_beta1.tar.gz"
tar xvzf radiusplugin_v2.1a_beta1.tar.gz

Building the radius plugin:
cd radiusplugin
make
mkdir -p /etc/openvpn/plugins/
cp radiusplugin.so /etc/openvpn/plugins/
cp radiusplugin.cnf /etc/openvpn

The configuration should look something like this (I have removed the comments for better reading):

radiusplugin.cnf:
NAS-Identifier=OpenVpn
# The service type which is sent to the RADIUS server
Service-Type=5
Framed-Protocol=1
NAS-Port-Type=5
NAS-IP-Address=127.0.0.1
OpenVPNConfig=/etc/openvpn/radiusvpn.conf
overwriteccfiles=true
server
{
acctport=1813
authport=1812
name=127.0.0.1
retry=1
wait=1
sharedsecret=The secret from the clients.conf of the RADIUS server goes here
}

Point-to-Multipoint Server

Please set up a point-to-multipoint configuration. Tip: Use the easy-rsa package, which you can install separately with aptitude:

i.e.:
cp -a /usr/share/doc/openvpn/examples/easy-rsa /etc
cd /etc/easy-rsa/2.0/

Edit the file vars and change the lines below, as described in the README.
source vars
./clean-all
./build-ca
./build-key-server server
./build-dh

Now you can create one or more client certificates:
./build-key cl1

cd keys
openvpn --genkey --secret ta.key

Please change to the directory /etc/openvpn
cd /etc/openvpn
mkdir ssl
cp -a /etc/easy-rsa/2.0/keys/{ca.crt,dh1024.pem,ta.key,server.crt,server.key} ssl/

Use an editor and put in the following sample configuration:

radiusvpn.conf:
# Which device
dev tun
fast-io

user nobody
group nogroup
persist-tun
persist-key

server 10.10.0.0 255.255.255.0
management 127.0.0.1 7505
float

username-as-common-name
client-config-dir ccd
client-to-client

push "redirect-gateway def1"
push "dhcp-option NTP 10.10.0.1"
push "dhcp-option DOMAIN lan"
push "dhcp-option DNS 10.10.0.1"

ping-timer-rem
keepalive 10 60

# Use compression
comp-lzo

# Strong encryption
tls-server
tls-auth ssl/ta.key 0
dh ssl/dh1024.pem
cert ssl/server.crt
key ssl/server.key
ca ssl/ca.crt

plugin /etc/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf

verb 3
mute 10

status /var/log/openvpn/status.log 1
log /var/log/openvpn/radiusvpn.log

Create the directories referenced above:
mkdir /etc/openvpn/ccd
mkdir /var/log/openvpn

That's it. The server is ready to go. Now you can start the services freeradius, mysql and openvpn.

Afterwards you can configure the client(s). The following output is just an idea of how it could look. Further documentation can be found on the project website.

Client example
# Which device
dev tun
fast-io

persist-key
persist-tun
replay-persist radiusvpn.d/cur-replay-protection.cache

# Our remote peer
nobind
remote 1194

pull

# Use compression
comp-lzo

# Strong encryption
tls-client
tls-remote server
ns-cert-type server
tls-auth ssl/ta.key 1
cert ssl/common.crt
key ssl/common.key
ca ssl/ca.crt

verb 3
mute 10

auth-user-pass radiusvpn.d/auth-user-pass.conf

up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf

# log /var/log/openvpn.log

Create the directory for the client's auth files:
mkdir /etc/openvpn/radiusvpn.d

Change to the given directory and create the file auth-user-pass.conf. Please also refer to the openvpn manpage for the parameter --auth-user-pass.
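A minimal sketch of that file and its permissions (user name and password are placeholders; the format is simply the user name on the first line and the password on the second):

cat > /etc/openvpn/radiusvpn.d/auth-user-pass.conf <<'EOF'
croessner
YOUR_RADIUS_PASSWORD
EOF
chmod 600 /etc/openvpn/radiusvpn.d/auth-user-pass.conf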

Update 2010-08-31:
LDAP for authorization and authentication

Instead of using MySQL for authorization and authentication, you can bind FreeRADIUS to an LDAP server. I have not done this with OpenVPN as the NAS yet, but with pppoe-server (rp-pppoe), and the steps should be nearly the same. Here is what I have done.

To use LDAP with freeradius, you need to install freeradius-ldap and slapd.
authorize {
preprocess
files
sql
ldap
expiration
logintime
}
authenticate {
Auth-Type LDAP {
ldap
}
}
preacct {
preprocess
acct_unique
suffix
files
}
accounting {
sql
}
session {
sql
}
post-auth {
ldap
exec
}
pre-proxy {
}
post-proxy {
}

Notice: You also need the files module, otherwise LDAP cannot look up profiles for reply items. At the moment I do not know if there is another way of looking up the GroupName stuff. Maybe someone else can give a hint here.

Modify the users file like this (example):
DEFAULT Ldap-Group == disabled, Auth-Type := Reject
Reply-Message = "Account disabled. Please call the helpdesk.",
Fall-Through = no

DEFAULT Ldap-Group == flat10000, User-Profile := "uid=flat10000,ou=profiles,ou=radius,ou=wl,dc=example,dc=org"
Fall-Through = no

DEFAULT Auth-Type := Reject
Reply-Message = "Please call the helpdesk."

I am currently unsure how to do some kind of traffic shaping here. The example above was for pppoe, which generates ppp+ interfaces. For each interface that comes up, a script inside /etc/ppp/ip-up.d is called, and traffic shaping would normally take place there. For tun interfaces I have not tried to figure this out yet.

The ldap module configuration for freeradius might look like this:
ldap {
server = "wl00.wl.example.org" # Insert your exact FQDN here, if using TLS
identity = "cn=proxyuser,dc=example,dc=org"
password = YOUR-LDAP-PROXYUSER-PW-HERE
basedn = "ou=wl,dc=example,dc=org"
filter = "(uid=%{%{Stripped-User-Name}:-%{User-Name}})"
base_filter = "(objectclass=radiusprofile)"
ldap_connections_number = 5
timeout = 4
timelimit = 3
net_timeout = 1
tls {
start_tls = yes
cacertfile = /etc/ssl/certs/cacert_org.crt # I use certificates signed by http://www.cacert.org
require_cert = "demand"
}
dictionary_mapping = ${confdir}/ldap.attrmap
password_attribute = userPassword
edir_account_policy_check = no
groupname_attribute = radiusGroupName
groupmembership_filter = "(&(uid=%{%{Stripped-User-Name}:-%{User-Name}})(objectclass=radiusprofile))"
compare_check_items = no
}
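Before pointing FreeRADIUS at the directory, it is worth verifying that the proxyuser credentials and base DN actually work; a quick check with ldapsearch (adjust the host and the uid to your setup) might look like this:

ldapsearch -x -ZZ -H ldap://wl00.wl.example.org \
  -D "cn=proxyuser,dc=example,dc=org" -W \
  -b "ou=wl,dc=example,dc=org" "(uid=wl00000000)"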

Add the freeradius-schema for LDAP to the slapd.conf (or include it in slapd.d).

A sample init.ldif is shown here:
dn: dc=example,dc=org
objectClass: top
objectClass: dcObject
objectClass: organization
dc: example
o: MyCompany

dn: ou=wl,dc=example,dc=org
objectClass: organizationalUnit
objectClass: top
ou: wl

dn: ou=users,ou=wl,dc=example,dc=org
objectClass: organizationalUnit
objectClass: top
ou: users

dn: ou=radius,ou=wl,dc=example,dc=org
objectClass: organizationalUnit
objectClass: top
ou: radius

dn: ou=profiles,ou=radius,ou=wl,dc=example,dc=org
objectClass: organizationalUnit
objectClass: top
ou: profiles

# This sample is from PPPoE and shows some vendor specific attributes
dn: uid=flat10000,ou=profiles,ou=radius,ou=wl,dc=example,dc=org
objectClass: radiusObjectProfile
objectClass: top
objectClass: radiusprofile
uid: flat10000
cn: flat10000
radiusReplyItem: Acct-Interim-Interval := 360
radiusReplyItem: RP-Downstream-Speed-Limit := 10240
radiusReplyItem: RP-Upstream-Speed-Limit := 10240
radiusIdleTimeout: 3600
radiusSessionTimeout: 86400
radiusSimultaneousUse: 1

dn: cn=proxyuser,dc=example,dc=org
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: proxyuser
userPassword: {SSHA}***************
description: LDAP administrator (read-only)

dn: uid=wl00000000,ou=users,ou=wl,dc=example,dc=org
objectClass: inetOrgPerson
objectClass: radiusprofile
uid: wl00000000
cn: Christian Roessner
sn: Roessner
givenName: Christian
l: Cityname_here
postalCode: Zip_code_here
postalAddress: Foobar street 4711
homePhone: +49 000 00000000
mail: sample@example.org
userPassword: Test123West
description: Testuser
radiusGroupName: flat10000
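
To load this sample tree into the directory, something along these lines should work (assuming the cn=admin RootDN defined in the slapd.conf shown further below, and that the LDIF above is saved as init.ldif):

ldapadd -x -H ldapi:/// -D "cn=admin,dc=example,dc=org" -W -f init.ldif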

Notice: You may have noticed that I am using cleartext passwords. This differs from using MySQL as the source for storing users/passwords. I do not see this as a security problem.

I have configured LDAP with a proxyuser that has read-only access to all data.

Here is my sample slapd.conf:
# /etc/ldap/slapd.conf

include /etc/ldap/schema/core.schema
include /etc/ldap/schema/cosine.schema
include /etc/ldap/schema/nis.schema
include /etc/ldap/schema/inetorgperson.schema
include /etc/ldap/schema/freeradius.schema # You can find it in the doc folder somewhere in freeradius

argsfile /var/run/slapd/slapd.args
pidfile /var/run/slapd/slapd.pid

modulepath /usr/lib/ldap
moduleload back_hdb.la

loglevel 256

# Sample security restrictions
# Require integrity protection (prevent hijacking)
# Require 112-bit (3DES or better) encryption for updates
# Require 63-bit encryption for simple bind
security ssf=1 update_ssf=112 simple_bind=64

TLSCACertificateFile /etc/ssl/certs/cacert_org.crt
TLSCertificateFile /etc/ssl/certs/newcert.pem
TLSCertificateKeyFile /etc/ssl/private/newkey.pem

database frontend

# Sample access control policy:
# Root DSE: allow anyone to read it
# Subschema (sub)entry DSE: allow anyone to read it
# Other DSEs:
# Allow self write access
# Allow authenticated users read access
# Allow anonymous users to authenticate
# Directives needed to implement policy:
access to dn.base=""
by * read
access to dn.base="cn=Subschema"
by * read
access to *
by self write
by users read
by anonymous auth

database config
rootdn cn=config
rootpw {SSHA}*****************

database hdb
suffix dc=example,dc=org
rootdn cn=admin,dc=example,dc=org
rootpw {SSHA}*****************
directory /var/lib/ldap
index objectClass eq
# ... More indexes where added with Apache-Directory-Studio and not listed here

access to attrs=userPassword,shadowLastChange
by self write
by dn.exact="cn=proxyuser,dc=example,dc=org" read
by anonymous auth
by * none

access to *
by dn.exact="cn=proxyuser,dc=example,dc=org" read
by users read
by * none

After finishing, you can delete everything concerning users from the MySQL server. The only table that will still be used is the radacct table; all the other tables are empty. You can also store users in both servers, but storing the same user in both is a bad idea.

See a final radtest here:
radtest wl00000000 PW_for_wl00000000 127.0.0.1 0 The_Client_PW_for_radius
Sending Access-Request of id 215 to 127.0.0.1 port 1812
User-Name = "wl00000000"
User-Password = "PW_for_wl00000000"
NAS-IP-Address = 127.0.1.1
NAS-Port = 0
rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=215, length=62
Idle-Timeout = 3600
Session-Timeout = 86400
Acct-Interim-Interval = 360
RP-Downstream-Speed-Limit = 10240
RP-Upstream-Speed-Limit = 10240

Well, again, you see results made for pppoe. I am unsure whether this should remain in this article, but maybe it gives you an idea of what you can do with LDAP and RADIUS…

If you run the LDAP and FreeRADIUS servers on the same machine, you could also forget about TLS and use a Unix socket instead (/etc/freeradius/modules/ldap: server="ldapi://%2fvar%2frun%2fslapd%2fldapi"). This works with the ssf settings from slapd.conf as well. I use ldapi and TLS, so I can manage LDAP remotely with Apache Directory Studio and still have a working setup even if I forget to renew the server certificate.

I know the part about binding freeradius to LDAP might not be as good as the first part of this howto, but I am short on time. Hope it works for you.
1. What is SUID and SGID in Linux?


Unix permissions: The sticky, SUID and SGID bits
Hello, today I'll write about more permissions, this time it will be about the Sticky, SUID and SGID bits.
I'll write about them because one of my readers told me that it would be a good idea to write about these bits.

Sticky bit

What the sticky bit does is that when you execute an application it stays resident in memory, so if another user (think of a multiuser environment) executes that same application it will run faster because it is already active in memory.
So this permission speeds up execution when multiple users are using the same application.

SUID bit

If we apply the SUID bit to an application, it will run with the UID of the owner even if you are logged in as another user. For example, if all users need to execute fdisk without using sudo or escalating to root, we just have to apply this bit to fdisk.
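For illustration only (making fdisk SUID root is a real security risk), setting and checking the bit looks like this:

chmod u+s /sbin/fdisk      # path may differ on your distribution
ls -l /sbin/fdisk          # an 's' in place of the owner's execute bit marks SUID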

SGID bit

The same as SUID but it applies to the group owner of the application.


The SUID bit to execute bash scripts as root

Even though we can apply SUID/SGID to execute applications as a different user than the one we are logged in as, there are things we can't do even if the owner of the script is root.

First of all, for a script to execute as root we have to apply the SUID bit to the shell we use to execute scripts, because the application in this case is the shell; the script is just a file that will be interpreted by the shell.

Special commands like adduser can't be executed by any other user even if they carry root's SUID bit. So a script that executes the adduser command won't work even if the shell has the SUID bit. If you don't believe me, try it ;) I even set the SUID bit for root on my shell, on adduser and on my script that executes adduser, and the system still didn't permit it.


The SUID bit and GTK+

GTK doesn't support the use of SUID or SGID. So if you try to run a GTK-based application with one of these bits set, the execution will show an alert and it will not start.


Then how can we use these bits?

They can be used to create scripts that write files inside directories in which other users don't have permissions, for example so that any user can create a file inside root's home directory.
Or, as I said before, it is possible to use some applications with SUID and SGID, like fdisk.

And now you may be asking: well, a script to write a file in /root, but hadn't you said that to make this work you have to apply the SUID bit to the shell? OK, doing this can be risky, letting the shell always execute as root for any user, but there is a trick to make it work without making your whole shell run as root.

We just have to copy our shell to another directory and apply the SUID bit to that copy, so the original shell is not always executed as root. Then we point our scripts to this shell.
This is not good practice, because it isn't secure to allow scripts to be executed as root, but SUID/SGID can make a lot of things easier if the server or computer where it is going to be applied is managed only by the right people or sysadmins.
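A sketch of that trick, with hypothetical names and all the caveats above (note that modern shells such as bash drop SUID privileges unless started with -p, so this may simply not work on your system):

cp /bin/bash /usr/local/bin/suidbash      # hypothetical copy of the shell
chown root:root /usr/local/bin/suidbash
chmod u+s /usr/local/bin/suidbash         # SUID only on the copy
# scripts would then use #!/usr/local/bin/suidbash -p as their interpreter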
1. How to create LVM on Red Hat?

* LVM is not currently configured or in use. Having said that, this is the LVM tutorial to follow if you're going to set up LVM from the ground up on a production Linux server with a new SATA/SCSI hard disk.

* Without luxury server hardware, I tested this LVM tutorial on a PC with a secondary hard disk dedicated to the LVM setup. So the Linux device file of the secondary IDE hard disk will be /dev/hdb (or /dev/sdb for a SCSI hard disk).

* This guide is fully tested in Red Hat Enterprise Linux 4 with Logical Volume Manager 2 (LVM2) run-time environment (LVM version 2.00.31 2004-12-12, Library version 1.00.19-ioctl 2004-07-03, Driver version 4.1.0)!


How to setup Linux LVM in 3 minutes at command line?

1. Log in with the root user ID; for simplicity, avoid using the sudo command.

2. Using the whole secondary hard disk for LVM partition:
fdisk /dev/hdb

At the Linux fdisk command prompt,
1. press n to create a new disk partition,
2. press p to create a primary disk partition,
3. press 1 to denote it as 1st disk partition,
4. press ENTER twice to accept the defaults for the first and last cylinder, converting the whole secondary hard disk into a single disk partition,
5. press t (fdisk will automatically select the only partition, partition 1) to change the default Linux partition type (0x83) to the LVM partition type (0x8e),
6. press L to list all the currently supported partition types,
7. type 8e (as per the L listing) to change partition 1 to 8e, i.e. the Linux LVM partition type,
8. press p to display the secondary hard disk partition setup. Please take note that the first partition is denoted as /dev/hdb1 in Linux,
9. press w to write the partition table and exit fdisk upon completion.
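For reference, the same dialogue can be scripted; a hedged sketch that skips the informational L and p steps (only safe on an empty /dev/hdb, and you should still verify the result):

printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /dev/hdb
fdisk -l /dev/hdb    # the partition should now show type 8e (Linux LVM)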


3. Next, this LVM command will create a LVM physical volume (PV) on a regular hard disk or partition:
pvcreate /dev/hdb1

4. Now, another LVM command to create a LVM volume group (VG) called vg0 with a physical extent size (PE size) of 16MB:
vgcreate -s 16M vg0 /dev/hdb1

Plan the PE size properly before creating a volume group with the vgcreate -s option!

5. Create a 400MB logical volume (LV) called lvol0 on volume group vg0:
lvcreate -L 400M -n lvol0 vg0

This lvcreate command will create a symlink /dev/vg0/lvol0 pointing to the corresponding block device file /dev/mapper/vg0-lvol0.
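A quick way to verify the new logical volume:

ls -l /dev/vg0/lvol0       # symlink into /dev/mapper
lvdisplay /dev/vg0/lvol0   # detailed LV information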

6. The Linux LVM setup is almost done. Now is the time to format logical volume lvol0 to create a Red Hat Linux supported file system, i.e. EXT3 file system, with 1% reserved block count:
mkfs -t ext3 -m 1 -v /dev/vg0/lvol0

7. Create a mount point before mounting the new EXT3 file system:
mkdir /mnt/vfs

8. The last step of this LVM tutorial – mount the new EXT3 file system created on logical volume lvol0 of LVM to /mnt/vfs mount point:
mount -t ext3 /dev/vg0/lvol0 /mnt/vfs


To confirm the LVM setup has completed successfully, the df -h command should display a message similar to this:

/dev/mapper/vg0-lvol0 388M 11M 374M 3% /mnt/vfs

Some of the useful LVM commands reference:

vgdisplay vg0

To check or display volume group setting, such as physical size (PE Size), volume group name (VG name), maximum logical volumes (Max LV), maximum physical volume (Max PV), etc.

pvscan

To check or list all physical volumes (PV) created for volume group (VG) in the current system.

vgextend

To dynamically add more physical volumes (PVs), i.e. new hard disks or disk partitions, to an existing volume group (VG) while online. You'll have to execute vgextend manually after the pvcreate command that creates the LVM physical volume (PV).

Thanks,
Srinivas.k


NIS server and client installation and configuration on Centos6.3



[root@ns ~]# yum -y install ypserv

[root@ns ~]# ypdomainname server-linux.info

// set domain name

[root@ns ~]# vi /etc/sysconfig/network


NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=ns.server-linux.info
GATEWAY=192.168.0.1
NISDOMAIN=server-linux.info
// add at the bottom of file


[root@ns ~]# vi /var/yp/Makefile


# MERGE_PASSWD=true|false
MERGE_PASSWD=false
// line 42: change

# MERGE_GROUP=true|false
MERGE_GROUP=false
// line 46: change

all: passwd shadow group hosts rpc services netid protocols
// line 109: add shadow


[root@ns ~]# vi /var/yp/securenets

host 127.0.0.1
255.255.255.0 192.168.0.0


// create a web site directory automatically when a user is added to the system

[root@ns ~]# mkdir /etc/skel/public_html
[root@ns ~]# chmod 711 /etc/skel/public_html


// create a mail directory automatically when a user is added to the system

[root@ns ~]# mkdir -p /etc/skel/Maildir/cur
[root@ns ~]# mkdir -p /etc/skel/Maildir/new
[root@ns ~]# mkdir -p /etc/skel/Maildir/tmp
[root@ns ~]# chmod -R 700 /etc/skel/Maildir/


[root@ns ~]# useradd cent
[root@ns ~]# passwd cent
Changing password for user cent.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

[root@ns ~]# /usr/lib/yp/ypinit -m

At this point, we have to construct a list of the hosts which will run NIS servers. ns.server-linux.info is in the list of NIS server hosts. Please continue to add the names for the other hosts, one per line. When you are done with the list, type a <control D>.
next host to add: ns.server-linux.info
next host to add:
// push Ctrl + D keys

The current list of NIS servers looks like this:

ns.server-linux.info

Is this correct? [y/n: y] y
// input 'y' and push Enter key

We need a few minutes to build the databases...
Building /var/yp/server-linux.info/ypservers...
Running /var/yp/Makefile...
gmake[1]: Entering directory `/var/yp/server-linux.info'
Updating passwd.byname...
Updating passwd.byuid...
Updating group.byname...
Updating group.bygid...
Updating hosts.byname...
Updating hosts.byaddr...
Updating rpc.byname...
Updating rpc.bynumber...
Updating services.byname...
Updating services.byservicename...
Updating netid.byname...
Updating protocols.bynumber...
Updating protocols.byname...
Updating mail.aliases...
gmake[1]: Leaving directory `/var/yp/server-linux.info'

ns.server-linux.info has been set up as a NIS master server.

Now you can run ypinit -s ns.server-linux.info on all slave server.

[root@ns ~]# /etc/rc.d/init.d/portmap start
Starting portmap: [ OK ]

[root@ns ~]# /etc/rc.d/init.d/ypserv start
Starting YP server services: [ OK ]

[root@ns ~]# /etc/rc.d/init.d/yppasswdd start
Starting YP passwd service: [ OK ]

[root@ns ~]# chkconfig portmap on
[root@ns ~]# chkconfig ypserv on
[root@ns ~]# chkconfig yppasswdd on


// It's necessary to update the NIS database in the following way whenever a new user is added

[root@ns ~]# cd /var/yp
[root@ns yp]# make
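
As a quick sanity check (assuming the yp-tools package is installed), you can query the freshly built maps directly on the master:

ypcat -d server-linux.info -h localhost passwd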

Monday, September 23, 2013

Postgres server installation and Configuration

Linux downloads (RedHat/CentOS/Fedora/Scientific)
PostgreSQL is available on these platforms by default. However, each version of the platform normally "snapshots" a specific version of PostgreSQL that is then supported throughout the lifetime of this platform. Since this can often mean a different version than preferred, the PostgreSQL project provides a repository of packages of all supported versions.
Should packages not be available for your distribution, or there are issues with your package manager, there are graphical installers available.
Finally, most Linux systems make it easy to build from source.

Included in distribution

These distributions all include PostgreSQL by default. To install PostgreSQL from these repositories, use the yum command:
yum install postgresql
Which version of PostgreSQL you get will depend on the version of the distribution:
  • RHEL/CentOS/SL 5: PostgreSQL 8.1 (also supplies package postgresql84)
  • RHEL/CentOS/SL 6: PostgreSQL 8.4
  • Fedora 16, 17: PostgreSQL 9.1
  • Fedora 18: PostgreSQL 9.2
The repository contains many different packages including third party addons. The most common and important packages are (substitute the version number as required):
  • postgresql - client libraries and client binaries
  • postgresql-server - core database server
  • postgresql-contrib - additional supplied modules
  • postgresql-devel - libraries and headers for C language development
  • pgadmin3 - pgAdmin III graphical administration utility

Post-installation

Due to policies for RedHat style distributions, the PostgreSQL installation will not be enabled for automatic start or have the database initialized automatically. To make your database installation complete, you need to perform these two steps:
service postgresql initdb
chkconfig postgresql on
or, on Fedora 19 and other later derived distributions:
postgresql-setup initdb
chkconfig postgresql on

PostgreSQL Yum Repository

If the version supplied by your operating system is not the one you want, you can use the PostgreSQL Yum Repository. This repository will integrate with your normal systems and patch management, and provide automatic updates for all supported versions of PostgreSQL throughout the support lifetime of PostgreSQL.
To use the yum repository, you must first install the repository RPM. To do this, download the correct RPM from the repository RPM listing, and install it with commands like:
rpm -i http://yum.postgresql.org/9.2/redhat/rhel-6-x86_64/pgdg-redhat92-9.2-7.noarch.rpm
Once this is done, you can proceed to install and update packages the same way as the ones included in the distribution.
yum install postgresql92-server postgresql92-contrib
service postgresql-9.2 initdb
chkconfig postgresql-9.2 on
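
After that, a quick sanity check could look like this (the postgres OS user is created by the server package):
service postgresql-9.2 start
su - postgres -c "psql -c 'SELECT version();'"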

Package names in the PostgreSQL yum repository follow the same standard as the ones included in the main repositories, but include the version number, such as:
  • postgresql92
  • postgresql92-server
  • postgresql92-contrib
  • pgadmin3_92

Direct RPM download

If you cannot, or do not want to, use the yum based installation method, all the RPMs that are in the yum repository are available for direct download and manual installation as well.

Cross distribution packages

Generic RPM and DEB packages that provide a server-only distribution are available for some 32 and 64-bit Linux distributions. These packages provide a single set of binaries and consistent packaging across different Linux distributions. They are designed for server installation where a GUI is not available and consistency across multiple distributions is a requirement.
Download the packages from OpenSCG for all supported versions.
Note: The cross distribution packages do not fully integrate with the platform-specific packaging systems.

Graphical installer

Installers are available for 32 and 64 bit Linux distributions and include PostgreSQL, pgAdmin and the StackBuilder utility for installation of additional packages. The PostgreSQL 8.4 installers have been tested with a number of Linux distributions and should work on Ubuntu 6.06 and above, Fedora 6 and above, CentOS/RedHat Enterprise Linux 4 and above and others. The 9.0 and later installers have only been tested on more recent distributions.
Download the installer from EnterpriseDB for all supported versions.
Note: The installers do not integrate with platform-specific packaging systems.

Build from source

The source code can be found in the main file browser. Instructions for building from source can be found in the documentation.

MySQL Backup and Recovery

If your site manages its data with MySQL, then you obviously need to make sure the data is safe. In this blog post, I will show how to create a daily backup automatically. I will also show a continuous data protection plan for MySQL databases. This blog post uses the backup server configured in my previous Secure Backup & Recovery with rsnapshot, rssh and OpenSSH article.
In order to understand this blog, let's define some important terms :
  • Backup server's hostname : angel.company.com
  • First MySQL server's hostname : jedi.company.com
  • Second MySQL server's hostname : r2d2.company.com

Backup Server Setup (part 1 of 2)


The first thing we need to do on the backup server is to install the required software.

ssh angel.company.com
sudo yum -y install openssh-clients rsnapshot mysql vim

Then configure a directory structure where the MySQL backups will be stored. Ideally, you want to create a separate file system for this directory structure, and manage the file system under LVM2 so that you can increase its size dynamically in the future. I'll skip the LVM2 setup for now.

sudo mkdir /export/backup/{conf,data,log,run,scripts}
sudo chown -R root:root /export/backup

Create two wrapper scripts to help the process.

sudo vim /export/backup/scripts/backup_runner.sh
sudo vim /export/backup/scripts/ssh_wrapper.sh

Make sure both scripts are executable and that they don't have any syntax errors in them.

sudo chmod a+x /export/backup/scripts/*.sh
sudo sh -n /export/backup/scripts/backup_runner.sh
sudo sh -n /export/backup/scripts/ssh_wrapper.sh

Configure both MySQL backup configuration files. WARNING : rsnapshot is very sensitive about spaces and tabs. DO NOT USE ANY SPACES IN THE CONFIGURATION FILES (use tabs instead)! You have been warned :)

sudo vim /export/backup/conf/rdbms.mysql.daily
sudo vim /export/backup/conf/rdbms.mysql.hourly

Make sure our backup log files don't consume too much disk space.

sudo vi /etc/logrotate.d/backup

And make sure our new logrotate configuration is still valid.

sudo logrotate -d /etc/logrotate.conf

Create the two MySQL backup scripts. Notice that in each of these two scripts, the variable MYSQL_HOST_LIST is a space-separated list of the FQDNs of all machines running MySQL. The beauty of this is that you can back up all your MySQL machines with a single script! (A rough sketch of such a script is shown after the commands below.)

WARNING : be sure to change the user's password in both scripts!

sudo vim /export/backup/scripts/mysql_backup_daily.sh

sudo vim /export/backup/scripts/mysql_backup_hourly.sh
Protect those scripts because they hold the MySQL backup user's password.
sudo chown root:root /export/backup/scripts/mysql_backup_*.sh
sudo chmod 700 /export/backup/scripts/mysql_backup_*.sh
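
The scripts themselves are not reproduced here. Purely as an illustrative sketch (not the author's actual script), a daily backup script built around MYSQL_HOST_LIST could look roughly like this, with rsnapshot handling the rotation via the rdbms.mysql.* configuration files above; the staging directory is an assumption:

#!/bin/sh
# Illustrative sketch of a daily MySQL dump script -- adjust to your environment.
MYSQL_HOST_LIST="jedi.company.com r2d2.company.com"   # space separated FQDNs
MYSQL_USER="backup"
MYSQL_PASS="change_me"                                # hence the chmod 700 above
DUMP_DIR="/export/backup/data/rdbms.mysql/.staging"   # hypothetical staging directory
DATE=$(date +%Y%m%d)

mkdir -p "$DUMP_DIR"
for HOST in $MYSQL_HOST_LIST; do
    # dump all databases of this host into one compressed file per host
    mysqldump --all-databases --single-transaction \
        -h "$HOST" -u "$MYSQL_USER" -p"$MYSQL_PASS" \
        | gzip > "$DUMP_DIR/$HOST.ALL.$DATE.sql.gz"
done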
We now need to configure the MySQL clients with the proper database backup user.

MySQL Client Configuration

Connect to each MySQL machine in order to create the backup user in its mysql database. Again, don't forget to update the user's password in the SQL commands. Let's start with our first MySQL server.
ssh jedi.company.com
mysql -u root -p
mysql> create user 'backup'@'angel.company.com' identified by 'change_me';
mysql> grant all on *.* to 'backup'@'angel.company.com';
mysql> flush privileges;
mysql> exit;
exit
And now do the same with the other MySQL machine.
ssh r2d2.company.com
mysql -u root -p
mysql> create user 'backup'@'angel.company.com' identified by 'change_me';
mysql> grant all on *.* to 'backup'@'angel.company.com';
mysql> flush privileges;
mysql> exit;
exit

Backup Server Setup (part 2 of 2)

Back on the backup server, execute the mysql command to test whether the new user can connect:
mysql -u backup -p -h jedi.company.com
mysql -u backup -p -h r2d2.company.com
Once that is done, we can configure root's crontab to execute both of these scripts.
sudo crontab -e
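The exact schedule is up to you; illustrative entries (times and paths are assumptions) could be:

# m h dom mon dow command
30 3 * * * /export/backup/scripts/mysql_backup_daily.sh
0  * * * * /export/backup/scripts/mysql_backup_hourly.sh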
Once the backups are done, you will now have the following data in your data folder.
sudo ls -AlFR /export/backup/data/rdbms.mysql/

/export/backup/data/rdbms.mysql/:
total 20
drwxr-xr-x  3 root root 4096 Feb 22 13:30 daily.0/
drwxr-xr-x  4 root root 4096 Feb 22 14:00 hourly.0/
drwxr-xr-x  3 root root 4096 Feb 22 13:46 hourly.1/

/export/backup/data/rdbms.mysql/daily.0:
total 12
drwxr-xr-x 2 root root 4096 Feb 22 13:30 all_servers/

/export/backup/data/rdbms.mysql/daily.0/all_servers:
total 292
-rw------- 1 root root 139900 Feb 22 13:30 jedi.company.com.ALL.20130222.sql.gz
-rw------- 1 root root    985 Feb 22 13:30 jedi.company.com.information_schema.20130222.sql
-rw------- 1 root root 138348 Feb 22 13:30 jedi.company.com.mysql.20130222.sql.gz
-rw------- 1 root root   2247 Feb 22 13:30 jedi.company.com.net2ftp.20130222.sql.gz

/export/backup/data/rdbms.mysql/hourly.0:
total 16
drwxr-xr-x 2 root root 4096 Feb 22 14:00 all_servers/
drwxr-xr-x 2 root root 4096 Feb 22 13:46 prod/

/export/backup/data/rdbms.mysql/hourly.0/all_servers:
total 152
-rw------- 1 root root    985 Feb 22 14:00 jedi.company.com.information_schema.2013.02.22-14:00.sql
-rw------- 1 root root 138354 Feb 22 14:00 jedi.company.com.mysql.2013.02.22-14:00.sql.gz
-rw------- 1 root root   2254 Feb 22 14:00 jedi.company.com.net2ftp.2013.02.22-14:00.sql.gz

/export/backup/data/rdbms.mysql/hourly.0/prod:
total 152
-rw------- 2 root root    985 Feb 22 13:46 jedi.company.com.information_schema.2013.02.22-13:46.sql
-rw------- 2 root root 138356 Feb 22 13:46 jedi.company.com.mysql.2013.02.22-13:46.sql.gz
-rw------- 2 root root   2255 Feb 22 13:46 jedi.company.com.net2ftp.2013.02.22-13:46.sql.gz

/export/backup/data/rdbms.mysql/hourly.1:
total 12
drwxr-xr-x 2 root root 4096 Feb 22 13:46 prod/

/export/backup/data/rdbms.mysql/hourly.1/prod:
total 152
-rw------- 2 root root    985 Feb 22 13:46 jedi.company.com.information_schema.2013.02.22-13:46.sql
-rw------- 2 root root 138356 Feb 22 13:46 jedi.company.com.mysql.2013.02.22-13:46.sql.gz
-rw------- 2 root root   2255 Feb 22 13:46 jedi.company.com.net2ftp.2013.02.22-13:46.sql.gz

Recovery

Should you ever need to recover (and you should practice this before you really have to do it!), simply use one of the generated SQL dumps. For example, if we need to restore the entire database on host jedi.company.com, we would do this (the dump is gzip-compressed, so decompress it on the fly):
zcat jedi.company.com.ALL.20130222.sql.gz | mysql -u backup -p -h jedi.company.com
That is assuming the host was reinstalled as a result of a catastrophic failure or security breach. If you already have your databases on the host, make sure to drop them all before you do that.
 
 
Thanks,
Srinivas

Open Ldap 2.4 server installation and configuration on Centos6.3

HOWTO : OpenLDAP 2.4 Replication on CentOS 6.2

We continue our OpenLDAP 2.4 on CentOS 6.2 series with a description of how to set up replication between two OpenLDAP 2.4 servers. This happens to be the final bullet point in our list of goals:
  1. Install OpenLDAP 2.4.
  2. Configure Transport Layer Security (TLS).
  3. Manage users and groups in OpenLDAP.
  4. Configure pam_ldap to authenticate users via OpenLDAP.
  5. Use OpenLDAP as sudo's configuration repository.
  6. Use OpenLDAP as automount map repository for autofs.
  7. Use OpenLDAP as NFS netgroup repository again for autofs.
  8. Use OpenLDAP as the Kerberos principal repository.
  9. Setup OpenLDAP backup and recovery.
  10. Setup OpenLDAP replication.
Of course the first thing to do in order to be able to replicate our DIT is to have another CentOS machine. So go ahead and install it on a separate computer. We will continue with our two example machines: alice and bob. Alice is the current OpenLDAP server while bob was the client. At the end of this document, bob will be the second OpenLDAP server. In OpenLDAP syncrepl parlance, we have these entities:
  • provider : alice.company.com (a.k.a. master server)
  • consumer : bob.company.com (a.k.a. replica server)
Another important thing to do is to read the OpenLDAP replication chapter in the administrator's guide. The following excerpt is of particular interest:
Syncrepl supports both pull-based and push-based synchronization. In its basic refreshOnly synchronization mode, the provider uses pull-based synchronization where the consumer servers need not be tracked and no history information is maintained. The information required for the provider to process periodic polling requests is contained in the synchronization cookie of the request itself. To optimize the pull-based synchronization, syncrepl utilizes the present phase of the LDAP Sync protocol as well as its delete phase, instead of falling back on frequent full reloads. To further optimize the pull-based synchronization, the provider can maintain a per-scope session log as a history store. In its refreshAndPersist mode of synchronization, the provider uses a push-based synchronization. The provider keeps track of the consumer servers that have requested a persistent search and sends them necessary updates as the provider replication content gets modified.
Replication is handled by an OpenLDAP overlay. Check the slapo-syncprov(5) man page for the provider overlay information. With that in mind, we will set up a refreshAndPersist replication using the delta-syncrepl replication scheme. Note that, as the official documentation says:
As you can see, you can let your imagination go wild using Syncrepl and slapd-ldap(8) tailoring your replication to fit your specific network topology.

Provider Configuration


Setting up delta-syncrepl requires configuration changes on both the master (i.e. provider) and replica (i.e. consumer) servers. We will start by configuring the provider machine (i.e. alice.company.com) and then continue to the consumer machine (i.e. bob.company.com).

So, connect to the provider server.

ssh alice.company.com

On this machine, we need to setup several things :
  1. A cn=module configuration.
  2. An accesslog database to store the accesslog data (i.e. cn=accesslog)
  3. A syncprov overlay over the accesslog database.
  4. Two overlays over our primary database (i.e. dc=company,dc=com)
  5. A new user object to authenticate and fetch the data.
  6. Limits and ACLs to the new object.
Let's configure those items one at a time.

Provider Module Configuration


To configure a module on the provider, we first need to check whether we already have one. Since we have a Kerberos realm and SASL GSSAPI authentication set up, let's use this to simplify our queries. Note that you can always use the cn=admin,dc=company,dc=com RootDN to perform all the tasks in this blog post, but the queries are longer to write.

kinit -p drobilla/admin@COMPANY.COM
ldapsearch -ZLLLb cn=config olcModulePath
The above query did not return the olcModulePath object. So we need to create it. But what is our module path? The cn=module documentation shows us that module names end in « .la ». With that info, a simple rpm query will show us where they're stored on the filesystem.
rpm -ql openldap-servers | grep '\.la$'
/usr/lib/openldap/accesslog.la
/usr/lib/openldap/auditlog.la
/usr/lib/openldap/collect.la
/usr/lib/openldap/constraint.la
/usr/lib/openldap/dds.la
/usr/lib/openldap/deref.la
/usr/lib/openldap/dyngroup.la
/usr/lib/openldap/dynlist.la
/usr/lib/openldap/memberof.la
/usr/lib/openldap/pcache.la
/usr/lib/openldap/ppolicy.la
/usr/lib/openldap/refint.la
/usr/lib/openldap/retcode.la
/usr/lib/openldap/rwm.la
/usr/lib/openldap/seqmod.la
/usr/lib/openldap/smbk5pwd.la
/usr/lib/openldap/sssvlv.la
/usr/lib/openldap/syncprov.la
/usr/lib/openldap/translucent.la
/usr/lib/openldap/unique.la
/usr/lib/openldap/valsort.la
Ok, so we know our olcModulePath is /usr/lib/openldap. We also know that we have quite a bunch of different modules available. Let's write another LDIF file to set the olcModulePath, but also which modules we want to load. Since we need both the accesslog and the syncprov overlays, we might as well load them both right away.
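The LDIF file itself is not reproduced here; a sketch of ~/ldap/module.ldif, reconstructed from the ldapsearch output shown just below, could be:

cat > ~/ldap/module.ldif <<'EOF'
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/lib/openldap
olcModuleLoad: accesslog.la
olcModuleLoad: syncprov.la
EOF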

Load this new configuration.
ldapmodify -aZf ~/ldap/module.ldif

Check to see if it's installed? Then what's in it?

ldapsearch -ZLLLb cn=config dn | grep module
ldapsearch -ZLLLb cn=module{0},cn=config
dn: cn=module{0},cn=config
objectClass: olcModuleList
cn: module{0}
olcModulePath: /usr/lib/openldap
olcModuleLoad: {0}accesslog.la
olcModuleLoad: {1}syncprov.la
Good. We can proceed to the next provider objective.

Provider Accesslog Database


Now that we have the accesslog overlay module loaded, we must create a database in which to store the accesslog data. We of course do this with another LDIF file.

vi ~/ldap/accesslog.ldif 
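The content of the file is not reproduced here; a sketch reconstructed from the ldapsearch output shown a little further down (treat it as an assumption):

cat > ~/ldap/accesslog.ldif <<'EOF'
dn: olcDatabase=hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: hdb
olcDbDirectory: /var/lib/ldap/accesslog
olcSuffix: cn=accesslog
olcRootDN: cn=admin,dc=company,dc=com
olcDbIndex: default eq
olcDbIndex: entryCSN,objectClass,reqEnd,reqResult,reqStart
EOF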

As we can see, this database uses a new directory that must be created first. We also need to drop a DB_CONFIG file in there and fix permissions.

sudo mkdir -p /var/lib/ldap/accesslog
sudo cp `rpm -ql openldap-servers | grep DB_CONFIG` /var/lib/ldap/accesslog/DB_CONFIG
sudo chown -R ldap:ldap /var/lib/ldap

Alright, we can now create the new accesslog database. Note that we're using the hdb instead of the bdb for this database. There's no real reason, choose whichever you prefer.

ldapmodify -aZf ~/ldap/accesslog.ldif

We should now have a new database. Let's see if that's true?

ldapsearch -ZLLLb cn=config dn | grep hdb
dn: olcDatabase={3}hdb,cn=config

What does it contain?

ldapsearch -ZLLLb olcDatabase={3}hdb,cn=config
dn: olcDatabase={3}hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {3}hdb
olcDbDirectory: /var/lib/ldap/accesslog
olcSuffix: cn=accesslog
olcRootDN: cn=admin,dc=company,dc=com
olcDbIndex: default eq
olcDbIndex: entryCSN,objectClass,reqEnd,reqResult,reqStart

Great! Let's continue with our next objective.

Provider Syncprov Overlay Over the Accesslog Database

Our next objective is to setup a syncprov overlay on the new accesslog database. Create this LDIF file :
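The file is not reproduced here; a sketch of ~/ldap/overlay.accesslog.ldif, reconstructed from the ldapsearch output shown below:

cat > ~/ldap/overlay.accesslog.ldif <<'EOF'
dn: olcOverlay=syncprov,olcDatabase={3}hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcSpNoPresent: TRUE
olcSpReloadHint: TRUE
EOF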
And add the new setup to our OpenLDAP server.

ldapmodify -aZf ~/ldap/overlay.accesslog.ldif 

We should now have a new dn: in our olcDatabase={3}hdb,cn=config database.

ldapsearch -ZLLLb olcDatabase={3}hdb,cn=config

dn: olcDatabase={3}hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {3}hdb
olcDbDirectory: /var/lib/ldap/accesslog
olcSuffix: cn=accesslog
olcRootDN: cn=admin,dc=company,dc=com
olcDbIndex: default eq
olcDbIndex: entryCSN,objectClass,reqEnd,reqResult,reqStart

dn: olcOverlay={0}syncprov,olcDatabase={3}hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: {0}syncprov
olcSpNoPresent: TRUE
olcSpReloadHint: TRUE

Sure enough, it's there. We can now continue with our setup.

Provider Overlays On Primary Database


To accomplish this objective, we must of course write another LDIF file in which we will a) set up new indexes on our primary database, b) add the syncprov overlay and c) add the accesslog overlay.

vi ~/ldap/overlay.primary.ldif
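
The file is not reproduced here; a generic delta-syncrepl provider sketch along the lines of the OpenLDAP admin guide (not necessarily the author's exact file, so treat the values as assumptions) could be:

cat > ~/ldap/overlay.primary.ldif <<'EOF'
# a) extra indexes on the primary database
dn: olcDatabase={1}bdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: entryCSN,entryUUID eq

# b) syncprov overlay on the primary database
dn: olcOverlay=syncprov,olcDatabase={1}bdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcSpCheckpoint: 100 10

# c) accesslog overlay pointing at the cn=accesslog database
dn: olcOverlay=accesslog,olcDatabase={1}bdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcAccessLogConfig
olcOverlay: accesslog
olcAccessLogDB: cn=accesslog
olcAccessLogOps: writes
olcAccessLogSuccess: TRUE
olcAccessLogPurge: 07+00:00 01+00:00
EOF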

We then add this new LDIF into our system.

ldapmodify -aZf ~/ldap/overlay.primary.ldif 

Which then gives us two new overlays on the primary database.

ldapsearch -ZLLLb olcDatabase={1}bdb,cn=config dn
dn: olcDatabase={1}bdb,cn=config
dn: olcOverlay={0}syncprov,olcDatabase={1}bdb,cn=config
dn: olcOverlay={1}accesslog,olcDatabase={1}bdb,cn=config

And sure enough, we do have the two new overlays.

Provider Replication User


We need a user for the replication. That user will be used to authenticate the replication server and read the data. Period. Once again, we need an LDIF file.

vi ~/ldap/replication.ldif 
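The replication user's LDIF is not reproduced here; a minimal sketch (the object classes are an assumption, anything with a cn and, later, a userPassword will do):

cat > ~/ldap/replication.ldif <<'EOF'
dn: cn=replication,dc=company,dc=com
objectClass: person
cn: replication
sn: replication
description: bind DN used by the syncrepl consumers
EOF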


Apply this LDIF file.
ldapmodify -aZf ~/ldap/replication.ldif 
This user needs a real password. Make sure to record this password and keep it safe.

ldappasswd -xZ -S cn=replication,dc=company,dc=com

Note that in this example our replication user is named just « replication ». If you plan on having more than one replicated server, then ideally choose a unique name for each replicated server. This way you can audit which machine replicates and when. You can also decide which part of the DIT gets replicated to which machine. But that's another story.

Provider Limits and ACLs to the Replication User


We now need to give access via ACLs and some limits to the new user. Let's do it in two steps, starting with the limits (or the lack thereof, actually :)

vi ~/ldap/limits.ldif
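A sketch of what ~/ldap/limits.ldif could contain, lifting all limits for the replication DN (the exact values are an assumption):

cat > ~/ldap/limits.ldif <<'EOF'
dn: olcDatabase={1}bdb,cn=config
changetype: modify
add: olcLimits
olcLimits: dn.exact="cn=replication,dc=company,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
EOF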

Then apply this LDIF.

ldapmodify -Zf ~/ldap/limits.ldif

Next step is to give read access to the replication user. Before we do this, it's usually a good idea to double-check our current ACL list.

ldapsearch -LLLb cn=config olcAccess

We can now create another LDIF file to update our ACLs. The idea here is to have the same sets of ACLs on both the provider and the consumer. Except that on the consumer, we don't need any ACLs with regards to the replication user (i.e. the one defined in olcSyncRepl:).

vi ~/ldap/consumer.acl.ldif

Enable the ACL changes.

ldapmodify -f ~/ldap/consumer.acl.ldif

Check the ACL listing.

ldapsearch -Zb olcDatabase={1}bdb,cn=config olcAccess
Test to see if we can access our entire DIT with the replication DN?

ldapsearch -xZWD cn=replication,dc=company,dc=com -H ldap://alice.company.com

This is of course crucial. Do not continue with this blog post until you can access the entire DIT with the replication DN. Once it works, we need to setup our consumer OpenLDAP machine.

Consumer Configuration


Consumer OpenLDAP Installation


Install another CentOS 6 machine and connect to it.

ssh bob.company.com

Make sure that ntpd(8) is running and that both alice's and bob's clocks are synchronized. All of your servers' clocks must be tightly synchronized using either NTP (see http://www.ntp.org/ for info on NTP), an atomic clock or some other reliable time reference.

ntpq -p

Then connect to bob.company.com and install a few packages.

sudo yum -y install openldap-servers cyrus-sasl-gssapi pam_krb5 krb5-server-ldap

Configure an empty OpenLDAP server. I've already covered how to do this in a previous blog post, but we now need to change things a little, so we can't really follow the other post; let's do it all over again. The first thing we must do is upgrade the sudo package to get the latest sudo schema. I've explained how to achieve this in another blog post, but I'll show here just what we need, which starts with installing the latest (as of this writing) sudo package from the sudo website. Note that if your machine is 32 bit, then the package would be sudo-1.8.4-5.el6.i386.rpm.

wget http://www.sudo.ws/sudo/dist/packages/Centos/6/sudo-1.8.4-5.el6.x86_64.rpm

Upgrade sudo.

sudo rpm -U ./sudo-1.8.4-5.el6.x86_64.rpm

This sudo package comes with an OpenLDAP schema. Get the path to the file. Keep note of this path as we will include it in our slapd configuration file.

rpm -ql sudo | grep -i openldap
/usr/share/doc/sudo-1.8.5-1.el6/schema.OpenLDAP
We now need to create the config file. It needs the RootDN password, so record the output of the slappasswd(8C) command

slappasswd
New password: 
Re-enter new password: 
{SSHA}JGjPUbxyCn7wa/pt8YM5rzK7s/hUGncW

Plug the above output into the file below. To understand the syncrepl syntax, refer to the official syncrepl documentation.

mkdir ~/ldap
vi ~/ldap/slapd.conf.consumer
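
The consumer slapd.conf is not reproduced here; the interesting part is the syncrepl stanza inside the primary database section. A hedged sketch of a delta-syncrepl refreshAndPersist consumer stanza (the values, especially the credentials, are assumptions):

syncrepl rid=000
  provider=ldap://alice.company.com
  type=refreshAndPersist
  retry="60 +"
  searchbase="dc=company,dc=com"
  bindmethod=simple
  binddn="cn=replication,dc=company,dc=com"
  credentials=REPLICATION_PASSWORD_HERE
  logbase="cn=accesslog"
  logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
  schemachecking=on
  syncdata=accesslog
  starttls=critical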

Create the configuration.

sudo slapcat -f ~/ldap/slapd.conf.consumer -F /tmp -n 0

Remove the old configuration.

sudo rm -rf /etc/openldap/slapd.d/*

Install the new one.

sudo cp -rp /tmp/cn\=config* /etc/openldap/slapd.d
sudo chown -R ldap:ldap /etc/openldap/slapd.d
Check to see if the new configuration is ok?
sudo slaptest -uF /etc/openldap/slapd.d
config file testing succeeded
Install the DB_CONFIG file.
egrep -vi '^$|^#' `rpm -ql openldap-servers | grep DB_CONFIG` > /tmp/DB_CONFIG
sudo mv /tmp/DB_CONFIG /var/lib/ldap/DB_CONFIG

sudo chown -R ldap:ldap /var/lib/ldap

Prepare the log system.

sudo vi /etc/rsyslog.conf

Touch the new log file.

sudo touch /var/log/slapd.log

Make sure the log file doesn't grow to humongous proportions.

sudo vi /etc/logrotate.d/slapd

Restart the rsyslog daemon so that it knows about the changes.

sudo /etc/init.d/rsyslog restart

Edit the slapd(8) system configuration file.

sudo vi /etc/sysconfig/ldap

Start the slapd(8) daemon.
sudo /etc/init.d/slapd start
Make sure the daemon starts when the server boots.
sudo chkconfig slapd on
Configure a system-wide LDAP client configuration. This is to simplify our life and reduce typing later on. Don't worry about the TLS configuration for now. We will configure it later, but it doesn't hurt to have it in the file at the moment. Don't forget that in this blog post, the OpenLDAP server's FQDN is bob.company.com and not alice.company.com as before. This is important because here we don't want to query or modify the provider (i.e. alice.company.com), only the consumer (i.e. bob.company.com).
Check whether our admin user can connect. Double-check the logs to make sure you're not binding to the provider!
ldapwhoami -WD cn=admin,dc=company,dc=com
Enter LDAP Password:
dn:cn=admin,dc=company,dc=com

Consumer TLS Configuration

Ok, let's configure TLS for the consumer. I'll still use a Windows CA for this post.

openssl req -newkey rsa:2048 -keyout `hostname`.key -nodes -out `hostname`.req -subj /CN=bob.company.com/O=Company/C=CA/ST=QC/L=Montreal

Upload the .req file to the CA machine and sign it.

C:\> certreq -submit -attrib "CertificateTemplate:WebServer" -config "caserver.company.com\Company CA" bob.company.com.req bob.company.com.pem

Upload the .pem file to our consumer OpenLDAP server then place both the .pem and .key files into the proper location with the appropriate permissions.

sudo mv bob.company.com.pem bob.company.com.key /etc/pki/tls/certs
sudo chown ldap:ldap /etc/pki/tls/certs/bob.company.com.*
sudo chmod 600 /etc/pki/tls/certs/bob.company.com.key

Grab a copy of the CA's certificate from our provider server.

scp alice.company.com:/etc/pki/tls/certs/companyCA.crt /tmp
sudo mv /tmp/companyCA.crt /etc/pki/tls/certs
sudo chown root:root /etc/pki/tls/certs/companyCA.crt

Configure TLS in our consumer OpenLDAP server.

vi ~/ldap/tls.consumer.ldif
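
The file content is not shown here; a sketch consistent with the certificate locations used above (an assumption):

cat > ~/ldap/tls.consumer.ldif <<'EOF'
dn: cn=config
changetype: modify
replace: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/pki/tls/certs/companyCA.crt
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/pki/tls/certs/bob.company.com.pem
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/pki/tls/certs/bob.company.com.key
EOF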

Add the TLS configuration to the cn=config base.

ldapmodify -WD cn=admin,dc=company,dc=com -H ldapi:/// -f ~/ldap/tls.consumer.ldif

Check if our configuration has been installed?

sudo ldapsearch -LLLY EXTERNAL -H ldapi:/// -b cn=config -s base | grep olcTLS

Check to see if we can connect with TLS (i.e. the « -Z » switch).

ldapwhoami -xZWD cn=admin,dc=company,dc=com -H ldap://bob.company.com

Revisit the LDAP client configuration file to enable the TLS configs.

sudo vi /etc/openldap/ldap.conf

Modify the consumer cn=config and database configurations.

vi ~/ldap/consumer.ldif

Apply the changes.

ldapmodify -WD cn=admin,dc=company,dc=com -H ldapi:/// -f ~/ldap/consumer.ldif 

Check the configuration changes to both the cn=config and the primary database.

sudo ldapsearch -LLLY EXTERNAL -H ldapi:/// -b cn=config "(|(cn=config)(olcDatabase={1}bdb))"

Consumer SASL GSSAPI Configuration


Enable SASL GSSAPI authentication.

vi ~/ldap/consumer.gssapi.ldif

Add the modification to the server.

ldapmodify -xZWD cn=admin,dc=company,dc=com -H ldap://bob.company.com -f ~/ldap/consumer.gssapi.ldif

Change the system-wide OpenLDAP daemon configuration to add a Kerberos keytab.

sudo vi /etc/sysconfig/ldap

Create the OpenLDAP Kerberos keytab, the host's principal key and the autofsclient key while we're in the kadmin dialogue. IMPORTANT : notice how we place the host/ and autofsclient/ principals in /etc/krb5.keytab while the ldap/ principal gets stored in /etc/openldap/krb5.keytab.

sudo kadmin -p drobilla/admin@COMPANY.COM
kadmin:   addprinc -randkey ldap/bob.company.com@COMPANY.COM
kadmin:   ktadd -k /etc/openldap/krb5.keytab ldap/bob.company.com@COMPANY.COM

kadmin:   addprinc -randkey host/bob.company.com@COMPANY.COM
kadmin:   ktadd -k /etc/krb5.keytab host/bob.company.com@COMPANY.COM

kadmin:   addprinc -randkey autofsclient/bob.company.com@COMPANY.COM
kadmin:   ktadd -k /etc/krb5.keytab autofsclient/bob.company.com@COMPANY.COM
kadmin:   exit

Fix permissions on the new Kerberos keytab. The goal here is to let the ldap group be able to read the ldap/ principal stored in the keytab.

sudo chown root:ldap /etc/openldap/krb5.keytab 
sudo chmod 640 /etc/openldap/krb5.keytab
Restart the slapd(8) daemon.

sudo /etc/init.d/slapd restart

Test the SASL GSSAPI authentication.

kdestroy
kinit -p drobilla/admin@COMPANY.COM
ldapwhoami

Consumer DIT Initial Load


One last step before we start the replication is to load the DIT from our provider into our consumer. It's not required, but if your DIT is large, this will save quite a lot of time. To do this, follow these steps:

Connect to the provider.

ssh alice.company.com

Create a new LDIF file with the entire DIT of the provider.

sudo slapcat | tee -a /tmp/provider.slapcat.ldif

Transfer the LDIF file over to the consumer.

scp /tmp/provider.slapcat.ldif bob.company.com:/tmp

Remove the temporary file. After all, it's the entire DIT that's in there in clear text...

sudo rm /tmp/provider.slapcat.ldif

Connect to the consumer.

ssh bob.company.com

Stop slapd.

sudo /etc/init.d/slapd stop

Destroy the database, but save the DB_CONFIG file.

sudo cp /var/lib/ldap/DB_CONFIG /tmp
sudo rm -rf /var/lib/ldap
sudo mkdir /var/lib/ldap
sudo mv /tmp/DB_CONFIG /var/lib/ldap

Load the provider LDIF file into the consumer's database. Notice the « -w » switch to slapadd(8C) which will write syncrepl context information. Once all entries are added, the contextCSN will be updated with the greatest CSN in the database. That's pretty handy in our case :)
sudo slapadd -l /tmp/provider.slapcat.ldif -w

Again, remove the temporary file. After all, it's the entire DIT that's in there in clear text...

sudo rm /tmp/provider.slapcat.ldif

Start the consumer daemon.

sudo /etc/init.d/slapd start

Check to see if the entire DIT is now on the consumer machine?

ldapsearch -xZWD cn=admin,dc=company,dc=com -b dc=company,dc=com -H ldap://bob.company.com

Check to see if the SASL GSSAPI user has the proper ACLs to this DIT on the consumer?

kinit -p drobilla/admin@COMPANY.COM
ldapsearch -ZLLLb dc=company,dc=com -H ldap://bob.company.com

Replication Test


We now have a TLS and SASL GSSAPI enabled OpenLDAP server, bob.company.com, configured as the consumer of our provider machine alice.company.com. Let's see if it works. To test this, connect to the provider and change something. In this example, we will change our test.user's shell.

Connect to the provider server.

ssh alice.company.com

Check the current value for loginShell.
ldapsearch -LLLZb cn=test.user,ou=users,dc=company,dc=com loginShell

loginShell: /bin/bash
Change the loginShell value to /bin/sh.
ldapmodify <<EOF
dn: cn=test.user,ou=users,dc=company,dc=com
changetype: modify
replace: loginShell
loginShell: /bin/sh
EOF

Keep an eye on the slapd.log file. You should now see these lines on the consumer :

slapd[6620]: do_syncrep2: rid=000 cookie=rid=000,csn=20120608192235.282075Z#000000#000#000000
slapd[6620]: syncrepl_entry: rid=000 LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY)
slapd[6620]: syncrepl_entry: rid=000 be_search (0)
slapd[6620]: syncrepl_entry: rid=000 cn=test.user,ou=users,dc=company,dc=com
slapd[6620]: slap_queue_csn: queing 0xa1601040 20120608192235.282075Z#000000#000#000000
slapd[6620]: slap_graduate_commit_csn: removing 0xa16050c8 20120608192235.282075Z#000000#000#000000
slapd[6620]: syncrepl_entry: rid=000 be_modify cn=test.user,ou=users,dc=company,dc=com (0)
slapd[6620]: slap_queue_csn: queing 0xa1601040 20120608192235.282075Z#000000#000#000000
slapd[6620]: slap_graduate_commit_csn: removing 0xa1604a48 20120608192235.282075Z#000000#000#000000

And if you query the consumer, you should see that the loginShell has indeed replicated to the consumer.

ssh bob.company.com

ldapsearch -LLLZb cn=test.user,ou=users,dc=company,dc=com loginShell
loginShell: /bin/sh

Success! :)

Consumer Kerberos Slave Setup


The whole point of replicating our DIT is to have two copies of it should the provider fail. But our DIT also supports our Kerberos infrastructure. We must thus make sure that our consumer can also act as a Kerberos slave should our provider -and Kerberos master- fail.

To enable the consumer machine to become a Kerberos slave, we must of course install the required packages, which we already did (see above). We must then copy several of the provider's Kerberos files over to our consumer. Those files are the Kerberos master key stash and the LDAP stash keyfile.

ssh alice.company.com

sudo scp /var/kerberos/krb5kdc/.k5.COMPANY.COM drobilla@bob.company.com:/tmp
sudo scp /etc/krb5.d/stash.keyfile drobilla@bob.company.com:/tmp
Now connect to the consumer and move the files to their proper locations and fix permissions.
ssh bob.company.com
sudo mv /tmp/.k5.COMPANY.COM /var/kerberos/krb5kdc
sudo mkdir /etc/krb5.d
sudo mv /tmp/stash.keyfile /etc/krb5.d
sudo chown -R root:root /var/kerberos/krb5kdc /etc/krb5.d
Make sure to edit the Kerberos ACL file on the consumer. This file contains a single line.
sudo vi /var/kerberos/krb5kdc/kadm5.acl
*/admin@COMPANY.COM *
Edit the consumer's Kerberos kdc.conf file.
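Assuming the same RHEL-style layout used elsewhere in this guide, the file sits next to the ACL file we just edited. Make it match the provider's kdc.conf (realm section, LDAP database settings and the path to the stash keyfile) :

sudo vi /var/kerberos/krb5kdc/kdc.conf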
Grab a copy of the krb5.conf file on the provider machine.
scp alice.company.com:/etc/krb5.conf /tmp
sudo mv /tmp/krb5.conf /etc
sudo chown root:root /etc/krb5.conf
Modify the database to include several new indexes which will help the Kerberos LDAP lookups. While we're at it, let's also add several other required indexes :)

vi ~/ldap/kerberos.indexes.ldif
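As a sketch of what this file might contain -- the attribute list is an assumption, so index the attributes your KDC and clients actually search on :

dn: olcDatabase={1}bdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: krbPrincipalName eq,pres,sub
-
add: olcDbIndex
olcDbIndex: uidNumber,gidNumber,memberUid eq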
ldapmodify -xZWD cn=admin,dc=company,dc=com -H ldap://bob.company.com -f ~/ldap/kerberos.indexes.ldif
Check to see if we have the new indexes in place?
ldapsearch -ZLLLb olcDatabase={1}bdb,cn=config olcDbIndex
Make sure the krb5kdc daemon is running when the machine boots. Note that we do not run the kadmin daemon on the slave KDC.
sudo chkconfig krb5kdc on

Start the krb5kdc daemon.
sudo /etc/init.d/krb5kdc start

Consumer Backup


Don't forget to back up the consumer now that it's working. See this blog post on how to do just that.

Client Configuration


Now that we have two OpenLDAP servers, we need to configure the client machines to use them both. So perform these configuration changes on all your LDAP client machines :

ssh client.company.com

Change the sudoers LDAP configuration file.

sudo vi /etc/ldap.conf

Change the system LDAP configuration file.

sudo vi /etc/openldap/ldap.conf

Change the client's nslcd configuration file.

sudo vi /etc/nslcd.conf

Change the pam_ldap configuration file.

sudo vi /etc/pam_ldap.conf

Change the Kerberos configuration file.

sudo vi /etc/krb5.conf
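The common theme in all of these files is to list both servers instead of just the provider. As a rough illustration only -- directive names and layout vary between distributions and between these files, so adapt to what is already in them :

# ldap.conf, nslcd.conf and pam_ldap.conf : both URIs, provider first
uri ldap://alice.company.com ldap://bob.company.com

# krb5.conf : declare the slave KDC alongside the master
[realms]
 COMPANY.COM = {
  kdc = alice.company.com
  kdc = bob.company.com
  admin_server = alice.company.com
 }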

As you can see, this requires quite a few changes, so you should probably script them. I usually set up an admin server that runs Apache with the configuration files on it. Clients can thus simply wget them. Easy. Or you can use Puppet. Even better!

With the new client configuration, try shutting down the LDAP and Kerberos services on alice.company.com and see if the clients can still work by using bob.company.com.

Troubleshooting


olcLogLevel


If you're not sure what the syncrepl engine is doing, enable logging for it.

ldapmodify -Z <<EOF
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats sync
EOF
You can do this on both the consumer and the provider.

syncrepl_message_to_entry


If you get this type of error :

syncrepl_message_to_entry: rid=000 mods check (objectClass: value #1 invalid per syntax)

It means that you're missing a schema on the consumer server. Of course, the rid=000 can be different on your server; it's the replication ID configured in the olcSyncrepl: config of the consumer. Compare the schemas on both machines and fix the consumer so that it has exactly the same schemas as the provider. So, the first thing you must do is check the schemas on the provider :

ssh alice.company.com

ldapsearch -ZLLLb cn=schema,cn=config dn
Then connect to the consumer and check the schemas there :
ssh bob.company.com
ldapsearch -ZLLLb cn=schema,cn=config dn
If the consumer is missing some schemas present on the provider, then add those missing schemas to the consumer and try the replication again.

Missing DB_CONFIG, but it's there?


Your logs show these error messages when you start slapd :

slapd[7722]: hdb_db_open: warning - no DB_CONFIG file found in directory /var/lib/ldap/accesslog: (14).#012Expect poor performance for suffix "cn=accesslog".
slapd[7722]: slapd starting
slapd[7722]: <= bdb_equality_candidates: (objectClass) not indexed
slapd[7722]: <= bdb_inequality_candidates: (reqStart) not indexed
But when you look into the /var/lib/ldap/accesslog directory, there is a DB_CONFIG file and the permissions are good.
What's the problem?
Well, it's simply that the hdb database entry carries the wrong « objectClass: olcBdbConfig ». It should be « objectClass: olcHdbConfig ». Notice the small, but very critical, difference?
That's it! 
That means we finished our initial goals :
  1. Install OpenLDAP 2.4.
  2. Configure Transport Layer Security (TLS).
  3. Manage users and groups in OpenLDAP.
  4. Configure pam_ldap to authenticate users via OpenLDAP.
  5. Use OpenLDAP as sudo's configuration repository.
  6. Use OpenLDAP as automount map repository for autofs.
  7. Use OpenLDAP as NFS netgroup repository again for autofs.
  8. Use OpenLDAP as the Kerberos principal repository.
  9. Setup OpenLDAP backup and recovery.
  10. Setup OpenLDAP replication.
The next set of goals is coming. I want to enable Referential Integrity in the LDAP DIT (i.e. when you delete a user, it is also deleted from the various groups he's part of). I'm also interested in pulling information, such as users, passwords and groups, from Active Directory servers and thus removing Samba.