At work I got a new task. At first I thought: no problem, I can
do this in a few minutes. But when I tried to implement it,
I needed nearly two hours.
The problem:
You have a group of 20 or more developers accessing a corporate
Subversion server. All developers are in an Active Directory managed
group and have read-write access. Now a single person should get
read-only access to the repository.
How did I manage it?
As the internal authz_svn Apache authorization module does not
support overriding read-write permissions with read-only permissions,
you cannot use it; I tried different combinations without success.
The only alternative would be to create a new group in the
access list file, but then I would need to sync it with Active Directory.
The solution:
After a short discussion with a colleague, I got an idea.
I remount the SVN repository to a new directory via a bind mount and make it read-only:
mkdir /var/svn-ro
mount -o bind /var/svn /var/svn-ro
mount -o remount,ro /var/svn-ro
You need to add the bind mount to /etc/fstab and the remount call
to /etc/rc.local, as it is not possible to express both in a single
fstab line.
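The two entries could look like this (a minimal sketch with the paths from above):

```
# /etc/fstab -- the bind mount (a ro option here would be silently ignored)
/var/svn   /var/svn-ro   none   bind   0   0

# /etc/rc.local -- remount the bind mount read-only after boot
mount -o remount,ro /var/svn-ro
```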
After that I exported this repository via Apache, restricted to the single user
in a dedicated Active Directory group. And voilà, everything is fine now.
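The Apache side of such an export could look roughly like this. This is only a sketch: the location name, the LDAP URL and the group DN are assumptions, not our real configuration:

```apache
<Location /svn-ro>
    DAV svn
    SVNParentPath /var/svn-ro
    AuthType Basic
    AuthName "Subversion (read-only)"
    AuthBasicProvider ldap
    AuthLDAPURL "ldap://dc.example.com/DC=example,DC=com?sAMAccountName"
    # only members of the dedicated read-only AD group may enter
    Require ldap-group CN=svn-readonly,OU=Groups,DC=example,DC=com
</Location>
```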
Happy coding,
Waldemar
Sunday, May 18. 2014
HP IRF
At work we got two new HP 5500SL switches, which we use to get better
redundancy and performance for our ESXi 5.5 servers. HP got the IRF
technology from H3C.
Here is a short summary of what needs to be configured to get a nice high-availability setup, as the documentation and examples
found on the internet are, as always, not very good:
Prerequisite: same firmware version on both switches and two JD360B 2-port 10GbE Local Connect Modules.
step 1
The local connect modules are not built in yet. Log in on both switches.
step 2
Set the member id for both switches:
[HP-ESX-Master]irf member 1 renumber 1
[HP-ESX-Slave]irf member 1 renumber 2
Then on both:
[HP-All]quit
[HP-All]save
[HP-All]reboot
step 3
Configure IRF MAC persistency, so that the "virtual" switch is always available with the same MAC address.
[HP-ESX-Master]irf mac-address persistent always
Configuration of the priority:
[HP-ESX-Master]irf member 1 priority 32
[HP-ESX-Slave]irf member 1 priority 30
Then on both:
[HP-All]quit
[HP-All]save
[HP-All]reboot
step 4
Put the 10GE local connect modules into the switches and connect them crosswise: master switch port 1 connected to slave switch port 2, and
master switch port 2 connected to slave switch port 1.
step 5
Connect the physical IRF ports with the logical ones.
Shutdown the interfaces:
[HP-ESX-Master]interface Ten-GigabitEthernet 1/1/1
[HP-ESX-Master]shutdown
[HP-ESX-Master]quit
[HP-ESX-Master]interface Ten-GigabitEthernet 1/1/2
[HP-ESX-Master]shutdown
[HP-ESX-Master]quit
[HP-ESX-Slave]interface Ten-GigabitEthernet 2/1/1
[HP-ESX-Slave]shutdown
[HP-ESX-Slave]quit
[HP-ESX-Slave]interface Ten-GigabitEthernet 2/1/2
[HP-ESX-Slave]shutdown
[HP-ESX-Slave]quit
Create logical IRF ports:
[HP-ESX-Master]irf port 1/1
[HP-ESX-Master]port group interface Ten-GigabitEthernet 1/1/1
[HP-ESX-Master]irf port 1/2
[HP-ESX-Master]port group interface Ten-GigabitEthernet 2/1/1
[HP-ESX-Slave]irf port 2/1
[HP-ESX-Slave]port group interface Ten-GigabitEthernet 1/1/2
[HP-ESX-Slave]irf port 2/2
[HP-ESX-Slave]port group interface Ten-GigabitEthernet 2/1/2
Activate IRF:
[HP-ESX-Master]irf-port-configuration active
Before you reboot now, be sure you have no network cable connected between the master and slave switch, otherwise you get a loop!
[HP-All]quit
[HP-All]save
[HP-All]reboot
After the reboot you should only reach the master switch with the configured IP address:
[HP-ESX-Master]display irf
Switch Role Priority CPU-Mac Description
+1 Master 32 xxxx-yyyy-zzzz -----
2 Slave 30 xxxx-yyyy-zzzz -----
--------------------------------------------------
* indicates the device is the master.
+ indicates the device through which the user logs in.
The Bridge MAC of the IRF is: xxxx-yyyy-zzzz
Auto upgrade : yes
Mac persistent : always
Domain ID : 0
[HP-ESX-Master]display irf topology
Topology Info
-------------------------------------------------------------------------
IRF-Port1 IRF-Port2
Switch Link neighbor Link neighbor Belong To
1 UP 2 UP 2 xxxx-yyyy-zzzz
2 UP 1 UP 1 xxxx-yyyy-zzzz
[HP-ESX-Master]display irf configuration
MemberID NewID IRF-Port1 IRF-Port2
1 1 Ten-GigabitEthernet1/1/1 Ten-GigabitEthernet1/1/2
2 2 Ten-GigabitEthernet2/1/1 Ten-GigabitEthernet2/1/2
Configuration of MAD (Multi-Active Detection)
Our switches do not support MAD BFD. For MAD LACP we would need another switch with MAD LACP extensions, which we do not have.
So we use the third possible method, MAD ARP, with a direct network cable connection between the master and slave switch.
For MAD ARP you need to change the MAC address persistence:
[HP-ESX-Master]undo irf mac-address persistent
Now we create a VLAN and activate MAD ARP for this link:
[HP-ESX-Master]vlan 7
[HP-ESX-Master-vlan7]description ARP-MAD-VLAN
[HP-ESX-Master-vlan7]port GigabitEthernet 1/0/48
[HP-ESX-Master-vlan7]port GigabitEthernet 2/0/48
[HP-ESX-Master-vlan7]quit
[HP-ESX-Master]interface vlan-interface 7
[HP-ESX-Master]ip address 192.168.168.168 24
[HP-ESX-Master]mad arp enable
[HP-ESX-Master]save
Now the switches can be connected with a network cable on port 48.
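To check afterwards that MAD ARP is active, Comware's display mad command should help (the exact output wording may differ by firmware version):

```
[HP-ESX-Master]display mad verbose
```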
Configuration of an aggregation port for connection to a Linux server
Example for the connection to a Linux server; we use LACP (dynamic link aggregation):
[HP-ESX-Master]vlan 9
[HP-ESX-Master-vlan9]description ESX-DMZ
[HP-ESX-Master-vlan9]port GigabitEthernet 1/0/17
[HP-ESX-Master-vlan9]port GigabitEthernet 2/0/17
[HP-ESX-Master-vlan9]quit
[HP-ESX-Master]interface bridge-aggregation 17
[HP-ESX-Master-Bridge-Aggregation17]description fw-dmz
[HP-ESX-Master-Bridge-Aggregation17]link-aggregation mode dynamic
[HP-ESX-Master-Bridge-Aggregation17]quit
[HP-ESX-Master]interface GigabitEthernet 1/0/17
[HP-ESX-Master-GigabitEthernet1/0/17]port link-aggregation group 17
[HP-ESX-Master-GigabitEthernet1/0/17]quit
[HP-ESX-Master]interface GigabitEthernet 2/0/17
[HP-ESX-Master-GigabitEthernet2/0/17]port link-aggregation group 17
[HP-ESX-Master-GigabitEthernet2/0/17]quit
[HP-ESX-Master]interface bridge-aggregation 17
[HP-ESX-Master-Bridge-Aggregation17]port access vlan 9
[HP-ESX-Master-Bridge-Aggregation17]quit
Now the Linux system can be connected with two network cables to Port 17 on both switches.
Then Ethernet bonding on Debian is configured via /etc/network/interfaces:
auto bond0
iface bond0 inet static
address 192.168.1.1
netmask 255.255.255.0
bond-slaves eth0 eth4
bond-mode 802.3ad
bond-miimon 100
bond-xmit-hash-policy layer2+3
Then execute:
ifup bond0
On the switch it looks like this:
[HP-ESX-Master]display link-aggregation summary
Aggregation Interface Type:
BAGG -- Bridge-Aggregation, RAGG -- Route-Aggregation
Aggregation Mode: S -- Static, D -- Dynamic
Loadsharing Type: Shar -- Loadsharing, NonS -- Non-Loadsharing
Actor System ID: 0x8000, xxxx-yyyy-zzzz
AGG AGG Partner ID Select Unselect Share
Interface Mode Ports Ports Type
-------------------------------------------------------------------------------
BAGG17 D 0xffff, xxxx-yyyy-zzzz 2 0 Shar
..
Configuration of an aggregation port for an ESXi host
Example for the connection of an ESXi host to the DMZ VLAN; here we use static link aggregation without LACP.
The teaming mode in ESXi must be configured before the switches are configured.
[HP-ESX-Master]vlan 9
[HP-ESX-Master-vlan9]port GigabitEthernet 1/0/18
[HP-ESX-Master-vlan9]port GigabitEthernet 2/0/18
[HP-ESX-Master-vlan9]quit
[HP-ESX-Master]interface bridge-aggregation 18
[HP-ESX-Master-Bridge-Aggregation18]description esx1-dmz
[HP-ESX-Master-Bridge-Aggregation18]quit
[HP-ESX-Master]interface GigabitEthernet 1/0/18
[HP-ESX-Master-GigabitEthernet1/0/18]port link-aggregation group 18
[HP-ESX-Master-GigabitEthernet1/0/18]quit
[HP-ESX-Master]interface GigabitEthernet 2/0/18
[HP-ESX-Master-GigabitEthernet2/0/18]port link-aggregation group 18
[HP-ESX-Master-GigabitEthernet2/0/18]quit
[HP-ESX-Master]interface Bridge-Aggregation 18
[HP-ESX-Master-Bridge-Aggregation18]port access vlan 9
[HP-ESX-Master-Bridge-Aggregation18]quit
After that you can connect the ESXi host with two network cables to both switches on Port 18.
The link aggregation summary on the switch then looks like:
[HP-ESX-Master]display link-aggregation summary
Aggregation Interface Type:
BAGG -- Bridge-Aggregation, RAGG -- Route-Aggregation
Aggregation Mode: S -- Static, D -- Dynamic
Loadsharing Type: Shar -- Loadsharing, NonS -- Non-Loadsharing
Actor System ID: 0x8000, xxxx-yyyy-zzzz
AGG AGG Partner ID Select Unselect Share
Interface Mode Ports Ports Type
-------------------------------------------------------------------------------
BAGG18 S none 2 0 Shar
..
To use the full performance of 2 GBit/s with vMotion, you should take a look at
VMware KB 2007467.
This only works since ESXi 5.5; with ESXi 5.1 there is a bug which prevents any vMotion while one switch is in maintenance or out of order.
Furthermore, you should ignore the warnings about using IP hash; IP hash is required with HP IRF technology.
After the configuration you should verify that everything works by disconnecting the power of one switch while doing some machine migrations.
For measuring the performance we used SNMP with PRTG; Cacti didn't work well.
Saturday, April 5. 2014
grub2 RAID fun with Debian wheezy
A colleague of mine ordered a new root server. The server has 4x 500 GB disks, but if you want root access, the provider only installs Debian on the first disk, without software RAID. The provider was supposed to install onto a predefined partition layout, which would have allowed installing Debian on a free first partition. In reality this did not happen. The layout looked like:
Device     Boot      Start        End     Blocks  Id  System
/dev/sda1             2048       4095       1024  83  Linux
/dev/sda2  *          4096     206847     101376  83  Linux                 -> /boot
/dev/sda3           208894  976771071  488281089   5  Extended
/dev/sda5           208896    8593407    4192256  82  Linux swap / Solaris
/dev/sda6          8595456   12787711    2096128  83  Linux                 -> /
/dev/sda7         12789760  111482879   49346560  83  Linux                 -> /tmp
/dev/sda8        111484928  976771071  432643072  83  Linux                 -> /home
As an old Windows NT user, with C: for the system and D: for the data, I did not like this layout.

On a test system I developed two shell scripts which allow installing Debian on a software RAID level 5. The installation was done in three steps:
1. http://www.waldemar-brodkorb.de/scripts/grub2-debian-wheezy.sh is used to install Debian 7 in partition 8 && reboot
2. With parted, partprobe and fdisk, modify the partition table until /dev/sda1 is 20 GB in size and the intermediate system lives on
/dev/sda5 in an extended partition at the end of the disk. Don't forget to update /etc/fstab and /boot/grub/grub.cfg. Use grub-install /dev/sda to reinstall grub into the master boot record. If any errors occur, try partprobe or kpartx to inform the kernel about the new partition table && reboot
3. http://www.waldemar-brodkorb.de/scripts/grub2-raid-debian-wheezy.sh is used to install Debian 7 on the first partition on a software RAID 5 && reboot
After all this is done, you can remove the extended partition and create two new partitions for swap and data (D:) for RAID 5 usage. On top, configure encryption for /data and everything is fine.
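This final step could look roughly like the following. A sketch only, with example device names that are not taken from the scripts:

```shell
# create two RAID 5 arrays over the remaining partitions (example devices)
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]2   # swap
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3   # data (D:)
mkswap /dev/md1 && swapon /dev/md1
# encrypt the data array and mount it as /data
cryptsetup luksFormat /dev/md2
cryptsetup luksOpen /dev/md2 data
mkfs.ext4 /dev/mapper/data
mkdir -p /data && mount /dev/mapper/data /data
```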
I am asking myself why the combination of grub2 and RAID is so badly documented. The tests on the test machine with 4 disks resulted in working setups with RAID 0, RAID 1, RAID 5, RAID 6 and RAID 10, and the system still booted after disabling 1-2 disks, depending on the RAID level used.
Only for RAID 6 do you have to install grub2 from Debian/sid, otherwise the system does not boot after a disk failure (bug in grub).
Maybe I will create a YAIFD to automate the process and eliminate the error-prone repartitioning tasks.
(In 2005 I created a similar project called YAIFO to install OpenBSD on a root server with only SSH access: https://github.com/jedisct1/yaifo)
Happy booting!
Wednesday, April 2. 2014
Puppet with LDAP and a firewall class
At work a colleague and I implemented a firewall class for our Puppet system, which automatically configures the local firewall of our Debian servers. The information about which ports should be opened is saved in LDAP attributes. Unfortunately, Puppet does not support any data structures other than simple key-value pairs. To create some more complex iptables rules, we worked a little bit with ERB templates and manifest magic.
From the manifest:
class firewall{
$values = split($tcp_ip_ports, ',')
...
}
The LDAP attribute contains, for example:
tcp_ip_ports=192.168.1.23:3306,192.168.1.42:443
In the template we then do the following with the values array:
<% if @values -%>
<% @values.each do |val| -%>
<% ip = scope.function_regsubst([val, '^(\d+\.\d+\.\d+\.\d+):\d+$', '\1']) -%>
<% port = scope.function_regsubst([val, '^\d+\.\d+\.\d+\.\d+:(\d+)$', '\1']) -%>
-A INPUT -p tcp -s <%= ip %> --dport <%= port %> -j ACCEPT
<% end -%>
<% end -%>
The generated rules are read by iptables-restore.
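What the template produces can be illustrated with a plain shell sketch (the attribute value is the example from above; this is only an illustration, not part of the Puppet code):

```shell
# expand "ip:port,ip:port" pairs into iptables rules, like the ERB template does
tcp_ip_ports="192.168.1.23:3306,192.168.1.42:443"
IFS=','
for val in $tcp_ip_ports; do
  ip=${val%:*}       # everything before the last colon
  port=${val##*:}    # everything after the last colon
  echo "-A INPUT -p tcp -s $ip --dport $port -j ACCEPT"
done
```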
So it is really possible to use more complex data structures in LDAP once you know the template syntax and how to call functions inside templates. We found very few examples for this on the internet.
Have fun.
Saturday, March 22. 2014
OpenBSD authpf SSH gateway
To reduce the attack surface of our infrastructure at work via secure shell, I installed a dedicated secure shell gateway. As a basis I used OpenBSD/amd64 5.4, virtualized under VMware ESXi 5. The server is connected to the internal DMZ, and the central firewall forwards all traffic for the official address of the SSH gateway to the internal DMZ address.
For the use of authpf we create local users:
adduser -s -class authpf -batch foo nogroup "Foo Bar"
The next step is to add the user's public SSH key to /home/foo/.ssh/authorized_keys and to create the user-specific firewall rules and the message shown on the login screen. In the following example the external TCP port 3389 is forwarded over the SSH gateway to an internal machine in the DMZ, to allow remote desktop access to a Windows machine:
pass in on em0 inet proto tcp from $user_ip to em0 port 3389 rdr-to 192.168.xxx.yyy port 3389
pass out on em0 inet proto tcp from $user_ip to 192.168.xxx.yyy port 3389 nat-to em0
( /etc/authpf/users/foo/authpf.rules )
In this case the SSH gateway is in the same Ethernet segment as the Windows machine!
The user then gets the following message after login:
You can now access the following service:
RDP to Windows machine: rdesktop foo@ssh-gateway
( /etc/authpf/users/foo/authpf.message )
Warning: there is a difference between Linux iptables and OpenBSD pf. All firewall rules are evaluated and the last matching rule wins, unless you use the keyword "quick", in which case evaluation stops at that rule.
To look at the rules created for a logged-in user, find out the PID via ps and query the user's anchor:
ps axw|grep authpf
pfctl -a "authpf/foo(xxx)" -s rules
With this solution you can give external companies or employees access to internal services, without opening those services to the whole world.
I used authpf for the first time in 2002, while living in a hacker residential community in Bonn. It took some time until I first used it in production.
Puffy rockz!