LXC firewall logging and udev upgrade in Ubuntu

Today I’m going to write about a couple of major gotchas with LXC. These issues are documented in various places, but I wanted to put all the relevant information together in one place to make it easier for people.

Before going any further, it’s important to note that I created my LXC container with the official Ubuntu template from the latest “stable” LXC release, i.e. I downloaded the tarball and put the template in the correct place, as Ubuntu 10.04’s LXC package doesn’t contain said template. This helps minimise all sorts of problems, especially ones related to the LXC console crashing and the like.

Firstly, you will find when running “apt-get upgrade” (if you have Lucid updates enabled in /etc/apt/sources.list) that you get this error on upgrading udev:-

mknod: `/lib/udev/devices/ppp': Operation not permitted
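The reason mknod fails is that the container itself is not allowed to create device nodes: Ubuntu’s LXC setup drops the mknod capability (among others) from the container. The line below is only illustrative of the kind of config entry involved; the exact capability list depends on your template and container config:

```
# Illustrative LXC container config entry (capability list will vary):
lxc.cap.drop = mknod sys_module mac_admin
```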

Continue reading LXC firewall logging and udev upgrade in Ubuntu

Converting to GPT in Ubuntu

GPT stands for “GUID Partition Table” and it refers to a relatively new format for disk partition tables. It was designed to get around the limitations of the MBR format: with GPT you can have more than 4 primary partitions, and partitions can be larger than 2.2 terabytes. GPT is part of the EFI standard but it can also be used on standard BIOS-only machines, i.e. most non-Mac PCs. Converting to GPT will allow you to future-proof your partitions. 2 TB disks are becoming common, and it won’t be long before the 2.2 TB limit of MBR stops people using all their disk space in one partition, so in the name of usability and flexibility GPT is the way forward.
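The 2.2 TB figure falls straight out of the MBR on-disk format: partition start and size are stored as 32-bit sector counts, and sectors are traditionally 512 bytes. A quick sanity check:

```shell
# MBR stores partition sizes as 32-bit sector counts; with 512-byte
# sectors that caps a partition at 2^32 * 512 bytes.
max_bytes=$(( 2**32 * 512 ))
echo "$max_bytes bytes"    # 2199023255552 bytes, i.e. ~2.2 TB (decimal)
```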

Continue reading Converting to GPT in Ubuntu

Partition alignment largely a moot point now

To summarise: it is very important for performance on recent disks that partitions are aligned on a 1 MB/2048-sector boundary. This stops data from sitting astride physical blocks and killing disk performance. It is especially important with first-generation SSDs, which have poor write performance anyway, and will save your SSD from an early demise (flash memory has limited write cycles).
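The arithmetic behind the rule: 2048 sectors of 512 bytes is exactly 1 MiB, so a partition is aligned whenever its start sector is a multiple of 2048. A small sketch (the sector numbers are just illustrative):

```shell
# A start sector is 1 MiB-aligned when (sector * 512) divides evenly by
# 2048 * 512 = 1048576 bytes; equivalently, when the sector number is a
# multiple of 2048.
is_aligned() { [ $(( $1 % 2048 )) -eq 0 ]; }

is_aligned 2048 && echo "2048 is aligned"     # modern default start sector
is_aligned 63 || echo "63 is misaligned"      # old DOS-era default
```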

Windows Vista and above will use the 2048-sector alignment, as will Ubuntu, so it isn’t necessary to worry about this issue any more unless you are installing Windows XP. Mac OS X is the big loser in all of this, as it doesn’t care about alignment beyond 4k, which may or may not work well depending on the specific block size of your HD/SSD.

Continue reading Partition alignment largely a moot point now

Iptables-apply or how to avoid unnecessary site visits when changing firewall configuration

Today’s post is definitely of the short and sweet variety. I happened across the file list for iptables the other day and noticed a binary I had not come across before: “iptables-apply”. Iptables-apply is a script that applies firewall rules and then waits a configurable amount of time for user input to confirm the changes were successful. In other words, if you aren’t a perfect admin (who is, right?) and manage to accidentally lock yourself out by getting an iptables rule wrong, iptables-apply will automatically revert to the previous set of rules and you’ll get access back.

Could’ve saved me literally some diesel over the past few years that one!

From the iptables-apply man page:

iptables-apply   will  try  to  apply  a  new  ruleset  (as  output  by
iptables-save/read by iptables-restore) to iptables,  then  prompt  the
user  whether the changes are okay. If the new ruleset cut the existing
connection, the user will not be able to answer affirmatively. In  this
case,  the  script rolls back to the previous ruleset after the timeout
expired. The timeout can be set with -t.

This has an advantage over Shorewall, which will only keep existing connections open when new rules are applied. If you happen to lose connectivity, tough luck: Shorewall will obediently block further connections on your borked firewall.
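The confirm-or-rollback behaviour the man page describes is easy to sketch generically. This is not iptables-apply’s actual code, just the shape of the idea, demonstrated on a plain file standing in for the ruleset:

```shell
# Apply a candidate "ruleset" file; roll back unless the user confirms
# within the timeout (mirrors the behaviour iptables-apply provides).
apply_with_rollback() {
  local current="$1" candidate="$2" timeout="$3" answer
  cp "$current" "$current.bak"      # save the known-good ruleset
  cp "$candidate" "$current"        # apply the new one
  if read -r -t "$timeout" -p "Keep new rules? [y/N] " answer \
     && [ "$answer" = "y" ]; then
    rm "$current.bak"               # confirmed: keep the new rules
  else
    mv "$current.bak" "$current"    # no answer or declined: roll back
  fi
}
```

With the real tool, something like `iptables-apply -t 60 /etc/iptables.up.rules` applies the saved ruleset and rolls back after 60 seconds without confirmation; the -t flag is the timeout mentioned in the man page, and the rules path is just an example.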

Transparent bridging firewalls

The commands in this article can be used on any Ubuntu/Debian machine.

A transparent bridging firewall is a firewall which can be inserted anywhere on a network, but usually between the network segment containing internet access and the rest of a LAN. Generally they are used to silently police and log traffic from the network to the internet and vice versa, the main advantage being that they can easily be inserted and removed without any network reconfiguration.

Further to this, the segment between and including the bridged firewall and the internet router can be considered a DMZ where internet-facing servers can be placed. Personally I think it is a good idea to place all servers in this no man’s land, as they are as likely to come under attack from Windows clients on their own LAN as from any hacker on the internet. The bridged firewall provides protection from both sides.
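As a taste of the bridge half of such a setup (the full configuration is in the article itself), a bridge joining two NICs can be declared in /etc/network/interfaces with the bridge-utils package installed. The interface names here are assumptions:

```
# /etc/network/interfaces sketch: br0 bridges the two physical NICs.
# Requires the bridge-utils package; eth0/eth1 are placeholders.
auto br0
iface br0 inet manual
    bridge_ports eth0 eth1
    bridge_stp off
    bridge_fd 0
```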

Continue reading Transparent bridging firewalls

Ubuntu as a wireless 80211g/n access point/router

Following relatively recent improvements in the Linux wireless stack and driver support, it is now possible to set up a Linux machine as an access point, even if you don’t have an Atheros chipset (historically a requirement). Support is patchy, but I would say there is a good chance you can do this if you have purchased a laptop with built-in wireless in the last 2 years. It is even possible to set one up with a USB wireless adapter (which even Madwifi couldn’t do) if you have a Ralink chipset.
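The user-space piece that does the access-point work on top of these drivers is hostapd. A minimal sketch of a hostapd.conf for an nl80211-capable card; the interface name, SSID and passphrase are placeholders:

```
# Minimal 802.11g WPA2 access point; all values below are examples only.
interface=wlan0
driver=nl80211
ssid=ExampleNet
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=ChangeMe123
rsn_pairwise=CCMP
```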

Why would you want to do this? Well, there aren’t that many reasons considering ISPs routinely hand out wireless routers these days, but I will give you a couple:-

Continue reading Ubuntu as a wireless 80211g/n access point/router

Filtering POP3 mailboxes with fdm

Place the following in /usr/local/bin/extractlastip.pl, use the /etc/procmailrc below, and make the Perl script executable.


#!/usr/bin/perl
use Regexp::Common qw/net/;

while (<>) {
    /$RE{net}{IPv4}{dec}{-keep}/ and $last = $1
        unless /127\.0\.0\.1|(10\.\d+|172\.(1[6-9]|2\d|3[0-1])|192\.168)(\.\d+){2}/;
}

print $last, "\n";
# Global procmail definitions
# Define a file for procmail to send its log information.
# Make sure procmail verbose logging is turned off.
# Define a new line character for use in procmail LOG entries.
# note: the quote spanning two lines below is deliberate.
# Directory where we will store mail folders
# Note: This directory MUST exist!
#Mail folder for incoming whitelisted listmail
# Location of formail on our system. (for use in procmail actions since
# those typically need a shell meta pattern in procmail action lines to work as intended)
# Location of file containing From: addresses of people we correspond with on a regular basis
#Location of a folder containing blacklisted email.
# Uncomment this if you would rather just delete the blacklisted email.
# Location of a file containing regular expressions of patterns that we don't want.
# to see in the Subject: From: or Reply-to: headers
# Location of file containing To: addresses we have given to news letters
# or web sites that map to my real account via sendmail aliases.
# Capture the message ID string (if any) for future reference in log entries.
* ^Message-ID:
{ MESSAGEID=`${FORMAIL} -cx "Message-ID:" |sed -e 's/[ \t]\{1,\}//g'` }
:0 E
{ MESSAGEID='none' }
# Sample procmail recipe to enumerate the Received: headers, and store them
# in the ${RECEIVEDHEAD} variable. Note the backticks that launch an embedded
# shell script.
:0 W
* H ?? 1^1 ^Received:
RECEIVEDHEAD=`${FORMAIL} -X "X-Originating-IP" -X "Received" | /usr/local/bin/extractlastip.pl`
# Sample procmail recipe which will extract the IPv4 address from the first
# Received: header. This could be adapted if you have several internal
# servers through which the mail passes.
# Also, the header IP extraction in this recipe is assuming that the header line was
# generated by sendmail. If you are using another server, you may need to adjust
# the regular expression to accommodate that.
# Initialize the SOURCEIP variable
* RECEIVEDHEAD ?? [0-9]+\.[0-9]+\.[0-9]+\.[0-9]+
LOG="[$$]$_: Extracted IP ${SOURCEIP} from Received: headers.${NL}"
:0 E
{ LOG="[$$]$_: Failed to find any source IP in the first Received: header.${NL}" }
# Sample procmail recipe which will generate the reverse IPv4 from
# the SOURCEIP, for use in blocklist lookups.
# It will also verify that the number we are looking at is a real Internet
# address.
# Initialize the SOURCEIPREV variable
# Check for valid IPv4 address range.
# Then if the address is not an IANA non-routable address
# generate the reverse IP for use in subsequent DNS lookups.
# Build a procmail style regular expression to test for a valid IPv4 range.
# Build a procmail style regular expression to test for IPv4 ranges that should not be used on the Internet.
# These are based on RFC-3330 Para 3 summary table.
# Note: These expressions should be periodically verified and updated as needed
# Combine the above into one regular expression.
# Note: IP is included in network defined above
#       as part of the CLASSA regular expression variable.
* ! SOURCEIP ?? ^(000\.000\.000\.000)$
* $ ! SOURCEIP ?? ^${RFC_3330_INVALID}$
* SOURCEIP ?? ^[0-9]+\.[0-9]+\.[0-9]+\.\/[0-9]+
{ QUAD4=${MATCH} }
* SOURCEIP ?? ^[0-9]+\.[0-9]+\.\/[0-9]+
{ QUAD3=${MATCH} }
* SOURCEIP ?? ^[0-9]+\.\/[0-9]+
{ QUAD2=${MATCH} }
* SOURCEIP ?? ^\/[0-9]+
{ QUAD1=${MATCH} }
LOG="[$$]$_: IP ${SOURCEIP} is a valid IPv4 address${NL}"
:0 E
LOG="[$$]$_: IP ${SOURCEIP} is an IANA Non-Routable IPv4 address${NL}"
:0 E
LOG="[$$]$_: Error - ${SOURCEIP} has an invalid range for an IPv4 address.${NL}"
# Here is another example of a more complex blocklist lookup technique
# which will lookup an IP on zen.spamhaus.org, decode the response, and
# tag the email.
# References:
# http://www.spamhaus.org/zen/index.lasso
# http://www.spamhaus.org/faq/answers.lasso?section=DNSBL%20Technical#200
SPAMHAUSLOOKUP=`host ${SOURCEIPREV}.zen.spamhaus.org`
* SPAMHAUSLOOKUP ?? 127\.0\.0\.([2-9]|1[01])$
# SBL Spamhaus Maintained
* SPAMHAUSLOOKUP ?? 127\.0\.0\.2$
# --- reserved for future use
* SPAMHAUSLOOKUP ?? 127\.0\.0\.3$
# XBL CBL Detected Address
* SPAMHAUSLOOKUP ?? 127\.0\.0\.4$
# XBL NJABL Proxies (customized)
* SPAMHAUSLOOKUP ?? 127\.0\.0\.5$
# XBL reserved for future use
* SPAMHAUSLOOKUP ?? 127\.0\.0\.6$
# XBL reserved for future use
* SPAMHAUSLOOKUP ?? 127\.0\.0\.7$
# XBL reserved for future use
* SPAMHAUSLOOKUP ?? 127\.0\.0\.8$
# --- reserved for future use
* SPAMHAUSLOOKUP ?? 127\.0\.0\.9$
# PBL ISP Maintained
* SPAMHAUSLOOKUP ?? 127\.0\.0\.10$
# PBL Spamhaus Maintained
* SPAMHAUSLOOKUP ?? 127\.0\.0\.11$
{ SPAMHAUSLOG="${SPAMHAUSLOG}PBL-SpamHaus Maintained, " }
SPAMHAUSLOG=`echo "${SPAMHAUSLOG}" |sed -e "s/, $/\n\tSee: http:\/\/www\.spamhaus\.org\/query\/bl\?ip=${SOURCEIP}/"`
LOG="[$$]$_: Result codes: ${SPAMHAUSLOG}${NL}"
:0 f
|${FORMAIL} -A "X-blocklists: ${SOURCEIP} found in SpamHaus. Blocklist lookup results: ${SPAMHAUSLOG}"

Obviously this is open to abuse, because a spammer could add additional Received: headers to throw the recipe off the scent; however, I can’t see that happening unless this becomes a popular way of filtering. Also, if there are any non-RFC-compliant SMTP servers between the spammer and you, they may mess with the headers, which could also screw the whole thing up. You can’t please all the people all the time.

Update 09/09/10 – While this is an interesting and educational exercise, it’s important to point out that the protection afforded by default in Ubuntu, using Amavisd-new and Spamassassin, will automatically do all this and more. Spamassassin checks zen.spamhaus.org by default and additionally scans every IP in the Received: headers against this blacklist (and many more), except for “trusted” and reserved addresses. By default in Ubuntu, Spamassassin will assume every IP in the headers except your own is untrusted.

Remove residual config files in Ubuntu – A one liner

I have spent literally hours over the last year or two searching for an elegant way to remove configuration files left over from removed packages, from the command line, in Ubuntu.

Googling would turn up a frustrating list of solutions that involved installing extra packages, or using a complicated command line or script; solutions I would never be happy with, so I would redo the search each time I wanted to perform the same task, in the hope of finding something better.

In the end Aptitude and Xargs were my friends. Without further ado ….
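The exact one-liner is behind the link, but the general shape of such solutions is to list packages in dpkg state “rc” (removed, config files remaining) and feed the names to a purge command:

```shell
# Filter "dpkg -l" output down to packages in state "rc": removed, but
# with residual configuration files still on disk.
filter_residual() { awk '/^rc/ {print $2}'; }

# Then purge them, e.g.:
#   dpkg -l | filter_residual | xargs -r sudo dpkg --purge
# or, using aptitude's search term for residual-config packages:
#   sudo aptitude purge '~c'
```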

Continue reading Remove residual config files in Ubuntu – A one liner

Remotely upgrading a server from 32 to 64 bit linux

This post isn’t designed to be a “how to”, merely an overview of how I achieved it. It is possible to do this without any physical intervention, but in practice I have had to visit site at least once, to fix a boot error, on every one I have done.

Disclaimer:- When attempting this, having some sort of remote access solution that will give access to the server even when it won’t boot is desirable, i.e. BMC, DRAC or KVM over IP. Obviously resizing and deleting partitions and file systems is very dangerous, so you need to be ultra careful and ultra sure you understand the process and exactly what you are doing at each step. It may also be helpful to draw the partition layout at each stage so you have a clear view of what is happening. Don’t come crying to me when it all blows up in your face. You have been warned!

Continue reading Remotely upgrading a server from 32 to 64 bit linux

The merits of LXC containers

First of all, what are containers? In this context we are referring to Linux running inside a “containerised” environment, i.e. an environment which is, to all intents and purposes, isolated from the main operating system. The containerised environment doesn’t have its own kernel or virtualised hardware, and runs at native speeds because of this.

In what ways could a containerised environment be useful and desirable from a business perspective? The first use that springs to mind is being able to easily move your dedicated server from one hoster to another (which is the way I use it). You can simply “rsync” the contents of the container from the source hoster, then take the container down, run another quick rsync, and start the container at the new hoster. This is really useful, saving the hours and possibly days of rebuilding the server at the new hoster in a more conventional manner.
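The two-pass move described above might look something like this; the hostname, container name, and use of plain rsync over SSH are all assumptions for illustration, not the post’s exact procedure:

```shell
# Sketch of the two-pass container migration (run at the NEW hoster).
# "old.example.com" and the container name "web1" are placeholders.
migrate_container() {
  local src="root@old.example.com"
  local path="/var/lib/lxc/web1/rootfs/"
  # Pass 1: bulk copy while the container is still running at the old hoster
  rsync -aHx --numeric-ids "$src:$path" "$path"
  # Stop the container remotely, then a quick pass 2 to catch the delta
  ssh "$src" lxc-stop -n web1
  rsync -aHx --numeric-ids --delete "$src:$path" "$path"
  # Start it at the new hoster
  lxc-start -n web1 -d
}
```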

Continue reading The merits of LXC containers