So, I have been using LXC to host my server services for some time, with a view to keeping things portable should I need to change provider. It’s very good in that it’s integrated into the Linux kernel, and in Ubuntu at least it’s not too difficult to set up. However, there are a number of problems with it.
First and foremost, every time the container operating system upgrades anything to do with init scripts, it won’t boot any more, so you are forced to hold back packages with varying degrees of success. Secondly, there does seem to be some overhead to running things in an LXC container. Thirdly, it isn’t as portable as it could be: there is no live migration, and you will have to change config files to reflect your new IP address if you move hoster.
As I’m not selling containers as VPSs, I only need to run one server instance, and therefore don’t really need containerisation at all. Enter schroot. Schroot is like chroot without the hassle and with added flexibility: in a nutshell, it mounts and starts everything correctly for you, to the point where you can automate startup and running of services in the chroot. It doesn’t suffer from init script borkage since the init system isn’t used at all, and it’s more portable because networking is irrelevant to a chroot (it simply uses the host’s networking).
OK, so where to start? Well, if you are already using LXC you can use the directory your container is stored in. I opted to move mine to a sane location before starting, in the interests of convention and easy administration. So, I created a “schroot” directory in the /home directory i.e.
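For reference, a minimal /etc/schroot/schroot.conf entry for a directory-based chroot might look something like this (the chroot name, path and username here are illustrative, not my actual setup):

```ini
# /etc/schroot/schroot.conf -- one entry per chroot
[myserver]
description=Server services chroot
type=directory
directory=/home/schroot/myserver
users=admin
root-users=admin
profile=default
```

With that in place you can enter the environment with “schroot -c myserver”.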
Continue reading Using schroot instead of LXC containers
With the release of Ubuntu 11.04 Natty Narwhal on April 28th, it is now possible to use the Revenue Online Service, hereafter referred to as ROS, on a default install of Ubuntu desktop in Firefox (it still doesn’t work in Chromium).
Copy the certificate files into “ROS/RosCerts” (case sensitive) in your home directory. When you go to the ROS website for the first time you will get a security warning, say “yes” or “accept” and enjoy!
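The first step can be scripted if you like; a minimal sketch (the “mycert.p12” filename is a placeholder, substitute your own certificate file):

```shell
# Create the directory ROS expects (case sensitive)
mkdir -p "$HOME/ROS/RosCerts"

# Copy your certificate in -- "mycert.p12" is an illustrative filename
if [ -f mycert.p12 ]; then
    cp mycert.p12 "$HOME/ROS/RosCerts/"
fi
```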
Since Ubuntu 9.10 Karmic Koala, Ubuntu has had an inbuilt way of sharing any connection. There are a couple of usage scenarios for this:-
- You want to share your mobile broadband connection with other computers on the network
- You want to use your Ubuntu machine to extend your network wirelessly e.g. your laptop is connected to the network via its wireless adapter and is then connected to another machine via a CAT 5 or CAT 6 cable, enabling that machine to connect to the network and internet.
I wouldn’t envisage using this to share a DSL broadband connection since most people will already be using an ISP supplied router to do this.
Using it is as simple as enabling the method “Shared to other computers” in the network connection in network manager.
This turns your machine into a mini DHCP/DNS server and starts handing out IP addresses in a 10.x.x.x address range. The machine then NATs/routes any traffic coming in on that interface and forwards it to the network’s real gateway. I’ve used it a few times now and it works well.
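For the curious, the NAT/routing side of what “Shared to other computers” sets up behind the scenes amounts to roughly the following (interface names are illustrative, and Network Manager does all of this for you):

```shell
# Allow the kernel to forward packets between interfaces
sysctl -w net.ipv4.ip_forward=1

# NAT traffic arriving on the shared interface (eth0) out through
# the interface with the real connection (wlan0)
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```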
Following relatively recent improvements in the Linux wireless stack and driver support, it is now possible to set up a Linux machine as an access point even if you don’t have an Atheros chipset (which was historically a requirement). Support is patchy, but I would say there is a good chance you can do this if you have purchased a laptop with built-in wireless in the last two years. It is even possible to set one up with a USB wireless adapter (something even Madwifi couldn’t do) if you have a Ralink chipset.
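To give a flavour of where this ends up, a minimal hostapd configuration for an 802.11g access point looks along these lines (the interface name, SSID and passphrase are placeholders):

```ini
# /etc/hostapd/hostapd.conf -- minimal 802.11g AP, illustrative values
interface=wlan0
driver=nl80211
ssid=MyAccessPoint
hw_mode=g
channel=6
wpa=2
wpa_passphrase=changeme
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
```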
Why would you want to do this? Well, there aren’t that many reasons considering ISPs routinely hand out wireless routers these days, but I will give you a couple:-
Continue reading Ubuntu as a wireless 80211g/n access point/router
First of all, what are containers? In this context we are referring to Linux running inside a “containerised” environment i.e. an environment which is, to all intents and purposes, isolated from the main operating system. The containerised environment doesn’t have its own kernel or virtualised hardware, and runs at native speed because of this.
In what ways could a containerised environment be useful and desirable from a business perspective? The first use that springs to mind is being able to easily move your dedicated server from one hoster to another (which is the way I use it). You can simply rsync the contents of the container from the source hoster, then take the container down, run another quick rsync and start the container at the new hoster. This is really useful, saving hours and possibly days of rebuilding the server at the new hoster in a more conventional manner.
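A migration along those lines might look roughly like this (the hostname and container path are illustrative, not my actual setup):

```shell
# First pass while the container is still running at the old hoster;
# -aHAX preserves permissions, hard links, ACLs and extended attributes
rsync -aHAX --numeric-ids root@oldhost:/var/lib/lxc/mycontainer/ /var/lib/lxc/mycontainer/

# Stop the container at the old hoster, then a quick final pass
# to pick up anything that changed during the first sync
rsync -aHAX --numeric-ids --delete root@oldhost:/var/lib/lxc/mycontainer/ /var/lib/lxc/mycontainer/
```

The second pass is fast because rsync only transfers the differences, so downtime is minimal.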
Continue reading The merits of LXC containers
The idea here is to set up DNSMASQ and HAVP to provide DNS, DHCP and content filtering in a Windows 7/Vista/XP client environment on Ubuntu Server Edition. DNSMASQ is a lightweight package which provides DNS caching and DHCP to a network (amongst other things). HAVP is a proxy server which uses a third party virus scanner (usually ClamAV) to scan internet content for viruses. This assumes that you already have Ubuntu Server Edition installed on a suitable machine and have a working internet connection. In the settings, “192.168.1.254” refers to this machine, which is acting as a router/firewall; you could equally set it to the IP of another router on the network. “192.168.1.253” refers to the IP of a Windows server. First off, install DNSMASQ:-
apt-get install dnsmasq
nano -w /etc/dnsmasq.conf
We now need to set the relevant options:-
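As an illustration, the relevant parts of /etc/dnsmasq.conf might end up looking something like this (the DHCP range and local domain name are my own examples; the two IPs are the router/firewall and Windows server mentioned above):

```ini
# Don't forward plain hostnames or private reverse lookups upstream
domain-needed
bogus-priv

# Hand out addresses on the LAN; the gateway is this machine (192.168.1.254)
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-option=option:router,192.168.1.254

# Forward lookups for the local Windows domain to the Windows server
server=/mydomain.local/192.168.1.253
```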
Continue reading Setup DNS, DHCP and Content Filtering using DNSMASQ and HAVP in Ubuntu.