
5. Getting and Compiling the Source Code

In order to use divert sockets under Linux you will need two things: the kernel source code patched for divert sockets, and the source code to ipchains-1.3.9, which has also been patched to support divert sockets.

5.1 Getting *The Source*

Both pieces of source code can be retrieved from the divert sockets web site at http://www.anr.mcnc.org/~divert. The divert-sockets kernel source comes in two forms: as a patch that you apply to a fresh linux-2.2.12 source tree, or as an already patched kernel tarball (much larger than the patch). The ipchains source is provided only as a complete source tarball.
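If you go the patch route, applying it to a fresh kernel tree is a standard patch(1) exercise. A rough sketch follows; the file names are hypothetical (use whatever the downloaded files are actually called), and depending on how the patch was generated you may need -p0 instead of -p1:

cd /usr/src
tar xzf linux-2.2.12.tar.gz                # unpack a fresh, unpatched 2.2.12 tree
cd linux-2.2.12
patch -p1 < ../divert-linux-2.2.12.patch   # hypothetical patch file name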

5.2 Compiling

Compiling ipchains is straightforward - simply say

make
in the ipchains-1.3.9 subdirectory.
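If the build succeeds, the patched ipchains binary is left in that directory. A minimal sketch of the whole sequence, assuming the tarball unpacks into ipchains-1.3.9 and that its Makefile provides the usual install target:

tar xzf ipchains-1.3.9.tar.gz   # hypothetical tarball name
cd ipchains-1.3.9
make
make install                    # run as root; assumption: an install target exists,
                                # otherwise just use the freshly built ./ipchains in place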

When compiling the divert-socket kernel, use your favorite way of configuring it:

make config
or
make menuconfig
or
make xconfig
Don't forget to enable "Prompt for development and/or incomplete code/drivers" (under "Code maturity level options") before proceeding. There are only three compile-time options that affect the behavior of divert sockets; they are explained in the following section.
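Once you have configured the kernel (including the divert-related options described below), build and install it the usual way for a 2.2-series tree. A typical sequence on x86 looks like this:

make dep
make clean            # recommended after changing configuration options
make bzImage
make modules
make modules_install
# then copy arch/i386/boot/bzImage into /boot, update and re-run lilo,
# and reboot into the divert-enabled kernel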

Kernel compile-time options

In order to enable divert sockets in your kernel you must enable firewalling and IP firewalling first. The three kernel compile-time options that affect the behavior of divert sockets are:

IP: divert sockets

Enables divert sockets in your kernel.

IP: divert pass-through

Changes the behavior of DIVERT rules: by default, if a DIVERT rule is present in the firewall and no application is listening on the divert port that the rule specifies, any packet that matches the rule is silently dropped, as if it had matched a DENY rule.

Enabling pass-through mode results in such packets continuing on their way through the IP stack as if nothing had happened. This can be helpful if you want to keep a static DIVERT rule in the firewall but don't always want to listen on it. (A sample DIVERT rule is sketched after these option descriptions.)

IP: always defragment

Changes the way divert sockets deal with fragmentation. By default, for packets larger than the MTU, the divert socket receives the individual fragments and forwards them to user space one at a time. The burden of defragmentation in this case lies with the application listening on the divert socket. Also, an application cannot inject any fragments that are larger than the MTU, because they will be dropped (this is a limitation of the kernel, not of divert sockets: Linux kernels up to 2.2.x do NOT fragment raw packets sent with the IP_HDRINCL option set). Typically, that's OK: if you simply reinject the fragments the way you received them, everything will work fine, since none of them will be larger than the MTU.

If you enable the always defragment option, all of the defragmentation is done for you in the kernel. This severely affects the performance of the interception mechanism: every large packet you want intercepted first has to be reassembled before being forwarded to you, and then, if you choose to reinject it, it has to be fragmented again (with this option enabled, the kernel is able to fragment raw packets sent with IP_HDRINCL).

Under Linux 2.0.36 this was the only mode available for divert sockets, because of the way the firewall code was structured: it only looked at the first fragment of every packet and passed all other fragments through without examining them. As a result, if the first fragment was dropped by the firewall, the remaining fragments would eventually be discarded by the defragmenter. That is why, for divert sockets to work, you were forced to compile in the always defragment option, so that the whole packet, and not just the first fragment, would be diverted to you.

In 2.2.12, thanks to changes in the firewall code, you now have the choice of letting either the kernel or your own application do the fragmentation and defragmentation.

NOTE: the defragmentation feature has not been added as of release 1.0.4 of divert sockets. It is in the works though.
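For illustration, a DIVERT rule in the patched ipchains might look something like the line below. Treat this purely as a hypothetical sketch: the DIVERT target and the way the divert port is given come from the ipchains patch, and the exact syntax may differ, so check the documentation that comes with the patched ipchains.

ipchains -A input -p tcp -d 10.0.0.1/32 80 -j DIVERT 5000
# hypothetical: divert TCP packets arriving for 10.0.0.1 port 80
# to whatever application is bound to divert port 5000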

