Tuesday, 31 December 2013

Running a service on a restricted port using iptables

Common problem - you need to run up a service (e.g. an HTTP server) on a privileged port below 1024 (e.g. port 80). You don't want to run it as root, because you're not that stupid. You don't want to run some quite complicated other thing you might misconfigure and whose features you don't actually need (I'm looking at you, Apache HTTPD) as a proxy just to achieve this end. What to do?

Well, you can run up your service on an unrestricted port like 8080 as a user with restricted privileges, and then use iptables NAT rules to redirect TCP traffic from a restricted port (e.g. 80) to that unrestricted one:

iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

However, this isn't quite complete - packets generated on the host itself traverse the OUTPUT chain rather than PREROUTING, so this rule will not apply and from the local box you still can't reach the service on the restricted port. To work around this I have so far found you need to add an OUTPUT rule. As it's an OUTPUT rule it *must* be restricted to only the IP address of the local box - otherwise you'll find requests apparently to other servers are being re-routed to localhost on the unrestricted port. For the loopback adapter this looks like this:

iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8080
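As a quick sanity check (assuming the service is already listening on 8080 and curl is installed), you can list the NAT rules and then hit the restricted port from the host itself:

iptables -t nat -S
curl http://127.0.0.1/

The curl request should be answered by the process listening on 8080.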

If you want a comprehensive solution, you'll have to add the same rule over and over for the IP addresses of all network adapters on the host. This can be done in Puppet like so:

define localiptablesredirect($to_port) {
  # $name encodes '<local ip>-<source port>', e.g. '127.0.0.1-80'
  $local_ip_and_from_port = split($name,'-')
  $local_ip = $local_ip_and_from_port[0]
  $from_port = $local_ip_and_from_port[1]

  exec { "iptables-redirect-localport-${local_ip}-${from_port}":
    command => "/sbin/iptables -t nat -A OUTPUT -p tcp -d ${local_ip} --dport ${from_port} -j REDIRECT --to-ports ${to_port}; service iptables save",
    user    => 'root',
    group   => 'root',
    unless  => "/sbin/iptables -S -t nat | grep -q 'OUTPUT -d ${local_ip}/32 -p tcp -m tcp --dport ${from_port} -j REDIRECT --to-ports ${to_port}' 2>/dev/null"
  }
}
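On its own the define is invoked with the IP address and source port baked into the resource title, like so (loopback, port 80):

localiptablesredirect { '127.0.0.1-80':
  to_port => 8080
}

Below it is instead driven automatically from the host's interface list.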

define iptablesredirect($to_port) {
  $from_port = $name
  if ($from_port != $to_port) {
    exec { "iptables-redirect-port-${from_port}":
      command => "/sbin/iptables -t nat -A PREROUTING -p tcp --dport ${from_port} -j REDIRECT --to-ports ${to_port}; service iptables save",
      user    => 'root',
      group   => 'root',
      unless  => "/sbin/iptables -S -t nat | grep -q 'PREROUTING -p tcp -m tcp --dport ${from_port} -j REDIRECT --to-ports ${to_port}' 2>/dev/null";
    }

    $interface_names = split($::interfaces, ',')
    $interface_addresses_and_incoming_port = inline_template('<%= @interface_names.map{ |interface_name| scope.lookupvar("ipaddress_#{interface_name}") }.reject{ |ipaddress| ipaddress == :undefined }.uniq.map{ |ipaddress| "#{ipaddress}-#{@from_port}" }.join(" ") %>')
    $interface_addr_and_incoming_port_array = split($interface_addresses_and_incoming_port, ' ')

    localiptablesredirect { $interface_addr_and_incoming_port_array:
      to_port    => $to_port
    }
  }
}

iptablesredirect { '80':
  to_port    => 8080
}
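On a host whose interfaces are lo (127.0.0.1) and eth0 at, say, 10.0.0.5 (a made-up address), the net effect should be three NAT rules, visible via iptables -S -t nat:

-A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
-A OUTPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
-A OUTPUT -d 10.0.0.5/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080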

Monday, 30 December 2013

Fixing Duplicate Resource Definitions for Defaulted Parameterised Defines in Puppet

Recently I have been working on a Puppet module which defines a new resource which in turn requires a certain directory to exist, like so:

define mything ($log_dir='/var/log/mythings') {

  notify { "${name} installed!": }

  file { $log_dir:
    ensure => directory
  }

  file { "${log_dir}/${name}":
    ensure => directory,
    require => File[$log_dir]
  }
}

As you can see, the log directory is parameterised with a default, combining flexibility with ease of use.

As it happens there's no reason why multiple of these mythings shouldn't be installed on the same host, like so:

mything { "thing1": }
mything { "thing2": }

But of course that causes Puppet to bomb out:
Duplicate definition: File[/var/log/mythings] is already defined

The solution I've found is to declare the shared directory as a virtual resource in an unparameterised class, and then realise it with a collector inside the define, like so:

define mything ($log_dir='/var/log/mythings') {

  notify { "${name} installed!": }

  include mything::defaultlogging

  # Realise whichever virtual File resource matches our log directory
  File <| title == $log_dir |>

  file { "${log_dir}/${name}":
    ensure => directory,
    require => File[$log_dir]
  }
}

class mything::defaultlogging {
  # Virtual resource: only applied once realised by a collector
  @file { '/var/log/mythings':
    ensure => directory
  }
}

Now the following works:
mything { "thing1": }
mything { "thing2": }

If we want to override and use a different log directory as follows:
mything { "thing3":
  log_dir => '/var/log/otherthing'
}
we get this error:
Could not find dependency File[/var/log/otherthing] for File[/var/log/otherthing/thing3] at /etc/puppet/modules/mything/manifests/init.pp:12

This just means we need to declare a virtual file resource for the new log directory, like so:
$other_log_dir='/var/log/otherthing'
@file { $other_log_dir:
  ensure => directory
}
mything { "thing3":
  log_dir => $other_log_dir
}
and all is good. Importantly, applying this manifest will not create the default /var/log/mythings directory: the default virtual resource is still declared via the include, but it is never realised, because the collector only matches the log directory actually in use.

Tuesday, 10 December 2013

H2 & HSQLDB for Simulating Oracle

H2 & HSQLDB are two Java in-memory databases. They both offer a degree of support for simulating an Oracle database in your tests. This post describes the pros and cons of each.

H2

How to set up:

import org.h2.Driver;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

// Load and register the H2 driver
Driver.load();
DataSource dataSource = new DriverManagerDataSource(
    "jdbc:h2:mem:MYDBNAME;MVCC=true;DB_CLOSE_DELAY=-1;MODE=Oracle",
    "sa",
    "");

DB_CLOSE_DELAY=-1 is vital here: without it the database is deleted whenever the number of open connections drops to zero - a highly unintuitive situation.
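A minimal sketch of the pitfall (SCRATCH is a made-up table name, and exception handling is omitted): with DB_CLOSE_DELAY=-1 in the URL the second connection still sees the table; without it, the SELECT fails with a table-not-found error because closing the first connection deleted the database.

import java.sql.Connection;

try (Connection first = dataSource.getConnection()) {
    first.createStatement().execute("CREATE TABLE SCRATCH (ID INT)");
} // the only open connection closes here

try (Connection second = dataSource.getConnection()) {
    // Succeeds only because DB_CLOSE_DELAY=-1 kept the database alive
    second.createStatement().executeQuery("SELECT COUNT(*) FROM SCRATCH");
}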

Pros:

I've found I had to make fewer compromises on my SQL syntax, and my DDL syntax in particular, using H2's Oracle compatibility mode. For instance it supports sequences, and defaulting a column to a select from a sequence, which HSQLDB does not (see the DDL at the end of this post).

Cons:

The transaction capabilities are not as good as HSQLDB's. Specifically, if you use MVCC=true in the connection string then H2 does not support a transaction isolation level of serializable, only read committed. If you do not set MVCC=true then serializable isolation does work, but only by taking a full table lock, which is not at all how Oracle does it.
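For reference, this is how a test would request serializable isolation through plain JDBC; as described above, with MVCC=true H2 cannot honour the request:

import java.sql.Connection;

try (Connection connection = dataSource.getConnection()) {
    connection.setAutoCommit(false);
    // Ask for serializable; under MVCC=true H2 only delivers read committed
    connection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    // ... transactional work ...
    connection.commit();
}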

HSQLDB

How to set up:

import org.hsqldb.jdbc.JDBCDriver;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate; 
import org.springframework.jdbc.datasource.DriverManagerDataSource;

// Force the driver class to load so its static initialiser registers it
Class.forName(JDBCDriver.class.getName());
DataSource dataSource = new DriverManagerDataSource(
    "jdbc:hsqldb:mem:MYDBNAME",
    "sa",
    "");
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
jdbcTemplate.execute("set database sql syntax ORA TRUE;");
jdbcTemplate.execute("set database transaction control MVCC;");

Pros:

MVCC with a transaction isolation of serializable works as expected - other transactions can continue to write whilst a transaction sees only the state of the DB when it started.
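A sketch of that behaviour (SCRATCH is a made-up table, exception handling omitted): the reader's snapshot is fixed when its transaction starts, and a concurrent writer is neither blocked by it nor visible to it.

import java.sql.Connection;

jdbcTemplate.execute("CREATE TABLE SCRATCH (ID INT)");

Connection reader = dataSource.getConnection();
reader.setAutoCommit(false);
reader.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
reader.createStatement().executeQuery("SELECT COUNT(*) FROM SCRATCH"); // snapshot taken

Connection writer = dataSource.getConnection();
writer.createStatement().execute("INSERT INTO SCRATCH VALUES (1)"); // not blocked

// The reader still counts zero rows until its transaction ends
reader.createStatement().executeQuery("SELECT COUNT(*) FROM SCRATCH");
reader.rollback();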

Cons:

Support for Oracle syntax, particularly in DDL, is patchy - I was unable to run the following, which works fine in Oracle:
CREATE SEQUENCE SQ_TABLE_A;
CREATE TABLE TABLE_A (
  ID NUMBER(22,0) NOT NULL DEFAULT (SELECT SQ_TABLE_A.NEXTVAL from DUAL),
  SOME_DATA NUMBER(22,0) NOT NULL,
  TSTAMP TIMESTAMP(3) DEFAULT CURRENT_TIMESTAMP);