I have been struggling with a problem of TCP connections freezing on my VPS, mainly when using SSH to tunnel traffic. Initially it appeared to be a problem with OpenSSH, since the issue only manifested when the tunnel was under heavy load. But days of searching turned up no known OpenSSH bug that would cause freezing, not just of the tunnel, but of the entire server's TCP connections.
After a ton of searching (whatever did we do before Google?), I found the answer in the server's Virtuozzo panel. The answer was three not-so-simple words: TCP receive buffer. My VPS host only gave me 2 MB worth, and at idle I was already using 256 KB just from the services that were listening (the kernel hands roughly 87 KB as a receive buffer to every TCP socket by default). Then I learned I couldn't really change the kernel default in a virtualized environment, as sysctl failed with an unknown error.
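For anyone who wants to check their own box, here is a quick sketch of how I'd inspect those numbers on a Linux host. The `sysctl` line is the step that fails in a container like mine; the values shown are the stock kernel defaults, not a recommendation:

```shell
# The kernel's TCP receive-buffer autotuning limits: min, default, max (bytes).
# The middle value (87380, i.e. ~87 KB) is what every new TCP socket gets.
cat /proc/sys/net/ipv4/tcp_rmem

# On bare metal you could change the default like this; inside a Virtuozzo
# container this is exactly where sysctl errors out:
# sysctl -w net.ipv4.tcp_rmem='4096 87380 4194304'

# Per-socket memory, to see which connections are eating the quota
# (the rb field in skmem is the receive-buffer allocation):
ss -tm
```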
The real trouble came from the fact that, due to the tunneling, the receive buffer was allocated twice: once for the connection from the server to the internet, and once more internally between the server and the SSH tunnel. Since the SSH tunnel is much slower than the server's connection to the internet (god bless Saudi bandwidth), the tunnel's receive buffer was maxing out (using ~170 KB), which caused the upstream connection's buffer to fill up as well. In short, three ongoing connections were enough to bring the server to its knees. One solution I found was to configure the upstream server, in this case Squid, to buffer more of the incoming data in memory (the squid.conf directive is read_ahead_gap, in case anyone is wondering). That way only the tunnel's buffer maxes out, allowing roughly a 25% increase in active connections without killing the server (the other buffers still use up space even when they aren't maxed).
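For reference, the Squid side of that workaround is a one-line change in squid.conf. The 512 KB figure below is just an illustrative value I'm assuming for the sketch, not what I actually ran; you'd want to tune it against your own container's buffer quota:

```
# squid.conf: let Squid read ahead of the client and hold the response in
# its own memory, so the fast server-to-internet socket drains instead of
# sitting full while the slow SSH tunnel catches up.
read_ahead_gap 512 KB
```

The trade-off is straightforward: Squid's process memory grows by up to that gap per busy connection, but that memory comes out of the normal RAM allowance instead of the much scarcer TCP receive-buffer quota.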
So the main issue that has been plaguing me this whole time is that I simply need more buffer space. I guess I'll have to get in touch with my host to see if I can increase it.