Posts

Speed up Bazel builds inside docker containers on OSX

I've been using Bazel for development on a Mac.

Occasionally I need to test docker images that are built with the amazing rules_docker. But running anything inside docker on OSX is painfully slow, and Bazel builds need to read a ton of files, so things can take forever.

I don't think there's a definitive answer other than avoiding the OSX+docker combination altogether, but here's what I found helped:
Crank up Docker's own resource limits  Click the little whale sitting in your OSX status bar, go to Preferences, then Advanced, and raise the CPU and memory limits as much as you want. I've given it the maximum, though I understand browser navigation can be affected. YMMV.

Use a cache directory for Bazel builds inside docker  Deep down, what's really slow is docker going through OSX's Linux virtualization to do volume I/O. We can't currently speed up the I/O itself, but we can reduce the amount of work that Bazel has to do on every build. We can cache things.
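One way to set that up, sketched here with placeholder names (the volume name, image name and cache path are mine, not from the post; `--output_user_root` is a real Bazel startup flag):

```shell
# Keep Bazel's output tree in a named docker volume so repeated
# builds reuse previous work instead of starting from scratch.
docker volume create bazel-cache

# Mount the cache volume and the source tree, then point Bazel's
# output root at the cached path. "my-build-image" is a placeholder.
docker run --rm \
  -v bazel-cache:/var/cache/bazel \
  -v "$PWD":/src -w /src \
  my-build-image \
  bazel --output_user_root=/var/cache/bazel build //...
```

The named volume lives inside the Linux VM, so Bazel's scratch I/O never crosses the slow OSX volume-sharing layer; only the source checkout does.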

Normally, every con…

How to install VirtualBox on Scaleway's x86_64 servers

Scaleway offers reasonably priced dedicated servers that are now even cheaper than Hetzner's robot marketplace.

I wanted to use them for doing Ansible tests using Vagrant. The problem is you can't easily install VirtualBox there, and it's needed for Vagrant to work.

Here's a script that should do most of the work for you:

#!/bin/bash
# Expects Ubuntu 16.04 (xenial) and kernel 4.x.
# Based upon a blog post by Zach at http://zachzimm.com/blog/?p=191
set -eux
# Have the user call sudo early so the credentials are valid later on
sudo whoami
for x in xenial xenial-security xenial-updates; do
  egrep -qe "deb-src.* $x " /etc/apt/sources.list || \
    echo "deb-src http://archive.ubuntu.com/ubuntu ${x} main universe" | sudo tee -a /etc/apt/sources.list
done
echo "deb http://download.virtualbox.org/virtualbox/debian xenial contrib" | sudo tee -a /etc/apt/sources.list.d/virtualbox.list
sudo apt update
sudo apt-get install dkms virtualbox-5.0 -y
KERN_VER…

I've built something: stardew.farm

I've built something called stardew.farm. It helps Stardew Valley players share screenshots of their pretty farms.

It's a bunch of open-source software written in Go.

A Windows client watches the player's save files and uploads them to a RabbitMQ server whenever the game saves state (once per in-game day).

Then there's a Go program that parses every new save game and renders a screenshot using the image/draw and github.com/disintegration/imaging packages. We put that screenshot in a nice little website for everyone to see.

The screenshot emulates the game's appearance, except that it plots the entire farm in one image. People love that.


There's a bunch of people helping me on the project. The javascript frontend was written entirely by another contributor, also in his free time. Another person is helping fix rendering problems. And a bunch of people have spent a lot of time helping test it. The SDV modding community has been very supportive so this has been a gre…

sync.Pool is coming soon

I predict that sync.Pool, an upcoming Go 1.3 feature, will be everywhere. Everyone will know how to use it and will change their existing programs to use it.

sync.Pool is a nice way to save allocations. In one example, I replaced the buffer in bencode-go (used by Taipei Torrent) with a sync.Pool and it led to massive savings in allocations. And the resulting code isn't ugly.
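Here's a minimal sketch of the technique — not the actual bencode-go patch, just the same idea of recycling a bytes.Buffer through a sync.Pool instead of allocating one per call:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable buffers; New runs only when the pool
// is empty, so steady-state calls allocate nothing for the buffer.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// encodeString bencodes a string (e.g. "spam" -> "4:spam") using a
// pooled buffer.
func encodeString(s string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // a recycled buffer may still hold old data
	defer bufPool.Put(buf)
	fmt.Fprintf(buf, "%d:%s", len(s), s)
	return buf.String()
}

func main() {
	fmt.Println(encodeString("spam")) // 4:spam
}
```

The Reset before use is the one gotcha: Get may return a buffer that still contains a previous caller's bytes.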

Readable and fast code == WIN.

These are the benchmark results for bencode. Note the drop from 64655 bytes per operation to 7998 bytes per operation in the BenchmarkDecodeAll test.

Before
$ go test -bench=. -benchmem
PASS
BenchmarkDecodeAll         10000            105804 ns/op           64655 B/op        186 allocs/op
BenchmarkUnmarshalAll      10000            174444 ns/op           69304 B/op        292 allocs/op

After
$ go test -bench=. -benchmem
PASS
BenchmarkDecodeAll         50000             51194 ns/op            7998 B/op        160 allocs/op
BenchmarkUnmarshalAll      10000            106387…

Caveats about Linux connection tracking and high traffic servers

Dear Internet, whenever you're setting up a high-performance TCP or especially a UDP server on Linux, don't be stupid like me: remember to pay attention to connection tracking on your server. What is connection tracking? Connection tracking is normally used by Linux for certain firewall rules, like those that depend on connection state, such as NEW, ESTABLISHED, RELATED, etc.
Even UDP connections can have pseudo-state tracked by Linux.
Connection tracking is enabled by default. Even if a system has no netfilter rules configured to use conntrack, Linux keeps a large table of connection states in memory. I assume it tracks connections even when no stateful firewall rules exist because a user would expect new firewall rules to apply immediately.
In general, connection tracking is extremely efficient and performs very well. But if you find that it's consuming too many resources, especially in a very constrained system, or if you don't want to think about th…
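The knobs involved can be inspected and tuned like this (a sketch; the sysctl names are the standard net.netfilter ones, the port numbers and limit value are examples, and the iptables commands need root):

```shell
# How full is the conntrack table, and what's its ceiling?
sysctl net.netfilter.nf_conntrack_count
sysctl net.netfilter.nf_conntrack_max

# If the table is the bottleneck, raise the ceiling...
sudo sysctl -w net.netfilter.nf_conntrack_max=262144

# ...or skip tracking entirely for specific high-volume traffic,
# e.g. a UDP service on port 53, via the raw table's NOTRACK target.
sudo iptables -t raw -A PREROUTING -p udp --dport 53 -j NOTRACK
sudo iptables -t raw -A OUTPUT -p udp --sport 53 -j NOTRACK
```

When the count approaches the max, new connections start getting dropped with "nf_conntrack: table full" messages in the kernel log, which is exactly the failure mode this post is warning about.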

New project: Mask IPs from Netflix so I can access their USA content from anywhere

See also this post on reddit.

If you want to be an alpha tester, go to: http://www.maskedhost.com/

Internet services I recommend

I'm a happy consumer of Internet services. Besides buying products from Amazon, iTunes, Google and other giants, I also subscribe to a bunch of small service providers that have proven their competence and respect for consumers. It's only fair that I give them some praise and recommend their services to whoever might find this post.

rsync.net is a remote storage site that provides flexible data access via SSH, WebDAV and, obviously, rsync. They are a bit pricey but offer nice features like Git support. The problem is that their price means I can't use them to back up all of the MP3s I bought from iTunes and Amazon. And I've been using github.com for private git repositories, so I'll probably cancel my rsync.net subscription sometime soon. I'm still looking for a cheap per-byte general-purpose backup provider. This is a commodity service and features are not *that* important. So there is a chance I'll be stuck with Amazon S3 or Google Docs (which doesn't s…