Wikipedia did not replace books after all

I contributed to Wikipedia in its early days. It's incredible that after so many years of contributions from so many people, there are still many, many areas where articles could be massively improved. See the one about Startup companies. As of December 2018, it's still weird. This surprised me. In those idealistic years of the early 2000s, I was sure that by now almost all knowledge in the world would be captured in an open and free encyclopedia. I also believed that other forms of collaborative content would replace guide books, and that they would be much better. I was very wrong.

I have a few hypotheses as to why. Certain topics are so ambiguous that it's a struggle to write about them in an objective way. The Wikipedia style calls for succinct descriptions of dry, verifiable facts, but for these topics people want to write about their preferred insights. Insights are subjective, and you can't find scientific consensus about them, so the page beco

bpf_trace_printk as a last-resort method to debug eBPF programs

It's hard to debug problems in eBPF programs. When everything else fails, there is a last resort: bpf_trace_printk. It is used like this:

    bpf_trace_printk("fname %s\\n", valp->fname);

The double-escaped \\n is needed when the C source code is embedded in a Python multi-line string, which is the case for most bcc examples. You can use formatting directives like %s and %d, but you can only use one per line. To see the output, first run the bcc program as usual, then do this in a separate terminal:

    $ sudo cat /sys/kernel/debug/tracing/trace_pipe
        foo-29323 [002] d... 12090253.569332: : fname /etc/
        foo-29323 [002] d... 12090253.569350: : fname /lib/x86_64-linux-gnu/
        foo-29323 [002] d... 12090253.569384: : fname /lib/x86_64-linux-gnu/
        foo-29323 [002] d... 12090253.570230: : fname /proc/sys/net/core/somaxconn
        foo-29323 [002] d... 12090253.571336

nictuku/dht in the wild

Open-source is magical. When I wrote my distributed hash table library back in 2012, I just wanted to write something cool that made computers talk to each other efficiently at a very large scale. Since publishing it, the library has been used by botnets, has a source code evolution video, and was used in someone's Bachelor thesis in Switzerland. And today I found out that computer scientists from the University of British Columbia and the University of Bamberg have analyzed the library in a very cool paper to verify that it's correct! They analyzed logs and checked that the library's behavior follows the invariants expected of a correct implementation of Kademlia. In text:

    We logged state after the results of a Find_Value request were added to a
    peer's routing table. On each execution we found that ∀ peers i, j,
    peer_i.min_distance = peer_j.min_distance in all total-order groups. This
    invariant, in conjunction with the O(log(n)) message bound, provides strong
    evidence for

Speed up Bazel builds inside docker containers on OSX

I've been using Bazel to develop on a Mac. Occasionally I need to test docker images that are built with the amazing rules_docker. But running anything inside docker on OSX is really painfully slow, and Bazel builds need to read a ton of files, so things can take forever. I don't think there's a definitive answer other than avoiding the OSX+docker combination, but here's what I found helped:

Crank up Docker's own resource limits. Click on the little whale sitting on your OSX status bar, and go to Preferences, then Advanced, and crank that up as much as you want. I've given it the maximum, but I understand my browser navigation can be affected. YMMV.

Use a cache directory for Bazel builds inside docker. Deep down, what's really slow is docker going through OSX's Linux virtualization to do volume I/O. We can't currently speed up the I/O, but we can reduce the amount of work that Bazel has to do every time it builds. We can cache things. No

How to install VirtualBox on Scaleway's x86_64 servers

Scaleway offers reasonably priced dedicated servers that are now even cheaper than Hetzner's robot market. I wanted to use them for running Ansible tests with Vagrant. The problem is that you can't easily install VirtualBox there, and Vagrant needs it to work. Here's a script that should do most of the work for you:

    #!/bin/bash
    # Expects Ubuntu 16.04 (xenial) and kernel 4.x.
    # Based upon a blog post by Zach at
    set -eux

    # Have the user call sudo early so the credentials are valid later on.
    sudo whoami

    for x in xenial xenial-security xenial-updates; do
      egrep -qe "deb-src.* $x " /etc/apt/sources.list || \
        echo "deb-src ${x} main universe" | sudo tee -a /etc/apt/sources.list
    done

    echo "deb xenial contrib" | sudo tee -a /etc/apt/sources.list.d/virtualbox.list
    sudo apt update
    sudo apt-get install dkms virtualbox-5.0 -y
    K

I've built something:

I've built something called . It helps Stardew Valley players share screenshots of their pretty farms. It's a bunch of open-source software written in Go. A Windows client watches the player's save files and uploads them to a RabbitMQ server whenever the game saves state (once a day in the game). Then there's a Go program that parses every new save game and renders a screenshot using the image/draw and  packages. We put that screenshot on a nice little website for everyone to see. The screenshot emulates the game's appearance, except that it plots the entire farm in one image. People love that.

There's a bunch of people helping me on the project. The JavaScript frontend was written entirely by another guy, also in his free time. Another person is helping fix rendering problems. And a bunch of people have spent a lot of time helping test it. The SDV modding community has been very supportive so t

sync.Pool is coming soon

I predict that sync.Pool, an upcoming Go 1.3 feature, will be everywhere. Everyone will know how to use it and will change their existing programs to use it. sync.Pool is a nice way to save allocations. As an example, I've replaced the buffer in bencode-go (used by Taipei Torrent) with a sync.Pool, and it led to massive savings in allocations. And the resulting code isn't ugly. Readable and fast code == WIN.

These are the benchmark tests for bencode. Note the drop from 64655 bytes per operation to 7998 bytes per operation for the BenchmarkDecodeAll test.

Before:

    $ go test -bench=. -benchmem
    PASS
    BenchmarkDecodeAll         10000            105804 ns/op           64655 B/op        186 allocs/op
    BenchmarkUnmarshalAll      10000            174444 ns/op           69304 B/op        292 allocs/op

After:

    $ go test -bench=. -benchmem
    PASS
    BenchmarkDecodeAll         50000             51194 ns/op            7998 B/op        160 allocs/op
    BenchmarkU
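For readers who haven't seen the pattern, here's a minimal sketch of the technique — not bencode-go's actual code, and the <len>:<string> encoder is just a stand-in: a sync.Pool hands out reusable bytes.Buffer values, so steady-state encoding stops allocating a fresh buffer on every call.

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable *bytes.Buffer values. New is only called
// when the pool is empty, so steady-state use allocates no new buffers.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// encode writes a bencode-style string (<length>:<data>) using a
// pooled buffer instead of allocating one per call.
func encode(s string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // a recycled buffer may still hold old data
	defer bufPool.Put(buf)
	fmt.Fprintf(buf, "%d:%s", len(s), s)
	return buf.String()
}

func main() {
	fmt.Println(encode("spam")) // prints "4:spam"
}
```

The Reset before use is the easy part to forget: Get may return a buffer still holding the previous caller's bytes. The deferred Put is what makes the buffer available for reuse.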