3 weeks without coffee

Three weeks ago I decided that I was going to take a break from coffee. Every once in a while I take a break from certain things, or try to minimize their usage. Some months ago I minimized the amount of soft drinks I was drinking, and 3 weeks ago it was time to quit coffee. I wanted to break the habit and get rid of my caffeine dependence.

I'd done this before, so I knew what to expect, and it was not very different this time around. The first day was fine except for the habit itself: while visiting a client I was asked "You want coffee?" and out of habit I said "sure". I didn't realize I had quit coffee until I had already finished half of it. The second and third day I had some headaches and a lot of urges to get coffee. I resisted the urges, got water or tea instead, and pretty much got over my addiction (or dependence, or habit). From day 4 onward I had pretty much no urge to get coffee anymore.

The effects? I sleep better. I wake up less tired (well, except when I go to bed really late, of course). I am also less tense and feel more relaxed. Long story short, it just feels a bit better.

I intend to keep this up for a long time. Now that I'm used to not drinking coffee, it's not that hard anymore, and the urge to get coffee is gone. I'm also considering doing a similar habit-breaking experiment with alcohol.

If you've got similar experiences with experiments like these, I'd be happy to hear from you.

Installing Bolt extensions on Docker

I'm currently working on a website with the Bolt CMS. For this website, I am using an extension. Now, the "problem" with extensions is that they are installed by Bolt using Composer, and end up in the .gitignore'd vendor/ directory. Which is OK while developing, because the extension will just be in my local codebase, but once I commit my changes and push them, I run into a little problem.

Some context

Let's start with a bit of context: our current hosting platform is a bunch of DigitalOcean droplets managed by Rancher. We use GitLab for our Git hosting, and use GitLab Pipelines for building our Docker containers and deploying them to production.

The single line solution

In Slack, I checked with Bob to see what the easiest way was of getting the extensions installed when they're in the configuration but not in vendor/, and the solution was so simple I had not thought of it:

Run composer install --no-dev in your extensions/ directory

So I adapted my Dockerfile to include a single line:

RUN cd /var/www/extensions && composer install --no-dev
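
For context, in the Dockerfile that line sits roughly like this (the base image and the way Composer ends up in the image are just an example here, not necessarily our exact setup):

    FROM php:7.2-apache
    COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
    COPY . /var/www
    RUN cd /var/www/extensions && composer install --no-dev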

I committed the changes, pushed them, Gitlab picked them up and built the new container, Rancher pulled the new container and switched it on, and lo and behold, the extension was there!

Sometimes the simple solutions are actually the best solutions

A rant about best practices

I have yet to talk to a developer that has told me that they were purposefully writing bad software. I think this is something that is part of being a developer, that you write software that is as good as you can possibly make it within the constraints that you have.

In our effort to write the Best Software Ever (TM) we read up on all the programming best practices: design patterns, refactoring and rewriting code, new concepts such as Domain-Driven Design and CQRS, all the latest frameworks and of course we test our code until we have a decent code coverage and we sit together with our teammates to do pair programming. And that's great. It is. But it isn't.

In my lightning talk for the PHPAmersfoort meetup on Tuesday, January 9th, 2018, I ranted a bit about best practices. In this blog post, I try to summarize what I ranted about.

Test Coverage

Test coverage is great! It is a great tool to measure how much of our code is being touched by unit (and possibly integration) tests. A lot of developers I talk to tell me that they strive to get 100% code coverage, 80% code coverage, 50% code coverage or any other arbitrary percentage. What they don't mention is whether or not they actually look at what they are testing.

Over the years I have encountered so many unit tests that were not actually testing anything. They were written for a sole purpose: to make sure that all the lines in the code were "green", i.e. covered by unit tests. And that is useless. Completely useless. You get a false sense of security if you work like this.

There are many ways of keeping track of whether your tests actually make sense. Recently I wrote about using docblocks for that purpose, but you can also use code coverage to help you write great tests. Generating code coverage can help you identify which parts of your code are not covered by tests. But instead of just writing a test to ensure the line turns green, you need to consider what that line of code stands for, what behavior it adds to your code. And you should write your tests to test that behavior, not just to add a green line and an extra 0.1% to your code coverage. Code coverage is an indication, not a proof of good tests.
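
To make that difference concrete, here is a contrived sketch (the class and method names are made up): both tests turn the same lines green, but only the second one actually pins down behavior.

    <?php

    use PHPUnit\Framework\TestCase;

    class DiscountCalculatorTest extends TestCase
    {
        /** Covers the lines, proves nothing: there is no assertion on the outcome. */
        public function testApplyDiscountRuns(): void
        {
            $calculator = new DiscountCalculator();
            $calculator->applyDiscount(100.0, 0.25);
            $this->assertTrue(true);
        }

        /** Tests the behavior: a 25% discount on 100.00 results in 75.00. */
        public function testApplyDiscountSubtractsThePercentageFromThePrice(): void
        {
            $calculator = new DiscountCalculator();
            $this->assertSame(75.0, $calculator->applyDiscount(100.0, 0.25));
        }
    }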

Domain-driven design

DDD is a way of designing the code of your application based on the domain you're working in. It puts the actual use cases at the heart of your application and ensures that your code is structured in a way that makes sense to the context it is running in.

Domain-Driven Design is a big hit in the programming world at the moment. These days you don't count anymore if you don't do DDD. And you shouldn't just know about DDD or try to apply it here and there, no: ALL YOUR CODES SHOULD BE DDD!1!1shift-one!!1!

Now, don't get me wrong: There is a lot in DDD that makes way more sense than any approach I've used in the past, but just applying DDD to every bit of code you write does not make any sense. Doing things the DDD way is not that hard, but doing DDD right takes a lot of learning and a lot of effort. And for quite a few of the things I've recently seen people want to apply full-on DDD to, I wonder whether it is worth the effort.

So yes, dig into DDD, read the blue book if you want, read any book about it, all the blog posts, and apply it where it makes sense. Go ahead! But don't overdo it.

Frameworks

I used to be a framework zealot. I was convinced that everyone should use frameworks, and use them all the time. For me it started with Mojavi, then Zend Framework, and finally I settled on Symfony. To me, the approach and structure that Symfony gave me made so much sense that I started using Symfony for every project that I worked on. My first step would be to download (and later: install) Symfony. It made my life so much easier.

Using a framework does make a lot of sense for a lot of situations. And I personally do not really care what framework you use, although I see a lot of people saying "You use Laravel? You're such a n00b!" or "No, you have to use Symfony for everything" or "Zend Framework is the only true enterprise framework and you need to use it".

First of all: There is no single framework that is good for every situation. Second of all, why use a pre-fab framework when you can build your own? And sometimes you really don't need a framework. Stop bashing other people's solutions and start worrying about solving your own problems. Pick the right tool for the job and fix stuff.

Event sourcing + CQRS

Event sourcing is a way of storing and retrieving data that does not hold a single truth. It uses events to communicate changes to your data. At any point in time, you can replay those events to get to the current state of your data, but it also allows you to look back into your history for other states of the data. It is a great concept for storing data where you need a paper trail (for instance for audit purposes) or where you need versioning of your data.
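
To make the replay idea a bit more concrete, here is a deliberately tiny sketch (all class names are made up; a real event sourcing setup adds event stores, versioning and a lot more around this):

    <?php

    // The current balance is never stored directly, it is derived by
    // replaying the recorded events.
    class MoneyDeposited
    {
        public $amount;
        public function __construct($amount) { $this->amount = $amount; }
    }

    class MoneyWithdrawn
    {
        public $amount;
        public function __construct($amount) { $this->amount = $amount; }
    }

    function replayBalance(array $events)
    {
        $balance = 0;
        foreach ($events as $event) {
            if ($event instanceof MoneyDeposited) {
                $balance += $event->amount;
            } elseif ($event instanceof MoneyWithdrawn) {
                $balance -= $event->amount;
            }
        }

        return $balance;
    }

    // Replaying the full history gives the current state; replaying only a
    // part of it gives any historical state you need, for instance for an audit.
    $history = [new MoneyDeposited(100), new MoneyWithdrawn(30), new MoneyDeposited(5)];
    echo replayBalance($history); // 75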

CQRS is a method of separating your C, R, U, and D. In most places where I've seen it applied it is a separation of reading data from the datastore and writing data to the data store.

Both are, like Domain-Driven Design, a big hit in the programming world at the moment. There's a lot of fanaticism around it. Of course, you should do event sourcing, preferably on all your data. Of course, you should use CQRS, it is such a great way of separating responsibilities.

And while I agree with the arguments, I don't think they should be applied to every situation. In many projects, a "traditional" relational database will work. Or the previous big hit, document databases, will work as well. And for your average project, separating read and write is not a huge requirement either. Sure, it will add some structure to your code, but also some overhead while developing. As Martin Fowler puts it:

For some situations, this separation can be valuable, but beware that for most systems CQRS adds risky complexity.

Pair programming

Now here's a programming practice that I truly love: pair programming. Sit down with another developer and start coding. One developer is the "driver": they type the code and offer implementations of the route that the "navigator" lays out. The navigator sits next to the driver and comes up with ways of implementing the task at hand.

There is something about this way of working together that makes a lot of sense. My way of looking at a problem is probably different from that of the person sitting next to me, and by combining our approaches and picking the best of both worlds, the solution will be better than any solution our individual selves could've come up with.

Having said that, I don't think any developer would say "yes, let's do pair programming full-time". Or if they do, they're not like me.

Pairing full-time would exhaust me. When I do full-day pairing sessions (which I occasionally do) I am completely dead by the end of the day. When I do it a couple of days in a row, I need the full weekend just to recover, meaning I have very little time to actually do fun stuff. The amount of social interaction while pairing would kill me if I did it full-time. The intensity of pairing as well. Because pairing is intense. Instead of just having to think of your own solution, you now have to combine it with the input of the other half of the pair, and together you have to decide which way to go. And there is such a thing as Decision Fatigue.

Instead, and I've done this several times with great success, you should combine pairing sessions with individual work time. Do pair programming for an hour, or maybe two, then split up and work on parts of the task individually, then come back together to combine your individual work. This still gives you the benefit of working together, but won't burn you out in two weeks' time.

Refactoring + Rewriting

Refactoring is the process of changing parts of your code while keeping the outward behavior the same. It improves the code quality without impacting the code that relies on your code.

Rewriting code is basically refactoring without giving a shit about backward compatibility. It's refactoring YOLO style. You completely replace the old code with new code, and the behavior of the code may change according to your wishes.

Depending on who you're talking to, every bit of legacy code should be refactored or rewritten, as soon as possible.

And while I agree on the fact that we should refactor or rewrite legacy code, I probably disagree on the definition of "as soon as possible".

Refactoring and rewriting code are great tools to improve the quality of your codebase, and with that the quality of your application. They are extremely powerful tools, but with great power comes great responsibility. Given unlimited time and funds, I believe any developer in this world would continually keep refactoring and rewriting their code, and never ship a damn thing. Because as we develop our software and as we develop our skill set, we find out about new and different ways of solving the same problem. And every time we discover a fancy new way to solve a problem, all code we have written until then becomes instant legacy code. This is a never-ending cycle.

Legacy code is fine if it works, performs and is secure according to the business specifications and requirements. From a technical point of view you may want to fix some issues that the code has, but there has to be a balance between delivering code improvements and delivering functionality. We should not refactor or rewrite parts of the code as we encounter them, but instead keep track of what we have found in a central place and determine, in close collaboration with the business, what to fix and when. If you really need a quick solution, you encapsulate the legacy with a small layer of better code. That way you can use the legacy while having a nice and "modern" interface to it.
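
As a sketch of that encapsulation idea (all names here are invented for the example), you can hide a messy legacy function behind a small, well-typed interface and let the rest of the codebase talk to the interface only:

    <?php

    // Hypothetical legacy function we do not want to touch right now.
    // It returns a loosely structured array with SHOUTING keys.
    function legacy_fetch_customer($customerId)
    {
        return ['NAME' => 'Acme Corp', 'ACTIVE' => '1'];
    }

    // The small, "modern" layer the rest of the codebase talks to.
    interface CustomerRepository
    {
        public function findName(int $customerId): ?string;
    }

    final class LegacyCustomerRepository implements CustomerRepository
    {
        public function findName(int $customerId): ?string
        {
            $row = legacy_fetch_customer($customerId);

            return isset($row['NAME']) ? (string) $row['NAME'] : null;
        }
    }

Once the callers only depend on CustomerRepository, the legacy behind it can be refactored or replaced whenever the business priorities allow it.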


All of the above are just examples of different best practices that you need to consider. When writing code you should, of course, keep all the best practices in mind that you can think of. But there is no need to apply them all at the same time. Strike a balance between code quality and speed of development, applying the practices that fit the situation you're in at that point. Best practices are best practices because they work for a majority of situations, but they are generalizations, which means they may not apply to your situation, or there may be more important things to weigh in. So read up on all the best practices, keep them in mind, but think before you do. Apply them wisely after weighing all the factors that apply to your situation. And please, please use your common sense.

Code Kata Day

Today I was at the DomCode Code Kata Day in Utrecht. Over the course of the day, we were given 4 different code katas with about an hour each to solve the problem. You could either pick a programming language you wanted to learn more about or use the wheel of languages to get a random language. Here's a summary of the day and how it helped me become a better programmer and why I think it worked so well.

Kata 1: 99 bottles of Kotlin

The first kata we got to do was a relatively simple one (with some minor but nasty details): 99 bottles. The idea is to write a piece of code that would "sing" the 99 bottles of beer song. I spun the wheel and got Kotlin as the language to use for this kata. Given that my experience is about 99.999% PHP, any venture outside of PHP would be an interesting exercise, and I was quite curious whether I could solve this. But the problem was relatively easy, so I set out to try it.

Kotlin is a relatively easy language to get started in, especially when you're used to languages such as PHP and Python. The syntax is quite similar and the documentation is pretty good. It didn't take long until I had my first attempt working. Well, almost working. As it turned out I had not taken into account that when there are 2 bottles of beer on the wall and you take one down, there is not 1 bottles of beer on the wall but there is 1 bottle of beer (thanks Ross for pointing that one out). After quickly fixing that, it worked like a charm. Ross gently nudged me to look at the when syntax, and since I had some time left I decided to refactor my first attempt into using when. Success! It looked a lot more readable and it worked like a charm.

Kata 2: OCR

In the second kata we went back to the days of yore, when printing fancy stuff required ASCII art. In this case, we get a file with numbers in fancy ASCII art and we need to parse those back into actual numbers. I spun the wheel and was told to use Python for this, but to get my head around how to parse stuff like this I started out with a proof of concept in PHP. Unfortunately even getting this to work in PHP took me way too long, so I ended up not being able to finish a Python version. The PHP version works like a charm though. It turns out one of the biggest challenges I had was the fact that the ASCII art also relies heavily on spaces, and my PHP IDE of choice (PHPStorm) strips trailing spaces automatically. My code had actually been working for a long time before I realized the problem was with the file I was trying to parse!

Despite only having done this kata with my regular language of choice, this was still a very good exercise in parsing unconventional data structures. Having to think about how to parse characters that are actually 3 lines high is pretty interesting, and I think I found a decent solution given the time constraints I had.
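
For illustration, here is a simplified, after-the-fact sketch of the approach (not my actual kata code, and all names are made up): every digit is 3 characters wide and 3 lines high, so a "digit cell" is the concatenation of three 3-character slices, one from each line.

    <?php

    // Reference rendering of the digits 0-9; it doubles as the source for the
    // lookup table built below.
    $template = [
        ' _     _  _     _  _  _  _  _ ',
        '| |  | _| _||_||_ |_   ||_||_|',
        '|_|  ||_  _|  | _||_|  ||_| _|',
    ];

    function cell(array $lines, int $index, int $width): string
    {
        $piece = '';
        foreach ($lines as $line) {
            // Trailing spaces are significant, so pad each line back to full
            // width (my editor stripping them is what broke my input file).
            $piece .= substr(str_pad($line, $width), $index * 3, 3);
        }

        return $piece;
    }

    function parseEntry(array $lines, array $patterns): string
    {
        $width  = max(array_map('strlen', $lines));
        $digits = '';
        for ($i = 0; $i < intdiv($width, 3); $i++) {
            $digits .= $patterns[cell($lines, $i, $width)] ?? '?';
        }

        return $digits;
    }

    // Build the lookup table from the reference rendering, then parse that
    // same rendering as a sanity check.
    $patterns = [];
    foreach (range(0, 9) as $digit) {
        $patterns[cell($template, $digit, 30)] = (string) $digit;
    }

    echo parseEntry($template, $patterns); // 0123456789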

Kata 3: Pig Latin

After lunch it was time for the third kata of the day: Pig Latin. I had never heard of this one before so it was very interesting to first brainstorm about the best way to actually do this. An added difficulty was the fact that the wheel of languages gave me Elixir, a functional language in which everything is immutable. It was quite a paradigm shift for me, since I'm not used to functional programming, so it was quite interesting to combine these two unknowns.

Getting started with Elixir was quite hard. Having to think in such a different way made it extremely hard to get going, but by searching the web and reading the documentation I eventually got some code up and (nearly) running. Unfortunately I ran out of time before having a fully functional application, so this is one that I need to finish at a later date. I have saved my progress, so I can try and finish it.

Is it a problem that I didn't finish in time? Nope! Failure is the best way of learning, and I surely bumped my head a couple of times trying to implement this in Elixir. But I learned a lot from the experience, so it was definitely worth it.

Kata 4: Blackjack

After lunch Clara, whom I met at WeCamp 2017, sat down next to me, and we decided to do pair programming on the last kata of the day. That kata turned out to be Blackjack, which has some interesting challenges to solve. Clara spun Python, which I quite liked since I never got around to solving kata 2 in Python, and we started implementing Blackjack.

The idea was to write a piece of code that would get a set of "hands" and determine the winner of that round. For instance:

Clara: Q, J
Skoop: 9, K, 5
Jopie: 7, 5

The code should determine that Clara has 20 points, Skoop has 24 points and Jopie has 12 points, leaving Clara the winner since she has the highest score that does not exceed 21.

We had a basic script up and running without too much hassle, but also without the main challenge: how to handle the Ace, which can be worth 1 or 11 depending on the choice of the player. Handling the Ace turned out to be a pretty big challenge. As with kata 3, we were very close to actually solving the problem, but at the end of the given timeframe we didn't have a working solution yet. We've both got the file on our laptops and will solve the problem at a later date.
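
The scoring logic itself boils down to something like the following sketch (in PHP here rather than the Python we were writing, and with made-up names): count every Ace as 11 first, then downgrade Aces to 1 as long as the hand is bust.

    <?php

    function scoreHand(array $cards): int
    {
        $faceCards = ['J' => 10, 'Q' => 10, 'K' => 10];
        $score = 0;
        $aces  = 0;

        foreach ($cards as $card) {
            if ($card === 'A') {
                $aces++;
                $score += 11;
            } else {
                $score += $faceCards[$card] ?? (int) $card;
            }
        }

        // Downgrade Aces from 11 to 1 while the hand is bust.
        while ($score > 21 && $aces > 0) {
            $score -= 10;
            $aces--;
        }

        return $score;
    }

    $hands = [
        'Clara' => ['Q', 'J'],
        'Skoop' => ['9', 'K', '5'],
        'Jopie' => ['7', '5'],
    ];

    foreach ($hands as $player => $cards) {
        echo $player . ': ' . scoreHand($cards) . PHP_EOL; // 20, 24 and 12
    }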

This kata gave us some nice challenges involving sorting and recursion. Python was pretty easy to pick up and thinking about how to solve this kata taught me a couple of things on how to work with datasets like these. Pairing on this problem was another good exercise which I quite enjoyed.

Why would one do a code kata?

There are many sites on the Internet that list code katas (coding exercises), but why would you actually do such a kata? Well, one of the most important reasons is to try your hand at a problem of a different type than the problems you try to solve in your day job. The challenge with katas, however, is not just in the type of problem, but also in the restrictions you place upon yourself when doing it. Restrictions you can think of are:

  • Use a different programming language
  • Limit the amount of time you can spend on the kata
  • Limit the amount of lines you should use in your solution
  • Determine a minimum speed for your code, e.g. your code needs to finish within 100ms
  • Determine a maximum amount of memory your code can use

Restrictions like these will allow you to think in creative ways to not just get a working solution but also a solution that requires you to care about specific elements of your code. It forces you to think out of the box.

Why would I attend a code kata event?

Code katas are fun and useful, but attending a code kata event is even more useful. If you do code katas by yourself, you may set the restrictions with a bit of a bias towards what you think you can solve. At an event, the restrictions are not set by you but by the organizers, forcing you out of your comfort zone and making you think in ways you otherwise wouldn't.

Another great aspect is the presence of other people, talking to them about their solutions and sometimes even pairing up with them to create a solution together.

It is for these reasons that I would highly recommend visiting a local code kata event if there is one close to you. I would like to thank the great people of DomCode and Infi for hosting the code kata event today. It was a great event!

Silex is (almost) dead, long live my-lex

SymfonyCon is happening in Cluj and on Thursday the keynote by Fabien Potencier announced some important changes. One of the most important announcements was the EOL of Silex in 2018.

EOL next year for Silex! #SymfonyCon (@gbtekkie)


Silex has been and is still an important player in the PHP ecosystem. It has played an extremely important role in the Symfony ecosystem as it showed many Symfony developers that there was more than just the full Symfony stack. It was also one of the first microframeworks that showed the PHP community the power of working with individual components, and how you can glue those together to make an extremely powerful foundation to build upon which includes most of the best practices.

Why EOL?

Now, I wasn't at the keynote so I can only guess at the reasons, but it does make sense to me. When Silex was released, the whole concept of taking individual components to build a microframework was pretty new to PHP developers. The PHP component ecosystem was a lot more limited as well. A huge group of PHP developers was used to working with full-stack frameworks, so building your own framework (even with components) was deemed by many to be reinventing the wheel.

Fast-forward to 2017 and a lot of PHP developers are by now used to individual components. Silex has little left to prove on that topic. And with Composer being a stable, proven tool, the PHP component ecosystem growing every day, and now the introduction of Symfony Flex to easily set up and manage projects, maintaining a separate microframework based on Symfony components is just overhead. Using either Composer or Symfony Flex, you can set up a project similar to an empty Silex project in a matter of minutes.
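
To give an indication of what that looks like with plain Composer (the exact components depend on your project, these are just the usual suspects for handling an HTTP request):

    composer require symfony/http-foundation symfony/routing symfony/http-kernel symfony/event-dispatcher

Glue those together with a small front controller and you have the skeleton of your own microframework.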


I have been a happy user of Composer with individual components for a while now. One of my first projects with individual components even turned into a conference talk. I'll update the talk soon, as I have since found a slightly better structure, and if I can make the time for it, I'll also write something about this new and improved structure. I've used it for a couple of projects now and I'm quite happy with this structure. I also still have to play with Symfony Flex. It looks really promising and I can't wait to give it a try.

So the "my-lex" in the title, what is that about? It is about the choice you now have. You can basically build your own Silex using either Composer and components or Symfony Flex. I would've laughed hard a couple of years ago if you'd said to me that I would say this but: Build your own framework!

Is Silex being EOL'ed a bad thing?

No. While it is sad to see such an important project go I think by now the Symfony and PHP ecosystems have already gone past the point of needing Silex. Does this mean we don't need microframeworks anymore? I won't say that, but with Slim still going strong the loss of Silex isn't all that bad. And with Composer, Flex and the huge amount of PHP components, you can always build a microframework that suits your specific needs.

The only situation where Silex stopping is an issue is for open source projects such as Bolt (who already anticipated this) that are based on Silex, as well as of course your personal or business projects based on Silex. While this software will keep on working, you won't get new updates to the core of those projects, so eventually you'll have to put in the effort to rewrite it to something else.

One year without -m

One year ago I blogged about starting a new practice: Not using -m when committing something to Git. -m allows you to directly insert the commit message, which makes the whole process of committing faster, but not necessarily better.

Committing to Git

When you commit your work to Git, you not only make sure the code is in your version control, but you also have an opportunity to document that exact moment in the history of your software. When using the -m option, you're very likely to write a very short message. You're not really encouraged to actually document the current state of your code, because writing longer or even multi-line messages is harder in a console.
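
To make that difference concrete, compare the kind of message -m tends to produce with what you might write once your editor opens after a plain git commit (the message below is a made-up example):

    # What -m tends to produce:
    git commit -m "fix check"

    # What a plain git commit invites you to write:
    Return 500 from the queue health check when messages are not processed

    The check only looked at whether the queue contained messages, not
    whether they were actually being picked up. Monitoring therefore
    reported the service as healthy while messages were piling up. The
    check now returns a 500 in that situation, so the problem shows up
    immediately.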

Not using -m anymore

So, about a year ago I stopped using the -m parameter when committing changes to Git. Has it really changed anything?

Yes and no.

Yes, it has changed something in that I now take more time to write the commit message and sometimes take the time to document what is in the change and why the change was made.

No, because all too often I'm still tempted to write a pretty short commit message.

It is still something that I need to focus on more: taking the time to write useful commit messages, messages that allow you to create a timeline of your development out of the list of commits. But most certainly, commit messages have improved since making this little change in my process.

Your unit test methods need docblocks too

If you've met me at any time in the previous 20 years and you discussed unit testing with me, chances are pretty big that I'd have told you that your test methods in your unit tests don't really need docblocks, because the test methods would be named in such a way that they were descriptive. In a unit test class you could have methods such as testCheckWithQueueThatHasMessagesButIsNotProcessingThemReturns500() or testCheckWithQueueThatHasCorrectValuesReturns200(). Those names give you a good idea of what it is testing, which makes it a lot easier to find the test that fails when you run your unit tests and get some red Fs in your field of dots.

Tests can be hard to read though, especially (but not exclusively) when testing legacy code. You may have lots of mocks to configure, for instance, or you may have several similar tests that are testing specific situations, edge cases or bugs you found along the way that you wanted to cover before you fixed them. Even when you wrote the tests yourself, in 6 months you may not remember what the context was when you wrote the test. Or you might have another developer join the team who is not aware of the context.

Documentation is important. It lowers the bus factor, makes it easier to onboard new developers (or temporary external developers, waves) and makes you think about your own code in a different way. We're (getting?) used to documenting our software, but why not document our tests by giving them a bit more context?

It fixed a bug

Earlier this week I was writing tests for the code I had just written. I usually write empty test methods first for every situation I want to test, and then fill them up one by one. As I came to the last empty test method I looked at the situation I wanted to test. I implemented the test as I thought I had meant it based on the name of the method. Then I started adding docblocks to give the tests a bit more context. As I was writing the docblock for the last method I paused: Something was wrong. The thing I was describing was not actually the thing I was testing. Looking closer at the test, it made no sense. Everything I tested in this method had been tested in other methods.

I ended up rewriting the test to actually cover the situation I had wanted to test, and tweeted:

I started adding docblocks above test methods to describe what I'm testing. I just caught myself writing a nonsensical test that way. WIN. (@skoop)

What to document?

The way I write the docblocks is that I describe, in actual, human-understandable language, which situation the test covers. For instance, for one of the above examples:

/**
 * This test checks that the happy flow is correctly handled.
 * If the queue returns the right data according to our
 * specifications, it should return a 200 response.
 */

This will give you a lot of information about the test. But this one is for the standard happy flow, so it's still short. Let's have a look at another one.

/**
 * This test checks the failure flow: Matching transactions
 * fails. We also test whether database transactions are
 * used successfully: We should still commit the transaction
 * in this scenario.
 */

Here I don't just explain the flow I'm testing, but I also explain some additional things we test in this specific test. Many developers would assume that in a failure scenario the database transaction should be rolled back, but in this specific case failing to match information is an expected outcome, so we should still commit the database transaction.

Assumptions are... well, you know the drill. I realize that as a developer I make assumptions all the time, but if I can minimize the assumptions I (or other developers) make with only a small bit of effort by documenting those details, that's a WIN in my book.

DDT: Docblock Driven Testing

So these days as I start writing my tests, I still create the empty test methods first, but they are now immediately accompanied by the docblocks, describing in a bit more detail which situation the method is going to be testing. That helps me make sure I don't accidentally miss any possible scenario, or accidentally write a test I had completely meant to be different.
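
In practice, the skeleton of a new test class now starts out looking something like this before any test body is written (the class and scenarios are made up for the example):

    <?php

    use PHPUnit\Framework\TestCase;

    class QueueHealthCheckTest extends TestCase
    {
        /**
         * This test checks that the happy flow is correctly handled.
         * If the queue returns the right data according to our
         * specifications, it should return a 200 response.
         */
        public function testCheckWithQueueThatHasCorrectValuesReturns200(): void
        {
            $this->markTestIncomplete('to be implemented');
        }

        /**
         * This test checks the failure flow: the queue has messages
         * but is not processing them, so the check should return a
         * 500 response to alert us that something is wrong.
         */
        public function testCheckWithQueueThatHasMessagesButIsNotProcessingThemReturns500(): void
        {
            $this->markTestIncomplete('to be implemented');
        }
    }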

PHPNW: Thank You

For the past ten years, the PHP NorthWest conference in Manchester has had a huge impact on the Manchester PHP scene, but also on the rest of Europe (and perhaps the world). Last weekend, during the closing of the conference, Jeremy Coates announced that the PHPNW conference is going on a hiatus. They're not saying they're quitting, but for now, there will be no more PHPNW conference. A sad moment for sure, but I'm proud of all those involved in organizing PHPNW for 10 years.

Inspiration as a developer

PHPNW has been a constant source of inspiration for me as a developer. They've always had a nice and varied schedule, both expanding on existing topics and bringing in new topics that are interesting to developers. I've been incredibly inspired by many talks at PHPNW, for instance the keynote by Lorna Mitchell and Ivo Jansch and the keynote by Meri Williams.

Inspiration as an organizer

A small and perhaps unknown fact: when the Dutch PHP Usergroup merged with PHPBelgium to form PHPBenelux and we started considering organizing a conference, we contacted Jeremy and Priscilla about this. They were kind enough to give us a boatload of information about organizing a conference, ranging from how to pick the right schedule to how to attract sponsors. Their help was invaluable in this early process of organizing a conference. Also, the way PHPNW was set up, the atmosphere it had, was a huge inspiration for the early PHPBenelux Conference.

Friends, so many friends

I have met up with so many old friends and made so many new friends at PHPNW Conference. I couldn't list them all even if I wanted to, but for instance Lorna, Jenny, Jeremy, Priscilla, Mark, Matt, Kat, Mike and Rick. I've had countless conversations and discussions with people I know, people I did not know yet or people who were close friends already. PHPNW was also the conference where I finally met Khayrattee, who came over from Mauritius. That is one memory I will never forget.

A big thank you

So this is a big thank you to everyone involved in organizing PHPNW Conference those ten amazing years. You have given me and a lot of other people a lot of fun, opportunities and many lessons learned. You are an inspiration to me and I'm sure to countless others. I hope to see you at some point, somewhere in the future. Thank you.

Why I will pay more attention to game nights

WeCamp was last week, and a lot was learned and a lot of fun was had. Most lessons learned were good and nice, but I'm going to state here and now that I learned something I should've known before WeCamp but did not pay attention to: which games to bring to game night.

Game night

Game night at WeCamp is all about leaving your electronics behind: it is meant as a moment of leaving work behind and playing board and card games with the other attendees. It is for having fun and relaxing.

Cards against humanity

During the first year of WeCamp, Cards Against Humanity was very popular at events in the PHP community. As such, I brought it to WeCamp. As the game night progressed, however, I started wondering whether this was a good choice. A majority of the attendees were playing it, which left the others out. The game can also be quite offensive, and not everyone enjoys playing such a potentially offensive game (even if all offensiveness is done in an atmosphere of joking, and is not meant seriously). After the first WeCamp I decided not to bring the game anymore. I wouldn't block it from being brought by other people, but I would myself actively pursue playing other games.

Making the same mistake all over again

Earlier this year I received a new game I backed through Kickstarter: Secret Hitler. While the name feels offensive, the gameplay looked really good, and within the right context this should not be a problem. Excited as I was about this new game, I packed it into the box of games to bring to WeCamp, not thinking about the effects it might have on others. I was just really excited about this fun new party game I had purchased.

Context is important

What I had not considered is that with games like Cards Against Humanity and Secret Hitler, the context is very important. When played among friends it is clear to everyone that the game is not serious, that everyone is just having fun. When playing Cards Against Humanity, the purpose is to make jokes as offensive as possible without actually meaning offense. When playing Secret Hitler it is clear that nobody actually supports Hitler, and that accusing someone of being a fascist is done jokingly and not seriously.

However, at an event such as a conference, or WeCamp, the context is different. While you may consider people friends, they are not all close friends. And even when people are not playing the game, they may still be confronted with terms such as Hitler or fascist. I did not consider this when packing the game or setting it up, but even the usage of these terms, in whichever context, can be highly offensive to a lot of people.

Both during WeCamp and in the evaluation questionnaire we received several comments about the fact that Secret Hitler was being played. This is what made me realize that I had made a mistake. It was a big error in judgement about which games I could bring to the island, and for that I apologize to anyone who was offended by it.

There is so much to play

I have a huge stack of games that I could bring to any event, including a lot of fun (party) games. At WeCamp, for instance, Dixit was a very popular choice. We had a lot of fun playing it. Another popular choice was Bang! The Dice Game; Saboteur and 7 Wonders were also very popular. Other good choices could have been Exploding Kittens or the game I got recommended by friends: Bunny Bunny Moose Moose.

Considering it now, one of my other favorite games, the historically accurate World War II boardgame Escape From Colditz would be another good example of a game I should not bring to an event like WeCamp.


Looking back at WeCamp, I can say: Today I Learned that even though games may be associated with fun and may seem innocent, picking the right games for the right context is still important. I made a mistake by bringing Secret Hitler to WeCamp, and will give more thought to my choice of games for any future event that I attend.

Should we all stop playing these games? No. But we should remember that there is a time and a place for them. And this was not it.

Customizing Sculpin: Highlight image and Facebook

Over the past months I've been slowly customizing my Sculpin installation for this blog to fit my own liking a bit more. I've already added some styling, including a beautiful background image and a transparent white background for the content column. Today I wanted to add two things specifically:

  • I wanted to control a bit more about how my blogposts are displayed when they are shared on Facebook
  • I wanted to have an optional image at the top of blogposts to make them look a bit better

It turns out this was actually quite easy, so here's a short description of what I did to make it work.


A quick search gave me the exact Facebook documentation I needed for setting up basic markup to make my site look better when shared on Facebook. It basically means adding a couple of tags to the head of my HTML. Now that is easy! So in my source/_views/post.html I've added some lines to the head_meta block, which is the block in the layout that contains meta-data. I found this quite fitting.

    <meta property="og:url" content="{{ site.url }}{{ page.url }}" />
    <meta property="og:type" content="article" />
    <meta property="og:title" content="{{ page.title }}" />
    {% if page.social.summary %}
    <meta property="og:description" content="{{ page.social.summary }}" />
    {% else %}
    <meta property="og:description" content="{{ page.blocks.content|striptags|slice(0, 255) }}..." />
    {% endif %}
    {% if page.social.highlight_image %}
    <meta property="og:image" content="{{ site.url }}{{ page.social.highlight_image }}" />
    {% endif %}

Most of this seems pretty basic: I set the URL of the current article, I set the title to the title of the current article, the type is article (according to the Facebook documentation if you leave this out the default is website, which seems like an incorrect description of a blogpost). The description and highlight image meant I had to extend the standard blogpost format for the markdown file a bit more, I'll get back to that in a minute. But as you can see, I only add an image if I've set a highlight image, and I add a basic description unless a custom summary has been set in the blogpost.

Extending the Sculpin frontmatter

While I could just use a basic summary based on the blogpost and leave out the image, I wanted to have the flexibility to customize this a bit more. Luckily Sculpin allows you to extend the markdown frontmatter with your own custom tags. Basically, any tag (or hierarchy of tags) you add to the frontmatter in your blogpost markdown file automatically ends up in your data structure in the template. So now I can simply add some stuff to my blogpost, and use it in my template:

    social:
        highlight_image: /_posts/images/powertools.jpg
        highlight_image_credits:
            name: Dorli Photography
            url: https://www.flickr.com/photos/dorlino/4946061042/
        summary: I've customized my Sculpin a bit more to fit what I want with the blog.

As you can see, if I use a deeper hierarchy, I can access that by concatenating with dots, for instance the page.social.highlight_image I use in the template comes from the above information.

Highlight image

Since I have a highlight image for Facebook anyway, I could actually use it to make my site look a bit nicer as well. So let's add the (optional) highlight image to the top of the blogpost as well. Since my (default, I think?) Sculpin template is split up into two templates, this required a change in two places:

  • source/_layouts/default.html
  • source/_views/post.html

The first change is in the default layout: I need to add a block on top of the row that contains the blogpost to allow me to add custom HTML in my post template. This is a pretty simple task:

{% block topbanner %}{% endblock %}

The block does not contain anything by default; it only gets content when it is overridden by a subtemplate, in our case the template for the blogpost.

{% block topbanner %}
{% if page.social.highlight_image %}
<div class="row-fluid">
    <div class="span12">
        <img src="{{ site.url }}{{ page.social.highlight_image }}" style="width:100%" />
    </div>
</div>
{% endif %}
{% endblock %}

In the source/_views/post.html I overwrite the block and add some content, but only if I've actually set a highlight_image for the blogpost. This ensures I can also blog without a highlight image, but also keeps backwards compatibility for the years and years of old blogposts that do not have a highlight image.

If the image is set, I simply add a new row-fluid with the image in it. Thanks to @jaspernbrouwer for helping me with the HTML here, I initially placed the HTML in the wrong place in the layout file. This will now add the highlight image at the top if it is present.

Credits where credits are due

Of course, if I use images of other people, I want to credit them. So I've added a bit of code to the sidebar as well to do exactly that:

{% if page.social.highlight_image and page.social.highlight_image_credits %}
    Image by
    {% if page.social.highlight_image_credits.url %}
    <a href="{{ page.social.highlight_image_credits.url }}">
    {% endif %}
    {{ page.social.highlight_image_credits.name }}
    {% if page.social.highlight_image_credits.url %}
    </a>
    {% endif %}
{% endif %}

I think this code is pretty self-explanatory: if there is an image and the credits are also set, add the credits to the sidebar. If I've also set a URL for the credits, make the name a link.

The result is what you're looking at right now.