Surface Book 2 Nvidia card not recognized

So I love my Surface Book 2. It is an amazing laptop and tablet hybrid, and there are very few issues I have with it. But for the past couple of weeks I've been having issues with the video card. The Surface Book 2 has two video cards: an Intel UHD 620 for the tablet part, and an Nvidia GeForce GTX 1050 in the keyboard base for more demanding graphics. Especially when playing games I was recently getting very low FPS, and I had no idea what could be causing it.

When I opened my device manager, I was shocked to see that the Nvidia card was not there. I started searching the Internet and apparently this is an issue that has been plaguing other Surface Book users as well. Eventually I found a solution, posted by Philip Aaron, that worked for me!

  1. Open the device manager so you can see which display adapters Windows sees
  2. Disconnect power from the laptop
  3. Detach the Surface from the keyboard base
  4. Wait until the device manager has reloaded its devices
  5. Attach the Surface back to the keyboard base
  6. Wait until the device manager has reloaded its devices
  7. The Nvidia card should now be recognized again; reconnect the power
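If you prefer a quick check over clicking through the device manager, PowerShell can also list the display adapters Windows currently sees. This is just an alternative way to verify steps 1 and 7, not part of Philip's fix:

    Get-PnpDevice -Class Display | Format-Table Status, FriendlyName

When the Nvidia card is recognized, both the Intel and the Nvidia adapter should show up with status OK.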

Having to do this every time I start the computer is rather annoying, so I've contacted Microsoft to see if there is a permanent solution. For now, this at least solves my FPS issues.


What I learned from the Zend/Rogue Wave acquisition (or: why I'm so excited about the Github/Microsoft deal)

When Zend was acquired by Rogue Wave in 2015 I was openly and vocally scared for the future of Zend (and PHP). I honestly thought the deal would mean Zend would be absorbed into Rogue Wave and that would be the last of it. A friend DM'ed me to warn me that being so open about this might make it a self-fulfilling prophecy, not necessarily because of Rogue Wave but because companies could lose trust in PHP.

Fast-forward three years and PHP is still stronger than ever, and if anything we're getting more rather than less from Zend: their products are still going strong, and ZendCon has been turned into ZendCon & Open Enterprise, broadening the scope of the conference and thereby making it more interesting for developers.

I didn't know Rogue Wave back then, which is what made me a bit scared. Basically, I was doing what I always tell people not to do: being scared of the unknown. Getting out of that comfort zone is (mostly) a good thing. I shouldn't have reacted the way I did.

About the Github acquisition

Now I'm seeing a lot of people being scared by Microsoft acquiring Github, and the funny thing is that I feel no fear. Part of that is probably because I know Microsoft, and while they have a history of very bad behaviour when it comes to open source, they've changed a lot in recent years. Sure, I also make jokes about Windows sometimes, although those jokes are always based on what Windows was years ago; overall, their current leadership seems very aware of and open to the concept of open source.

Another reason I'm not scared is that at this point Github does not seem to be profitable. A good financial injection from a big company that has no problem investing some money in something like Github may be exactly what they need right now. Sure, things will change after the acquisition has closed, but probably only to make Github profitable. I trust Microsoft to understand what Github is about and how to run the company.

Moving to Gitlab

Now, there are enough reasons to move to Gitlab: their great CI/CD tooling, their tight integration with Docker, or one of their many other features. The fact that they run a very transparent and open company (including the Gitlab codebase itself) can be another good reason. My company has mostly migrated to Gitlab already because of our great Gitlab/Docker/Rancher setup. Moving because of the acquisition of Github is probably the worst reason though. Keep in mind that Gitlab has Google VC, so moving to Gitlab does not mean you're now hosted by an independent Git hosting company.

Having said that, I hope that all those people who migrated their codebases to Gitlab will find out about the awesome features Gitlab offers that Github does not. You'll have to get used to the interface, but Gitlab is awesome.

Back to Github

Many people are predicting doom and gloom for Github: open source repositories should be moved or else..., Microsoft will go and have a look at your private repositories. I see no reason why any of that would happen. Github will still be Github. Of course they will keep your codebase safe and won't look at the contents of your repository: that is their core business. If they did stuff like that, 99% of their customers would be off their service in no time.

So to all developers who are scared after the news of the acquisition broke, I'd say: give Microsoft a chance. The company has changed a lot and deserves a chance to prove what it's worth. If Github is working for you, there is no need to move away. As I said, there are good reasons to move to Gitlab, but please move to Gitlab for the right reasons, not out of fear of Microsoft.


Surface Book 2 For Development

Over the past month and a half I've been trying to fully switch to a new work machine. Instead of my trusty MacBook Pro, I've mostly been working with a Microsoft Surface Book 2. Here are my lessons from this period.

Context

Let's start with a bit of context. For the past 10+ years I've been using Apple laptops exclusively for work. It began when I started my job at Ibuildings and got the opportunity to choose between a PC laptop and a Mac. I'd heard good things about Macs, so I decided to give it a try. When I came home from work after that first day I told my wife: "If I ever leave Ibuildings, I'm going to have to buy myself a Mac". I was impressed. The ease of use, the intuitiveness and the user experience were all so nice. So much better than Linux, which I'd been using in the years before. Or even Windows, which I'd been using until Windows ME came out and forced me into the stable hands of Linux.

I've been a full-on Apple fanboi ever since. Until a couple of years ago, there was nothing Apple did that would stop me from using their stuff. The platforms they built were stable and because they control both hardware and software, everything was tuned to each other.

But in the past couple of years I've become more unhappy with Apple's decisions. Their platforms are becoming less stable and less reliable, and more than once I've felt their decisions were based mostly on economics, on money, and not on usability, which had been their focus until then (or at least that's how it felt).

When Microsoft first announced the Surface (and later the Surface Book), I was intrigued. A tablet that is also a laptop. Everything in a single device. Powerful enough to work on, yet also easy to bring to a meeting and not have a laptop screen in front of your face. When the first rumours started that Apple was going to introduce an iPad Pro, I sincerely hoped it would be similar: An iPad device running macOS. That would be great!

The announcement of the iPad Pro was a disappointment to me. With it running iOS there was no chance I could do serious development on it. The specs were also disappointing. This was not an option anymore.

My MacBook had slowly become a device that frustrated me instead of a device I felt happy to spend 8-12 hours a day on. So I started looking around. In February, while visiting a Dutch Mediamarkt store to get some stuff, I noticed a Surface Book and started playing with it. A Mediamarkt employee came by to give me a demo of some of its features and I was pretty much convinced.

Borrowing a Surface Book

Switching platforms is a big decision, however. I've got so much time, effort and money invested in the Apple platform that I'd (at least partially) lose by switching to Windows, so I really wanted to test-drive the Surface Book before actually making a decision. I started looking for companies that rent out Surface Book devices. I found several, but they were all aimed at renting out devices for a couple of days, for events and such. If I were to rent one for 2-3 months (which I'd need to really test-drive the device), I could just as well buy it. The prices were much higher than anticipated, and long-term rental did not really seem to be a thing. So I reached out to Gerard, one of my contacts inside Microsoft Netherlands, and asked him if he knew of companies that do this. Gerard introduced me to Paul from Microsoft Netherlands, and Paul offered to lend me a device. For free. The only catch was that I'd share my experiences. Given that I was planning on sharing my experiences anyway if I found a rental device, I quickly agreed. This was a great opportunity!

Waiting is hard

After agreeing to borrow the device, the waiting was probably the hardest part. It felt just like that moment when you've ordered your new MacBook Pro on the Apple website and then have to wait for it to be delivered. A shiny new device is coming your way. Luckily, the wait wasn't all that long, because a week later a parcel was delivered. Ooooohhh.

The first time

I unpacked the Surface Book 2 and booted it up. In terms of experience, it definitely felt like unpacking and booting up a new MacBook. A nice wizard helped me set up the basics of the computer, like the user, the wifi etc. The whole setup could also be done using speech with the Cortana software, but as fancy as that may seem, I somehow dislike microphones constantly listening to what I'm doing and saying, so I quickly turned that off. All in all the initial setup was done in a couple of steps and a couple of minutes.

Now, to set this up as a development workstation I need some software. The initial list of things I thought I needed was:

  • Firefox
  • Docker
  • Git
  • PHPStorm

That should at least give me a basic setup for doing my development projects. Just like any new computer this is pretty straightforward. Download, run the installer, run the software. Nothing special about that. But I quickly found out I was missing some other things:

  • An SSH key for Github
  • 1Password for my passwords
  • A MySQL client

The first two were done pretty quickly; the last took me a bit more time.

Sequel Pro

Replacing Sequel Pro took some time. Nothing works like Sequel Pro in terms of user experience. My first thought was to try MySQL Workbench, but I quickly concluded that it is not my thing. It just misses any form of user experience. After searching around the Internet for a bit I found HeidiSQL, a free software package for managing MySQL, PostgreSQL and MSSQL databases. It's not as good as Sequel Pro, but it comes really close. The interface is very clear and intuitive. I'd found my Sequel Pro replacement.

Connecting my headphones

The first real issue I ran into was when I wanted to connect my bluetooth headphones (JBL E65BTNC) to the Surface Book. At first the Surface Book didn't even see my headphones, and when it did recognize them, it wouldn't connect to them. When I turned bluetooth and my headphones off and on again, they connected. One would jokingly call this "the Windows way", but it seems that after all these years, it actually still is the Windows way. As I used the Surface Book more, bluetooth turned out to be its main weak spot: I tried to connect or reconnect several different bluetooth devices during my trial period. Eventually most devices did connect, but it usually took several tries and turning the devices and the Windows bluetooth functionality off and on before it worked.

Docker

Another issue I had was with Docker networking. My initial playing around with Docker for Windows worked fine, but as soon as I wanted to start working on my client project I had major issues with networking in Docker. We have a pretty complex Docker setup which somehow did not want to work. Luckily, a co-worker who is also using Windows was able to help me out. In the Hyper-V Manager, under Virtual Switch Manager, I needed to create a new external network, using my wireless adapter as the external network. It was important to tick the box 'Allow management operating system to share this network adapter'. Once I had done this and restarted Docker, it worked like a charm.
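For those who prefer the command line over clicking through the Hyper-V Manager, the same switch can presumably be created from an elevated PowerShell prompt, something like this (the switch name is arbitrary, and 'Wi-Fi' should be whatever your wireless adapter is called):

    New-VMSwitch -Name "DockerExternal" -NetAdapterName "Wi-Fi" -AllowManagementOS $true

The -AllowManagementOS flag corresponds to the 'Allow management operating system to share this network adapter' checkbox.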

ConEmu

Another useful tip I got was to use ConEmu. ConEmu is an easy little application that allows you to have multiple Powershell tabs in a single window. I use shells a lot, and with plain Powershell it is impossible to have several tabs with different shells; with ConEmu, you can. ConEmu is actually quite powerful, because you can configure several different shell configurations. This means you can easily open a new tab with a different configuration if you have shells for several purposes. Quite useful!

Bash on Windows

At some point I was also pointed towards the option to run an actual Bash on Windows. I tried it out and it works quite nicely. Since there is no less-like program in Powershell (or I did not find any), the Bash shell makes it a lot easier to quickly check the contents of files or, for instance, tail -f a log file. I had quite a few issues integrating Bash with my Docker though, because Bash actually runs inside a Linux subsystem on Windows (WSL), so it is not fully integrated with Windows. I was pointed to a solution: you need to set export DOCKER_HOST=tcp://localhost:2375 in your .bashrc in the WSL. Then, in the settings for Docker for Windows, you need to tick the checkbox 'Expose daemon on tcp://localhost:2375 without TLS'. Now you can use the docker commands in your WSL Linux (after you've installed the docker.io and docker-compose packages using apt-get). Unfortunately, this did not fully solve the problem, so I've decided (given I only have a limited trial period at the moment) to let this rest and just use Docker from Powershell and use Bash for things like tailing, quick file access etc.
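For reference, this is roughly what the WSL side of that setup looks like, assuming the 'Expose daemon' checkbox is ticked in the Docker for Windows settings:

    # in ~/.bashrc inside the WSL
    export DOCKER_HOST=tcp://localhost:2375

    # inside the WSL: install the client tools, then test the connection
    sudo apt-get install docker.io docker-compose
    docker version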

iMessage

Over the previous years I have invested quite heavily in Mac. Not just in terms of money, but also in terms of tooling. One of the major issues I've encountered is the fact that I am a heavy user of the Messages app on my MacBook. It allows me to quickly type iMessages to other people with Apple devices. So a big question for my decision to switch to Windows is: do I want to give up iMessage? Messages.app is only available on Apple platforms, so there is no way to have the same integration with iMessage if I switch to Windows.

In the previous months I've solved this by using WhatsApp Web in Rambox. I was already using Rambox for access to Slack, Gmail/Inbox, Discord and Google Calendar, so it was easy to add WhatsApp to that. This gave me the same ease of sending messages, and actually made it even easier, because I was now also able to communicate the same way with people who do not have Apple devices. The only downside is: it's WhatsApp.

Games

I don't use my MacBook purely for work. At home I also use Steam to install and manage the games that I play on my MacBook. The most important games for me at the moment are Orwell, Eternal, Prison Architect and Football Manager 2018. All of these ran extremely well, with much better graphics, on the Surface Book 2. The touch screen was added value for some games (like Eternal), because you could literally play by dragging cards onto the playing field. No mouse required anymore! Yay!

And there was more. I could now try out Fortnite, which doesn't really run on OSX. A great game, which I could easily play with all sliders cranked up to the maximum.

Switching was surprisingly easy

Now, this is not really a testament to Windows, Mac or anything like that, but more to the fact that over the previous years I'd switched to so many cloud-based services: switching to Windows was extremely easy thanks to things like Dropbox, Todoist, Google Calendar, Google Drive, 1Password and OneNote. With applications like these, switching to another platform is literally just installing the apps on the new platform and logging in, and it works. I immediately had access to the majority of the important files I had on my Mac, all my notes, my calendar and my todo-list (most of my life is dictated by my todo-list these days; I could not afford to lose this data).

The touch screen and the detachable screen

The touch screen on the Surface Book is quite nice. It works really well. Unfortunately, many apps are not yet adapted to work with a touch screen, which makes using them by touch a bit annoying. Luckily, I also got a Surface Pen with the Surface Book, which allows for more precise aiming.

Where the power of the touch screen really stood out was when I would detach the screen to use it as a tablet, for instance when I went into meetings. When using OneNote, I could use the Pen to write down notes, then select those notes and use the OneNote Ink to text feature to convert my written notes into actual text. The handwriting recognition is impressive! It makes some mistakes here and there but they're small and easy to correct.

As a developer, detaching the screen had some downsides as well. When the screen is detached, the battery life is (understandably) a lot lower. If you're running several Docker containers, it's hard to even sit through an hour-long meeting without your battery running out. So using the screen as a tablet does take some effort: you'll have to shut down your Docker containers and IDE before going into the meeting. Once you turn off such battery-slurping applications though, your battery problems disappear immediately and you can sit through meeting after meeting without ever fearing you'll run out of battery.

Where Apple still wins

There are still some areas where Apple definitely wins. Mostly this is the default toolset that comes installed. When you get a Mac, you can easily open any file you receive, whether those are images, PDF files, Word documents or spreadsheets. It all just opens. One of the main reasons for this is that every Mac is equipped with a great standard toolset: Preview, Pages, Keynote, Numbers; it's all there and can open just about any file you get. With Microsoft, you get the Office tools installed but no license by default, which means you get a read-only view with a very annoying popup asking you to get a license. PDF files are opened in Edge (for some reason), which seems to work, but Edge is not as lightweight as Preview.

Another thing I've noticed is that it is a lot easier to find good apps with a good UX for MacOS. There's a lot of software available for Windows, but much of it simply doesn't seem to have been made with any sense of UX. Even the aforementioned Todoist is a lot less usable on Windows than it is on MacOS. In a blog post they promised improvements, but if they have already released those improvements, I'd hate to see how bad it was before. Don't get me wrong, Todoist works well on Windows, but the interface is far from as smooth as it is on MacOS. Todoist is just an example; I have had many similar experiences with other apps.

Having said all that, all those experiences were with apps that already work well under Windows. They could just be better. It's definitely not a reason to stop my move to Windows.

Concluding

After just under two months of using the Surface Book 2, I'm very sad to let it go. This machine is amazing, and I'm very positively surprised by Windows these days. When I "left" Windows (around the Windows ME era) it was a horrible operating system for power users, and the times since then that I had to work with Windows were not very good experiences. But a lot has changed. Windows is a serious option again for development work, and with the Surface Book 2 Microsoft has a fantastic and very powerful machine that does well both as a development machine and for your occasional gaming pleasure. I know what my next development laptop will be, and it's not a MacBook Pro. It's going to be a Surface Book 2.


Mental Health First Aid

It was at the TrueNorthPHP conference in 2013 that I first saw Ed Finkler speak, giving his Open Sourcing Mental Illness talk. This talk has meant a lot to me on many levels, but one of the things I took away from it was the existence of Mental Health First Aid. Mental Health First Aid (MHFA) is basically the mental-illness counterpart of regular first aid. It gives you basic information about how to act when you encounter someone with a mental illness, especially in crisis situations. This includes approaching the person, what to do and not to do when talking to them, and how to make sure people get the right help, including a hand-off to (mental health) medical professionals.

Fast forward a few years to about two years ago. Doing the MHFA course was still on my wishlist, but there still was no option in The Netherlands to do so. Just as I was looking into options of travelling to the UK for the course, I found out that one of the Dutch regional institutes for mental health (GGZ Eindhoven) was working on bringing MHFA to The Netherlands. I contacted them to see if I could be part of the trial group they were doing, but never heard back. I put my focus on some other things I wanted to do and put MHFA on hold again.

Earlier this year I decided to add a bit more structure to the training programs within my company Ingewikkeld, to better enable my employees to increase their knowledge and skills, and I decided I should also use this new structure for myself. I looked up the MHFA options in The Netherlands and found out that there were now many options for taking the course, including in my favorite city, Utrecht. I signed up, and over the past four weeks I've had four three-hour sessions.

The goal

In the first session we were asked about our goals. My main goal was to understand more about mental illness and to get practical guidance on how to act when talking to someone with a mental illness, be it in a crisis situation or not. And over the sessions, this is indeed what happened. I learned a lot about depression, anxiety, psychosis, substance abuse and crises such as suicidal tendencies, self-harm, panic attacks, aggression and more: what they are and how to handle such situations. When we were asked at the end of today's session to summarize our experience over the past four weeks, my answer was:

Goals achieved

It became personal

As I've got issues with depression myself, especially the first two sessions caused a lot of self-reflection as I learned more about what happens with depression. There were a lot of familiar situations in the course material, and it was very interesting to hear more background information on those situations. Our group was very nice and diverse, with people from lots of different backgrounds, which gave me a lot of insight into how different people experience different situations.

As the course progressed, mental illnesses were covered that I had no experience with. This was definitely eye-opening. I now have much more understanding of what can happen in people's heads, and I hope that helps me respond more empathetically to such situations, if I ever encounter them.

Why I recommend more people taking this course

Isn't it a bit weird that we find it very normal to take regular first aid courses, but we stay away from anything related to mental health? Somehow there is still a taboo on mental health problems. And yet (at least here in The Netherlands) there are regular news items about people in mental health crises. It seems like a growing problem, yet nobody wants to know how to act in such situations?

Taking this course will make you understand more clearly what happens when someone has a mental health issue, how it affects their life, and how to act when you encounter a situation involving a mental health issue. It will help you be more empathetic, not just in crisis situations, but also when simply talking to someone with a mental illness. I also think it will look good on your resume to potential employers: knowing how to act in these situations is still a rare skill, and employers will benefit from you having this knowledge. So check which local organization offers the course and register. I'm pretty sure you won't regret it.


3 weeks without coffee

Three weeks ago I decided that I was going to take a break from coffee. Every once in a while I take a break from certain things, or try to minimize their usage. Some months ago I minimized the amount of soft drinks I was drinking, and three weeks ago it was time to quit coffee. I wanted to break the habit and get rid of my caffeine dependence.

I'd done this before, so I knew what to expect, and it was not very different this time around. The first day was fine, except that I accidentally accepted coffee when visiting a client, out of habit. "You want coffee?" I was asked, and just like that I said "sure". I didn't realize I had quit coffee until I'd already finished half of it. The second and third day I had some headaches and a lot of urges to get coffee. I resisted the urges, got water or tea instead, and pretty much got off my addiction (or dependence, or habit). From day 4 onward I had pretty much no urge for coffee anymore.

The effects? I sleep better. I wake up less tired (well, except when I go to bed really late of course). I am also less tense, feel more relaxed. Long story short, it just feels a bit better.

I intend to keep this up for a long time. Now that I'm used to not drinking coffee, it's not that hard anymore, and the urge to get coffee is gone. I'm also considering doing a similar habit-breaking experiment with alcohol.

If you've got similar experiences with experiments like these, I'd be happy to hear from you.


Installing Bolt extensions on Docker

I'm currently working on a website with the Bolt CMS. For this website, I am using an extension. Now, the "problem" with extensions is that they are installed by Bolt using Composer, and end up in the .gitignore'd vendor/ directory. That is fine while developing, because the extension will just be in my local codebase, but once I commit my changes and push them, I run into a little problem.

Some context

Let's start with a bit of context: Our current hosting platform is a bunch of Digital Ocean droplets managed by Rancher. We use Gitlab for our Git hosting, and use Gitlab Pipelines for building our docker containers and deploying them to production.

The single line solution

In Slack, I checked with Bob to see what the easiest way was of getting the extensions installed when they're in the configuration but not in vendor/, and the solution was so simple I had not thought of it:

Run composer install --no-dev in your extensions/ directory

So I adapted my Dockerfile to include a single line:

RUN cd /var/www/extensions && composer install --no-dev
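For context, this is roughly where that line sits in the Dockerfile. The base image and the way Composer gets into the image are illustrative, not our exact setup:

    FROM php:7.2-apache
    # make the composer binary available inside the image
    COPY --from=composer /usr/bin/composer /usr/bin/composer
    COPY . /var/www
    # Bolt installs extensions with Composer into a .gitignore'd directory,
    # so install them while building the image
    RUN cd /var/www/extensions && composer install --no-dev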

I committed the changes, pushed them, Gitlab picked them up and built the new container, Rancher pulled the new container and switched it on, and lo and behold, the extension was there!

Sometimes the simple solutions are actually the best solutions


A rant about best practices

I have yet to talk to a developer who has told me that they were purposefully writing bad software. I think it is simply part of being a developer that you write software that is as good as you can possibly make it within the constraints you have.

In our effort to write the Best Software Ever (TM) we read up on all the programming best practices: design patterns, refactoring and rewriting code, new concepts such as Domain-Driven Design and CQRS, all the latest frameworks, and of course we test our code until we have decent code coverage and sit together with our teammates to do pair programming. And that's great. It is. But it isn't.

In my lightning talk for the PHPAmersfoort meetup on Tuesday, January 9th, 2018, I ranted a bit about best practices. In this blog post, I try to summarize what I ranted about.

Test Coverage

Test coverage is great! It is a great tool to measure how much of your code is touched by unit (and possibly integration) tests. A lot of developers I talk to tell me that they strive for 100% code coverage, 80% code coverage, 50% code coverage or some other arbitrary percentage. What they don't mention is whether or not they actually look at what they are testing.

Over the years I have encountered so many unit tests that were not actually testing anything. They were written for a sole purpose: to make sure that all the lines in the code were "green", covered by unit tests. And that is useless. Completely useless. You get a false sense of security if you work like this.

There are many ways of keeping track of whether your tests actually make sense. Recently I wrote about using docblocks for that purpose, but you can also use code coverage to help you write great tests. Generating code coverage can help you identify which parts of your code are not covered by tests. But instead of just writing a test to ensure a line turns green, you need to consider what that line of code stands for, what behavior it adds to your code. And you should write your tests to test that behavior, not just to add a green line and an extra 0.1% to your code coverage. Code coverage is an indication, not proof of good tests.
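To make that concrete, here is a hypothetical PHPUnit example (DiscountCalculator is made up). Both tests turn the same line green, but only the second one actually tests behavior:

    <?php
    use PHPUnit\Framework\TestCase;

    class DiscountCalculatorTest extends TestCase
    {
        // Covers the line, proves nothing: it only shows the method doesn't throw.
        public function testCalculateRuns()
        {
            $calculator = new DiscountCalculator();
            $calculator->calculate(100.0, 'GOLD');
            $this->assertTrue(true);
        }

        // Covers the same line, but documents and verifies the behavior we
        // actually care about: gold customers get a 10% discount.
        public function testGoldCustomersGetTenPercentDiscount()
        {
            $calculator = new DiscountCalculator();
            $this->assertSame(90.0, $calculator->calculate(100.0, 'GOLD'));
        }
    }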

Domain-driven design

DDD is a way of designing the code of your application based on the domain you're working in. It puts the actual use cases at the heart of your application and ensures that your code is structured in a way that makes sense to the context it is running in.

Domain-Driven Design is a big hit in the programming world at the moment. These days you don't count anymore if you don't do DDD. And you shouldn't just know about DDD or try to apply it here and there, no: ALL YOUR CODES SHOULD BE DDD!1!1shift-one!!1!

Now, don't get me wrong: there is a lot in DDD that makes way more sense than any approach I've used in the past, but applying DDD to every bit of code you write does not make sense. Doing something DDD-like is not that hard, but doing DDD right takes a lot of learning and a lot of effort. And for quite a few of the projects where I've recently seen people want to use full-on DDD, I wonder whether it is worth the effort.

So yes, dig into DDD. Read the blue book if you want, read any book about it, read all the blog posts, and apply it where it makes sense. Go ahead! But don't overdo it.

Frameworks

I used to be a framework zealot. I was convinced that everyone should use frameworks, all the time. For me it started with Mojavi, then Zend Framework, and finally I settled on Symfony. The approach and structure that Symfony gave me made so much sense that I started using Symfony for every project I worked on. My first step would be to download (and later: install) Symfony. It made my life so much easier.

Using a framework does make a lot of sense for a lot of situations. And I personally do not really care what framework you use, although I see a lot of people saying "You use Laravel? You're such a n00b!" or "No, you have to use Symfony for everything" or "Zend Framework is the only true enterprise framework and you need to use it".

First of all: there is no single framework that is good for every situation. Second of all: why use a pre-fab framework when you can build your own? And sometimes you really don't need a framework at all. Stop bashing other people's solutions and start worrying about solving your own problems. Pick the right tool for the job and fix stuff.

Event sourcing + CQRS

Event sourcing is a way of storing and retrieving data that does not store just a single truth. It uses events to record changes to your data. At any point in time, you can replay those events to get to the current state of your data, but you can also look back into the history for earlier states of the data. It is a great concept for storing data where you need a paper trail (for instance for audit purposes) or where you need versioning of your data.

CQRS is a method of separating your C, R, U and D. In most places where I've seen it applied, it is a separation of reading data from the datastore and writing data to the datastore.
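In code, the core of that separation can be as small as two interfaces. A minimal sketch with hypothetical names:

    <?php
    // The write side: operations that change state and return nothing.
    interface UserWriteRepository
    {
        public function save(User $user): void;
    }

    // The read side: queries that return data and change nothing.
    interface UserReadRepository
    {
        public function findByEmail(string $email): ?User;
    }

The read side can then be backed by whatever is fastest to query, while the write side guards consistency.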

Both are, like Domain-Driven Design, a big hit in the programming world at the moment. There's a lot of fanaticism around it. Of course, you should do event sourcing, preferably on all your data. Of course, you should use CQRS, it is such a great way of separating responsibilities.

And while I agree with the arguments, I don't think they should be applied to every situation. For many projects, a "traditional" relational database will work. Or the previous big hit, document databases, will work as well. And for your average project, separating read and write is not a huge requirement either. Sure, it will add some structure to your code, but also some overhead while developing. As Martin Fowler puts it:

For some situations, this separation can be valuable, but beware that for most systems CQRS adds risky complexity.

Pair programming

Now here's a programming practice that I truly love: pair programming. Sit down with another developer and start coding. One developer is the "driver": they type the code and offer implementations for the route that the "navigator" lays out. The navigator sits next to the driver and comes up with ways of approaching the task at hand.

There is something about this way of working together that makes a lot of sense. My way of looking at a problem is probably different from that of the person sitting next to me, and by combining our approaches and picking the best of both worlds, the solution will be better than anything either of us could've come up with individually.

Having said that, I don't think any developer would say "yes, let's do pair programming full-time". Or if they do, they're not like me.

Pairing full-time would exhaust me. When I do full-day pairing sessions (which I occasionally do) I am completely dead by the end of the day. When I do it a couple of days in a row, I need the full weekend just to recover, leaving me very little time to actually do fun stuff. The amount of social interaction while pairing would kill me if I did it full-time. The intensity of pairing as well, because pairing is intense. Instead of just having to think of your own solution, you now have to combine it with the input of the other half of the pair, and together you have to decide which way to go. And there is such a thing as decision fatigue.

Instead, and I've done this several times with great success, you should combine pairing sessions with individual work time. Do pair programming for an hour, maybe two, then split up and work on parts of the task individually, then come back together to combine your individual work. This still gives you the benefits of working together but won't burn you out in two weeks' time.

Refactoring + Rewriting

Refactoring is the process of changing parts of your code while keeping the outward behavior the same. It improves the code quality without impacting the code that relies on your code.

Rewriting code is basically refactoring without giving a shit about backward compatibility. It's refactoring YOLO-style: you completely replace the old code with new code, and the behavior of the code may change according to your wishes.

Depending on who you're talking to, every bit of legacy code should be refactored or rewritten, as soon as possible.

And while I agree that we should refactor or rewrite legacy code, I probably disagree on the definition of "as soon as possible".

Refactoring and rewriting code are great tools to improve the quality of your codebase, and with that the quality of your application. They are extremely powerful tools, but with great power comes great responsibility. Given unlimited time and funds, I believe any developer in this world would continually keep refactoring and rewriting their code, and never ship a damn thing. Because as we develop our software and as we develop our skill set, we find out about new and different ways of solving the same problem. And every time we discover a fancy new way to solve a problem, all the code we have written until then becomes instant legacy code. This is a never-ending cycle.

Legacy code is fine if it works, performs and is secure according to the business specifications and requirements. From a technical point of view you may want to fix some issues the code has, but there has to be a balance between delivering code improvements and delivering functionality. We should not refactor or rewrite parts of the code as we encounter them, but instead keep track of what we have found in a central place and determine, in close collaboration with the business, what to fix and when. If you really need a quick solution, you can encapsulate the legacy with a small layer of better code. That way you can use the legacy code while having a nice and "modern" interface to it.
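Such an encapsulation can be as simple as a thin wrapper class. A hypothetical sketch, where LegacyInvoiceLib stands in for whatever legacy code you're stuck with:

    <?php
    // Gives the legacy code a clean interface; the legacy keeps working
    // untouched behind it, and callers no longer depend on its quirks.
    class InvoiceRenderer
    {
        private $legacy;

        public function __construct(LegacyInvoiceLib $legacy)
        {
            $this->legacy = $legacy;
        }

        public function renderPdf(int $invoiceId): string
        {
            // The odd legacy call stays in exactly one place.
            return $this->legacy->do_render($invoiceId, 'pdf', true);
        }
    }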

Consider ALL THE BEST PRACTICES

All of the above are just examples of different best practices that you need to consider. When writing code you should, of course, keep all the best practices in mind that you can think of, but there is no need to apply them all at the same time. Strike a balance between code quality and speed of development, applying the practices that fit the situation you're in at that point. Best practices are best practices for a majority of situations, but they are generalized precisely so they can apply to that majority. This also means they may not apply to your situation, or there may be more important things to weigh in. So read up on all the best practices, keep them in mind, but think before you act. Apply the best practices wisely after weighing all the factors that apply to your situation. And please, please use your common sense.


Code Kata Day

Today I was at the DomCode Code Kata Day in Utrecht. Over the course of the day we were given 4 different code katas, with about an hour each to solve the problem. You could either pick a programming language you wanted to learn more about, or use the wheel of languages to get a random one. Here's a summary of the day, how it helped me become a better programmer, and why I think it worked so well.

Kata 1: 99 bottles of Kotlin

The first kata we got to do was a relatively simple one (with some minor but nasty details): 99 bottles. The idea is to write a piece of code that "sings" the 99 bottles of beer song. I spun the wheel and got Kotlin as the language to use for this kata. Given that my experience is about 99.999% PHP, any venture outside of PHP would be an interesting exercise, and I was quite curious whether I could solve this. But the problem was relatively easy, so I set out to try it.

Kotlin is a relatively easy language to get started in, especially when you're used to languages such as PHP and Python. The syntax is quite similar and the documentation is pretty good. It didn't take long until I had my first attempt working. Well, almost working: as it turned out, I had not taken into account that when there are 2 bottles of beer on the wall and you take one down, there is not "1 bottles" of beer on the wall but "1 bottle" of beer (thanks Ross for pointing that one out). After quickly fixing that, it worked like a charm. Ross gently nudged me to look at the when syntax, and since I had some time left I decided to refactor my first attempt to use when. Success! It looked a lot more readable and still worked like a charm.

Kata 2: OCR

In the second kata we went back to the days of yore, when printing fancy stuff required ASCII art. In this case, we got a file with numbers in fancy ASCII art and needed to parse them back into actual numbers. I spun the wheel and was told to use Python, but to get my head around how to parse stuff like this I started out with a proof of concept in PHP. Unfortunately, even getting this to work in PHP took me way too long, so I ended up not being able to finish a Python version. The PHP version works like a charm though. It turned out that one of my biggest challenges was the fact that the ASCII art relies heavily on spaces, and my PHP IDE of choice (PHPStorm) automatically strips trailing spaces. My code had actually been working for quite a while before I realized the problem was with the file I was trying to parse!

Despite only having done this kata in my regular language of choice, it was still a very good exercise in parsing unconventional data structures. Having to think about how to parse characters that are actually 3 lines high is pretty interesting, and I think I found a decent solution given the time constraints.
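A rough sketch of the approach (not my exact code, and the $digits map only shows the first three entries): cut each group of three lines into three-character-wide cells, glue the three slices of each cell together, and look the result up in a map:

    <?php
    // Each digit is 3 characters wide and 3 lines high; the key is the
    // concatenation of its three slices. Only 0, 1 and 2 are shown here.
    $digits = [
        ' _ | ||_|' => '0',
        '     |  |' => '1',
        ' _  _||_ ' => '2',
        // ... and so on up to 9
    ];

    function parseLine(array $threeLines, array $digits): string
    {
        $result = '';
        $width = strlen($threeLines[0]);

        for ($col = 0; $col < $width; $col += 3) {
            $cell = substr($threeLines[0], $col, 3)
                  . substr($threeLines[1], $col, 3)
                  . substr($threeLines[2], $col, 3);
            $result .= $digits[$cell] ?? '?';
        }

        return $result;
    }

Note that the trailing spaces in those array keys are significant, which is exactly why PHPStorm stripping them bit me.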

Kata 3: Pig Latin

After lunch it was time for the third kata of the day: Pig Latin. I had never heard of this one before, so it was very interesting to first brainstorm about the best way to approach it. An added difficulty was that the wheel of languages gave me Elixir, a functional language in which everything is immutable. It was quite a paradigm shift, since I'm not used to functional programming, so it was interesting to combine these two unknowns.

Getting started with Elixir was quite hard. Having to think in such a different way made it extremely hard to get going, but by searching the web and reading the documentation I eventually got some code up and (nearly) running. Unfortunately, I ran out of time before having a fully functional application, so this is one that I need to finish at a later date. I have saved my progress, so I can try to finish it.

Is it a problem that I didn't finish in time? Nope! Failure is the best way of learning, and I surely bumped my head a couple of times trying to implement this in Elixir. But I learned a lot from the experience, so it was definitely worth it.

Kata 4: Blackjack

For the last kata of the day, Clara, whom I met at WeCamp 2017, sat down next to me, and we decided to do it as a pair programming exercise. That kata turned out to be Blackjack, which has some interesting challenges to solve. Clara spun Python, which I quite liked since I never got around to solving kata 2 in Python, and we started implementing Blackjack.

The idea was to write a piece of code that would get a set of "hands" and determine the winner of that round. For instance:

  Clara: Q, J
  Skoop: 9, K, 5
  Jopie: 7, 5

The code should determine that Clara has 20 points, Skoop has 24 points and Jopie has 12 points, leaving Clara the winner since she has the highest score that does not exceed 21.

We had a basic script up and running without too much hassle, but without the main challenge: how to handle the Ace, which can be worth 1 or 11 depending on the choice of the player. Handling the Ace turned out to be a pretty big challenge. As with kata 3, we were very close to solving the problem, but at the end of the given timeframe we didn't have a working solution yet. We've both got the file on our laptops and will solve the problem at a later date.
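One common way to handle the Ace, sketched here in PHP for illustration rather than the Python we used at the event (so not our actual code): count every Ace as 11 first, then downgrade Aces to 1 one at a time while the hand busts:

    <?php
    function handValue(array $cards): int
    {
        $value = 0;
        $aces = 0;

        foreach ($cards as $card) {
            if ($card === 'A') {
                $aces++;
                $value += 11;
            } elseif (in_array($card, ['K', 'Q', 'J'], true)) {
                $value += 10;
            } else {
                $value += (int) $card;
            }
        }

        // Downgrade Aces from 11 to 1 for as long as the hand busts
        while ($value > 21 && $aces > 0) {
            $value -= 10;
            $aces--;
        }

        return $value;
    }

    // handValue(['Q', 'J'])      => 20
    // handValue(['9', 'K', '5']) => 24
    // handValue(['A', 'A', '9']) => 21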

This kata gave us some nice challenges involving sorting and recursion. Python was pretty easy to pick up and thinking about how to solve this kata taught me a couple of things on how to work with datasets like these. Pairing on this problem was another good exercise which I quite enjoyed.

Why would one do a code kata?

There are many sites on the Internet that list code katas (coding exercises), but why would you actually do such a kata? Well, one of the most important reasons is to try your hand at a problem of a different type than the problems you solve in your day job. The challenge with katas, however, is not just in the type of problem, but also in the restrictions you place upon yourself when doing it. Restrictions you can think of are:

  • Use a different programming language
  • Limit the amount of time you can spend on the kata
  • Limit the amount of lines you should use in your solution
  • Set a minimum speed for your code, e.g. it needs to finish within 100ms
  • Set a maximum amount of memory your code may use

Restrictions like these force you to think creatively and deliver not just a working solution, but one where you have to care about specific aspects of your code. They force you to think outside the box.

Why would I attend a code kata event?

Code katas are fun and useful, but attending a code kata event is even more useful. If you do code katas by yourself, you may set the restrictions with a bias towards what you think you can solve. At an event, the restrictions are set not by you but by the organizers, forcing you out of your comfort zone and making you think in ways you otherwise wouldn't.

Another great aspect is the presence of other people, talking to them about their solutions and sometimes even pairing up with them to create a solution together.

It is for these reasons that I would highly recommend visiting a local code kata event if there is one close to you. I would like to thank the great people of DomCode and Infi for hosting the code kata event today. It was a great event!


Silex is (almost) dead, long live my-lex

SymfonyCon is happening in Cluj, and in Thursday's keynote Fabien Potencier announced some important changes. One of the most important announcements was the EOL of Silex in 2018.

EOL next year for Silex! #SymfonyCon — @gbtekkie

Silex

Silex has been, and still is, an important player in the PHP ecosystem. It has played an extremely important role in the Symfony ecosystem, as it showed many Symfony developers that there was more than just the full Symfony stack. It was also one of the first microframeworks that showed the PHP community the power of working with individual components, and how you can glue those together into an extremely powerful foundation to build upon, one that includes most of the best practices.

Why EOL?

Now, I wasn't at the keynote, so I can only guess at the reasons, but it does make sense to me. When Silex was released, the whole concept of taking individual components to build a microframework was pretty new to PHP developers. The PHP component ecosystem was a lot more limited as well. A huge group of PHP developers was used to working with full-stack frameworks, so building your own framework (even with components) was deemed by many to be reinventing the wheel.

Fast-forward to 2017, and a lot of PHP developers are by now used to individual components. Silex has little left to prove on that topic. With Composer being a stable, proven tool, the PHP component ecosystem growing every day, and now the introduction of Symfony Flex to easily set up and manage projects, maintaining a separate microframework based on Symfony components is just overhead. Using either Composer or Symfony Flex, you can set up a project similar to an empty Silex project in a matter of minutes.

Constructicons

I have been a happy user of Composer with individual components for a while now. One of my first projects with individual components even turned into a conference talk. I'll update the talk soon, as I have since found a slightly better structure, and if I can make time for it, I'll also write something about this new and improved structure. I've used it for a couple of projects now and I'm quite happy with it. I also still have to play with Symfony Flex. It looks really promising and I can't wait to give it a try.

So what is the "my-lex" in the title about? It is about the choice you now have: you can basically build your own Silex using either Composer and components, or Symfony Flex. A couple of years ago I would've laughed hard if you'd told me I would ever say this, but: build your own framework!
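To show how little is needed these days, here's a minimal "my-lex" front controller built from two Symfony components. A sketch, assuming symfony/http-foundation and symfony/routing have been required through Composer:

    <?php
    require __DIR__.'/vendor/autoload.php';

    use Symfony\Component\HttpFoundation\Request;
    use Symfony\Component\HttpFoundation\Response;
    use Symfony\Component\Routing\Exception\ResourceNotFoundException;
    use Symfony\Component\Routing\Matcher\UrlMatcher;
    use Symfony\Component\Routing\RequestContext;
    use Symfony\Component\Routing\Route;
    use Symfony\Component\Routing\RouteCollection;

    $routes = new RouteCollection();
    $routes->add('hello', new Route('/hello/{name}', [
        '_controller' => function ($name) {
            return new Response('Hello '.$name);
        },
    ]));

    $request = Request::createFromGlobals();
    $context = (new RequestContext())->fromRequest($request);
    $matcher = new UrlMatcher($routes, $context);

    try {
        $attributes = $matcher->match($request->getPathInfo());
        $controller = $attributes['_controller'];
        unset($attributes['_controller'], $attributes['_route']);
        // Pass the remaining route attributes to the controller
        $response = $controller(...array_values($attributes));
    } catch (ResourceNotFoundException $e) {
        $response = new Response('Not Found', 404);
    }

    $response->send();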

Is Silex being EOL'ed a bad thing?

No. While it is sad to see such an important project go, I think the Symfony and PHP ecosystems have by now grown past the point of needing Silex. Does this mean we don't need microframeworks anymore? I won't say that, but with Slim still going strong, the loss of Silex isn't all that bad. And with Composer, Flex and the huge number of PHP components available, you can always build a microframework that suits your specific needs.

The only situation where Silex stopping is an issue is for open source projects that are based on Silex, such as Bolt (who already anticipated this), as well as, of course, your personal or business projects based on Silex. While this software will keep on working, the core those projects are built on won't get updates anymore, so eventually you'll have to put in the effort to rewrite it to something else.


One year without -m

One year ago I blogged about starting a new practice: not using -m when committing something to Git. The -m option allows you to pass the commit message directly on the command line, which makes the whole process of committing faster, but not necessarily better.

Committing to Git

When you commit your work to Git, you not only make sure the code is in version control, you also have an opportunity to document that exact moment in the history of your software. When using the -m option, you're very likely to write a very short message. You're not really encouraged to actually document the current state of your code, because writing longer or even multi-line messages is harder on the command line.
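Without -m, Git drops you into your editor, which invites the subject-plus-body format. A made-up example of the kind of message I'm aiming for, following the common convention of a short subject line, a blank line, and a body explaining the why:

    Fix race condition in session cleanup

    The cleanup job could delete a session that was being refreshed at the
    same moment, logging the user out. We now check the expiry timestamp
    inside the same transaction that deletes the session.

A message like this documents that exact moment in the history of your software far better than a one-liner ever could.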

Not using -m anymore

So, about a year ago I stopped using the -m parameter when committing changes to Git. Has it really changed anything?

Yes and no.

Yes, it has changed something, in that I now take more time to write the commit message, and sometimes take the time to document what is in the change and why the change was made.

No, because all too often I'm still tempted to write a pretty short commit message.

It is still something I need to focus on more: taking the time to write useful commit messages, messages that let you reconstruct a timeline of your development from the list of commits. But commit messages have most certainly improved since I made this little change in my process.