Inspire, innovate, ignite


HEX2017 – hackathon

HEX2017: Energized, surprised and exhausted

MLH is an international student hackathon league, with stops in several countries world-wide. We were thrilled to be the e-commerce partner for the Dutch stop in Eindhoven: HEX2017!

Can we inspire student teams to think about the problems that we are trying to solve? And what can we time-box in 24 hours for a diverse student scrum team?

One of the main topics is machine learning. We're experimenting with and implementing several algorithms, but one of the key challenges is how to integrate this into our shopping experience.

How can we get a personal connection with machine learning?

As we're experimenting with and exploring machine learning, the difficulty that everyone is trying to overcome is how to get from a tool to a real connection. Wieden+Kennedy and others are experimenting in this field, and we feel that this line of work fits our view on the developments. The needybot is one of the prime examples of the practical experimentation that is going on right now: www.needybot.io

Scrambling on our idea, we're going through the needybot kit. We're trying to find the right components, getting the operating system running and figuring out the configuration. Not an easy task: the libraries of the bot need updating, and the operating system is not so easy to run. We get the connection with the needybot eye and the OS running, and we have a base to start from.

We’re on our way!

SAT, 13:15 hrs We've given our pitch to the teams. The other tracks are also interesting and inspiring: VR, using satellite data and …. Who will pick up our challenge? What will their ideas be?

SAT, 14:00-20:00 During the afternoon, several teams approach us to get more insights, pitch their ideas and challenge our assumptions. We’ve identified four wehkamp teams so far… nice catch.

The nice thing to see is that the four teams differ in setup, approach and problem solving. They each choose a different angle, and more than that: they take approaches we didn't expect. Cool!

We get a lot of interesting questions about our customers and our ideas, and we give input to their brainstorms.

SAT, 20:00-01:00 The teams are prepping for a long night. Using all the snacks and dextrose they can find, they all switch from planning mode to execution. Again, every team takes a different approach: one team has made a clear division of labor and each member develops a separate part. Other teams divide the work differently and have different roles: more design on one team, while yet another chooses to pair program.

We leave the teams to their work and make a short round: one team is stuck on technical details and we try to help without favoring them over the other teams.

SUN, 08:00 The teams have pulled an all-nighter. Great to see their work and the progress so far. All teams have been able to turn their ideas into a workable base. We see some demos, but every team still needs quite some work to get their ideas to the level they have in mind. After a short update round, we leave the teams to their challenge.

We're inspired and start working on the Arduinos that we've packed. Nice to do on a Sunday morning, but we come nowhere near the teams' pace. Still, tinkering is a great way to spend your time.

SUN, 13:00 Jury-time! We make our round through the building and visit all teams. To our surprise, there is a fifth team that took up our challenge.

We get five enthusiastic demos. Every team has pushed hard and prepared a nice demo. All teams show that they've put a lot of effort into their ideas, and those ideas give us great insights. It's a tough call to rate them: all have put in a lot of work and have come up with great ideas and demos.

One of the teams has come up with several ideas to make needybot more human and more wehkamp. They've created four different approaches that tie in to a more personalised experience of needy. Based on speech patterns, the bot mirrors the words used. It also advises on discounts (even with competitors ;-) ). The team also demos a module that recognises the facial expressions of the people that needybot sees. Great work: the team has taken the concept, added some great improvements and worked them out into demoable parts!

The second team took a radically different approach: they created a style platform. People can create a look, share their looks and give other looks points and ratings. The demo and presentation are great: they've built the platform and the style generators using wehkamp articles, and it also contains a style finder based on AI. All in all a cool presentation with great content.

The next team we visit stayed with the idea of the needybot. They've put great effort into trying to get the bot to work on AWS. They've also worked hard on facial expressions and demo an implementation that recognises and returns the expression. Nice work in 24 hrs!

We then visit a team that has been working in a different field, but has a nice angle for wehkamp. They gamified the neighbourhood, with points and badges to be earned, and they have an idea to integrate re-utilisation into their game. Nice idea and well worked out.

SUN, 16:00 One of 'our' teams gets through to the final round. We were not the only judges who were enthusiastic about the teams.

In the end, one of the wehkamp teams wins the final. Great work and a good idea. We invited all the teams to wehkamp's warehouse for lunch as a thank-you for their ideas.

Kafka meetup on June 8th, Utrecht

Wehkamp in Utrecht? No, not really, but we've learned a lot using Kafka in our microservices platform and want to share our learnings.

Casper Koning (www.codestar.nl) will be presenting at this meetup, giving a behind-the-scenes look at some of our insights. We're rolling out our microservices platform while running our business, and we're doing so gradually. Interesting stuff and a lot of learnings, daily!

Join the meetup on: https://www.meetup.com/nl-NL/Kafka-Meetup-Utrecht/

And we’ll share the slides afterwards!

Cloudflare metrics using Prometheus

The intro

We've been using Prometheus for well over a year now, making great use of its scraping capabilities within our microservices platform. In fact, the platform doesn't even accept services without a metrics endpoint for Prometheus to scrape. Services generate metrics, and most other parts of our stack, like Elasticsearch and Kafka, get scraped as well. We use Grafana to visualize all these metrics, making them easily accessible to both developers and others who just like to gaze at fancy graphs. Last but not least, the Prometheus Alertmanager is used to ping PagerDuty in case of issues somewhere in our stack.

To secure our online presence we chose Cloudflare, and we like the graphs and insights they offer us through their web frontend. But wouldn't it be nice to have those graphs in our own Grafana instance? Wouldn't it be nice if we could get alerts based on what happens in Cloudflare? Well, yes. Of course. But that would mean getting that information into Prometheus, somehow, right? Yep! And here it is: the prometheus cloudflare exporter.


Docker and zombies

We've all heard about the incredible chaos monkey from Netflix, killing instances at scale. At Wehkamp we have something similar to attack our microservices clusters: Half-Life 2.

Yes, that game from Valve. Using Garry's Mod, one can actually interact with entities inside the game: spawning items, weapons, non-player characters and even zombies. Another possibility is talking to HTTP endpoints, and that opens up a wide range of options.

Our Blaze microservices platform is built on Mesos/Marathon. Both expose REST API endpoints over HTTP, which the entire world should do, since it's 2016.


Applying Consul within the Blaze microservices platform

Introduction

Wehkamp.nl is the leading online fashion retailer in the Netherlands, with 400,000 daily users who order over 7 million packages per year. In 2014 Wehkamp decided that its current Microsoft-based platform, which was already 10 years old, had reached its limits and started working on a new platform.

This new platform is based on a microservices architecture using Docker and Apache Mesos with the Marathon framework. Each microservice is written in Scala/Akka and uses Cassandra as a backend. One particularly tricky issue was dynamic service discovery, which we solved using HashiCorp's Consul.

Ansible with the haproxy-marathon-bridge was not sufficient for our needs

Originally we tried using Ansible for orchestration and to hold all environment variables and hostnames. This works perfectly in largely static environments, but Wehkamp's platform is becoming more dynamic every day. For example, when using Mesos the IP address and port of a service can change each time it is redeployed. The Ansible-based orchestration just couldn't deal with this; a live service registry was required.

Theoretically the Marathon Mesos framework can deal with live service discovery via the haproxy-marathon-bridge, which is basically a script that queries the Marathon API and generates an HAProxy configuration, but there were problems here too. The haproxy-marathon-bridge runs on each host so that querying HAProxy always returns the correct location for a given service, but it does not work for services running outside of the Mesos cluster, which we required.

Consul to the rescue

Given the above problems, we looked for alternative service discovery solutions and decided that HashiCorp's Consul would be ideal, as it provided us with both a distributed key-value store for our deploy-time configuration and a dynamic DNS backend to help deal with the increasingly dynamic workload.

In the new platform, we also wanted to comply with the twelve-factor app manifesto made famous by Heroku. We expose configuration data using environment variables, and during deployment these environment variables are substituted or expanded using the Consul service registry. At runtime, the applications are able to find each other using a dynamic DNS name provided by Consul. We chose to keep the applications platform agnostic, so we only provide the configuration data during deployment.

Consul high-level architecture

On every Mesos node we now run a Consul agent in a Docker container, and these agents connect to a 3-node Consul master cluster which is bootstrapped ahead of time using Ansible. This Consul pre-deployment is necessary because, at the time of writing, Marathon doesn't provide any anti-affinity options to ensure the 3 Consul masters are deployed on different machines. On each Mesos node we also run a Registrator container, which adds the service details to the registry when a new container is spawned.

To locate other services, we use an HAProxy load balancer on each Mesos node. This proxy routes the incoming DNS name to the target service on the port assigned by Mesos. This way the whole setup is completely dynamic, and services only need to know the name of the services they depend on.

For regular DNS lookups we use Consul as well. If we want to make a TCP connection to a port, we can simply set up the connection to the service name. This gets translated into the IP address of a healthy instance that can handle the request.


For the incoming routing, we use a gateway together with consul-template, a tool that regenerates a file (like haproxy.conf) based on a template and changes to data in Consul. This way the gateway gets updated automatically when we change the number of instances of a running service. We built a custom gateway for this, which will likely be open sourced in the near future.

One thing to note is that we're running everything inside Docker containers. This means we deploy infrastructure services almost the same way as we deploy application services, which keeps both the underlying infrastructure and our orchestration tooling simple and uniform.

Summary

Using Consul, we were able to create a completely dynamic environment where every service can find its dependencies without any human administration. We got rid of our big Ansible files with key/value pairs, simplifying the deployment, orchestration and maintenance of the system.

By distributing our configuration data with Consul we've gained resilience, but the real gain is in developer productivity. Previously, deploying a new service would take a junior developer 15-30 minutes to write Ansible deployment scripts. This is now taken care of 'for free'.

Rethinking the wehkamp.com grid

One stable grid

As a developer, every once in a while you get to the point where you have to completely rewrite your own code from scratch, whether because of a new platform architecture or due to new business needs.

When we began scaffolding the grid system for the wehkamp.com interface, we started out with the same basic principle Twitter used with Bootstrap for their responsive grid: a fairly simple, twelve-column structure to accommodate a wide range of viewports. Columns could live in rows, and were housed in a central container spacing unit. Although we could have implemented the whole Bootstrap library, we chose to adopt only their methodology and write a lightweight, scalable User Interface kit ourselves, without all the bloat that comes with an existing UI framework.

At wehkamp.com, everything revolves around showcasing our products to our visitors in the best possible way, which of course depends on a robust yet flexible grid. On our search and category pages, we have a vertical navigation tree and filtering on the left-hand side, from which you drill down your search and view the products on the right, three in a row.

 

 


 

 

At the largest supported width (1280 pixels, mostly desktop-like screens), the twelve-column structure provides four sections of three columns each: one section for the navigation tree and three sections for the products and other features like sorting, swap view and paging.

<div class="container">
  <div class="row">
    <div class="col-md-span3 nav-section">
      <!-- nav component -->
    </div>
    <div class="col-md-span9 product-section">
      <div class="row">
        <div class="col-md-span4">
          <!-- product -->
        </div>
        <div class="col-md-span4">
          <!-- product -->
        </div>
        <div class="col-md-span4">
          <!-- product -->
        </div>
      </div>
    </div>
  </div>
</div>

Nothing really fancy here. Everyone who has worked with Bootstrap before can relate to the code, which was one of the main advantages when working with several teams in the same codebase.

“col-” as the component identifier, “md-” as the breakpoint handler, and “span*” to set the eventual width.

 

 

Rethink your code

As features and business requirements of the project progressed, we found ourselves having quite a debate when this one user story came along:

As a customer
I want to see more products in a row
So I can get a better glance at the catalog

So basically it meant we had to add an extra product to each row. This way products were shown big enough in the desktop view but still had the right dimensions on smaller devices. Sure, we could have changed the product width in the product section alone, from its original 33% to 25%, and be done with it. But that also meant that the widths of the product columns would no longer be in sync with the navigational column.

In addition, when you stack other components below the product list, like 'Last viewed products' or general product recommendations, which span the total container width, an unwanted misalignment would give the page a messy look and feel.

 

Grid wireframing

While staying structurally and visually sane, we had to think of a way to accommodate five columns and at the same time be flexible enough to answer four-, three- and two-column needs on smaller devices. And although we have a clean and distinct project codebase, changing grid-related CSS classes in every template after a year in production would definitely require some regression testing, both manual and automated.

So to make things clearer, we made three sketches to visualize the change before we started altering all templates in the application: one that represents the current situation as a baseline, one that could be done with our current grid setup but with its drawbacks, and one that turned out to be the one we eventually built: multi-gridded!

 

Original column setup

The original setup where product columns as well as navigational sections span equal widths.

 

 

Unwanted quick-win

The easiest way to get the user story done: rename the product column CSS class from col-xl-span4 to col-xl-span3 where needed, producing an unwanted side effect: a global difference between columns and components.

Eventual column setup

The eventual column setup with the new, flexible multi-width-column grid-system, providing every possible solution while remaining consistent across the whole website.

Getting Sassy with it!

Since we had Sass as our CSS preprocessor from the beginning of the project, the idea was to have the new grid requirements work for us in a simple function which loops over every variable.

 

Set up basic variables

First, we define a basic setup to handle all widths that can be used across the whole User Interface library for mixins, media-queries and containers. We already had these values in our settings file from day one, so we did not alter these when we started working on the new grid setup.


$width: 0;
$width-xs:  480px !default;
$width-sm:  640px !default;
$width-md:  768px !default;
$width-lg:  992px !default;
$width-xl: 1280px !default;

Thanks to the !default flag, you can define your own $width variables before the library is included without seeing your values redefined.
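As a minimal sketch of how that works (the import path is just a placeholder for wherever the library's settings live):

/* your own value, declared before the library settings are loaded */
$width-xs: 420px;

/* placeholder import of the library settings; its "$width-xs: 480px !default;"
   will now leave your value untouched */
@import "settings";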

 

 

Define grid types

Secondly, we define an array that holds the total number of columns per grid type; it is used later on as a multiplier in the main Sass function that generates the full grid. This was the first extension we made compared with the old grid setup.

$grid-types: (
 10,
 12
);

The setup is very easy to extend when new grid types need to be created.
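For example (a hypothetical 16-column variant, not one we actually ship), adding another grid type is a one-line change, and the grid generator later on picks it up automatically:

/* hypothetical extra 16-column grid type, added for illustration */
$grid-types: (
 10,
 12,
 16
);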

 

 

Arrayify labels and widths

To implement the eventual CSS classes in our HTML markup, we need an easy handler that represents a specific breakpoint. We define the label as the first parameter in the array and add the corresponding width variable, which we declared earlier, right next to it.
We used the same Bootstrap naming concept in the old grid system, but there it was mainly hard-coded and not very easily adjustable. Using a global array was the new way to go.

$breakpoints: (
 ""   $width,
 "xs" $width-xs,
 "sm" $width-sm,
 "md" $width-md,
 "lg" $width-lg,
 "xl" $width-xl
);

Creating arrays in Sass is very straightforward. You can use nested lists without braces, using different separators to distinguish levels, and refer to the items via aliases in your loops and functions later on.

 

 

The media-query mixin

From before our grid rebuild, along with the basic width variables, we already had a simple mixin we could use within any CSS selector.

/* Breakpoint mixin */
@mixin breakpoint($width, $type: min-width) {
  @if (str-index($type, max-)) {
    $width: $width - 1;
  }
  @media screen and ($type: $width) {
    @content;
  }
}
 
/* Usage */
.nav {
  padding: $base-spacing;
 
  @include breakpoint($width-md) {
    padding: $base-double-spacing;
  }
}

Most of the time we only use this to define things mobile-first, where the notation represents a specific width and up, but optionally you can add a second parameter to define a max-width, where the 'minus one' is automatically calculated. The @content directive takes care of all declarations being injected into the new media query.
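A minimal usage sketch of that max-width variant (the .sidebar selector and its declaration are just examples):

/* example selector; only applied below the medium breakpoint, i.e. up to 767px */
.sidebar {
  @include breakpoint($width-md, max-width) {
    display: none;
  }
}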

 

 

The grid generator

So much for the setup. Let's get calculating. We have a set of breakpoints with corresponding widths, a set of grid types and, per grid type, all column spans to be calculated.

@each $breakpoint in $breakpoints {
  $alias: nth($breakpoint, 1);
  $width: nth($breakpoint, 2);
}

First, we loop over the breakpoints array. This is the first handler we encounter after the “col-”. Within this array, two parameters are set per item, so we can access them by defining aliases. The nth($breakpoint, 1) notation returns the first parameter of the current item.

@include breakpoint($width) {
}

Second, we include the breakpoint mixin so we can pass in the $width from the loop for every breakpoint handler.

@each $grid-type in $grid-types {
}

Third, we loop over the grid types. In our case, these are the 10 and 12 factors we declared earlier. This array can easily be extended if new widths are designed in the future.

$breakpoint-handler: if( $alias != "", $alias + "-" , "" );

If an alias is empty (from 0 up to the first breakpoint handler, xs), no additional dash has to be inserted for readability, so we add an extra if statement to evaluate this. The $breakpoint-handler takes the $alias we defined earlier and puts it into the eventual column declaration.

@for $i from 1 through $grid-type { }

To generate the correct number of columns per grid type per breakpoint segment, a for loop with $grid-type as the maximum number of columns is the ideal candidate. The $i variable from the iterator can be used as the index of each column and as a multiplier in the width property.

.col-#{$breakpoint-handler}#{$i}-#{$grid-type} {
  width: #{(100 / $grid-type * $i) + '%'};
}

/* without breakpoint handler */
.col-1-10 {
  width: 10%;
}

/* with breakpoint handler */
.col-xs-1-10 {
  width: 10%;
}

When all loops and aliases are in place, we can actually begin to write some lines that eventually render to real CSS. The main selector handles the width, whereas the additional selectors are responsible for pulling, pushing or offsetting whitespace for flexibility. We use the Sass interpolation notation #{ } to make sure everything is calculated before it is finally rendered into CSS.
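As a sketch of what those additional selectors render to, here is the expected output for column index 2 of the 12-column type at the md breakpoint (the surrounding media query is omitted, and the exact rounding depends on the Sass precision setting):

/* push, pull and offset variants for column index 2 in the 12-column grid */
.col-md-push2-12 {
  left: 16.66667%;
}
.col-md-pull2-12 {
  right: 16.66667%;
}
.col-md-offset2-12 {
  margin-left: 16.66667%;
}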

 

 

The final function

Putting everything together, our Sass grid generator looks like this:

@each $breakpoint in $breakpoints {
  $alias: nth($breakpoint, 1);
  $width: nth($breakpoint, 2);
 
  @include breakpoint($width) {
    @each $grid-type in $grid-types {
      $breakpoint-handler: if( $alias != "", $alias + "-" , "" );
 
      @for $i from 1 through $grid-type {
        .col-#{$breakpoint-handler}#{$i}-#{$grid-type} {
          width: #{(100 / $grid-type * $i) + '%'};
        }
        .col-#{$breakpoint-handler}push#{$i}-#{$grid-type} {
          left: #{(100 / $grid-type * $i) + '%'};
        }
        .col-#{$breakpoint-handler}pull#{$i}-#{$grid-type} {
          right: #{(100 / $grid-type * $i) + '%'};
        }
        .col-#{$breakpoint-handler}offset#{$i}-#{$grid-type} {
          margin-left: #{(100 / $grid-type * $i) + '%'};
        }
      }
    }
  }
}

The generated column widths are an addition to the general properties declared in the base column selector, which sits before the grid generator function:

*[class*="col-"] {
  float: left;
  min-height: 1px;
  padding-left: $base-grid-spacing;
  padding-right: $base-grid-spacing;
  position: relative;
  width: 100%;
}

 

 

Sprint demo

The user story was reviewed and pushed to production in no time. And although not every stakeholder understood what was going on under the hood during the sprint demo, it was nice to share some of the logic that was implemented in this story. Giving some extra context helps everyone inside and outside the team understand that it is not always as simple as just putting an extra product in each row.

 

Five equal columns in the new desktop view of the search results (~1280px)

 

Before
Fixed twelve-column structure only.

<div class="col-md-span4">One Third from 768px and up</div>

After
Self-explanatory multi-column structure. Scalable, maintainable, simple.

<div class="col-md-4-12">One Third from 768px and up</div>
<div class="col-xl-2-10">One Fifth from 1280px and up</div>
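For reference, a sketch of the CSS these example classes compile to with the generator above (media query wrappers left out; rounding depends on the Sass precision setting):

/* inside the md breakpoint (min-width: 768px) */
.col-md-4-12 {
  width: 33.33333%;
}

/* inside the xl breakpoint (min-width: 1280px) */
.col-xl-2-10 {
  width: 20%;
}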

 

There you have it. A compact and extendable Sass function driven by two simple arrays and one list of width variables with a very flexible outcome: a robust and future-proof grid-system which can easily be extended when needed. Happy stakeholders, happy developers.

And some nice story points to add to our Jira burn down chart too.

 

The rebirth of the Wehkamp front-end

It feels like only yesterday that you sat down at home after school behind your computer, staring at the screen and just patiently waiting for that typical sound.

It was in that period, around 1995, that Wehkamp, like many others, wanted to expand its offline business proposition to this new platform promising infinite possibilities. Being an online pioneer and having to deal with all the hardware restrictions and the lack of the decent broadband internet connections we are used to nowadays, Wehkamp offered its customers only a slimmed-down interface online, while all catalog imagery was sourced from a CD-ROM that was ordered earlier.

 

A lot has changed since then.

 

That slimmed-down interface evolved into something that matches the needs of the modern customer: a rich user experience with interactive marketing content and a high-performance, responsive platform to accommodate a wide range of devices.

Although the current platform is very successful, a new and more flexible way of developing for the future was needed. Things needed Blaze. So did the front-end.

 

Just working with HTML, CSS and jQuery
The front-end stack of the current .nl platform consists of 'just' working with HTML, CSS and jQuery, plus the luxury of one of the most extensive IDEs you can ask for as a developer in the .NET stack: Microsoft Visual Studio. As all functionality was built in-house and stored within Team Foundation Server, we were our own resource community.

Although we saw the need to componentize the front-end codebase in its current state, the size of the platform made it a herculean task. Several parts of the codebase pulled in dozens of stylesheets and even more JavaScript includes, and CSS specificity was a real struggle.

Serving millions of users, you need to be fast. And since stylesheets are on the critical path, we aimed to reduce their number drastically. Components were rearranged, lots of code was combined, legacy declarations were removed and a new bundling tool powered by the Web Essentials plugin in Visual Studio was introduced. The result: fewer HTTP requests, less server load, not twelve but only three separate stylesheets in the DOM, and a codebase that is much easier to work with.

 


 

But that was not the end of it. When we started out with Blaze, we knew we had to stretch it even further. Modern web development is all about finding the right tools and getting the most out of every project. The choice of toolset began very low-level: with the Typesafe stack as a given for the backend and GitHub as our version control system, we chose the Yeoman scaffolding tool to set up the front-end basics.

 

AngularJS as weapon of choice
To handle user interaction, HTML rendering and DOM manipulation in a flexible and scalable way, AngularJS was our weapon of choice. In contrast to the old stack, where everything was done manually with the jQuery library, AngularJS is a mature framework where a lot of functionality, like routing, two-way data binding and dependency management, comes out of the box. Yes, the word 'framework' implies rigidity, but being forced into a certain project architecture decreases the risk of clutter and personal code styles, which would otherwise raise the learning curve across the teams.

We needed a solid build tool to deliver high-quality, concatenated, minified and versioned software. For us that was Grunt. Apart from the build tasks, day-to-day coding is a breeze, with all changes being automatically live-reloaded in the browser.

The mindset of quality without compromise was incorporated right from the start of the project, and to meet that standard a solid testing setup is a necessity: Karma, which is seamlessly integrated with Grunt, for unit testing, and Protractor and BrowserStack for end-to-end and multi-browser testing.

 

Taming large-scale CSS
For the interface part, in which CSS plays a major role, we did not have anything in place other than the Sass preprocessor to streamline it. But what about project structure? How do we shape our stylesheets so that we can meet our business's wishes?

In essence, the language of Cascading Style Sheets is not very dynamic. It is quirky, it isn't easily shaped, and every browser has its own way of treating it.

So we needed a plan. In the year prior to Blaze, the front-end team spent an awesome week together with Harry Roberts here at Wehkamp. We talked about the process of creating, maintaining and engineering large-scale CSS and how we could let it work for us, instead of against us. Harry introduced us to his latest methodology: ITCSS.

 

The inverted triangle
ITCSS, short for “Inverted Triangle CSS”, is a scalable, managed architecture for working with CSS at progressive scale. It is a methodology and school of thought which allows us to strictly group and order explicit types of CSS rules in a manner that makes them more useful, more manageable and more extensible. ITCSS is a meta-framework which aims to guide and outline a project’s architecture; it does not dictate anything as specific as naming conventions, or any other more opinionated ideas.

 


 

The Inverted Triangle aims to tame and guide the aspects of CSS that become problematic at scale—specificity, inheritance, and the cascade. Layers toward the top affect a lot of the DOM, are very far reaching, have a lower specificity, and have a lot of cascading rules. As we travel down the layers in the triangle, we find that rules affect less and less of the DOM, have a progressively higher specificity, and pass on less and less of their styling to subsequent layers.

The key to breaking code into these shearing layers is structurally and visually decomposing UI features. If a new design is being made, we need to work out which aspects of it can be attributed to which layer. To make sure the component-based architecture is preserved, element repetition, or the potential for it, must be addressed. Reusable interface features help maintain the codebase and reduce its growth. And that automatically means working closely with other team members and stakeholders, because modern web development is still just a way to build the stuff right. But let's not forget to build the right stuff.

 

Spiffy interface
As the fast-growing and ever-changing technological landscape of Wehkamp's new e-commerce platform evolves, so does its user interface. We are constantly shaping, expanding and perfecting our components to provide not only a spiffy interface but, more importantly, one that just works intuitively and leaves no questions for the individual who uses it.

The past year at Wehkamp was a real rollercoaster of new tools and techniques. And let’s face it: a new JavaScript framework is released each month. A challenge, but one that grew with us and really made us accelerate. Choices about which tool or library to use on our project go hand in hand with the transition we face as a team. But we are not in it alone. Since Blaze and the new tech-stack, the community has our back too. And that evolution pays off.
The Blaze front-end has really been reborn.

Microservices in practice talk at XebiCon 2015

On the 4th of June, the XebiCon 2015 conference took place at the Westergasfabriek in Amsterdam. XebiCon is an IT conference organized by Xebia, the IT services provider that is also our partner in creating our new e-commerce platform. The conference had multiple tracks, such as Internet of Things, Continuous Delivery and data center automation, all very hot topics in the IT development community today.

One of the other hot topics was microservices, which also got a lot of attention at this conference. There were multiple talks on the subject, including one I gave together with Jan Toebes (Xebia consultant) about our microservices implementation at Wehkamp. In this talk we described why our new platform is based on a microservices architecture and how we managed to get this far. We explained two theoretical principles of microservices in detail and talked about how we used these principles to create the platform. We also explored the pros and cons of a microservices architecture and, in our case, the difficulties and opportunities we encountered.

During the presentation, the room was packed with developers who wanted to know how we managed to create this platform. It was very satisfying to have some positive reactions after the talk. It confirms that we are doing great things here at Wehkamp!

For those who didn't attend the conference or our talk, the whole presentation was recorded and can be watched again on YouTube.