

Cloudflare metrics using Prometheus

The intro

We’ve been using Prometheus for well over a year now, making great use of its scraping capabilities within our microservices platform. In fact, the platform doesn’t even accept services without a metrics endpoint for Prometheus to scrape. Besides the services themselves, most other parts of our stack get scraped as well, like Elasticsearch and Kafka. We use Grafana to visualize all these metrics, making them easily accessible to both developers and others who just like to gaze at fancy graphs. Last but not least, the Prometheus Alertmanager is used to ping PagerDuty in case of issues somewhere in our stack.

To secure our online presence we chose Cloudflare, and we like the graphs and insights they offer us through their web frontend. But wouldn’t it be nice to have those graphs in our own Grafana instance? Wouldn’t it be nice if we could get alerts based on what happens in Cloudflare? Well, yes. Of course. But that would mean getting that data into Prometheus somehow, right? Yep! And here it is: the prometheus cloudflare exporter.
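
To give a rough idea of what such an exporter involves, here is a minimal sketch in Scala (the language of our services), not the actual exporter: it serves a handful of gauges in the Prometheus text exposition format on a /metrics endpoint. The metric names, the port and the fetchCloudflareStats stub are purely illustrative; a real exporter fills that stub with authenticated calls to the Cloudflare API.

import java.net.InetSocketAddress
import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}

object CloudflareExporterSketch {

  // Stand-in for the real work: querying the Cloudflare API with an API key
  // and turning the JSON response into plain numbers. Values are made up.
  def fetchCloudflareStats(): Map[String, Double] = Map(
    "cloudflare_requests_total"  -> 123456.0,
    "cloudflare_threats_total"   -> 42.0,
    "cloudflare_bandwidth_bytes" -> 9.87e9
  )

  // Render the numbers in the Prometheus text exposition format.
  def renderMetrics(stats: Map[String, Double]): String =
    stats.map { case (name, value) =>
      s"# TYPE $name gauge\n$name $value"
    }.mkString("", "\n", "\n")

  def main(args: Array[String]): Unit = {
    // A tiny HTTP server (from the JDK) exposing /metrics for Prometheus to scrape.
    val server = HttpServer.create(new InetSocketAddress(9199), 0)
    server.createContext("/metrics", new HttpHandler {
      override def handle(exchange: HttpExchange): Unit = {
        val body = renderMetrics(fetchCloudflareStats()).getBytes("UTF-8")
        exchange.getResponseHeaders.add("Content-Type", "text/plain; version=0.0.4")
        exchange.sendResponseHeaders(200, body.length)
        val os = exchange.getResponseBody
        os.write(body)
        os.close()
      }
    })
    server.start()
  }
}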


Docker and zombies

We’ve all heard about the incredible Chaos Monkey from Netflix, killing instances at scale. At Wehkamp we have something similar to attack our microservices clusters: Half-Life 2.

Yes, that game from Valve. Using Garry’s Mod, one can actually interact with entities inside the game: spawning items, weapons, non-player characters and even zombies. Another possibility is talking to HTTP endpoints, and that opens up a wide range of options.

Our Blaze microservices platform is built on Mesos/Marathon. Both expose REST API endpoints over HTTP, which the entire world should do, since it’s 2016.


Applying Consul within the Blaze microservices platform

Introduction

Wehkamp.nl is the leading online fashion retailer in the Netherlands, with 400,000 daily users who order over 7 million packages per year. In 2014 Wehkamp decided that its current Microsoft-based platform, which was already 10 years old, had reached its limits, and started working on a new platform.

This new platform is based on a microservices architecture using Docker and Apache Mesos with the Marathon framework. Each microservice is written in Scala/Akka and uses Cassandra as a backend. One particularly tricky issue was dynamic service discovery, which we solved using HashiCorp’s Consul.

Ansible with the haproxy-marathon-bridge was not sufficient for our needs

Originally we tried using Ansible for orchestration and to hold all environment variables and hostnames. This works perfectly in largely static environments, but Wehkamp’s platform is becoming more dynamic every day. For example, when using Mesos the IP address and port of a service can change each time it is redeployed. The Ansible-based orchestration just couldn’t deal with this; a live service registry was required.

In theory the Marathon framework for Mesos can deal with live service discovery via the haproxy-marathon-bridge, which is basically a script that queries the Marathon API and generates an HAProxy configuration, but there were problems here too. The haproxy-marathon-bridge runs on each host, so querying HAProxy always returns the correct location for a given service, but it does not work for services running outside of the Mesos cluster, which we required.

Consul to the rescue

Given the above problems, we looked for alternative service discovery solutions and decided that HashiCorp’s Consul would be ideal as it provided us with both a distributed key-value store for our deploy time configurations alongside a dynamic DNS backend to help deal with the increasingly dynamic workload.

In the new platform we also wanted to comply with the twelve-factor app manifesto made famous by Heroku. We expose configuration data using environment variables, and during deployment these environment variables are substituted or expanded using the Consul service registry. At runtime the applications are able to find each other using a dynamic DNS name provided by Consul. We chose to keep the applications platform agnostic, so we only provide the configuration data during deployment.
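
To illustrate (the variable names are made up), a service built this way only ever looks at its environment; whether those values were filled in by hand or expanded from the Consul key-value store during deployment is invisible to the application:

object ServiceConfig {
  // All configuration arrives through environment variables (twelve-factor style).
  // At deploy time these are expanded from the Consul key-value store; the
  // application itself never talks to Consul directly for its configuration.
  val cassandraHost: String = sys.env.getOrElse("CASSANDRA_HOST", "cassandra.service.consul")
  val cassandraPort: Int    = sys.env.getOrElse("CASSANDRA_PORT", "9042").toInt
  val httpPort: Int         = sys.env.getOrElse("HTTP_PORT", "8080").toInt
}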

[Figure: Consul high-level architecture]

On every Mesos node we now run a Consul agent in a Docker container, and these agents connect to a three-node Consul master cluster which is bootstrapped with Ansible ahead of time. Pre-deploying Consul is necessary because, at the time of writing, Marathon doesn’t provide any anti-affinity options to ensure the three Consul masters end up on different machines. On each Mesos node we also run a Registrator container, which adds the service details to the registry when a new container is spawned.
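
The services that Registrator adds can be inspected through Consul’s HTTP API; the catalog endpoint returns, among other things, the address and port of every registered instance. A small sketch, with a made-up service name and the agent assumed to be reachable on its default port:

import scala.io.Source

object ConsulCatalogLookup {
  // GET /v1/catalog/service/<name> on the local agent returns a JSON array
  // describing every registered instance (ServiceAddress, ServicePort, ...).
  def instancesJson(service: String, agent: String = "http://127.0.0.1:8500"): String = {
    val source = Source.fromURL(s"$agent/v1/catalog/service/$service")
    try source.mkString finally source.close()
  }

  def main(args: Array[String]): Unit =
    println(instancesJson("product-service"))
}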

To locate other services, we use an HAProxy load balancer on each Mesos node. This proxy routes the incoming DNS name to the target service on the port assigned by Mesos. This way the whole setup is completely dynamic, and services only need to know the names of the services they depend on.

For regular DNS lookups we use Consul as well. If we want to make a TCP connection to a port, we can simply set up the connection to the service name. This gets translated into the IP address of a healthy instance that can handle the request.
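
Because Consul answers DNS queries for names under .service.consul with the addresses of healthy instances only, a plain DNS lookup is all a service needs, assuming the host’s resolver forwards the .consul domain to the local Consul agent. A minimal sketch with an example service name:

import java.net.InetAddress

object ConsulDnsLookup {
  def main(args: Array[String]): Unit = {
    // Resolves to the IP address(es) of healthy instances; unhealthy ones
    // are filtered out by Consul before the DNS answer is returned.
    val addresses = InetAddress.getAllByName("product-service.service.consul")
    addresses.foreach(address => println(address.getHostAddress))
  }
}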

[Figure: Consul DNS lookup]

For the incoming routing, we use a gateway together with consul-template, a tool that regenerates a file (such as haproxy.conf) based on a template and changed data in Consul. This way the gateway gets updated automatically when we change the number of instances of a running service. We built a custom gateway for this, which will likely be open sourced in the near future.

One thing to note is that we’re running everything inside Docker containers. This means we deploy infrastructure services almost the same way as we deploy application services. This keeps the underlying infrastructure simple and uniform, and it keeps our orchestration tooling simple as well.

Summary

Using Consul, we were able to create a completely dynamic environment where every service can find its dependencies without any human administration. We could get rid of our big Ansible files with key/value pairs, which simplified the deployment, orchestration and maintenance of the system.

By distributing our configuration data with Consul we’ve gained resilience; the real gain, however, is in developer productivity. Previously, deploying a new service would take a junior developer 15-30 minutes to write an Ansible deployment script. This is now taken care of ‘for free’.

Rethinking the wehkamp.com grid

One stable grid

As a developer, every once in a while you get to the point where you have to completely rewrite your own code from scratch, be it because of a new platform architecture or because of new business needs.

When we began scaffolding the grid system for the wehkamp.com interface, we started out with the same basic principle Twitter used with Bootstrap for their responsive grid: a fairly simple, twelve-column structure to accommodate a wide range of viewports. Columns could live in rows, and rows were housed in a central container that acts as the outer spacing unit. Although we could’ve implemented the whole Bootstrap library, we chose to adopt only its methodology and write a lightweight, scalable User Interface kit ourselves, without all the bloat that comes with an existing UI framework.

At wehkamp.com, everything revolves around showcasing our products to our visitors in the best possible way, which of course depends on a robust yet flexible grid. On our search and category pages, we have a vertical navigation tree and filtering on the left hand side from which you drill down your search and view the products on the right, three in a row.

[Figure: the twelve-column grid on the search and category pages]

At the largest supported width (1280 pixels — mostly displayed on desktop-like screens), the twelve column structure provides four sections covering three columns each: one section for the navigation tree and three sections for the products and other features like sorting, swap view and paging.

<div class="container">
  <div class="row">
    <div class="col-md-span3 nav-section">
      <!-- nav component -->
    </div>
    <div class="col-md-span9 product-section">
      <div class="row">
        <div class="col-md-span4">
          <!-- product -->
        </div>
        <div class="col-md-span4">
          <!-- product -->
        </div>
        <div class="col-md-span4">
          <!-- product -->
        </div>
      </div>
    </div>
  </div>
</div>

Nothing really fancy here. Anyone who has worked with Bootstrap before can relate to the code, which was one of the main advantages when working with several teams in the same codebase.

“col-” acts as the component identifier, “md-” as the breakpoint handler, and “span*” determines the width that is eventually applied.


Rethink your code

As features and business requirements of the project progressed, we found ourselves having quite a debate when this one user story came along:

As a customer
I want to see more products in a row
So I can get a better glance at the catalog

So basically it meant we had to add an extra product to each row. This way products were shown big enough in the desktop view but still had the perfect dimensions on smaller devices. Sure, we could have changed the product width in the product section alone, from its original 33% to 25%, and be done with it. But that also meant the widths of the product columns would no longer be in sync with the navigation column.

In addition, when you stack other components below the product list, like ‘Last viewed products’ or general product recommendations which span the total container width, an unwanted misalignment would give the page a messy look & feel.


Grid wireframing

While staying structurally and visually sane, we had to think of a way to accommodate five columns and at the same time be flexible enough to answer four-, three- and two-column needs on smaller devices. And although we have a clean and distinct project codebase, changing grid-related CSS classes in every template a year into production would definitely require some regression testing, both manual and automated.

To make things more clear, we made three sketches to visualize the change before we started altering all templates in the application: one which represents the current situation as a baseline, one that could be done with our current grid setup but has its drawbacks, and one that turned out to be the one we eventually built: multi-gridded!

Original column setup

The original setup where product columns as well as navigational sections span equal widths.

Unwanted quick-win

The easiest way to get the user story done: rename the product column CSS class from col-xl-span4 to col-xl-span3 where needed, producing an unwanted side effect: a global difference between columns and components.

Eventual column setup

The eventual column setup with the new, flexible multi-width-column grid-system, providing every possible solution while remaining consistent across the whole website.

Getting Sassy with it!

Since we have had Sass as our CSS preprocessor from the beginning of the project, the idea was to let the new grid requirements work for us in a simple function that loops over all the variables.


Set up basic variables

First, we define a basic setup to handle all widths that can be used across the whole User Interface library for mixins, media-queries and containers. We already had these values in our settings file from day one, so we did not alter these when we started working on the new grid setup.


$width: 0;
$width-xs:  480px !default;
$width-sm:  640px !default;
$width-md:  768px !default;
$width-lg:  992px !default;
$width-xl: 1280px !default;

Thanks to the !default flag, you can define your own $width variables before including the library without seeing your values redefined.


Define grid types

Second, we define an array that holds the total number of columns per grid type; it is used later on in the iterator as the multiplier in the main Sass function that generates the full grid. This was the first extension we made compared with the old grid setup.

$grid-types: (
 10,
 12
);

The setup is very easy to extend whenever new grid types need to be created.


Arrayify labels and widths

To use the eventual CSS classes in our HTML markup, we need an easy handler that represents a specific breakpoint. We define it as the first parameter in the array, as a label, and add the corresponding width variable, which we declared earlier, right next to it.
We used this same Bootstrap naming concept with the old grid system, but that was mainly hard-coded and not very easy to adjust. Using a global array was the new way to go.

$breakpoints: (
 ""   $width,
 "xs" $width-xs,
 "sm" $width-sm,
 "md" $width-md,
 "lg" $width-lg,
 "xl" $width-xl
);

Creating arrays in Sass is very straightforward. You can use nested lists without braces, using different separators to distinguish levels, and add aliases to them in your loops and functions later on.


The media-query mixin

Along with the basic width variables, we already had a simple mixin from before our grid rebuild that we could use within any CSS selector.

/* Breakpoint mixin */
@mixin breakpoint($width, $type: min-width) {
  @if (str-index($type, max-)) {
    $width: $width - 1;
  }
  @media screen and ($type: $width) {
    @content;
  }
}
 
/* Usage */
.nav {
  padding: $base-spacing;
 
  @include breakpoint($width-md) {
    padding: $base-double-spacing;
  }
}

Most of the time we only use this to define things mobile-first, where the notation represents a specific width and up, but optionally you can pass a second parameter to define a max-width, where the ‘minus one’ is automatically calculated. The @content directive takes care of injecting all declarations into the new media query.


The grid generator

So much for the setup. Let’s get calculating. We have a set of breakpoints with corresponding widths, a set of grid-types, and per grid type all column-spans to be calculated.

@each $breakpoint in $breakpoints {
   $alias: nth($breakpoint, 1);
   $width: nth($breakpoint, 2);
 }

First, we loop over the breakpoints array. This is the first handler we encounter after the “col-”. Within this array, two parameters are set, so we can access those by defining an alias. The nth(var, 1) notation represents the first parameter within the first level.

@include breakpoint($width) {
}

Second, we include the breakpoint mixin so we can pass the $width from the loop for every breakpoint handler.

@each $grid-type in $grid-types {
}

Third, we loop over the grid types. In our case these are the 10- and 12-column grids we declared earlier. This array can easily be extended as new widths may be designed in the future.

$breakpoint-handler: if( $alias != "", $alias + "-" , "" );

If an alias is empty (which is the case from 0 up to the first breakpoint handler, xs), no additional dash has to be inserted for readability, so we add an extra if-statement to evaluate this. The $breakpoint-handler takes the $alias we defined earlier and puts it in the eventual column declaration.

@for $i from 1 through $grid-type { }

To generate the correct number of columns per grid type per breakpoint segment, a for-loop with $grid-type as the maximum number of columns is the ideal candidate. The $i variable from the iterator can be used as the index for each column and as a multiplier in the width property.

.col-#{$breakpoint-handler}#{$i}-#{$grid-type} {
  width: #{(100 / $grid-type * $i) + '%'};
}

/* without breakpoint handler */
.col-1-10 {
  width: 10%;
}

/* with breakpoint handler */
.col-xs-1-10 {
  width: 10%;
}

When all loops and aliases are in place, we can actually begin to write some lines that eventually render to real CSS. The main selector handles the width, whereas the additional selectors are responsible for pulling, pushing or offsetting whitespace for flexibility. We use the Sass interpolation notation #{ } to make sure everything is calculated before it is finally rendered into CSS.


The final function

Putting everything together, our Sass grid generator looks like this:

@each $breakpoint in $breakpoints {
  $alias: nth($breakpoint, 1);
  $width: nth($breakpoint, 2);
 
  @include breakpoint($width) {
    @each $grid-type in $grid-types {
      $breakpoint-handler: if( $alias != "", $alias + "-" , "" );
 
      @for $i from 1 through $grid-type {
        .col-#{$breakpoint-handler}#{$i}-#{$grid-type} {
          width: #{(100 / $grid-type * $i) + '%'};
        }
        .col-#{$breakpoint-handler}push#{$i}-#{$grid-type} {
          left: #{(100 / $grid-type * $i) + '%'};
        }
        .col-#{$breakpoint-handler}pull#{$i}-#{$grid-type} {
          right: #{(100 / $grid-type * $i) + '%'};
        }
        .col-#{$breakpoint-handler}offset#{$i}-#{$grid-type} {
          margin-left: #{(100 / $grid-type * $i) + '%'};
        }
      }
    }
  }
}

The column properties are an addition to the general properties declared in the base column selector before the grid generator function:

*[class*="col-"] {
  float: left;
  min-height: 1px;
  padding-left: $base-grid-spacing;
  padding-right: $base-grid-spacing;
  position: relative;
  width: 100%;
}


Sprint demo

The user story was reviewed and pushed to production in no time. And although not every stakeholder understood during the sprint demo what was going on under the hood, it was nice to share some of the logic that was implemented in this story. Giving some extra context helps everyone inside and outside the team understand that it is not always as simple as just putting an extra product in each row.

[Figure: five equal columns in the new desktop view of the search results, ~1280px]


Before
Fixed twelve-column structure only.

<div class="col-md-span4">One Third from 768px and up</div>

After
Self-explanatory multi-column structure. Scalable, maintainable, simple.

<div class="col-md-4-12">One Third from 768px and up</div>
<div class="col-xl-2-10">One Fifth from 1280px and up</div>


There you have it. A compact and extendable Sass function driven by two simple arrays and one list of width variables with a very flexible outcome: a robust and future-proof grid-system which can easily be extended when needed. Happy stakeholders, happy developers.

And some nice story points to add to our Jira burn down chart too.


The rebirth of the Wehkamp front-end

It feels like only yesterday when you sat down at home after school behind your computer, staring at the screen and just, patiently, waiting for this typical sound.

It was in that period of time – about 1995 – when Wehkamp, like many others, wanted to expand its offline business proposition to this new platform, which promised infinite possibilities. Being a pioneer online and having to deal with all the hardware restrictions and the lack of the decent broadband internet connections we are used to nowadays, Wehkamp offered its customers only a slimmed-down interface online, while all catalog imagery was sourced from a CD-ROM that had been ordered earlier.


A lot has changed since then.


That slimmed-down interface evolved into something that matches the needs of a modern customer: a rich user experience with interactive marketing content and a high-performance, responsive platform to accommodate a wide range of devices.

Although the current platform is very successful, a new and more flexible way of developing for the future was needed. Things needed Blaze. So did the front-end.


Just working with HTML, CSS and jQuery
The front-end stack of the current .nl platform consists of ‘just’ working with HTML, CSS and jQuery, with the luxury of one of the most extensive IDEs you can ask for as a developer in the .NET stack: Microsoft Visual Studio. As all functionality was built on premises and stored within Team Foundation Server, we were our own resource community.

Although we saw the need to componentize the front-end codebase in its current state, the size of the platform made it a herculean task. Several parts of the codebase pulled in tens of stylesheets and even more JavaScript includes, and CSS specificity was a real struggle.

Serving millions of users, you need to be fast. And since a stylesheet is on the critical path, we aimed to reduce the number of stylesheets drastically. Components were rearranged, lots of code was combined, legacy declarations were removed and a new bundling tool powered by the Web Essentials plugin in Visual Studio was introduced. The result: fewer HTTP requests, less server load, not twelve but only three separate stylesheets in the DOM, and a codebase that is much easier to work with.


But that was not the end of it. When we started out with Blaze, we knew we had to stretch it even further. Modern web development is all about finding the right tools and getting the most out of every project. The choice of the toolset began at a very low level. As the fundamentals of the Typesafe stack were a fact for the backend and GitHub was our version control system, we chose the Yeoman scaffolding tool to set up the front-end basics.


AngularJS as weapon of choice
To accommodate user interaction in a flexible and scalable way, and to handle our HTML rendering and DOM manipulation, AngularJS was our weapon of choice. In contrast to the old stack, where everything was done manually with the jQuery library, AngularJS is a mature framework where a lot of functionality comes out of the box, like routing, two-way data binding and dependency management. Yes, the word ‘framework’ implies rigidity, but the fact that you are forced into a certain project architecture decreases the risk of clutter and personal code styles, which could lead to a steeper learning curve across the teams.

We needed a solid build tool to deliver high-quality, concatenated, minified and versioned software. For us that was Grunt. Apart from the build tasks, day-to-day coding is a breeze with all changes being automatically live-reloaded in the browser.

The mindset of quality without compromise was incorporated right from the start of the project, and to meet that criterion a solid testing framework is a necessity: Karma – which integrates seamlessly with Grunt – for unit testing, and Protractor and BrowserStack for end-to-end and multi-browser testing.


Taming large-scale CSS
For the interface part, in which CSS plays a major role, we did not have anything in place other than the Sass preprocessor to streamline it. But what about project structure? How do we shape our stylesheets so that we can meet our business’ wishes?

In essence, the language of the Cascading Style Sheet is not very dynamic. It is quirky, it doesn’t let itself be shaped easily and every browser has its own way of treating it.

So we needed a plan. In the year prior to Blaze, the front-end team spent an awesome week together with Harry Roberts here at Wehkamp. We talked about the process of creating, maintaining and engineering large-scale CSS and how we could let it work for us, instead of against us. Harry introduced us to his latest methodology: ITCSS.


The inverted triangle
ITCSS, short for “Inverted Triangle CSS”, is a scalable, managed architecture for working with CSS at progressive scale. It is a methodology and school of thought which allows us to strictly group and order explicit types of CSS rules in a manner that makes them more useful, more manageable and more extensible. ITCSS is a meta-framework which aims to guide and outline a project’s architecture; it does not dictate anything as specific as naming conventions, or any other more opinionated ideas.

[Figure: the ITCSS inverted triangle]

The Inverted Triangle aims to tame and guide the aspects of CSS that become problematic at scale—specificity, inheritance, and the cascade. Layers toward the top affect a lot of the DOM, are very far reaching, have a lower specificity, and have a lot of cascading rules. As we travel down the layers in the triangle, we find that rules affect less and less of the DOM, have a progressively higher specificity, and pass on less and less of their styling to subsequent layers.

The key to breaking code into these shearing layers is structurally and visually decomposing UI features. If a new design is being made, we need to work out which aspects of it can be attributed to a certain layer. To make sure the component-based architecture is preserved, element repetition, or the potential for it, must be addressed. Reusable interface features help maintain the codebase and reduce its growth. And that automatically means working closely together with other team members and stakeholders, because modern web development is still just a way to build the stuff right. But let’s not forget to build the right stuff.


Spiffy interface
As the fast-growing, ever-changing technological landscape of Wehkamp’s new e-commerce platform evolves, so does its user interface. We are constantly shaping, expanding and perfecting our components to provide not only a spiffy interface but, more importantly, one that just works intuitively and leaves no questions for the individual who uses it.

The past year at Wehkamp was a real rollercoaster of new tools and techniques. And let’s face it: a new JavaScript framework is released every month. A challenge, but one that grew with us and really made us accelerate. Choices about which tool or library to use on our project go hand in hand with the transition we face as a team. But we are not in it alone. Since Blaze and the new tech stack, the community has our back too. And that evolution pays off.
The Blaze front-end has really been reborn.

Microservices in practice talk at XebiCon 2015

On the 4th of June, the XebiCon 2015 conference took place at the Westergasfabriek in Amsterdam. XebiCon is an IT conference organized by Xebia, the IT services provider that is also our partner in creating our new e-commerce platform. The conference had multiple tracks, such as Internet of Things, Continuous Delivery and data center automation, all very hot topics in the IT development community today.

One of the other hot topics was microservices, which also got a lot of attention at this conference. There were multiple talks on the subject, including one I did together with Jan Toebes (Xebia consultant) about our microservices implementation at Wehkamp. In this talk we described why our new platform is based on a microservices architecture and how we managed to get this far. We explained two theoretical principles of microservices in detail and talked about how we used these principles to create the platform. We also explored the pros and cons of a microservices architecture and, in our case, the difficulties and opportunities we encountered.

During the presentation, the room was packed with developers who wanted to know how we managed to create this platform. It was very satisfying to have some positive reactions after the talk. It confirms that we are doing great things here at Wehkamp!

For people who didn’t attend the conference or our talk, the whole presentation was recorded. For those who are interested, the presentation can be experienced again on YouTube.

Introducing The Testing Community at Wehkamp

Besides working on awesome projects, every person in our IT department is part of a community where people with the same or similar job roles are united and spend time working on their profession and personal development. New technologies, self-improvement, knowledge sharing, code re-use: it all happens during community time. To better explain this concept, I have put together a short presentation about our testing community.

Meet the community: Scala Days 2015

Scala Days, the premier European Scala conference, was held this year at the Beurs van Berlage in Amsterdam from June 8th through 10th. Now that Wehkamp has embraced Scala and related technologies for our new Blaze platform, it was a good idea to go and meet the community. So, in addition to being a sponsor of Scala Days this year, we attended the event with a small group of people from our Marketing Technology / Development team, and we also had a booth at the venue where we could meet & greet and show the community what we are doing at Wehkamp.

It was my first time at a conference like this, so I didn’t really know what to expect. As I’m pretty new to the Open Source community and the technologies involved, I was wondering if the talks wouldn’t be a bit over my head. I was happy to learn that Scala Days hosted talks in several categories: Enterprise, Cool stuff / IoT, Tools / Best practices and Core Scala, and within those categories there were talks aimed at different skill levels. This way there always was something for you to enjoy, regardless of your role in the dev team or the level of experience you had. In addition, the more advanced talks usually started out with a small introduction for people less educated about the subject.

The conference started with a keynote by Martin Odersky, the creator of Scala. This immediately struck me as one of the cool things about Scala Days: a lot of the creators and core contributors of the technology we use in our new Technology Stack were actually present and doing talks and mingling in the crowd. Be it Martin Odersky, Roland Kuhn (akka), Mathias Doenitz (spray), or the people from Typesafe. It all had a really approachable and relaxed feel to it, and it was nice to see how involved everybody is.

The next couple of days I submerged myself in what is hot and happening in the Scala community. I attended some very interesting talks about Akka typed actors, Spark Streaming with Kafka and Cassandra, Reactive Streams, ScalaJS, build tooling and Options in Futures in Scala (and how to unsuck them). And the more I heard about what’s currently cutting edge in Scala-land, the more it was confirmed that we’re heading in the right direction with the new Blaze platform.

The talks varied from very academic to very hands-on, and from what is happening right now to what the future will bring. One of my personal favourite talks was the Tuesday keynote by Jonas Bonér, CTO of Typesafe, in which he used an analogy of time (and how we perceive it) to describe concepts of dealing with concurrency in software development. While it was in many ways a philosophical talk, it actually made some things ‘click’ for me, as I could relate his story to what I am currently working on in the Blaze platform. Awesome.


But Scala Days is not just about listening to talks. It is, of course, also very much a social event. It’s a means to connect the community with the people behind the technology, with each other, with future employers, with meet-up groups, etc. So outside the talks, the place to go was the main hall of the venue. Be it to have lunch, meet up with other people or just help yourself to a quick snack or drink. It was also the place several sponsors, including Wehkamp, had set up their booths. Most of them had some sort of gimmick to try and attract attention. One booth had an ice cream machine, others had a popcorn machine, a soccer table and one booth even had a professional coffee bar. Our weapon of choice was remote mini helicopters. Attendees could try to fly a mini helicopter from platform A to platform B, and when successful, would have a chance to win the helicopter! Despite the fact that the helicopters turned out to be almost uncontrollable, they did a remarkably good job at creating commotion at our booth, and we actually had some lucky winners at the end of the day.

All in all Scala Days has been a great experience. I learned a lot and got a taste of what the Scala community is all about. I also had a really fun time with the Wehkamp team, and it was great to spend some time together outside the office. 

Scala Days 2016, here we come!