Thursday, January 18, 2018

beyondtellerrand 2018 Munich

For the first time ever, beyondtellerrand (https://beyondtellerrand.com / @btconf) came to Munich, the third location after Düsseldorf and Berlin where Marc Thiele (@marcthiele) holds his famous event.

The only reason I was aware of it at all was that my former colleague and friend Francesco Schwarz (@isellsoap, a.k.a. the creator of the Specificity Visualizer), whom I had already joined for beyondtellerrand 2016 in Düsseldorf, pointed it out to me. So we both decided to go.

The venue was located in the Künstlerhaus directly in the center of Munich next to the famous Karlsplatz. A lovely location with an inspiring atmosphere.

As always the event was accompanied by beyondtellerrand's DJ Tobi Lessnow (http://www.tobi-lessnow.com / @tobilessnow). For those who have not been to the conference yet: Tobi captures text portions during the talks and then arranges them in his DJ mix during the following break. Really a great experience, especially with the passion and dedication Tobi shows during his performance.

The conference videos are published on Vimeo and YouTube. Currently (Jan 18th) only six of the Tuesday talks are online on Vimeo, but I am sure the rest will be posted soon.

Day 1

Sacrificing the Golden Calf of Coding - Chris Heilmann (@codepo8)

This was an entertaining opening talk about not only the past but also the future of coding. Using his own life as an example he illustrated how people evolved from hobby coders to professional programmers and how this was reflected in their code. Hobby coders just try to get things working and make the code fast but very, very clunky and hard to maintain. But the more people code for a living, the more they have to start worrying about reuse, modularity and maintainability, and usually absolute maximum performance is no longer the primary concern.

But there is also another kind of evolution going on. In the beginning people wrote code in crappy editors like Notepad, with no IDE or anything, and installations were done manually etc. Over the years tool support became better and better and now we have systems that can do simple tasks, generate code from templates and handle boring day-to-day work. Chris emphasized that people should use the tools we have to their full extent and let the computers and algorithms do the boring and repetitive tasks. We should be creative and innovative and create new things by leveraging what the technology provides us with now.

Just because things were hard in the past, they don't have to be hard forever.

SVG Filters: The Crash Course - Sara Soueidan (@sarasoueidan)

Seriously, if you ever need an analogy for drinking from the fire hose, this talk is what you are looking for. Marc Thiele's introduction was more than fitting: "She talks fast and knows a lot" ;-)

Sara gave us an impressive overview of what you can do with SVG filters, from rather easy things like creating drop shadows to creating your own textures using simple effects like feFlood or complex ones like feComposite or feTurbulence. I was really impressed with what the browser can do without the need for imaging tools like Photoshop.

All examples were explained by looking at the actual code that Sara walked us through step by step.

It was very interesting, but when you watch the video, maybe try playing it at 80% speed :-)

Interactive Email - Mark Robbins (@m_j_robbins)

When I read that topic I was like "Ugh". There are few things I despise more than flashy HTML emails. And tbh, that is more or less what Mark is building, but to be fair, the use cases he described seem sensible and obviously help his customers.

One example is something that a lot of us can surely relate to. You are on a shopping site, add stuff to your cart but then get distracted and forget about it. Now the site can send you an email listing your cart and offering the option to complete your purchase and even adjust the number of items in your cart before doing so. All of that without ever leaving your email client.

There are a few challenges when you do something like this. For one, email clients are incredibly diverse and testing for all combinations of email client, operating system and device is impossible. Also, you can't use JavaScript in those emails, so instead everything has to be done using CSS.

This results in really creepy CSS selectors, e.g. selections are always implemented as radio buttons so you can use the :checked pseudo-class in your selectors, and multiple radio button elements can be combined in selectors to then style other elements as desired.

The results are indeed impressive considering how limited the available options are, and even more impressive was the demonstration of the "Wolfenstein 3D" shooter within an email using only CSS for the application logic. Okay, it was pretty buggy and far from a real shooter, but it is still an accomplishment.

From AI to Robots, From Apps to Wearables - Let's Design for Everyone, ok? - Robin Christopherson (@USA2DAY)

This talk was composed of a rather large number of videos that Robin strung together, explaining certain aspects concerning accessibility of technology in general. What I liked was that he did not focus solely on accessibility with regard to people with disabilities but also covered other impairments, both permanent and temporary.

Robotics in our Everyday Lives: A Product Designer's Perspective - Carla Diana (@carladiana_)

Carla introduced us to the topic of social robotics, meaning research and projects on how robots can not only assist us in our everyday lives but also provide an interface so we can interact with them like with a social or even human being.

A really cool example was the robot Poli that assists nurses in a hospital thus giving them more time for their patients. This robot is designed to vaguely resemble a human with a head and a torso, reacts to voice commands and replies to them. I think this is a great example of how cool tech can directly improve quality of life.

Another project of hers was the design of the robot Simon, which is able to express emotions using basic facial expressions and is also able to react to its surroundings.

The talk was filled with short clips showing different robotic and interface design projects and I highly recommend checking out the video once it is available.

The Internet of Natural Things - Simon Collison (@colly)

I must admit, I have a hard time summing up this talk since Simon touched on various aspects of how nature and technology can be combined. On one side we have the research aspect where animals are tagged using e.g. chips so that researchers can track the movement of birds, whales or other animals. Highly related to that is how data mining techniques are used to analyze the data gathered from those animals, which is usually not limited to their GPS coordinates but can contain vital signs, environmental influences and much more.

On the other side you have apps or other pieces of technology that allow you to improve your experience when you are outside in "the wild". One example is nature guides that allow you to easily determine what animal you have just seen or even what bird you just heard by simply sampling its song.

But for me his basic message was that we should take a step back and try to appreciate nature more, to enjoy it fully and to take the time necessary. He underlined his message with stunning images and personal stories.

His last aspect on this topic was that we should also try to design our applications and even operating systems so that they allow for a more natural or intuitive usage. For example a mobile device could sense where you are (home/work/gym/park) and give you simple access to apps appropriate to the current environment.

Even though the topic itself feels a bit esoteric to me, I still think there is a lot in it that we should think of more. It sure does not hurt to appreciate and actively experience nature more.

Sho Ha Hito Nari: Brushes, Strikes and Reflection of Self - Aoi Yamaguchi (@aoi_gm)

Calligraphy is, like most forms of art, something that I never really understood. And it would be a sham if I claimed that this is totally different now. I guess I am just a bit too ignorant to really appreciate art.

But what I am able to appreciate and be truly humbled by is the incredible amount of dedication, practice and discipline that Aoi conveyed to us during her talk. I had no idea about the countless hours of not only practice but also preparation someone has to go through to become a grand master in this form of art.

Not to mention how much patience is required to properly choose and prepare the ink for a painting, or the incredible variety of different brushes and their uses. I also had no idea that ink blocks (I did not even think of ink as blocks until then) age over time and thus develop a unique character that also influences the paintings.

Evening Get Together

As usual the first conference day was closed by the traditional get together with free drinks. We had a few nice chats with different people including Marc Thiele (@marcthiele, the boss himself) and a friend of his, Björn Odendahl (@nail7), who incidentally was the designer of the beyondtellerrand shirt of 2016.

Day 2

New Adventures in Responsive Web Design - Vitaly Friedman (@smashingmag)

First talk after drinking night, so we should be off to an easy start, right? Yeaahh... not on Vitaly's watch ;-)

This talk covered most of the things you should consider if you want your website to be fast and responsive. It started off with the usage of compression formats like gzip, Zopfli or Brotli. Then it went on to images and how you can speed up your website by using smart features or tricks like responsive images and progressive JPEGs. An interesting side note is that it can sometimes be better to use a large image with low quality and have your browser display it at a smaller size, rather than using a small image with regular quality, in order to save bandwidth.

The second big section was, of course, webfonts: how they should be used and what dangers lie within when fonts can't be downloaded (browser timeouts etc.). Vitaly illustrated the different strategy options for webfont loading, including fallback fonts and the optional value of the font-display descriptor for @font-face.

The last part covered the dreaded third-party scripts. It is important that you understand which scripts your site is using, where they are hosted and, most importantly, how your site reacts if those scripts are unavailable. To find out more about that you can create request maps and use blackhole testing. It is also good to encapsulate those scripts in e.g. an iframe (even though most people hate it), or you can try to use SafeFrame.

Radically Accessible Internet Applications - Marcy Sutton (@marcysutton)

Another talk concerning accessibility in web applications, but this time with a focus on how you can a) find out where your app lacks accessibility and b) implement it properly.

Implementation is usually done using the various aria attributes and proper focus handling with the help of the inert attribute.

To find issues, it helps to navigate your application only with the keyboard, as this is what screen reader users will do. Activating VoiceOver can help you find issues as well, as can the VoiceOver rotor, which gives an overview of all the VoiceOver portions in your application. Another great tool is the browser plugin aXe Coconut that can find accessibility violations. Additionally, aXe-core can be integrated into your build process to run tests on your application and find such issues on your build server.

Why Fast Matters - Harry Roberts (@csswizardry)

Why fast matters? Well, the obvious answer is money. Faster sites generate more revenue, and reduced traffic saves bandwidth and thus costs. Easy, isn't it?

Easy yes, but not the whole truth. If we want a truly inclusive web we also have to make sure that our content can be accessed by as many people as possible. That also includes those in so-called third world countries and emerging economies. One major issue these countries face is the lack of high-speed or even regular-speed internet access.

Many people in the world have to use 2G connections or even worse. For them, a lot of the websites that we use daily and take for granted are hard or even impossible to use. Harry showed us numbers on how many people are actually using mobile devices with such bad connections and also ways to test how our applications behave in such circumstances.

Tools like Dareboost or SpeedCurve can analyse your website using different connection types, origins and stability parameters. Charles Proxy can be used to effectively throttle your device's connection to simulate access from e.g. other countries.

This talk summed up a lot of interesting aspects about website performance that go beyond our usual scope and is therefore highly recommended.

Data Sketches: A Year of Exotic Data Visualizations - Nadieh Bremer (@NadiehBremer)

Data visualization is a discipline of its own and I am usually perplexed by the results those people come up with.

Nadieh spoke about her insights and learnings from the past year of visualization projects showing the amazing results as well as the steps that led to them.

It was also interesting to see how and where you can find the data you are looking for, but also what weird kinds of data some people are gathering, e.g. all words spoken by each character in the LotR movies. I suggest you watch the video, which explains the connections way better than any summary I could put here. It is really worth watching.

A Tinker Story - Dina Amin (@dinaaamin)

As I wrote above, I usually don't get art. But this one, this one is really cool. Maybe because Dina does not consider herself an artist. She told the story of a young woman from Egypt who started to do what she liked and, by leveraging the internet, coincidentally became famous.

The short description of what she does is simple. Take things apart, look at the parts, recreate something new from them and put all of this in a stop motion movie.

Sounds boring? Check out her video or look for #tinkerfriday. You will not be disappointed. When I saw the first movie I also thought "so what?", but only until my jaw hit my knees...

Why Beauty Matters - Stefan Sagmeister (@sagmeisterwalsh)

Okay, last talk of the conference. Traditionally a rather funny but still insightful and informative talk. Stefan explained the concepts and misconceptions of beauty in the context of architecture. This one also relied heavily on the images and videos used, so it would be futile to describe it here.

I had quite a few very good laughs and it was a great closing talk.

Thursday, December 14, 2017

IT-Tage 2017 in Frankfurt

My last conference has been a while ago, but I finally managed to get myself signed up again. This is my first time at the IT-Tage in Frankfurt, a conference hosted by the German magazine Informatik Aktuell.

Three days with 6 parallel tracks seemed like a lot to choose from, even though 3 tracks mainly revolve around SQL database technologies. Although those are not of that much interest to me, I was still looking forward to this, as on the other hand there are lots of talks about micro services, security and programming languages.

But before I go into detail about the talks, some general information. The venue is the Kap Europa congress center in the heart of Frankfurt, easy to reach and rather classy looking. What strikes me as a bit odd is that all talks are held in German even though many use English slides. This seems to be intentional on the organizers' part and surely has some advantages, but I think it should not be an issue for anyone working in IT to follow English talks, and it would broaden the potential audience as well as the pool of speakers.

A positive thing on the other hand is the concept of track hosts. Most conferences have someone responsible for making the hardware work properly, but here those people do quite a bit more. They give a quick introduction about the speaker and the topic, encourage discussion afterwards and give additional information about the upcoming program.

Day 1

Harald Gröger - General Data Protection Regulation

This new regulation will become effective in May 2018 and has some serious effects on how companies have to handle all aspects of personal data: gathering, processing and even deletion.

I will not repeat every detail here, but the key changes that stuck in my mind are:

  • you will have to explicitly ask for permission to gather any data
  • the user has to actively give his permission
  • it is not obvious what kind of information has to be considered personal
  • if a user requests deletion of his information, that has to happen immediately, but only if the data is not needed for other purposes like order processing/billing or legal matters
Especially the last point will cause us developers serious headaches, handling all the corner cases without breaking anything and without violating the law.

Harald works at IBM and also lectures at the University of Würzburg on related topics. So he knows his way around data protection, and you noticed it throughout the solid and entertaining talk. TBH I would have enjoyed it more if the topic had been more fun, but it was still a good start to the day.

Dominik Schadow - Build Security

Since the first session was about as theoretical as it can get, the second one had to be "nerd on". So I went into this talk about build security hoping for some hands-on stuff that I might actually be able to use back at the office.

So what do we mean by "build security"? Your build process should verify that your code and the resulting application is as secure as it needs to be. Because that is what we all want, a working application that can't be hacked by script kiddies within 5 minutes ;-)

To help us developers achieve this our build pipeline should perform certain analysis steps. It should perform them as often as possible/feasible and as soon as possible.

The easiest approach to this is the well-known static code analysis. A widely used tool here is FindBugs, which I consider pretty awesome. But FindBugs is not really suitable for security checks; it does have checks for some injections and programming errors, but no in-depth security analysis. Or does it...? There is a plugin called Find Security Bugs (FindSecBugs), which also works with FindBugs' successor SpotBugs, and it does exactly that: scan explicitly for security issues. In combination with SonarQube this can be a really useful and powerful tool.

But what apart from that? What should we even check for? As a starting point everyone should at least know the OWASP checklist showing the top security issues and the recommended countermeasures. To support working with that there are two OWASP tools.

The first one is OWASP Dependency-Check. This tool checks the dependencies you use to build and run your application for known vulnerabilities, telling you which libraries have security flaws and to which version you should update etc. Brilliant!

The second one is OWASP ZAP. This is a powerful tool and a large portion of the session was spent on showing how you can configure it and what is important to consider. For my taste it could have been a few fewer Jenkins config screens (it has a Jenkins plugin, yeah), but it was still good to hear about all of the pitfalls etc.

ZAP can be used as a proxy between your app and the clients to manipulate and inspect requests and data. That is not that new. But it can also execute attacks on your application based on known real-world security issues with actual working exploits. You can configure which endpoints it should attack, or you can let it record your requests and then replay them in an altered way in attack mode. If you think of using it, be sure to read the documentation very, very carefully, as this tool can easily break your neck if you screw up.

But as great as those tools are, they also have downsides:
  • each tool increases the build time, some only a bit like FindSecBugs (only seconds or maybe minutes), some a really, really shit load like ZAP, which can cause a build of a few minutes to take hours due to the amount of requests performed
  • sadly all tools are known to report false positives, some more, some less. But there are configuration options to limit those and keep the pain rather low
Even though it was not exactly what I expected, I think it was a pretty good talk from a tech guy showing us what can be done to improve the security of our software with tools available to everyone.

Keynote Philipp Sandner - Blockchain and IoT

In this keynote we got a glimpse of what changes blockchains, the concept behind things like Bitcoin, might enable in the future.

Basically a blockchain creates sequential pieces of information where every piece has data from its predecessor encoded into it, thus making sure no element of the chain can be manipulated unnoticed.
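
To make the idea concrete, here is a minimal hash-chain sketch (my own illustration, not from the keynote; real blockchains add consensus, signatures, Merkle trees and much more):

```kotlin
import java.security.MessageDigest

fun sha256(input: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest(input.toByteArray())
        .joinToString("") { "%02x".format(it) }

// Each block encodes its predecessor's hash, so changing any block
// invalidates every block that comes after it.
data class Block(val data: String, val previousHash: String) {
    val hash: String = sha256(previousHash + data)
}

fun main() {
    val genesis = Block("genesis", "0")
    val payment = Block("car pays parking meter 0.42", genesis.hash)
    // Tampering with genesis.data would change genesis.hash, which would
    // no longer match payment.previousHash - manipulation becomes visible.
    println(payment.hash)
}
```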

Blockchains would allow easy transactions not only between humans but also between humans and machines or even between machines, like a car transferring some kind of currency to a parking meter etc.

Most of the use cases shown were not that new and could have been implemented without blockchains already. But what blockchains add is the possibility to ingrain checks, processes and restrictions, but also convenience functions, into the chain itself. That allows setting up simple ecosystems for things like smart contracts or IoT chains that users can hook into easily.

Of course the keynote also covered the current state of the so-called cryptocurrencies like Bitcoin. I was surprised how many other cryptocurrencies are out there and how much Bitcoin has been skyrocketing in the last months. But it is also scary that no one really knows if this will carry on or if there is a bubble just waiting to burst.

Going further on this subject, the basic question is: does value always have to be coupled to a physical good? I don't know and I don't think anyone else really does, so only time can tell.

As before, it was clear the speaker knew his subject, and he was able to convey his enthusiasm to the audience.

Christoph Iserlohn - Security in Micro Service Environments

Aaaand again going from theory to real-world hands-on stuff, at least that is what I thought.

The first 15 minutes were spent on defining what a micro service is, and then on how difficult it is to get security know-how into your teams and then into your development life cycle. Everything true and important in itself, but I still fail to see why we needed it in this kind of depth, but ok.

In a world where most people are going from monolith to micro services we often forget that we have to secure our services. Sometimes that is considered done after slapping some OAuth on them but that is not enough.

By splitting your application up into several services you also open up new attack vectors. Services talk over the network, so you have to secure the network, make sure only authenticated clients are talking to your services, and ensure that a request passed on to other services carries all the credentials and context information needed to verify that the request is valid.

Next was a short excursus about the history and relations of OAuth and OpenID, followed by how security tests should be part of the build pipeline. What I took from this part was that the testing tools seem to be lacking and that you need to build a lot of what you need yourself.

The last part focused on secret management. When going all in on a micro services architecture you have a lot of secrets to maintain, deploy and revoke, like passwords, certificates, API keys and shared secrets. The recommendation here was to use Vault, if possible an HSM (Hardware Security Module), and services that let secrets change dynamically.

All in all I got the impression the speaker was rather gloomy about this subject. Everything had the vibe of "this is not working and that is not working, we had problems here and problems there". This was emphasized by the fact that no real solutions were presented, only notions of what issues they ran into but not how they solved them.

Of course micro services are no silver bullet and sometimes it is better to build larger systems, but are they really that bad? I doubt it.

I think this was a real pity because I had the impression Christoph has a lot of know-how in that area that he could share.

Lars Röwekamp - Serverless Architecture

Serverless is one of the latest buzzwords (at least I think so) stirring around in the IT community. But what do people actually mean when they talk about serverless?

Lars started with a good definition, "If nobody uses it, you don't have to pay for it", or the more widely known quote "don't pay idle". So is my Heroku dyno serverless? ;-) Not quite, as shown by the serverless manifesto. I am not going to write all of that down, you can look it up. I am just going over the most important things Lars pointed out.

In serverless we are talking about functions as deployment units (FaaS), so a unit much, much smaller than your regular micro service, something that really only does one thing. Functions react to events, do something and thereby trigger other events that trigger other functions. Well, this sounds like NodeJS in the cloud to me :-)

The idea is that functions are scaled on a request basis, so every time a new handler is needed it is started up and shut down when it is not needed anymore. The reality seems not to be quite there yet, but we might be getting there eventually.

What is also an interesting idea is that the developer does not have to care about the container the function will be running in. He just writes his code and the rest is done by the build and deployment processes.

This went on for a while as Lars covered the whole manifesto, and I felt he gave a very objective overview of the topic with the occasional reality check thrown in, showing that theory and practice often differ.

After that we had a look at how such a function is built. We are actually not deploying a function but a lambda. A lambda consists of a few things. The actual code, which is what people usually mean when talking about the function. The function receives a context object and an event object. The event tells the function what has happened and the context gives information about the available resources. Additionally, you should always define security roles to make sure your function is only called by those who are actually allowed to do so, and also to make sure your function only does what you want it to do. You would not want to spawn e.g. EC2 instances when you don't need them.
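
To make that a bit more tangible, here is a minimal sketch of what such a handler can look like on AWS with Kotlin on the JVM (the RequestHandler interface and Context object come from the official aws-lambda-java-core library; the event type is made up for illustration):

```kotlin
import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestHandler

// Hypothetical event payload - in reality this mirrors the triggering
// event source (API Gateway, S3, SNS, ...).
data class OrderEvent(var orderId: String = "", var items: Int = 0)

class OrderHandler : RequestHandler<OrderEvent, String> {
    override fun handleRequest(event: OrderEvent, context: Context): String {
        // The event says what happened; the context describes the available
        // resources (memory, remaining execution time, logger, ...).
        context.logger.log(
            "handling ${event.orderId}, ${context.remainingTimeInMillis} ms left")
        return "processed ${event.items} item(s)"
    }
}
```

The security roles mentioned above are not part of the code; on AWS, for example, they are IAM roles attached to the function at deployment time.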

At first glance serverless sounds awesome, because you hardly pay anything, right? Well, not quite. What you have to consider is that with serverless a lot of your development and testing also has to happen in the cloud. You can do tests locally, but in the end you always have to check if it really runs as expected in the cloud environment.

Then followed a few examples of possible scenarios, which I think you should best look up in the slides.

And finally the obligatory pitfalls section. Again this was presented very objectively, showing that of course the complexity of applications cannot simply be removed by a new architectural pattern. All issues people have with micro services also apply to serverless and are in some cases even amplified due to the even higher degree of distribution and smaller granularity. A special mention has to go to the dreaded vendor lock-in: if you go serverless, your code will have dependencies on the vendor you use. But usually, doing everything yourself that the vendor does for you is magnitudes more work than changing those dependencies should you decide to change vendors.

For the rest I also recommend you check out the slides. As a summary I want to say that Lars gave the best talk at this conference so far, and it really shows he knows what he is talking about.

Werner Eberling - My Name is Nobody – the Architect in Scrum

This will be a short summary. Basically this was an experience report of how a single architect has to fail in an environment with several scrum teams working on a single huge project.

I will try to sum it up: If the architect is outside the teams, he has no team support, his decisions will not be backed by the developers, and he does not have enough knowledge of the team domains to understand the issues a team might have with his design. If you make that one architect part of all teams, he will have absolutely no time to tend to the individual teams, he will be in meetings all the time and soon burn out. So the only and imo obvious solution is: you need architecture know-how in all teams, and the teams have to design the architecture themselves, but to maintain the big picture all cross-team concerns have to be discussed in a community of practice.

It seems that the scrum masters just tried to follow what they considered textbook instead of simply thinking about what is needed to make the project work, which imo is the core idea of scrum: identify issues and then work as a team to remove them. And somebody should have noticed that something is weird when someone gets appointed or hired as architect while the scrum masters claim that this role does not exist and is not needed. At least one side has to be wrong in that argument.

I might be in a fortunate situation, as this is how we have worked for years, and it struck me as odd that there are still organizations that do not understand how to handle this. It is nothing new, and anyone with at least some brains should see that the single-architect approach is prone to fail, or at least notice it after a year.

But so far this report does not do justice to Werner. While I did not take much from the talk his presentation was entertaining and solid. So if he decides to give another talk about a more technical topic I will try to join.

Guido Schmutz - Micro Services with Kafka Ecosystem

We start off with a definition of micro services, again ;-)

But after that Guido got real about how to build software with an event bus system like Kafka, even though the first part focused on the concepts without taking Kafka into consideration.

Even though this was very interesting the summary is rather short. There seem to be three best practices for event bus systems:

CQRS (Command Query Responsibility Segregation) 

There is also an article by Martin Fowler about this topic, so I will keep it short: To allow for a large number of transactions and responsive systems, it is advisable to have read and write operations on your data separated, e.g. you have one service writing data to a data store (e.g. an event store) but have read operations performed by a different service working on a view of that data store. This view can have a completely different format and technology underneath. All you have to do is propagate write changes on the original data store to the view.
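
A minimal sketch of the idea (hypothetical names; in-memory collections stand in for the real event store and the read view):

```kotlin
// Write side: only ever appends to the (here in-memory) event store.
class OrderCommandService(private val eventStore: MutableList<String>) {
    fun placeOrder(orderId: String) {
        eventStore.add("OrderPlaced:$orderId")
        // In a real system this change would now be propagated
        // (e.g. via a message bus) to all read views.
    }
}

// Projection: applies write-side events to a denormalized read view.
fun project(event: String, view: MutableMap<String, String>) {
    val (type, orderId) = event.split(":")
    if (type == "OrderPlaced") view[orderId] = "placed"
}

// Read side: answers queries purely from the view, never the write store.
class OrderQueryService(private val view: Map<String, String>) {
    fun status(orderId: String): String = view[orderId] ?: "unknown"
}
```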

Event Sourcing: State is saved as a chain of events

The New York Times published an article on how they use this technique to save all their published data. All editorial changes are saved as events in an event store in the order they occurred. So a client can simply resume from the last event it has seen, without the bus system having to keep track of each client.
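
A tiny sketch of that consumer-side idea (my own example; a list stands in for the log):

```kotlin
// Append-only event log; every event keeps its position (offset).
data class Event(val offset: Long, val payload: String)

class EventLog {
    private val events = mutableListOf<Event>()

    fun append(payload: String) {
        events.add(Event(events.size.toLong(), payload))
    }

    // Clients resume by asking for everything after their last seen offset;
    // the log itself never has to track individual clients.
    fun readFrom(offset: Long): List<Event> = events.filter { it.offset >= offset }
}

fun main() {
    val log = EventLog()
    log.append("article created")
    log.append("headline edited")
    log.readFrom(1L).forEach(::println) // replay everything from offset 1
}
```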

One Source of Truth

Only ever ever ever write into one data store; all other stores must be populated from that main data store. Otherwise, in case of an error you will run into consistency issues, as distributed transactions are not (well) supported (yet) and also have a performance impact.

To round it up, Guido gave a rundown of the reasons why he uses Kafka for these kinds of systems, the main one being that Kafka is a slim implementation focusing only on this one purpose.
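
For a feeling of how slim that is in practice, here is a minimal producer against the standard Kafka client library (broker address and topic name are placeholders):

```kotlin
import java.util.Properties
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerRecord

fun main() {
    val props = Properties().apply {
        put("bootstrap.servers", "localhost:9092") // placeholder broker
        put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer")
        put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer")
    }
    KafkaProducer<String, String>(props).use { producer ->
        // Publish an event to the (hypothetical) "orders" topic; consumers
        // read it at their own pace, tracked only by their offset.
        producer.send(ProducerRecord("orders", "order-42", "OrderPlaced"))
    }
}
```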

For me this was an interesting overview of event bus architectures and an impulse to look into them some more.

D. Bornkessel & C. Iserlohn: Quick Introduction to Go

This was exactly what I expected: a rundown of where Go came from, what the design criteria were and a brief feature overview, followed by a small tutorial on basic programming tasks.

Here is a short list of the topics I found most interesting:

  • Dependency Management: This seems to be rather broken/bad/cumbersome, which would be a big issue for me, but on the other hand most things already ship with Go, so maybe it is not that bad?
  • No package hierarchies: ok... well, if you don't need a lot of dependencies then maybe that works...
  • Compile breaks on unused variables: On the one hand this is good for finding bugs etc., but during development this could cause some frustration. Maybe you get used to it...
  • Functions can return tuples: Yeah, that can be of great help in some cases
  • No exceptions: Instead you have to check if the called function returned an error object together with the return value. So that is why they made functions return tuples, eh? Causes a lot of if checks... *yuck*...
  • Casting also returns error objects like a function call
  • Implicit interfaces: You do not declare that you implement an interface, you just implement the methods of the interface and then your class implicitly is also of that interface type... TBH I don't know yet if I think this is cool or horrible... Something about this makes my skin itch.


Christian Schneider - Pen Testing using Open Source Tools

The final evening session, lasting two hours. Again security :-)

The first part was an exhaustive overview of the available tools you can use for different purposes.

Web Server Fingerprinting & Scanning


  • Nikto: Scans a webserver for typical issues like default applications still deployed, server state pages, missing headers etc.
  • testssl.sh: Does exactly what the name says, checks your SSL configuration for supported ciphers etc.
  • OWASP O-Saft: An alternative to testssl.sh
  • OWASP ZAP: This part took up most of the time, which was good because the tool is very powerful, but for me it held not much new information because of Dominik Schadow's talk earlier. For those who do not remember: ZAP is an intercepting proxy which can also run complete and exhaustive attack scenarios against your application.
  • Arachni: Can spider JavaScript applications better than the other tools
  • sqlmap: Database takeover tool. Man, that looked impressive AND scary. This tool performs a massive load of very deep injection tests on a single request, trying to get into your database to cause damage or extract information. The results are then available in a separate SQL shell/browser for inspection.

OS Scanning


  • lynis: Checks your server for available root exploits, like root crontabs being world-writable or dangerous sticky flags, and for necessary updates if run as root
  • linuxprivchecker: Shows which root exploits exist in your kernel and libs

Whitebox Analysis


  • FindSecBugs/SpotBugs: Was already covered in Dominik Schadow's talk, so nothing new here for us
  • ESLint with ScanJS: Allows scanning JavaScript for security bugs
  • Brakeman: The same for Ruby
  • OWASP Dependency-Check: Also the same as before ;-)

Then we got a live demo on Christian's machine showing us how ZAP behaves in attack mode, what the output looks like and how to tweak the configuration.

We also saw the XXE vulnerability being exploited to read the passwd file of the server. Even though this is nothing new from a theoretical point of view, it is something different to see it actually happen.

This was followed by a discussion until we finally decided to call it a day... ufff....

Day 2

Thorsten Maier & Falk Sippach - Agile Architecture

TBH I expected a different kind of talk here, something more along the lines of what a flexible and agile architecture could look like to suit changing needs. But that is something that probably does not even exist, so I should not have been surprised that it was actually about how to create and maintain an architecture in a dynamic environment ;-)

The basic problem Thorsten and Falk try to approach is that a system architecture is usually something very fundamental that can't be changed easily afterwards while an agile environment often demands changes late in the project.

What they presented were 12 different tasks an architect, or rather the team, should perform in cyclic iterations to see if the architecture still fits the project's needs.

This cycle consists of 4 phases: design, implement, evaluate, improve. Much like the typical scrum process. The details can be seen on the slides; I'll just summarize the main points:
  • gather the actual requirements, not just abstract wishes; make sure you know what your stakeholders need and who you are dealing with
  • try to compare different architecture approaches, e.g. using ATAM or other metrics
  • document your architecture (in words and graphics) and explain it to other people to get early feedback and sanity checks
  • try to have documentation generated during the build, maybe have automatic code checks to find violations (not really sure about that tbh, that is what code reviews are for imo)
  • check if the requirements have changed and if the architecture still suits the needs
This all sounds very well, and it sure can help to come up with a good architecture at the start and enable your team to stick to it. But the problem persists that if the requirements change drastically in a late phase, you will have the wrong architecture. So I don't really get how that addresses the problem at hand.

What I take from the talk is that it might be worth checking out arc42 and PlantUML for automated architecture documentation.

Raphael Knecht & Rouven Röhrig - App Development with React Native, React and Redux

Well, what can I say. I have no clue about React or Redux, so this talk contained a LOT of new information for me and I think I am still trying to digest all of it.

Raphael and Rouven gave an overview of what React, React Native and Redux are, what you can do with them and what their experiences were. First off, with all of those tools you write JavaScript or JSX, which claims to be a statically typed JavaScript. React is a view framework, so it handles only rendering. React Native compiles or transpiles or whateverpiles the JavaScript (or JSX) code into native code for mobile devices. Redux handles everything apart from the view: it provides a state that can be changed using actions and reducers, where the action states what should be done and the reducer how it should be done.

For details on how this is coupled I again have to refer you to the slides, as the diagrams explain it better than I could. An important notion is that a reducer is a pure function, so there are no side effects in those. But for some aspects you need side effects, like logging for example. To handle those concerns Redux uses something called middleware.

There are two different middleware implementations (at least in this talk), Redux-Thunk and Redux-Saga, both with their own pros and cons. The speakers ended up using Saga, as it seems to be more flexible in handling complex requirements, whereas Thunk tends to end up with large actions that are not maintainable.

Reasons why React/React Native/Redux is cool:
  • good tool support for dev and build
  • hot and live reload in the IDE
  • remote debugging
  • powerful inspector
Reasons why React/React Native/Redux is sometimes not so cool:
  • not everything can be done in JS/JSX, sometimes you need native code
  • it is currently at version 0.51.0 and there can be breaking changes
  • sometimes not all dependencies are available in compatible versions due to different release cycles
Maybe, when I have time, I can try making an app with React. *dreams on*

Dragan Zuvic - Kotlin in Production: Integration into the Java Landscape

tl;dr - I think that was a great talk. I like Kotlin, at least what I have seen of it so far, and it was good to have someone explain not only the basic but also the advanced features as well as the issues of this language.

I would say Dragan is a Kotlin fanboy; he is very enthusiastic about the language features and the tool support, which indeed is really good. We started with the basic overview of the language features:
  • type inference
  • nullable types
  • extension functions
  • lambdas
  • compact syntax
    • no semicolons
    • the it keyword in lambdas
    • null safety operator
    • Elvis operator
To name only a few. What was also interesting to me is that by default Kotlin code is compiled to JRE 6 bytecode, but you can still use fancy new stuff like lambdas. Alternatively, you can compile to JRE 9 bytecode with module support or even to native code... wooot...
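
A few of those features in one small sketch of my own (not from the talk):

```kotlin
// Extension function: adds behavior to String without touching its source.
fun String.shout(): String = uppercase() + "!"

// Nullable type: name may be null, and the compiler forces us to handle it.
fun greet(name: String?): String {
    val who = name ?: "stranger"     // Elvis operator as null fallback
    return "Hello, ${who.shout()}"   // note: no semicolons anywhere
}

fun main() {
    val names = listOf("Ada", null)  // type inferred as List<String?>
    names.map { greet(it) }          // the it keyword inside a lambda
        .forEach(::println)
}
```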

Most of the talk revolved around interoperability between Kotlin and Java. To make it short: it works pretty well. Calling Java from Kotlin seems to be no problem at all. The other way round has a few issues, but usually it is also not much of a problem.

Falk Sippach - Master Legacy Code in 5 Easy Steps

A live coding demo of the techniques you can use when dealing with legacy applications. There are already some books about this, but most of them assume that a) the legacy code is testable and b) you already have tests.

That is not always the case. So what do you do when you have code that can't be tested and that also does not have tests (which is not that unlikely, since it is not testable... d'oh)?

Before you actually start getting to work you need to make yourself aware of two necessities:
  1. only perform very small, isolated steps, so that you cannot break too much and can stop at any time
  2. do not start fixing bugs along the way; you do not know if those bugs are accepted behavior for clients, or maybe they aren't bugs after all once you see the big picture - preserve the current behavior.
The following techniques can help you get control over your legacy code:
  • Golden Master: Even if you do not have tests, you can usually record the system behavior for a certain event in some way. That can be log output or the behavior of your UI when you enter certain data. You can record this behavior, make small changes and then perform the same action again to see if the behavior is still the same. This is no guarantee that you did not break anything, but it is an indication that you did not break everything. So if your steps are small enough this can be a good way to change your code towards testability.
  • Subclass to test (see the sketch after this list): Some classes cannot be tested because they have certain dependencies that make tests hard or unpredictable. You can try to subclass those classes and then overwrite the methods relying on those dependencies to return a static value, allowing you to test the rest of the class.
  • Extract pure functions: Split your code up so that you can isolate pure functions where possible, making your code more readable.
  • Remove duplication: That is always a good way to remove lines of code.
  • Extract class: Similar to extracting pure functions but on a larger scale. That way you can create cohesion by extracting small logic parts into separate classes.
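
Here is a minimal sketch of the subclass-to-test technique, with a made-up legacy class whose clock dependency makes tests unpredictable:

```kotlin
import java.time.LocalDate

// Pretend this is legacy code: the call to LocalDate.now() makes
// test results depend on the day you run them.
open class InvoiceChecker {
    protected open fun today(): LocalDate = LocalDate.now()

    fun isOverdue(dueDate: LocalDate): Boolean = today().isAfter(dueDate)
}

// Test subclass: overrides only the troublesome dependency.
class FixedDateInvoiceChecker(private val fixed: LocalDate) : InvoiceChecker() {
    override fun today(): LocalDate = fixed
}

fun main() {
    val checker = FixedDateInvoiceChecker(LocalDate.of(2017, 12, 14))
    // The rest of the class can now be tested deterministically.
    check(checker.isOverdue(LocalDate.of(2017, 12, 1)))
    check(!checker.isOverdue(LocalDate.of(2017, 12, 31)))
}
```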

N. Orschel & F. Bader: Mobile DevOps – From Idea to App-Store

This was also not quite what I was hoping for. The part that would have been interesting to me was rather short in this talk: how to handle the different build requirements for all platforms, what challenges there are in rolling the app out to the different stores etc.

The main part of this talk revolved around tool selection, like "do we use centralized or decentralized version control". Yes, that was an important question for the guys in the project, but in the end it does not matter much for the way the app is built and deployed, not even much for development. Code reviews can be done with any VCS; it is sometimes just more difficult.

Anyway, there was still some relevant info for me regarding how to test the different applications.

To build and test the iOS app you still need a Mac, but you don't want to buy Macs for every developer. The solution shown here looks promising: simply buy a Mac Mini with the maximum hardware profile and install Parallels on it. That way you can have lots of virtual Macs for your dev and test environments.

With Android you have the issue that there are thousands of different devices out there and you can't buy one of each just to test your app. Solutions for this can be found in Xamarin Test Cloud or Perfecto Mobile.

And finally Appium provides an abstraction for test automation.

Franziska Dessart: Transclusion – Kit for Properly Structured Web Applications

I literally had NO clue what transclusion means. And the subtitle did not enlighten me much either. So what is it? A new framework? A weird forgotten technique? Well... not really ;-)

Transclusion means that a resource can include other resources via links and those resources will then be embedded into the main resource.

Sounds familiar? Here are a few examples of transclusion:

  • iframes
  • loading web page parts via Ajax
  • SSI (Server Side Includes)
  • ESI (Edge Side Includes)
  • External XML Entities (remember the XXE exploit from yesterday? ;-))
The term is not new, it is just not that widely known. But with more and more micro service architectures it becomes more important.

Most of this talk targeted the different places a transclusion can occur and what considerations have to be made. Mainly, you want to assemble the primary content of a website before the user sees it, so if you have to use transclusion for it, you do not want to do it in the client. Secondary content, on the other hand, is fine to be transcluded on the client side.

With every transclusion choice you will have to take some aspects into account:
  • should the transcluded resource contain styling or scripting?
  • do you need/want to resolve transclusions recursively, and if so, for how many levels?
  • do you cache? If so, the final resource, only certain transclusions, or only the main resource?
All of those questions have to be answered separately for each use case.

Thorsten Maier: Resilient Software Design Pattern

Resilient software has always been important. Micro services hold new challenges in this area, though.

If we want high availability for our software, we can try to increase the time until a failure occurs and we can also try to reduce the time to recovery in case of a failure. Ideal would be a system where a failure is not noticeable from the outside because the system can recover instantly, or at least seems to do so.

Thorsten not only described the well-known patterns for these goals but gave specific implementation examples for most of them, e.g. using Eureka (from Netflix) as a service registry, doing client-side load balancing with Ribbon or using Zuul as an API gateway.
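
The talk demonstrated these mostly via Spring Boot annotations; as a dependency-free illustration, here is a sketch of a circuit breaker, one classic pattern from this family (my own simplified version):

```kotlin
// After too many consecutive failures the breaker "opens" and fails fast,
// giving the struggling downstream service time to recover.
class CircuitBreaker(
    private val failureThreshold: Int = 3,
    private val retryAfterMillis: Long = 5_000
) {
    private var failures = 0
    private var openedAt = 0L

    fun <T> call(action: () -> T): T {
        val open = failures >= failureThreshold &&
            System.currentTimeMillis() - openedAt < retryAfterMillis
        if (open) throw IllegalStateException("circuit open, failing fast")
        return try {
            action().also { failures = 0 } // a success closes the circuit
        } catch (e: Exception) {
            failures++
            openedAt = System.currentTimeMillis()
            throw e
        }
    }
}
```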

This was a nice change because usually those talks only cover the theoretical concepts but do not show you what tools you could use. But to be fair, usually the implementation was done by using an annotation within a Spring Boot application, so it was rather easy to show ;-)

Day 3

Mario-Leander Reimer: Versatile In Memory Computing using Apache Ignite and Kubernetes

Starting the day with some nerding, oh yeah!

Mario-Leander gave a great overview of the impressive capabilities Apache Ignite has. It seems to try to handle everything any middleware has ever done. While I am not a big fan of the "ultimate tool" pattern, it does have some really interesting features.

Mario-Leander started off with his reasons for using Ignite. Micro services should not have state, but applications usually do have state. So this state has to be put somewhere; usually that is some shared database. When your landscape grows this can become a bottleneck, or in some cases you need to access your data in different ways. This is where Ignite can help.

Ignite provides you with a distributed, ACID-compliant key-value store that is also accessible via SQL or even Lucene queries. That alone is quite a lot, but the list of features goes on for some time, including messaging, streaming and the service grid. The messaging component provides queues and topics just as JMS would do.
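
A small sketch of that key-value-plus-SQL combination, using Ignite's Java API from Kotlin (cache name and query are my own; configuration details are simplified, see the Ignite docs):

```kotlin
import org.apache.ignite.Ignition
import org.apache.ignite.cache.query.SqlFieldsQuery
import org.apache.ignite.configuration.CacheConfiguration

fun main() {
    Ignition.start().use { ignite ->
        // Distributed cache with SQL enabled over its <Int, String> entries.
        val cfg = CacheConfiguration<Int, String>("talks")
            .setIndexedTypes(Int::class.javaObjectType, String::class.java)
        val cache = ignite.getOrCreateCache(cfg)

        // Plain key-value access...
        cache.put(1, "Apache Ignite")

        // ...and the very same data queried via SQL.
        cache.query(SqlFieldsQuery("select _val from String"))
            .all.forEach(::println)
    }
}
```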

The service grid allows your software to deploy IgniteServices into your cluster; an IgniteService is a piece of code that gets distributed to your nodes. This service usually needs to access portions of the data, but depending on the cache mechanism not all data is available on all nodes. Ignite supports collocated processing, which means that processing takes place on the nodes where the data is, to reduce data transfer to a minimum.

The streaming component was shown in a live demo where Mario-Leander attached his Ignite to Twitter, querying for the conference hashtag and printing the tweet data to standard out. For most features the slides provide code examples that show a concise API.

The talk itself was very good and Mario-Leander was able to answer most questions in depth, showing his experience with the topic. It was definitely worth it.

Andre Krämer: Exhausting CPU and I/O with Pipelines and Async

Uhh... more nerding!!! Or not ;-) From the topic I was expecting some advanced techniques for asynchronous programming or maybe a project report about what problems have been tackled with async paradigms. Actually it was more of an introduction to async and why async is faster than sync.

The basics were that you should separate I/O- and CPU-intensive tasks in a consumer/producer pipeline so that the separate workers can be scaled independently to get the most out of your system.

Then we had a part about measuring I/O performance on a machine and what pitfalls can lurk there. There were some interesting aspects, but it focused a bit much on NTFS specialties which simply are not relevant to me. The conclusion was that you should be sure to measure the right thing if you want reliable results, e.g. small files are often cached in RAM and not read from disk.

Christoph Iserlohn: Monoliths – Better than their Reputation

This topic sounds pretty controversial given the current micro services hype.

A slow or cumbersome development cycle quite often is caused not by the software but by the organizational structure. When different departments with different agendas are involved (e.g. devs have different goals than ops or DBAs - features vs. stability), then you will run into problems that you will not be able to solve with micro services.

On the other hand, a monolith need not be bad per se. You can still structure your monolith into separate modules to mitigate most of the well-known disadvantages of monoliths. There are even some aspects in which the monolith is superior to micro services:

  • Debugging: It is very hard to debug an application consisting of tens or hundreds of different micro services, and it requires a lot of work beforehand to be possible at all
  • Refactoring across module boundaries: In a monolith you (usually) have all affected components in one place and can spot errors quite early. In a micro service landscape it can be quite hard to make sure all clients are adjusted to or can cope with your changes
  • Security: A single micro service is more secure than a large monolith, but if you consider the whole micro service application this is not necessarily the case, as you have to take the communication paths into account, which are often quite hard to secure
  • UI: With micro services a good UI is usually hard to build if the data comes from different services
  • Homogeneous technology: This can be an advantage as it reduces the complexity of the system and requires less skill diversity, but also a disadvantage as you can't choose a different technology for a module
The bottom line is that neither monoliths nor micro services are simply good or bad; it all comes down to your requirements and external parameters. So think before you build!

Nicholas Dille - Continuous Delivery for Infrastructure Services in Container Environments

This talk focused strongly on the Ops perspective of micro service platforms and, like many others, started off with the basic reasons and advantages of automation for software deployments, like reproducibility, ease of deployment and stability.

To achieve that, the automation must be easy to use, well tested and standardized. That is why "everything as code" is so important: it allows us to create exactly that. According to Nicholas, this is where Ops still can and needs to learn from Devs' experience in software engineering.

Another method to improve the build process is micro labeling for containers. This means labeling each build with a set of properties to make it identifiable and to let us see which source code version it reflects. Those properties can be:
  • artifact name
  • repository name
  • commit id
  • timestamp
  • branch name
To further help with automation you should use containers to deploy your applications, as that means you have one single way of deploying. Containers also make monitoring easier, as the containers can collect the monitoring data from the application and expose it to the monitoring infrastructure. That way you can focus on the actual evaluation instead of on collecting data.

One issue that arises with containers is that a container should be stateless, but most applications do require state to function properly. Instead of using host volumes mounted into Docker, Nicholas presented a different approach. Ceph is a distributed storage system that can also run in a container. That way applications can store their data in Ceph, but of course if the Ceph container dies the data is lost. So they set up a cluster of Ceph containers: if one dies it gets restarted by your orchestration software (Rancher in this case) and the new container syncs with the existing cluster within minutes.

I think this is an interesting solution, but I am not sure I would feel comfortable knowing that all my data resides in memory. Maybe they only use it for non-critical data that does not have to be persisted at all costs, but I can't remember anymore.

Even though I was not exactly the target audience I took quite a bit from this talk and think Nicholas showed a lot of experience in this area. Especially while answering all questions in depth, two thumbs up!

Christian Robert - Doctors, Aircraft Mechanics and Pilots: What can we learn from them?

Well, the title really says it all. Other professions have developed techniques to function in high-pressure situations, so why not try to learn from them?
  • Pilots
    • All information in the cockpit -> Have all monitoring data aggregated in one place so that you can spot problems at once. 
    • Standardized language -> Have a common vocabulary and a common understanding of it, e.g. what does "Done" mean?
    • Clear responsibilities -> Communicate in a clear and expressive way so that you are sure your colleagues understand. Also vice versa tell your colleague that you did understand.
    •  Checklists -> Always use checklists for important or complicated tasks, no matter how often you have done them in the past. Routine is one of the greatest dangers for your uptime.
    •  Plan B -> Always make sure you have a plan when the shit hits the fan
  • Aircraft Mechanics
    • Prepare your workspace -> Well, this analogy did not work that great. It refers to the fact that an aircraft mechanic would not repair a turbine that is still attached to the plane. He would dismount it and then bring it to the workshop. In software terms you could translate it to: build your software in a modular fashion so you only have to patch an isolated component.
    • Tool box -> Use the tools you know best and that suit the task at hand
    • Visualize processes -> Use a paper Scrum/Kanban board so people can move the post-its themselves. This is one point where I disagree.
    • Document your decisions -> Documentation for system architecture and other project decisions so everyone can understand them.
    • Bidirectional communication -> Make sure everyone can follow the progress by using issue trackers like JIRA.
  • Doctors
    • Profession vs Passion -> Doctors want to help people but sometimes they have to deal with bills and pharma consultants. In every job there are tasks that you have to do even though you don't like them, e.g. meetings.
    • Understand your patients -> When talking to non techies we have to explain things so that they can understand. No tech mumbo jumbo.
    • Prepare as well as you can -> A doctor does a lot of preparation for an operation by talking to the patient, running tests and making sure he has all the information he needs. Similarly, we should prepare before implementing features by gathering all requirements and information from the stakeholders.
And one thing they all have in common: "Don't panic. Stay calm." It does not help to have 10 people fixing the same bug or doing things just for the sake of it. Focus on what the issue is, how bad it really is and what is needed to tackle it.

Uwe Friedrichsen - Real-world consistency explained

Disclaimer: To understand this part you should know about ACID, CAP and BASE.

I don't know how to say this differently: This talk scared the bejesus out of me. Why, you ask? You will see.

Relational databases with ACID, that is what most of us started with. A simple MySQL, or maybe Oracle or something similar, and we were happy. All of those had ACID, so we could simply code on and be sure that all of our database operations would complete and that we would not lose any data or even end up with inconsistencies.

But unfortunately this was never the case. It all has to do with the 'I' in ACID: Isolation. Uwe did a great job explaining the technical details but here is the short of it (especially since I can't repeat it ;-)): ACID in its textbook definition can only be achieved when the database uses the highest isolation level, called "serializable", but hardly any database uses this due to serious performance impacts. Usually a database uses "read committed" or "repeatable reads"; those do provide some isolation, but not perfect isolation. That is why you are very likely to see some kind of anomalies in every database. YUCK!!
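
To make that tangible, here is a minimal JDBC sketch (the connection URL and credentials are made up); the isolation level is exactly the knob Uwe was talking about:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class IsolationDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details -- replace with your own database.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost/test", "user", "secret")) {
                // Most databases default to READ COMMITTED or REPEATABLE READ,
                // i.e. something weaker than the textbook ACID isolation.
                System.out.println("Default isolation: " + con.getTransactionIsolation());

                // Explicitly request full, textbook isolation -- and pay the
                // performance price mentioned above.
                con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            }
        }
    }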

So what about NoSQL databases? We have to deal with the CAP theorem but that is what BASE is for, so they all provide eventual consistency, right? Turns out that is also not the case, but most people just don't know it. You have to configure your database properly to ensure eventual consistency. One quite common problem is that a database is chosen because it is hyped or sounds interesting, not because it suits the problem. But especially with NoSQL this is very important, since almost every NoSQL database was designed for a special purpose. Cassandra, for example, was designed to be as available as possible. As long as one node is alive you can write data into it. But consistency? You wish...
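
To illustrate what "configure your database properly" can mean in practice, here is a hedged sketch with the DataStax Java driver (3.x API from memory; the keyspace and table are made up), where you choose the consistency level per statement:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class TunableConsistency {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("demo")) {
                SimpleStatement read = new SimpleStatement("SELECT name FROM users WHERE id = 42");
                // The default (ONE) answers as soon as a single replica responds,
                // so stale reads are possible. QUORUM on reads *and* writes is the
                // usual recipe for read-your-writes behaviour.
                read.setConsistencyLevel(ConsistencyLevel.QUORUM);
                session.execute(read);
            }
        }
    }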

Check out the slides. The research Uwe has done is impressive and I can't repeat half of it here.

Another problem with NoSQL is that, since there is no ACID but BASE instead, our programs have to take that into account. So the programming models are not as simple anymore as they were with SQL databases.

So are we stuck with either ACID, which does not work as well as we thought, or BASE, which makes programming much more difficult? There are a lot of things between ACID and BASE, like Causal Consistency or Monotonic Atomic View. There is still a lot of research going on in this area, but in the end our applications will have to do their part to make sure our data is safe.

I really recommend checking out the slides, as this is a very interesting and also very important topic.

Sven Schürmann & Jens Schumacher - Container Orchestration with Rancher

A hands-on report by two guys who spent the last year working with Rancher. To keep it short: it was a good overview of some key features of Rancher, which I will try to summarize below.

But first: why do you need Rancher? Can't you do orchestration with Kubernetes? Yes you can, but there is more to a container platform than plain orchestration. You still need stuff like access control, handling deployment on different stages, defining exactly how a cluster should look, and so on and so forth. And this is where tools like Rancher come into play. For the plain orchestration you can use Kubernetes in Rancher, or other tools like Docker Swarm or Rancher's own Cattle. As of 2.0 Rancher will use Kubernetes by default.

Rancher Features:
  • Machine Driver: With this you can create new hosts dynamically. It supports various cloud providers but can also include custom scripts.
  • Rancher Catalog: Used to define templates for each service. A template consists of a docker-compose.yaml and a rancher-compose.yaml, where rancher-compose follows the docker-compose mechanics. You can integrate it with a VCS so you can always roll back to older templates. You can use variables in the docker-compose file (like Spring profiles for an environment) and define the allowed values in rancher-compose, which can then be selected when applying the template.
  • Infrastructure Services: 
    • Healthcheck: Knows all containers running on a host and monitors them closely
    • Scheduler: Works together with Healthcheck. When a node is down the scheduler notices that there are fewer containers running than configured and starts up a new one.
    • Network: You can define different network types for your containers depending on where you are running, e.g. whether you need IPSec or not. Includes simple load balancing and service discovery.
    • Storage: You can configure the storage type and Rancher will apply the appropriate plugins. So far you can choose between NFS, NetApp SAN and Amazon EBS.
This was only a small insight into Rancher and its features, but it got me interested and it shows that for larger landscapes you need more than "just" Kubernetes.

Dr. Jonas Trüstedt - Docker: Lego for DevOps

The last talk of the conference. Once more containers ;-) Quite a lot of this talk has already been covered by others, so I will not repeat that. What I took from this talk was a short comparison between Rancher and OpenShift:
  • Rancher is easier for a first test setup, whereas OpenShift requires quite a bit more configuration to get started
  • OpenShift has a deeper integration with development tools, e.g. you can just tell it to check out a git repo with Ruby code in it, and OpenShift will create a new image based on a Ruby image and push that new image to your registry
  • OpenShift uses Ansible, so you have to install Ansible and at least get acquainted with it
  • According to Jonas OpenShift is more powerful

Summary

Thank you for reading this far, or at least for scrolling down. I hope you found at least some useful information in this post. Please let me know if you spotted an error or any misunderstandings on my part. After three days and a lot of different topics it is quite possible that I messed something up. And as always, any comments are welcome :-)

Montag, 13. November 2017

Cambridge Double Trouble 2017

After a few years of only going to continental and mainly German tournaments I decided it was time to head over to the island again. By browsing TFF for upcoming British tournaments I finally found the Cambridge Double Trouble. This tournament had a few selling points. It has a rather unique rule set (only double skills allowed, with a 4-tier system and a rather low number of skills, at least for the tier 1-3 races), it is a team tournament for 2-player teams but you can just sign up and get assigned another random single sign-up (so you are bound to meet new people!!) aaaaand it is run by Schmee and Purplegoo.

Especially the latter point was important to me, as both are lovely chaps and the combination promised the best of both worlds. Schmee coming up with the wacky ideas for rules and evening entertainment and Purplegoo running the actual tournament with his routine and calmness. And what can I say, both delivered!

The travel connections were great and my hotel was only a 10 minute walk from the venue and 20 minutes from the train station. So all was set for a smooth trip. On Friday evening we had a get-together in a nice pub, The Castle Inn, closer to the city center, so I had a nice evening walk through the living areas of Cambridge to get there. We were only a small group of 6 people in total and only 3 were actually involved in the tournament (Schmee, Pipey and myself), the others being Schmee's lovely significant other Kate and two friends, Mike and Matthew. So apart from the traditional Blood Bowl nerd talk we also had "regular" conversations and quite a few entertaining games, including "cricket trumps" and "take it away", rounded off by some beers and pub food. A perfect start to the weekend.

Saturday morning started slow due to the landlord not being up in time to let us in, but as this was the same as every year (according to everyone involved) no one was either surprised or stressed, and with a bit of delay we were able to get started. The first step was to meet my random team mate Wolimorb, a rather new player with a few tournaments under his belt. To our surprise we were both using Humans with a similar roster but different skills. While we both had an Ogre with Block, I opted for the lame 4 Guard Linemen variant and he took more fancy skills on his Blitzers like Dodge, Sidestep and even Leap. Especially the Leap Blitzer was used several times to great success and caused some serious headaches for his opponents! Go Woli!!!

In round 1 we were paired against Howlinggriffon (Skaven, vs me) and Mr Frodo (Underworld, vs Wolimorb). The Skaven had a rather crappy start, only producing one knockdown from roughly 12 block dice against my Linemen. In the end everything was rather tied up and I was able to chain push the screening players away from the ball carrier and take him down. From there it was a few turns of back and forth, but finally I was able to secure the ball and score in Turn 8. In the second half attrition really hit the Skaven and there was not much to be done, so I was able to get a safe 2-0 win. Wolimorb drew his game 1-1, so we went straight to the top dogs :-)

Round 2 saw two Underworld vs Human matches where I faced besters while Wolimorb took on VultureSquadron. besters had to kick and the ball landed exactly on the LoS, one square from the sides, which led to the inevitable Blitz! kick-off result. The Underworld moved Skitter Stab-Stab under the ball and managed to build a screen around him before he caught the ball. So much for my offense then. Even though I got the ball loose in the next turn, the sewer scum managed to recover and retreat. The following turns were dominated by a tight defense from my end, forcing lots of rolls on the Underworld and trying to maximize damage. Unfortunately it did not pay off at first. Only in the very last turn was I able to regain the ball and score a touchdown, after having to do a 3+ dodge, 2 gfi, 3+ pickup, 4+ pass, 4+ catch and 3+ dodge. All after the reroll was gone for taking down Skitter. In the second half the Underworld offense tried to advance, but I kept up the pressure as before, so it was a tight affair. In the end I was not able to do enough damage/force enough turnovers, so the game ended in a 1-1 draw. My team mate got a whopping 0-0, so we were still unbeaten.

Round 3 paired us off against the later winners. Wolimorb had to play Pipey with his Undead, who would later finish the tournament as best individual with an impressive 4/2/0. I got to play peo2223 with an interesting Norse roster, consisting of Wilhelm Chaney, a Block Snow Troll and 11 Linemen containing 2 Guarders and a Leader. I got the offence, and again the kick-off was a Blitz!. This time the kick was deep, so the Norse broke through and tried to put as much pressure as possible on the ball. I barely managed to recover, but the Norse refused to die and started to do some damage themselves. Especially the Snow Troll was very reliable and caused some major headaches on my side. With some luck I could advance in the opposing half and was finally able to squeeze through the Norse screen and score in Turn 7. The Norse threw everything they had into the 2 turn drive and Wilhelm went off with the ball. I got him down and tried to secure the ball as well as possible. But it was not enough, and Wilhelm showed he is worth his paycheck by making a dodge, a pickup and 2 gfi to score the 1-1. In the second half the kick went way over to the sidelines and Nuffle struck the Norse: failed pickup, ball gets thrown in and lands just behind my defense line. I got the ball up and tried to secure it as well as possible. peo on the other hand was determined to get it back, and a few turns later my numbers started to crumble. It was a massive back and forth around the LoS until I saw a small chance for an opening and advanced. Then in turn 7, when peo decided to go for the draw instead of trying to win (probably the right decision at this point), I was able to make a bold move and sent the ball carrier through the lines. Unfortunately the much needed screening failed due to a dodge, and so the ball carrier stood there, rather scared, awaiting his doom. peo had to do some rolls but managed to get 2 dice on the ball, which luckily for me were not enough. So in the last turn I cranked up the boldness some more and scored with a 4+ and a 3+ dodge. Phew... On the other table Pipey was not so cooperative and gave my team mate a rather good whooping, so we ended with a second draw. Still awesome!!

The Saturday evening saw a collective dinner at a nice curry place. Schmee had organized the order beforehand, and despite some confusion with the order and the bill afterwards everything turned out OK and we enjoyed our meal and some interesting conversations with tournament veterans and newbies. Afterwards we went to a nearby pub with live music, played some pool (or rather the English played and I tried not to scratch the table too much) before we got schooled by some local semi-pros ;-) This was followed by more talking and drinking until the pub closed down and we headed off to our beds.

Sunday morning gave us a new matchup: I got to play Darkson with more Norse, this time with Dodge Blitzers and a Mighty Blow Runner but no stars. Wolimorb had to take on Moodygits Undead. This time I had to play defence from the start, but with pouring rain the Norse failed the first pickup. So I tried to swarm the opposing half to put on some pressure, after having a Guarder sent to the KO box. In the second turn the Norse got the ball up, caused some stuns and a second KO that I apoed. Due to constant high AV rolls the Norse managed to advance. After a few turns I was able to put pressure on the ball, but the Dodge Blitzer survived a total of 8 block dice and managed to dodge out a few times, only to finally fail the very last dodge into the end zone. This caused some understandable distress on the other side of the table ;-) In the second half I tried to punch through the Norse but only had moderate success. At one point I had to take a gamble to move around a Norse bulk. The original plan was to use the ball carrier for a chain push into the fans, but since the first action resulted in a double skull I had to readjust a bit, and so a Lino had to perform that blitz, which meant that the ball carrier was not completely safe. So Darkson got a 3+ dodge and 2 gfi for a 1 die block on the ball carrier, which worked. The real problem was that the ball bounced to the fans and was of course thrown way back into my half. From then on it was 3-4 turns of knocking the ball loose, failing to pick up, putting tackle zones on the ball etc. from both sides. At one point I got the ball back with my catcher, but there were two Norse Linos that could reach him with one dodge each. The others were behind a screen. I figured it would be better if only one of them was standing, so there would be no second player to pick up the ball afterwards. That is why I tried the block with my Ogre, who boneheaded. That way the Lino that should have been blocked was able to open the screen and the Mighty Blow Runner could blitz the ball carrier. At first I thought it was a mistake to try the block, but now I am not so sure, as a knockdown would have been really helpful. In the end it backfired and we kept on struggling for the ball. I got it back again, but had too many players out for protection, so the carrier got knocked down again. In the last turn I then failed the pickup due to the rain. So it ended in a 0-0 draw. Luckily my team mate managed to hold a draw as well, so we did not lose this round either.. wooot wooot!!

After a tasty breakfast/lunch break, round 5 paired Wolimorb against Lycos with his Halflings. He was very successful with them last year, but this weekend the little ones had a hard time, so good omens for Woli! For me, J_Bone brought some Chaos Pact with fully packed Big Guys, Claw on the Mino and Block on the other two, plus a Chainsaw. After losing the coin toss I decided to sacrifice some Linos, but Nuffle threw a Blitz! over the fence together with a kick in the middle of the opposing half. So I tried to capitalize on this, blitzed through the screen and tried to push a Blitzer forward, but failed. So there was not much benefit. But J_Bone's Big Guys and the Chainsaw did have some trouble getting started. A lot of failed nega traits and kickbacks were a dominant factor in this half. In the end I got the ball and managed to score at the end of the half. The second half started with.. another Blitz! This time for the Chaos Pact. J_Bone got quite a bit more out of his Blitz! than I did, and so it was pretty tough to get the ball safe. After a bit of rolling and praying I managed to advance and finally scored in the middle of the half. So we got our third kick-off and of course, the third Blitz! This put the final nail into J_Bone's coffin, and even though his Big Guys decided that the second half was the time to maim some humans, he was not able to score. Had his player removal worked that well in the first half I would probably not have had any chance, but the dice were on my side and so I got a lucky 2-0 win. On the other table Lycos' luck seemed to have come around and he creamed Wolimorb with the little fellas. But still, undefeated!!

Final round!! It feels weird when you are 3/2/0 after five games and THEN get paired with Halflings. But deeferdan2383 was on 3/1/1 with his and so posed a serious threat. Wolimorb faced MrZay's Orcs. The coin toss sent me into defence and I lost 2 rerolls. I decided to put the Ogre on the LoS despite the Dirty Player Halfling. The idea was that the Ogre would be the fouling bait, as his Thick Skull gives good odds of staying on the pitch compared to the fouler being sent off. Well, the first block killed the unskilled Lino and the foul also caused a cas on the Ogre. The Dirty Player was sent off and the apo saved my Ogre. After that I killed a Halfling, so we were at 9 vs 9 and I started to swarm the Halflings to force dice rolls. Unfortunately the Halflings refused to fail their dodges or die from my blocks, while none of the trees (including Deeproot) failed their rolls. So I really struggled to keep them at bay, and it did not help that Dan played it very solid ;-) In turn 7 came the inevitable Throw Team Mate attempt, which worked perfectly, and so the little ones went up 1-0. So the Humans set up for the two turn touchdown, which is quite hard against a LoS full of trees. But the fans decided to create some havoc and so we lost a turn due to a riot. So in the second half the pressure was on. I wanted to take down the two regular trees and surround them so the stand-up rolls would be hard enough for at least one to stay down. The problem was that before I could do all of that I had to get the ball safe, but my thrower screwed up. So I was in a rather bad position and the trees AND Halflings took out more and more players. I tried to break through, but a successful Throw Team Mate gave Dan a chance for a 2 dice wrestle blitz, which worked. The ball was thrown in but only 2 squares and so landed next to the Halflings. In the end I was not able to recover. The trees and Halflings were on fire, not giving me the least of chances in this game. Well done, Dan!! Since the Orcs were also not in the mood to give away points, this sealed our first and only team defeat :-(

Then it came to the awards. As expected from the match results, Pipey and peo won the team competition and Pipey was best individual player. deeferdan came second with his Halflings and of course got the "Best Stunty" award. At this point something unexpected happened. Since Pipey and Dan both already had an award, the trophy for "Best Individual Player" went to the 3rd place, which was me... This took me completely by surprise, as it seems a bit odd to give that award to someone who had not actually won it. But everyone seemed to be okay with that, and so I gratefully accepted the trophy. I was even more surprised when I realized that with the award also came a "real" prize. In my case I got one of the new boxed Dwarf teams from Games Workshop.

As expected the crowd soon departed, as everyone was eager to get home in time. So I decided to stay in the pub for dinner, trying the traditional Sunday roast (when in Rome..) and enjoying the atmosphere.

To sum it all up, this tournament was an awesome experience on and off the pitch. So if you fancy a unique ruleset with great competition, good sportsmen and a well-rounded supporting programme, you should definitely go to the Double Trouble!! I can only recommend it to everyone. Thanks again Schmee and Purplegoo for organizing this, and everyone else for the great inclusion and entertaining conversations.

Freitag, 23. September 2016

Humans Painting

A lot of time has passed since I painted my last team. But as I was stuck on the couch for the last weeks due to a juicy knee injury, I finally got it done and painted my human team.

The minis are an Elfball team from Impact, so the positions were not exactly like those on the Blood Bowl Humans and I had to juggle them a bit.

So two of the five safety players, which kind of resemble the blitzers, ended up being linemen, as I liked the lino postures better for my blitzers.

As usual the painting is not that great, but considering my lack of practice and the fact that most of my colors were very old and pretty claggy, I am still rather happy with the outcome.

Sonntag, 7. August 2016

Devoxx UK 2016


Ambience and Background


Three years ago London saw the first installment of Devoxx UK, which I had the pleasure to visit. This year I decided it was time to come back and see how the conference has evolved and if its value for money is still as good as it was.

To make it short: yes it is! For a very reasonable ticket price you get two days full of talks spread over five tracks, decent catering and a free water supply. The Business Design Center is a nice location and of course London is always worth a visit.

In order to keep the ticket prices low the organizers have to bring in a lot of sponsors, so the lobby was packed with stands. IMO this is not necessarily bad, at least you can get a lot of goodies ;-)

Another cool thing is that all talks are taped and the videos are all uploaded to YouTube. So if you want to watch the talks, just go to the Devoxx UK Channel.

Day 1


Dot Con - James Veitch


The opening talk was a great start to the conference: a very funny narrative of what happens when you actually reply to scammer emails (like the Nigeria Connection etc.), but in a special way.

James gave a summary of his experiences messing with the people behind those emails, almost driving them mad in the process. Usually I am not a fan of harassing people, but if it happens to frauds who try to steal the savings of innocent people who do not really understand that there is a whole con industry out there, I strongly approve!

I highly recommend you watch the video to this talk, as no summary would do justice to it. It was hilarious.

Unfortunately it seems this video was not uploaded to the channel.. real pity there...

Embracing Failure - Mazz Mosely


I chose this talk because I remembered Mazz from a talk at the first Devoxx which was pretty good, so I gave her another shot. You noticed immediately that she was very nervous speaking in front of a rather large audience, but apart from that the talk was good. The topic itself was more about what bad management looks like, especially when a project is in a bad phase. She drew the picture of the typical jerk boss stereotype (presumably from her first job) who responds to delays and errors by putting on more pressure while taking all the credit for success.

The story seemed to take a turn for the better when she told us about a meeting where she, as a young developer, dared to speak up in a review meeting, suggesting some improvements. Against all expectations the more experienced team mates backed her up, and it seemed that from there on all would turn out well. But no, afterwards she got snubbed by the manager and it was clear that there would be no happy future for her in this job. To make everything even sadder, the manager got promoted for the work the team did and no credit was left for those who actually saved the project.

In the end her moral was that you should speak up and try to make your workplace a better place, even if it is hard. But if it turns out that there is no way to make progress and that you will not be happy where you are, then move on and find people who share the same values as you do.

Where's my free Lunch? - Hadi Hariri


For me, this one can be summed up pretty briefly. There are a huge number of online services that are, at first glance, free. You can use them and no one is charging you. Hadi showed in detail how you actually are paying for things like Google search or Facebook, just to name a few big ones. The bottom line is: you should not just blindly use everything out there, but make sure you protect your data and understand what the service you are using is taking from you in return.

Dials to 11 - Modern Extreme Programming - Benji Weber


Cool talk revisiting the principles of Extreme Programming and Agile Development, plus a few ideas and principles building on that. The concept I found most intriguing was Mob Programming: using the brain power of the complete team to code complex core modules sounds like a really good idea. Other concepts revolved around Continuous Deployment and Test Driven Development, which did not strike me as too revolutionary. The final point, which I had to agree with, is that projects have to be refactored on a regular basis, especially when maintenance, testing, monitoring etc. are getting too complicated.

Cybercrime and the Developer. How to start defending against the darker side of the web - Steve Poole


Now that topic did sound interesting. I was looking forward to hearing about best practices against cyber attacks, backgrounds of the business etc. But in the end this was just the same old story of "check your app for security issues" and "the bad guys are out there", combined with some numbers on cyber crime and links to checklists for security issues. Breaking news.. And the biggest wtf moment was a short advertisement break for a sponsor. Well, I am not going to get that time back...

Arduino and Java with the Intel Galileo - Simon Ritter


For this one, I was probably not the intended audience. I think I would have got more from it if I had at least some practical experience with Arduino. For a while it felt like an advertisement for the Intel Galileo, but it soon turned into a report of the issues the Azul guys encountered when trying to get Java working on it.

Overall the style of the talk was good and it was interesting to hear what problems can occur on embedded platforms.

Extreme Profiling: Digging into Hotspots - Nitsan Wakart


Okay, this one went right into the nerd stuff. Nitsan gave a great overview of different profiling tools like VisualVM or Java Mission Control, compared them to what flame graphs can do with Java 8, and showed how to use perf to help find performance issues within a Java application. If you like this kind of stuff, just go and watch the video, it is really insightful.

On Polymer and Smileys... or Polysmileys - Carmen Popoviciu


Finally some hands-on coding! Carmen introduced the different features of Polymer and how it enables you to create reusable webcomponents. This was one of the talks where you saw from the first second that the speaker really loves the topic. Carmen's passion and enthusiasm were very catching, and the code examples she showed were precise, clear and easy to follow. One of my favorite talks of this conference.

Polymer abstracts away the browser differences for webcomponents, but of course the webcomponent standard is still changing, so Polymer will have to adjust to those changes once they are final.

Busy Java Developers Guide to Hacking in Java - Ted Neward


Another talk with 10/10 nerd points. Ted had prepared a variety of special topics concerning tweaks to the JVM and the JDK. Basically it was an in-depth tour of some of the more exotic settings you can use to alter compiler and runtime behaviour. I must admit I can't restate them here, so again I recommend watching the video. Ted's expertise combined with his presentation skills and his relaxed and easy attitude made this a very enjoyable talk.
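
I can't restate Ted's concrete examples, but to give a flavour of that kind of exploration, here is a made-up toy program together with a few real HotSpot switches that expose compiler behaviour:

    // HotBench.java -- a tiny hot loop, just enough for the JIT to kick in.
    public class HotBench {
        public static void main(String[] args) {
            long sum = 0;
            for (int i = 0; i < 100_000_000; i++) {
                sum += square(i);   // hot enough to get compiled and inlined
            }
            System.out.println(sum);
        }

        static long square(int i) {
            return (long) i * i;
        }
    }
    // A few HotSpot switches to observe what the compiler does with it:
    //   java -XX:+PrintCompilation HotBench
    //   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining HotBench
    //   java -Xint HotBench          (interpreter only, as a baseline)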

Knowledge is Power: Getting out of trouble by understanding Git - Steve Smith


I must admit, Steve knows Git. He gave an in-depth overview of the basic principles Git was built upon. This started with the structure of the .git directory and how objects are created and stored, and ended with the power that the reflog gives you. In essence: if you screw up your git project, check the reflog to find the hash before it all went wrong and then revert to that. For the details on how to do that, just watch the video.
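
For the record, here is a rough sketch of that recovery, scripted with JGit (my own example, assuming HEAD@{2} is the state you want back; with the plain git CLI this would be git reflog followed by git reset --hard):

    import java.io.File;
    import org.eclipse.jgit.api.Git;
    import org.eclipse.jgit.api.ResetCommand;
    import org.eclipse.jgit.lib.ReflogEntry;

    public class ReflogRescue {
        public static void main(String[] args) throws Exception {
            try (Git git = Git.open(new File("."))) {
                // Every movement of HEAD is recorded in the reflog...
                for (ReflogEntry entry : git.reflog().call()) {
                    System.out.println(entry.getNewId().name() + " " + entry.getComment());
                }
                // ...so once you have spotted the hash before it all went wrong,
                // a hard reset takes you back there.
                git.reset()
                   .setMode(ResetCommand.ResetType.HARD)
                   .setRef("HEAD@{2}")   // assumption: two HEAD movements ago
                   .call();
            }
        }
    }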

One thing I do not agree with, though, is his hate towards git-flow. It is true that the schema looks highly complicated and weird, but once you understand the basic system it is rather simple, and intuitively most teams implement something very similar. I do agree with his opinion that you should only add as much complexity to your development workflow as necessary.

Git in Practice - John Stevenson


The last slot of the day was another Git topic, this time a Birds of a Feather session (more of a guided group discussion). John threw in some rather generic questions about workflows and team structures, and the group shared their experiences and opinions. There were no real surprises there: teams usually have some git-flow-like development process with feature and release branches that get merged back to master/develop. One thing shocked me though. Someone explained how their team uses code reviews to ensure code quality before merging a branch and, as I expected, there was a lot of nodding and agreement among the other participants. But one guy spoke up and said that he considers that sad because "you obviously don't trust each other". That was a huge wtf moment for me, but I was relieved that no one else shared this view. A team where this kind of attitude is common must end up with a highly inconsistent code base.. *shudder*..

Day 2


Microsoft and Open Source? Microsoft and Java? Really? - Giles Davies


Well, the title says it all, really. Like most IT guys I have some reservations when it comes to Microsoft, caused of course by their actions in the 80s, 90s and early 2000s. But seeing how they are now trying to get to grips with IT reality (e.g. the development of Edge), I figured I'd give them a chance to convince me that they are not that bad anymore.

The picture Giles tried to draw was one of a "regular" IT company that uses different technologies to provide their customers with the best possible service. While that all sounded fair enough, it was more or less what was to be expected from this kind of talk. How much of this is really as good as it sounds, only time can tell. But at least it is clear that Microsoft is trying to abandon its old ways, and so they will probably become a global player again, one that might rival Apple or Google.

Composing Music in the Cloud - James Weaver


Mr JavaFX talking again. As always his experienced speaking style made the presentation itself worthwhile. The topic was nice but it did not exactly blow my mind. Basically Jim showed how he was using CloudFoundry to host a Spring-Boot application that can assist in different ways of music composition, and how fast this can be done with modern technology.

If you are looking for some distraction after a work day with some kind of nerd flavour, this video is the way to go.

Java 9 Modularity in Action - Sander Mak, Paul Bakker


Oh yeah, some real bleeding edge stuff coming up. Sander and Paul gave a nice insight into the Java 9 module concept that is intended to change how Java applications are written and run. One of the main points for me is that this awful classpath "thing" is meant to die with Java 9. Instead, applications will be composed of modules which expose certain classes and functions and can be combined using requires and exports declarations.
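
For illustration, such a declaration lives in a file called module-info.java at the root of the module; all the names below are made up:

    // module-info.java -- a hypothetical application module
    module com.example.shop {
        requires com.example.billing;  // read another application module
        requires java.sql;             // platform modules are required the same way
        exports com.example.shop.api;  // only this package is visible to other modules
    }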

Does that sound familiar? Oh yes.. OSGI!!!! The concepts of OSGI are rather old; the last time I really had anything to do with it was back in 2003/2004. But so we meet again. Don't get me wrong, this is nothing bad. I really like the concept of OSGI, as it also allows you to have different versions of modules available at the same time etc. So I am excited to see how this works out when it comes to the "regular" Java world.

Of course it is not so easy to upgrade to this new version, at least not when you want to follow the new paradigm. First off, the JDK and JRE are using the module structure too, so if your application is doing something nasty with those resources, it is very likely going to break with Java 9. And how do you migrate to the module structure? Every application has a lot of dependencies and those have to be migrated as well. But that was taken care of: you can use jar files as so-called automatic modules, so you can upgrade your dependencies step by step.

The downside is that in practice those upgrades will not happen soon at most companies, because they are expensive and risky. So we will have to wait and see when Java 9 will truly take over.

Refactoring to Java 8 - Trisha Gee


Not so much bleeding edge, but still a very interesting topic, as Java 8 is, unfortunately, still new for most companies. In addition, a talk by Trisha is always worth listening to, and so was this one. What I liked most about her approach was that she did not do any of that fanboy "omg, you have to use all the features asap because this is new and therefore better, omg" stuff. Instead she took some of the most popular Java 8 features and examined them closely in terms of: How easy is it to migrate? How much can go wrong? And how does the performance change?
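
A typical candidate for such a migration is the classic filter-and-collect loop; the User type below is just a made-up example:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.stream.Collectors;

    public class RefactorDemo {
        interface User {                       // made-up type for the example
            boolean isActive();
            String getName();
        }

        // The pre-Java-8 way: an explicit loop with an accumulator.
        static List<String> activeNamesOld(List<User> users) {
            List<String> names = new ArrayList<>();
            for (User user : users) {
                if (user.isActive()) {
                    names.add(user.getName());
                }
            }
            return names;
        }

        // The Java 8 way: more declarative, but not automatically faster.
        static List<String> activeNamesNew(List<User> users) {
            return users.stream()
                        .filter(User::isActive)
                        .map(User::getName)
                        .collect(Collectors.toList());
        }
    }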

The results were somewhat disillusioning: not only did some patterns prove pretty tricky to migrate, but the performance analysis also showed that there was hardly any performance gain from using the new features, and in some cases performance even dropped.

The bottom line is, you have to decide for each language feature and each use case independently whether it is worth migrating, and you always have to run performance tests on the code base to compare the behaviour under high load. For everyone who is about to upgrade to Java 8, this video is a must-watch!

Faster Java By Adding Structs (Sort Of) - Simon Ritter


And some more Java action! This time about the ObjectLayout project from the guys at Azul. The idea behind it is to bring to Java the performance advantages that C/C++ get from their use of structs.

With a struct definition the compiler knows exactly how large each struct is and that all elements of an array sit next to each other, whereas in Java an array only holds pointers to the real data. Knowing the layout means the code can make assumptions about where a certain object is within an array and thus take shortcuts during execution.

This is what the ObjectLayout project tries to bring into Java. For details you better watch the video, as I am sure I will mix something up if I try to summarize it here. Pretty weird stuff.
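
Still, the underlying memory argument can be sketched in plain Java. This is not the ObjectLayout API itself, just an illustration of why a struct-like layout helps:

    public class LayoutSketch {
        static class Point {            // a regular Java class
            double x, y;
        }

        public static void main(String[] args) {
            // A Java array of objects holds one million *references*; every Point
            // is a separate heap object that can live anywhere, so iterating the
            // array chases pointers and trashes the CPU cache.
            Point[] points = new Point[1_000_000];

            // The struct-like alternative: flatten the fields into primitive
            // arrays. Iteration now walks contiguous memory, the way a C array
            // of structs would -- which is roughly the win ObjectLayout is after.
            double[] xs = new double[1_000_000];
            double[] ys = new double[1_000_000];
        }
    }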

Building multiplayer game using Reactive Streams - Michal Plachta


The last talk of the conference was again more hands-on. Michal showed us how Reactive Streams work and how they can be used to create a nice, simple multiplayer game in a live coding session. A nice topic, and the coding was rather easy to follow, so for me this was a pretty good end to the conference.
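
I don't recall Michal's exact stack, but the core Reactive Streams mechanics (a publisher, a subscriber and explicit demand signalling) can be sketched with the JDK's Flow API; the move strings are made up:

    import java.util.concurrent.Flow;
    import java.util.concurrent.SubmissionPublisher;
    import java.util.concurrent.TimeUnit;

    public class MoveStream {
        public static void main(String[] args) throws InterruptedException {
            try (SubmissionPublisher<String> moves = new SubmissionPublisher<>()) {
                moves.subscribe(new Flow.Subscriber<String>() {
                    private Flow.Subscription subscription;

                    public void onSubscribe(Flow.Subscription s) {
                        subscription = s;
                        s.request(1);                    // ask for the first move
                    }

                    public void onNext(String move) {
                        System.out.println("player moved: " + move);
                        subscription.request(1);         // backpressure: pull the next one
                    }

                    public void onError(Throwable t) { t.printStackTrace(); }

                    public void onComplete() { System.out.println("game over"); }
                });
                moves.submit("up");
                moves.submit("left");
            }                                            // close() completes the stream
            TimeUnit.MILLISECONDS.sleep(500);            // let the async consumer finish
        }
    }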


Summary


As I said before, the value-for-money ratio of this conference is still great. Lots of talks and a variety of topics to choose from, with some very good and experienced speakers.

Of course there are some rather minor issues, or rather things that I would not need, which I want to sum up just for completeness' sake.

In comparison to the first year there were fewer tables, seats and especially power sockets available. This makes it a bit cumbersome for us techies with all our electronic devices. I understand that the space is limited and the organizers preferred to put up more buffets for lunch, but still it would be great to get some more set up in the future.

There are also more talk slots that I did not mention, for one the Ignite talks, a series of 5-minute talks about small or humorous topics. I did not get much from them and I really don't see the point of those, but obviously there is an audience for that. The others are 15-minute talks during the lunch breaks. Usually you are in line waiting to get to the buffet and so don't have time to listen to them. It seems pretty ungratifying to give such a talk. But again, this has been in place since the first year, so some people seem to like it.

The one thing that really kinda bummed me out was the organization of the DevRoxx party. The party took place on the second evening, but it was not really advertised much apart from the closing talk of the conference. It was then that I took a closer look at the home page, finally found the section about it, and saw that you had to go to some of the sponsors to get a ticket for the party. At that point it was of course too late, but we decided to give it a try and went to the location, just to see that quite a lot of people did not know about the tickets. So it was not just my problem ;-) Maybe that is something to improve for the years to come.

The last thing I want to criticize is that even during the closing talk not only were the sponsor stands removed but the wifi was also taken down, which can be a problem for foreigners who have no data contract and who want to look up things, like e.g. the party location.

Even though this seems like a lot to complain about, I want to stress the point that this is a really good conference considering the cheap prices, and I am very sure you will not regret coming to it in the future.