Thursday, December 14, 2017

IT-Tage 2017 in Frankfurt

It has been a while since my last conference, but I finally managed to get myself signed up again. This is my first time at the IT-Tage in Frankfurt, a conference hosted by the German magazine Informatik Aktuell.

Three days with six parallel tracks seemed like a lot to choose from, even though three of the tracks mainly revolve around SQL database technologies. Those are not of much interest to me, but I was still looking forward to the conference because there are also plenty of talks about micro services, security and programming languages.

But before I go into detail about the talks, some general information. The venue is the Kap Europa Center in the heart of Frankfurt, easy to reach and rather classy looking. What strikes me as a bit odd is that all talks are held in German even though many use English slides. This seems to be intentional on the organizers' part and surely has some advantages, but I think anyone working in IT should be able to follow talks in English, and allowing them would broaden the potential audience as well as the pool of speakers.

A positive thing, on the other hand, is the concept of track hosts. Most conferences have someone responsible for making sure the hardware works properly, but here these people do quite a bit more. They give a quick introduction to the speaker and the topic, encourage discussion afterwards and provide additional information about the upcoming program.

Day 1

Harald Gröger - General Data Protection Regulation

This new regulation becomes effective in May 2018 and has some serious effects on how companies have to handle all aspects of personal data: gathering, processing and even deletion.

I will not repeat every detail here but the key changes that got stuck in my mind are:

  • you will have to explicitly ask for permission to gather any data
  • the user has to actively give his permission
  • it is not obvious what kind of information has to be considered personal
  • if a user requests deletion of his information, that has to happen immediately, but only if the data is not needed for other purposes like order processing/billing or legal matters
The last point in particular will cause us developers serious headaches: handling all the corner cases without breaking anything and without violating the law.

Harald works at IBM and also lectures at the University of Würzburg on related topics. So he knows his way around data protection, and you noticed it throughout the solid and entertaining talk. TBH I would have enjoyed it more if the topic had been more fun, but it was still a good start for the day.

Dominik Schadow - Build Security

Since the first session was about as theoretical as it can get, the second one had to be "nerd on". So I went into this talk about build security hoping for some hands-on stuff that I might actually be able to use back at the office.

So what do we mean by "build security"? Your build process should verify that your code and the resulting application are as secure as they need to be. Because that is what we all want, a working application that can't be hacked by script kiddies within 5 minutes ;-)

To help us developers achieve this our build pipeline should perform certain analysis steps. It should perform them as often as possible/feasible and as soon as possible.

The easiest approach to this is the well-known static code analysis. A widely used tool here is FindBugs, which I consider pretty awesome. But FindBugs is not really suitable for security checks; it does have checks for some injections and programming errors, but no in-depth security analysis. Or does it...? There is a plugin for FindBugs (and its successor SpotBugs) called FindSecBugs which does exactly that: scan explicitly for security issues. In combination with SonarQube this can be a really useful and powerful tool.

But what apart from that? What should we even check for? As a starting point, everyone should at least know the OWASP Top 10, which lists the most common security issues and the recommended countermeasures. To support working with that there are two OWASP tools.

The first one is OWASP Dependency-Check. This tool inspects the dependencies you use to build and run your application and checks them for known vulnerabilities, telling you which libraries have security flaws and to which version you should update. Brilliant!

The second one is OWASP ZAP. This is a powerful tool and a large portion of the session was spent on showing how you can configure it and what is important to consider. For my taste it could have been a few fewer Jenkins config screens (it has a Jenkins plugin, yay), but it was still good to hear about all of the pitfalls.

ZAP can be used as a proxy between your application and its clients to manipulate and inspect requests and data. That is not that new. But it can also execute attacks on your application based on known real-world security issues with actual working exploits. You can configure which endpoints it should attack, or you can let it record your requests and then replay them in an altered way in attack mode. If you think of using it, be sure to read the documentation very, very carefully as this tool can easily break your neck if you screw up.

But as great as those tools are, they also have downsides:
  • each tool increases the build time, some only a bit like FindSecBugs (seconds or maybe minutes), others massively like ZAP, which can turn a build of a few minutes into one taking hours due to the number of requests performed
  • sadly all tools are known to report false positives, some more, some less, but there are configuration options to limit those and keep the pain rather low
Even though it was not exactly what I expected, I think it was a pretty good talk from a tech guy showing us what can be done to improve the security of our software with tools available to everyone.

Keynote Philipp Sandner - Blockchain and IoT

In this keynote we got a glimpse of what changes block chains, the concept behind things like bitcoin, might enable in the future.

Basically a block chain creates a sequence of blocks of information where every block has a hash of its predecessor encoded into it, thus making sure no element of the chain can be manipulated unnoticed.
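To make that concrete, here is a minimal sketch of the idea (my own illustration, not from the keynote): each block carries the hash of its predecessor, so tampering with any earlier block breaks every link after it.

```kotlin
import java.security.MessageDigest

// A block stores some payload plus the hash of its predecessor.
data class Block(val data: String, val previousHash: String) {
    // The block's own hash covers payload and previous hash, so changing
    // any earlier block changes every hash that follows it.
    val hash: String = sha256(data + previousHash)
}

fun sha256(input: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest(input.toByteArray())
        .joinToString("") { "%02x".format(it) }

fun main() {
    val genesis = Block("genesis", previousHash = "0")
    val payment = Block("car pays parking meter 0.50", genesis.hash)
    val receipt = Block("parking meter issues receipt", payment.hash)

    // Verification: every stored link must match the predecessor's recomputed hash.
    val chain = listOf(genesis, payment, receipt)
    val valid = chain.zipWithNext().all { (prev, curr) -> curr.previousHash == prev.hash }
    println("chain valid: $valid")
}
```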

Block chains would allow easy transactions not only between humans but also between humans and machines or even between machines, like a car transferring some kind of currency to a parking meter.

Most of the use cases shown were not that new and could have been implemented without block chains. But what block chains add is the possibility to ingrain checks, processes and restrictions, but also convenience functions, into the chain itself. That allows setting up simple ecosystems for things like smart contracts or IoT chains that users can easily hook into.

Of course the keynote also covered the current state of the so-called crypto currencies like Bitcoin. I was surprised how many other crypto currencies are out there and how much Bitcoin has been skyrocketing in the last months. But it is also scary that no one really knows if this will carry on or if there is a bubble just waiting to burst.

Going further on this subject, the basic question is: does value always have to be coupled to a physical good? I don't know and I don't think anyone else really does, so only time can tell.

As before it was clear the speaker knew his subject, and he was able to convey his enthusiasm to the audience.

Christoph Iserlohn - Security in Micro Service Environments

Aaaand again going from theory to real-world hands-on stuff, or at least that is what I thought.

The first 15 minutes were spent on defining what a micro service is, then on how difficult it is to get security know-how into your teams and into your development life cycle. Everything true and important in itself, but I still fail to see why we needed it in this kind of depth, but ok.

In a world where most people are going from monolith to micro services we often forget that we have to secure our services. Sometimes that is considered done after slapping some OAuth on them but that is not enough.

By splitting your application up into several services you also open up new attack vectors. Services talk over the network, so you have to secure the network, make sure only authenticated clients are talking to your services, and make sure that a request passed on to other services carries all the credentials and context information needed to verify that the request is valid.
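As a rough illustration of that last point (my own sketch, not from the talk), a service can forward the caller's bearer token when calling a downstream service so that the downstream service can validate the request itself. The billing URL and endpoint are made up, and a real setup would use something like JWTs, OAuth token exchange or mTLS.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hypothetical downstream call: the order service asks the billing service for an
// invoice and forwards the bearer token it received from its own caller, so the
// billing service can check the token and the caller's permissions itself.
fun fetchInvoice(orderId: String, callerToken: String): String {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder(URI.create("https://billing.internal/invoices/$orderId"))
        .header("Authorization", "Bearer $callerToken") // propagate identity and context
        .GET()
        .build()
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    require(response.statusCode() == 200) { "billing service rejected the request" }
    return response.body()
}
```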

Next was a short excursus about the history and relationship of OAuth and OpenID, followed by how security tests should be part of the build pipeline. What I took from this part was that the testing tools seem to be lacking and that you need to build a lot of what you need yourself.

The last part focused on secret management. When going all in on a micro services architecture you have a lot of secrets to maintain, deploy and revoke: passwords, certificates, API keys and shared secrets. The recommendation here was to use Vault, if possible an HSM (Hardware Security Module), and services that rotate secrets dynamically.

All in all I got the impression the speaker was rather gloomy about this subject. Everything had the vibe of "this is not working and that is not working, we had problems here and problems there". This was emphasized by the fact that no real solutions were presented, only notions of what issues they ran into but not how they solved them.

Of course micro services are no silver bullet and sometimes it is better to build larger systems, but are they really that bad? I doubt it.

I think this was a real pity because I had the impression Christoph has a lot of know-how in that area that he could share.

Lars Röwekamp - Serverless Architecture

Serverless is one of the latest buzzwords (at least I think so) stirring around in the IT community. But what do people actually mean when they talk about serverless?

Lars started with a good definition, "If nobody uses it, you don't have to pay for it", or the more widely known quote "don't pay idle". So is my Heroku dyno serverless? ;-) Not quite, as shown by the serverless manifesto. I am not going to write all of that down, you can look it up. I am just going over the most important things Lars pointed out.

In serverless we are talking about functions as deployment units (FaaS), so a unit much, much smaller than your regular micro services, something that really only does one thing. Functions react to events, do something and by that trigger other events that trigger other functions. Well, this sounds like NodeJS in the cloud to me :-)

The idea is that functions are scaled on a request basis, so every time a new handler is needed it is started up and shut down when it is not needed anymore. The reality seems not to be quite there yet, but we might be getting there eventually.

What is also an interesting idea is that the developer does not care about the container the function will be running in. He just writes his code and the rest is done by the build and deployment processes.

This went on for a while as Lars covered the whole manifesto, and I felt like he gave a very objective overview of the topic with the occasional reality check thrown in, showing that theory and practice often differ.

After that we had a look at how such a function is built. We are actually not deploying a function but a lambda. A lambda consists of a few things. The actual code, that's what people usually mean when talking about the function. The function receives a context object and an event object. The event tells the function what has happened and the context gives information about the available resources. Additionally you should also always define security roles to make sure your function is only called by those who are actually allowed to do so, and to make sure your function only does what you want it to do. You would not want to spin up e.g. EC2 instances when you don't need them.
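A minimal sketch of such a lambda, assuming the AWS Lambda Java runtime interfaces (aws-lambda-java-core); the event type, its fields and the function's logic are made up for illustration.

```kotlin
import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestHandler

// Hypothetical event payload: what happened.
data class OrderPlacedEvent(val orderId: String = "", val amount: Double = 0.0)

class BillingFunction : RequestHandler<OrderPlacedEvent, String> {
    // The event says what happened, the context describes the runtime environment
    // (function name, remaining execution time, logger, ...).
    override fun handleRequest(event: OrderPlacedEvent, context: Context): String {
        context.logger.log("charging order ${event.orderId} in ${context.functionName}")
        // ... do the actual work, e.g. emit an event that triggers the next function ...
        return "charged ${event.amount} for order ${event.orderId}"
    }
}
```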

At first glance serverless sounds awesome, because you hardly pay anything, right? Well, not quite. What you have to consider is that with serverless a lot of your development and testing also has to happen in the cloud. You can run tests locally, but in the end you always have to check if it really runs as expected in the cloud environment.

Then followed a few examples of possible scenarios which you should best look up in the slides.

And finally the obligatory pitfalls section. Again this was presented very objectively, showing that the complexity of applications cannot simply be removed by a new architectural pattern. All issues people have with micro services also apply to serverless and are in some cases even amplified due to the even higher degree of distribution and smaller granularity. A special mention has to go to the dreaded vendor lock-in: if you go serverless, your code will have dependencies on the vendor you use. But usually doing everything yourself that the vendor does for you is magnitudes more work than changing those dependencies should you decide to switch vendors.

For the rest I also recommend you check out the slides. As a summary I want to say that Lars gave the best talk of the conference so far, and it really shows he knows what he is talking about.

Werner Eberling - My Name is Nobody – the Architect in Scrum

This will be a short summary. Basically this was an experience report of how a single architect is bound to fail in an environment with several scrum teams working on a single huge project.

I'll try to sum it up: If the architect is outside of the teams, he has no team support, his decisions will not be backed by the developers and he does not have enough knowledge of the team domains to understand the issues a team might have with his design. If you make that one architect part of all teams, he will have absolutely no time to tend to the individual teams, he will be in meetings all the time and soon burn out. So the only and imo obvious solution is: you need architecture know-how in all teams, the teams have to design the architecture themselves, and to maintain the big picture all cross-team concerns have to be discussed in a community of practice.

It seems that the scrum masters just tried to follow what they considered textbook instead of simply thinking about what is needed to make the project work, which imo is the core idea of scrum: identify issues and then work as a team to remove them. And somebody should have noticed that something is off when someone gets appointed or hired as architect but the scrum masters claim that this role does not exist and is not needed. At least one side has to be wrong in that argument.

I might be in a fortunate situation, as this is how we have worked for years, and it struck me as odd that there are still organizations that do not understand how to handle this. It is nothing new, and anyone with at least some brains should see that the single-architect approach is prone to fail, or at least notice it after a year.

But so far this report does not do justice to Werner. While I did not take much from the talk his presentation was entertaining and solid. So if he decides to give another talk about a more technical topic I will try to join.

Guido Schmutz - Micro Services with Kafka Ecosystem

We start off with a definition of micro services, again ;-)

But after that Guido got real on how to build software with an event bus system like Kafka even though the first part focused on the concepts without taking Kafka into consideration.

Even though this was very interesting the summary is rather short. There seem to be three best practices for event bus systems:

CQRS (Command Query Responsibility Segregation) 

There is also an article by Martin Fowler about this topic, so I will try to keep it short: To allow for a large number of transactions and responsive systems it is advisable to separate read and write operations on your data, e.g. you have one service writing data to a data store (e.g. an event store) but have read operations performed by a different service working on a view of that data store. This view can have a completely different format and technology underneath. All you have to do is propagate write changes from the original data store to the view.
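A tiny self-contained sketch of that separation (my own, no framework involved): the command handler only appends events, and the read side builds its own view from them. In a real system the events would travel through something like Kafka and the view would live in its own store.

```kotlin
// Write side: only appends commands as events, never answers queries.
sealed class AccountEvent
data class Deposited(val account: String, val amount: Long) : AccountEvent()
data class Withdrawn(val account: String, val amount: Long) : AccountEvent()

class CommandHandler(private val eventLog: MutableList<AccountEvent>) {
    fun deposit(account: String, amount: Long) = eventLog.add(Deposited(account, amount))
    fun withdraw(account: String, amount: Long) = eventLog.add(Withdrawn(account, amount))
}

// Read side: builds its own view (here a simple balance map) from the event log
// and answers queries from that view only.
class BalanceView {
    private val balances = mutableMapOf<String, Long>()
    fun apply(event: AccountEvent) {
        when (event) {
            is Deposited -> balances[event.account] = balanceOf(event.account) + event.amount
            is Withdrawn -> balances[event.account] = balanceOf(event.account) - event.amount
        }
    }
    fun balanceOf(account: String): Long = balances[account] ?: 0
}

fun main() {
    val log = mutableListOf<AccountEvent>()
    val commands = CommandHandler(log)
    commands.deposit("alice", 100)
    commands.withdraw("alice", 30)

    val view = BalanceView()
    log.forEach(view::apply)          // propagate write changes into the read model
    println(view.balanceOf("alice"))  // 70
}
```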

Event Sourcing: State is saved as a chain of events

The New York Times published an article about how they use this technique to store all their published data. All editorial changes are saved as events in an event store in the order they occurred. So a client can simply resume from the last event it has seen without the bus system having to keep track of each client.
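A small sketch of such a consuming client, assuming a Kafka topic with plain string payloads (the topic and group names are made up): with committed offsets per consumer group, a restarted client simply resumes after the last event it processed.

```kotlin
import java.time.Duration
import java.util.Properties
import org.apache.kafka.clients.consumer.KafkaConsumer

fun main() {
    val props = Properties().apply {
        put("bootstrap.servers", "localhost:9092")
        put("group.id", "article-view-builder") // offsets are tracked per consumer group
        put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        put("enable.auto.commit", "false")      // commit only after the event was processed
    }

    KafkaConsumer<String, String>(props).use { consumer ->
        consumer.subscribe(listOf("article-events")) // hypothetical topic name
        while (true) {
            val records = consumer.poll(Duration.ofSeconds(1))
            for (record in records) {
                // apply the editorial change to the local read model / view
                println("offset ${record.offset()}: ${record.value()}")
            }
            consumer.commitSync() // the next start resumes after the last committed offset
        }
    }
}
```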

One Source of Truth

Only ever, ever, ever write into one data store; all other stores must be populated from that main data store. Otherwise, in case of an error you will run into consistency issues, as distributed transactions are not (well) supported (yet) and also have a performance impact.

To round it up, Guido gave a rundown of the reasons why he uses Kafka for these kinds of systems, the main one being that Kafka is a slim implementation focusing only on this one purpose.

For me this was an interesting overview of event bus architectures and an impulse to look into them some more.

D. Bornkessel & C. Iserlohn: Quick Introduction to Go

This was exactly what I expected: a rundown of where Go came from and what the design criteria were, a brief feature overview, followed by a small tutorial on basic programming tasks.

Here is a short list of the topics I found most interesting:

  • Dependency Management: This seems to be rather broken/bad/cumbersome, which would be a big issue for me, but on the other hand most things already ship with Go, so maybe it is not that bad?
  • No package hierarchies: ok... well, if you don't need a lot of dependencies then maybe that works...
  • Compile breaks on unused variables: On the one hand this is good for finding bugs etc., but during development this could cause some frustration. Maybe you get used to it...
  • Functions can return tuples: Yeah, that can be of great help in some cases
  • No exceptions: Instead you have to check if the called function returned an error object together with the return value. So that is why they made it return tuples, eh? Causes a lot of if checks... *yuck*...
  • Casting also returns error objects, like a function call
  • Implicit interfaces: You do not declare that you implement an interface, you just implement the methods of the interface and then your type implicitly is also of that interface type... TBH I don't know yet if I think this is cool or horrible... Something about this makes my skin itch.


Christian Schneider - Pen Testing using Open Source Tools

The final evening session, lasting two hours. Again security :-)

The first part was an exhaustive overview of the available tools you can use for different purposes:

Web Server Fingerprinting & Scanning


  • Nikto: Scans a web server for typical issues like default applications still being deployed, server status pages, missing headers etc.
  • testssl.sh: Does exactly what the name says, checks your SSL/TLS configuration for supported ciphers etc.
  • OWASP O-Saft: An alternative to testssl.sh
  • OWASP ZAP: This part took up most of the time, which was good because the tool is very powerful, but for me it held not much new information because of Dominik Schadow's talk earlier. For those who do not remember: ZAP is an intercepting proxy which can also run complete and exhaustive attack scenarios against your application.
  • Arachni: Can spider JavaScript applications better than the other tools
  • sqlmap: Database takeover tool. Man, that looked impressive AND scary. This tool performs a massive load of very deep injection tests on a single request, trying to get into your database to cause damage or extract information. The results are then available in a separate SQL shell/browser for inspection.

OS Scanning


  • lynis: Checks your server for possible root exploitation vectors like root crontabs being world-writable or dangerous sticky flags, and for necessary updates if run as root
  • linuxprivchecker: Shows which root exploits exist for your kernel and libs

Whitebox Analysis


  • FindSecBugs/SpotBugs: Was already covered in Dominik Schadow's talk, so nothing new here for us
  • ESLint with ScanJS: Allows scanning of Javascript for security bugs
  • Brakeman: Same for Ruby
  • OWASP Dependency Check: Also same as before ;-)

Then we got a live demo on Christian's machine showing us how ZAP behaves in attack mode, what the output looks like and how to tweak the configuration.

We also saw the XXE vulnerability being exploited to read the passwd file of the server. Even though this is nothing new from a theoretical point of view, it is something different to see it actually happen.

This was followed by a discussion until we finally decided to call it a day... ufff....

Day 2

Thorsten Maier & Falk Sippach - Agile Architecture

TBH I expected a different kind of talk here, something more along the lines of what a flexible and agile architecture could look like to suit changing needs. But that is something that probably does not even exist, so I should not have been surprised that it was actually about how to create and maintain an architecture in a dynamic environment ;-)

The basic problem Thorsten and Falk try to approach is that a system architecture is usually something very fundamental that can't be changed easily afterwards while an agile environment often demands changes late in the project.

What they presented were 12 different tasks an architect, or rather the team, should perform in cyclic iterations to see if the architecture still fits the project's needs.

This cycle consists of 4 phases: design, implement, evaluate, improve. Much like the typical scrum process. The details can be seen in the slides, I'll just summarize the main points:
  • gather the actual requirements not just abstract wishes, make sure you know what your stakeholders need and who you are dealing with
  • try to compare different architecture approaches, e.g. using ATAM or other metrics
  • document your architecture (in words and graphics) and explain it to other people to get early feedback and sanity checks
  • try to have documentation generated during the build, maybe have automatic code checks to find violations (not really sure about that tbh, that is what code reviews are for imo)
  • check if the requirements have changed and if the architecture still suits the needs
This all sounds very well and it surely can help to come up with a good architecture at the start and enable your team to stick to it. But the problem persists that if the requirements change drastically in a late phase, you will have the wrong architecture. So I don't really get how that addresses the problem at hand.

What I take from the talk is that it might be worth checking out arc42 and PlantUML for automated architecture documentation.

Raphael Knecht & Rouven Röhrig - App Development with React Native, React and Redux

Well, what can I say. I have no clue about React or Redux and so this talk did contain a LOT of new information for me and I think I am still trying to digest all of it.

Raphael and Rouven gave an overview of what React, React Native and Redux are, what you can do with them and what their experiences were. First off, with all of those tools you write JavaScript or JSX, a syntax extension to JavaScript. React is a view framework, so it handles only rendering. React Native compiles or transpiles or whateverpiles the JavaScript (or JSX) code into native code for mobile devices. Redux handles everything apart from the view: it provides a state that can be changed by using actions and reducers, where the action states what should be done and the reducer defines how it is done.

For details on how this is all wired together I again have to refer you to the slides, as the diagrams explain it better than I could. An important notion is that a reducer is a pure function, so there are no side effects in them. But for some aspects you need side effects, like logging for example. To handle those concerns Redux uses something called middleware.

There are two different middleware implementations (at least in this talk), Redux-Thunk and Redux-Saga, both with their own pros and cons. The speakers ended up using Saga as it seems to be more flexible in handling complex requirements, whereas Thunk tends to end up with large actions that are not maintainable.

Reasons why React/React-Native/Redux is cool:
  • good tool support for dev and build
  • hot and live reload in the IDE
  • remote debugging
  • powerful inspector
Reasons why React/React-Native/Redux is sometimes not so cool:
  • not everything can be done in JS/JSX, sometimes you need native code
  • currently version 0.51.0 and there can be breaking changes
  • sometimes not all dependencies are available in compatible versions due to different release cycles
Maybe, when I have time, I can try making an app with React. *dreams on*

Dragan Zuvic - Kotlin in Production: Integration into the Java Landscape

tl;dr - I think that was a great talk. I like Kotlin, at least what I have seen of it so far, and it was good to have someone explain not only the basic but also the advanced features as well as the issues of this language.

I would say Dragan is a Kotlin fanboy; he is very enthusiastic about the language features and the tool support, which indeed is really good. We started with the basic overview of the language features:
  • type inference
  • nullable types
  • extension functions
  • lambdas
  • compact syntax
    • no semicolons
    • it keyword in lambdas
    • null safety operator
    • elvis operator
To name only a few. What was also interesting to me is that by default Kotlin code is compiled to Java 6 bytecode but you can still use fancy new stuff like lambdas. Alternatively you can also compile to Java 9 bytecode with module support or even to native code... woot...
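A few of those features condensed into a tiny example of my own (not from the talk), just to put syntax to the bullet points above:

```kotlin
// Type inference: no type annotations needed on the left-hand side.
val names = listOf("Ada", "Grace", null)

// Extension function: adds a method to an existing type without inheriting from it.
fun String.shout() = uppercase() + "!"

fun main() {
    names
        .filterNotNull()
        .map { it.shout() }          // lambda with the implicit 'it' parameter
        .forEach(::println)

    // Nullable types: a String? may hold null, and the compiler forces you to deal with it.
    val maybe: String? = names.firstOrNull()
    println(maybe?.length ?: 0)      // safe-call plus elvis operator as a default
}
```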

Most of the talk revolved around interoperability between Kotlin and Java. To make it short: it works pretty well. Calling Java from Kotlin seems to be no problem at all. The other way round has a few issues but usually is also not much of a problem.

Falk Sippach - Master Legacy Code in 5 Easy Steps

A live coding demo of the techniques you can use when dealing with legacy applications. There are already some books about this, but most of them assume that a) the legacy code is testable and b) you already have tests.

That is not always the case. So what do you do when you have code that can't be tested and that also does not have tests (which is not that unlikely since it is not testable... d'oh)?

Before you actually start getting to work you need to make yourself aware of two necessities:
  1. only perform very small isolated steps, so that you cannot break too much and can stop at any time
  2. do not start fixing bugs on the way, you do not know if those bugs are accepted behavior for clients or maybe they aren't bugs after all when you see the big picture - preserve the current behavior. 
The following techniques can help you to get control over your legacy code:
  • Golden Master: Even if you do not have tests, you can usually record the system behavior for a certain event in some way. That can be log output or the behavior of your UI when you enter certain data. You record this behavior, make small changes and then perform the same action again to see if the behavior is still the same. This is no guarantee that you did not break anything, but it is an indication that you did not break everything. So if your steps are small enough this can be a good way to move your code towards testability (see the sketch after this list).
  • Subclass to test: Some classes cannot be tested because they have certain dependencies that make tests hard or unpredictable. You can try to subclass those classes and then override the methods relying on those dependencies to return a static value, allowing you to test the rest of the class.
  • Extract pure functions: Split your code up so that you can isolate pure functions where possible, making your code more readable.
  • Remove duplication: Removing duplicated code is usually a good way to reduce lines of code.
  • Extract class: Similar to extract pure functions but on a larger scale. That way you can create cohesion by extracting small logic parts into separate classes.
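Here is a rough sketch of the Golden Master idea mentioned above (my own illustration): run the untouched legacy code once, save its output as the "golden" recording, and after every small change compare the fresh output against that recording.

```kotlin
import java.nio.file.Files
import java.nio.file.Path

// Stand-in for the legacy code under change: any function whose observable
// output (log lines, report text, UI dump) we can capture as text.
fun legacyReport(input: List<Int>): String =
    input.joinToString("\n") { "line $it -> ${it * 42}" }

fun main() {
    val golden = Path.of("golden-master.txt")
    val currentOutput = legacyReport((1..100).toList())

    if (!Files.exists(golden)) {
        // First run against the untouched code: record the golden master.
        Files.writeString(golden, currentOutput)
        println("golden master recorded")
    } else {
        // After each small refactoring step: compare against the recording.
        val recorded = Files.readString(golden)
        check(recorded == currentOutput) { "behavior changed - inspect the diff before continuing" }
        println("output still matches the golden master")
    }
}
```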

N. Orschel & F. Bader: Mobile DevOps – From Idea to App-Store

This was also not quite what I was hoping for. The parts that would have been interesting to me were rather short in this talk: how to handle the different build requirements for all platforms, what challenges there are in rolling the app out to the different stores, etc.

The main part of this talk revolved around tool selections like "do we use centralized or decentralized version control". Yes, that was an important decision for the guys in the project, but in the end it does not matter much for the way the app is built and deployed, and not even that much for development. Code reviews can be done with any VCS; it is sometimes just more difficult.

Anyway, there was still some relevant information for me regarding how to test the different applications.

To build and test the iOS app you still need a Mac, but you don't want to buy Macs for every developer. The solution shown here looks promising: simply buy a Mac Mini with the maximum hardware profile and then install Parallels on it. That way you can have lots of virtual Macs for your dev and test environments.

With Android you have the issue that there are thousands of different devices out there and you can't buy one of each just to test your app. Solutions for this can be found in Xamarin Test Cloud or Perfecto Mobile.

And finally Appium provides an abstraction for test automation.

Franziska Dessart: Transclusion – Kit for Properly Structured Web Applications

I literally had NO clue what transclusion means. And the subtitle did not enlighten me much either. So what is it? A new framework? A weird forgotten technique? Well... not really ;-)

Transclusion means that a resource can include other resources via links and those resources will then be embedded into the main resource.

Sounds familiar? Here are a few examples of transclusion:

  • iframe
  • loading web page parts via ajax
  • SSI (Server Side Includes)
  • ESI (Edge Side Includes)
  • External XML Entities (remember the XXE exploit from yesterday? ;-))
The term is not new, it is just not that widely known. But with more and more micro service architectures it becomes more important.

Most of this talk targeted the different places a transclusion can occur and what considerations have to be made. Mainly you want to assemble the primary content of a website before the user sees it, so if you have to use transclusion for it, you do not want to do it in the client. Secondary content, on the other hand, is fine to transclude on the client side.
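A minimal server-side transclusion sketch (my own example, not from the talk): the main resource contains an include marker, and before the page is delivered the server fetches the referenced fragment and splices it in. The marker syntax and the fragment URL are made up, merely in the spirit of SSI/ESI includes.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// The primary content references a secondary resource via an include marker.
val PAGE = """
<html><body>
  <h1>Order 4711</h1>
  <!--#include url="https://recommendations.internal/fragment/4711" -->
</body></html>
"""

fun transclude(page: String): String {
    val marker = Regex("""<!--#include url="([^"]+)" -->""")
    val client = HttpClient.newHttpClient()
    return marker.replace(page) { match ->
        val url = match.groupValues[1]
        val request = HttpRequest.newBuilder(URI.create(url)).GET().build()
        // Embed the fetched fragment in place of the include marker.
        client.send(request, HttpResponse.BodyHandlers.ofString()).body()
    }
}

fun main() = println(transclude(PAGE))
```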

With every transclusion choice you will have to take some aspects into account:
  • should the transcluded resource contain styling or scripting?
  • do you need/want to resolve transclusions recursively and if so for how many levels?
  • do you cache? If so, the final resource, only certain transclusions, or only the main resource?
All of those questions have to be answered separately for each use case. 

Thorsten Maier: Resilient Software Design Pattern

Resilient software has always been important. Micro services hold new challenges in this area though. 

If we want high availability for our software we can try to increase the time until a failure occurs, and we can also try to reduce the time to recovery in case of a failure. Ideal would be a system where a failure is not noticeable from the outside because the system can recover instantly, or at least seems to do so.

Thorsten not only described the well-known patterns for these goals but gave specific implementation examples for most of them, e.g. using Eureka (from Netflix) as a service registry, doing client-side load balancing with Ribbon, or using Zuul as an API gateway.

This was a nice change because usually those talks only cover the theoretical concepts but do not show you what tools you could use. But to be fair, usually the implementation was done by using an annotation within a Spring Boot application, so it was rather easy to show ;-)
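To give a flavour of what that looks like, here is a sketch roughly along the lines of the tools Thorsten named, assuming the Spring Cloud Netflix starters (Eureka, Ribbon, Hystrix) are on the classpath and the kotlin-spring compiler plugin is active so Spring can proxy the classes; service name, endpoint and fallback are made up.

```kotlin
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker
import org.springframework.cloud.client.discovery.EnableDiscoveryClient
import org.springframework.cloud.client.loadbalancer.LoadBalanced
import org.springframework.context.annotation.Bean
import org.springframework.stereotype.Service
import org.springframework.web.client.RestTemplate

@SpringBootApplication
@EnableDiscoveryClient   // register with / look up services in Eureka
@EnableCircuitBreaker    // enable Hystrix circuit breakers
class ShopApplication {
    @Bean
    @LoadBalanced        // RestTemplate resolves service names via Ribbon client-side load balancing
    fun restTemplate() = RestTemplate()
}

@Service
class OrderClient(private val restTemplate: RestTemplate) {
    // If the order service is down or too slow, the circuit breaker opens
    // and the fallback answer is returned instead of an error.
    @HystrixCommand(fallbackMethod = "fallbackOrders")
    fun latestOrders(): String =
        restTemplate.getForObject("http://order-service/orders/latest", String::class.java) ?: "[]"

    fun fallbackOrders(): String = "[]"
}

fun main(args: Array<String>) {
    runApplication<ShopApplication>(*args)
}
```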

Day 3

Mario-Leander Reimer: Versatile In Memory Computing using Apache Ignite and Kubernetes

Starting the day with some nerding, oh yeah!

Mario-Leander gave a great overview of the impressive capabilities Apache Ignite has. It seems to try to handle everything any middleware has ever done. While I am not a big fan of the "ultimate tool" pattern it does have some really interesting features.

Mario-Leander started off with his reasons for using Ignite. Micro services should not have state, but applications usually do have state. So this state has to be put somewhere, usually some shared database. When your landscape grows this can become a bottleneck, or in some cases you need to access your data in different ways. This is where Ignite can help.

Ignite provides you with a distributed, ACID-compliant key-value store that is also accessible via SQL or even Lucene queries. That alone is quite a lot, but the list of features goes on for quite a while, including messaging, streaming and the service grid. The messaging component provides queues and topics just as JMS would.
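To give an impression of the key-value side, here is a minimal sketch based on my reading of the Ignite documentation (cache name and contents are made up, not from the talk):

```kotlin
import org.apache.ignite.Ignition

fun main() {
    // Starts (or joins) an Ignite node with default configuration.
    Ignition.start().use { ignite ->
        // A distributed key-value cache; entries are spread across the cluster nodes.
        val users = ignite.getOrCreateCache<Int, String>("users")

        users.put(1, "Ada Lovelace")
        users.put(2, "Grace Hopper")

        println(users.get(1)) // Ada Lovelace

        // The same data could also be queried via SQL or text (Lucene) queries
        // once the cache is configured with indexed types.
    }
}
```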

The service grid allows your software to deploy IgniteServices into your cluster; an IgniteService is some piece of code that gets distributed to your nodes. This service usually needs to access portions of the data, but depending on the cache mechanism not all data is available on all nodes. Ignite supports collocated processing, which means that processing takes place on the nodes where the data is, to reduce data transfer to a minimum.

The streaming component was shown with a live demo where Mario-Leander attached his Ignite to Twitter querying for the conference hash tag and printing the tweet data to standard out. For most features the slides provide code examples that show a concise API.

The talk itself was very good and Mario-Leander was able to answer most questions in depth showing his experience with the topic. Was definitely worth it.

Andre Krämer: Exhausting CPU and I/O with Pipelines and Async

Uhh... more nerding!!! Or not ;-) From the topic I was expecting some advanced techniques for asynchronous programming or maybe a project report about what problems have been tackled with async paradigms. Actually it was more of an introduction to async and why async is faster than sync.

The basics were that you should separate I/O- and CPU-intensive tasks in a producer/consumer pipeline so that the separate workers can be scaled independently to get the most out of your system.
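The gist of it in a small sketch of my own (not from the talk): an I/O-bound stage feeds work items into a bounded queue, and a separately scaled CPU-bound stage consumes them.

```kotlin
import java.util.concurrent.ArrayBlockingQueue
import java.util.concurrent.Executors
import kotlin.concurrent.thread

fun main() {
    // A bounded queue decouples the I/O stage from the CPU stage and applies back pressure.
    val queue = ArrayBlockingQueue<String>(100)
    val poison = "<EOF>"

    // I/O-bound producer: pretend each item comes from disk or network.
    val producer = thread {
        (1..1_000).forEach { queue.put("record-$it") }
        queue.put(poison)
    }

    // CPU-bound consumers: the pool size can be scaled independently of the producer.
    val workers = Executors.newFixedThreadPool(4)
    repeat(4) {
        workers.execute {
            while (true) {
                val item = queue.take()
                if (item == poison) {
                    queue.put(poison)   // pass the stop signal on to the other workers
                    break
                }
                item.hashCode()         // stand-in for the expensive CPU work
            }
        }
    }

    producer.join()
    workers.shutdown()
}
```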

Then we had a part about measuring I/O performance on a machine and what pitfalls can lurk there. There were some interesting aspects, but it focused a bit too much on NTFS specialties which simply are not relevant to me. The conclusion was that you should be sure to measure the right thing if you want reliable results, e.g. small files are often cached in RAM and not read from disk.

Christoph Iserlohn: Monoliths – Better than their Reputation

This topic sounds pretty controversial with the current micro services hype. 

A slow or cumbersome development cycle quite often is not caused by the software but by the organizational structure. When different departments with different agendas are involved (e.g. Devs have different goals than Ops or DBAs - features vs stability) then you will run into problems that you will not be able to solve with micro services.

On the other hand, a monolith does not have to be bad per se. You can still structure your monolith into separate modules to mitigate most of the well-known disadvantages of monoliths. There are even some aspects in which the monolith is superior to micro services:

  • Debugging: It is very hard to debug an application consisting of tens or hundreds of different micro services, and it requires a lot of work beforehand to be possible at all
  • Refactoring across module boundaries: In a monolith you (usually) have all affected components in one place and can spot errors quite early. In a micro service landscape it can be quite hard to make sure all clients are adjusted to or can cope with your changes
  • Security: A single micro service may be more secure than a large monolith, but if you consider the whole micro service application this is not necessarily the case, as you have to take the communication paths into account, which are often quite hard to secure
  • UI: With micro services a good UI is usually hard to build if the data comes from different services
  • Homogenous Technology: This can be an advantage as it reduces the complexity of the system and requires less skill diversity but also a disadvantage as you can't choose a different technology for a module
The bottom line is that neither monoliths nor micro services are simply good or bad; it all comes down to your requirements and external parameters. So think before you build!

Nicholas Dille - Continuous Delivery for Infrastructure Services in Container Environments

This talk focused strongly on the Ops perspective of micro service platforms and, like many others, started off with the basic reasons and advantages of automation for software deployments, like reproducibility, ease of deployment and stability.

To achieve that, the automation must be easy to use, well tested and standardized. That is why "everything as code" is so important, because it allows us to create exactly that. According to Nicholas this is where Ops still can and needs to learn from Devs' experience in software engineering.

Another method to improve the build process is micro labeling for containers. This means labeling each build with a set of properties to make it identifiable and to let us see which source code version it reflects. Those properties can be:
  • artifact name
  • repository name
  • commit id
  • timestamp
  • branch name
To further help with automation you should use containers to deploy your applications as that means you have one way of deployment. Also containers make monitoring easier as the containers can collect the monitoring data from the application and expose it to the monitoring infrastructure. That way you can focus on the actual evaluation instead of collecting data.

One issue that arises with containers is that a container should be stateless but most applications do require state to function properly. Instead of using host volumes mounted into the containers, Nicholas presented a different approach. Ceph is a distributed storage system that can also run in a container. That way applications can store their data in Ceph, but of course if the Ceph container dies the data is lost. So they set up a cluster of Ceph containers: if one dies it gets restarted by the orchestration software (Rancher in this case) and the new container syncs with the existing cluster within minutes.

I think this is an interesting solution, but I am not sure I would feel comfortable knowing that all my data resides in memory. Maybe they only use it for non-critical data that does not have to be persisted at all costs, but I can't remember anymore.

Even though I was not exactly the target audience I took quite a bit from this talk and think Nicholas showed a lot of experience in this area. Especially while answering all questions in depth, two thumbs up!

Christian Robert - Doctors, Aircraft Mechanics and Pilots: What can we learn from them?

Well, the title really says it all. Other professions have developed techniques to function under pressure, so why not try to learn from them?
  • Pilots
    • All information in the cockpit -> Have all monitoring data aggregated in one place so that you can spot problems at once. 
    • Standardized language -> Have a common vocabulary and a common understanding of it, e.g. what does "Done" mean?
    • Clear responsibilities -> Communicate in a clear and expressive way so that you are sure your colleagues understand. Also vice versa tell your colleague that you did understand.
    • Checklists -> Always use checklists for important or complicated tasks, no matter how often you have done them in the past. Routine is one of the greatest dangers for your uptime.
    •  Plan B -> Always make sure you have a plan when the shit hits the fan
  • Aircraft Mechanics
    • Prepare your workspace -> Well, this analogy did not work that well. It refers to the fact that an aircraft mechanic would not repair a turbine that is still attached to the plane. He would dismount it and then bring it to the workshop. In software terms you could translate it to: build your software modularly so you only have to patch an isolated component.
    • Tool box -> Use the tools you know best and that suit the task at hand
    • Visualize processes -> Use a paper Scrum/Kanban board so people can move the post-its themselves. This is one point where I disagree.
    • Document your decisions -> Documentation for system architecture and other project decisions so everyone can understand them.
    • Bidirectional communication -> Make sure everyone can follow the progress by using issue trackers like JIRA.
  • Doctors
    • Profession vs Passion -> Doctors want to help people but sometimes they have to deal with bills and pharma consultants. In every job there are tasks that you have to do even though you don't like them, e.g. meetings.
    • Understand your patients -> When talking to non techies we have to explain things so that they can understand. No tech mumbo jumbo.
    • Prepare as well as you can -> A doctor does a lot of preparation for an operation by talking to the patient, running tests and making sure he has all the information he needs. Similarly, we should prepare before implementing features by gathering all requirements and information from the stakeholders.
And one thing they all have in common: "Don't panic. Stay calm." It does not help to have 10 people fixing the same bug or doing things just for the sake of it. Focus on what the issue is, how bad it really is and what is needed to tackle it.

Uwe Friedrichsen - Real-world consistency explained

Disclaimer: To understand this part you should know about ACID, CAP and BASE.

I don't know how to say this differently: This talk scared the bejesus out of me. Why, you ask? You will see.

Relational databases with ACID, that is what most of us started with. A simple MySQL or maybe Oracle or something similar, and we were happy. All of those had ACID, so we could simply code on and be sure that all of our database operations would complete and we would not lose any data or end up with inconsistencies.

But unfortunately this was never the case. It all has to do with the 'I' in ACID: isolation. Uwe did a great job explaining the technical details, but here is the short of it (especially since I can't repeat it ;-)): ACID in its textbook definition can only be achieved when the database uses the highest isolation level, called "serializable", but hardly any database uses this due to serious performance impacts. Usually a database uses "read committed" or "repeatable read"; those do provide some isolation, but not perfectly. That is why you are very likely to see some kind of anomalies in every database. YUCK!!
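If you want to see what your own database gives you, you can inspect and raise the isolation level per connection. A small JDBC sketch (connection URL, credentials and the accounts table are placeholders):

```kotlin
import java.sql.Connection
import java.sql.DriverManager

fun main() {
    // Placeholder URL and credentials; any JDBC driver on the classpath will do.
    DriverManager.getConnection("jdbc:postgresql://localhost/shop", "app", "secret").use { conn ->
        // Most databases default to READ COMMITTED or REPEATABLE READ,
        // which still allows certain anomalies.
        println("default isolation: ${conn.transactionIsolation}")

        // Ask for full serializability for this transaction - correct,
        // but expect a noticeable performance cost and possible retries.
        conn.transactionIsolation = Connection.TRANSACTION_SERIALIZABLE
        conn.autoCommit = false

        conn.createStatement().use { stmt ->
            stmt.executeUpdate("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
            stmt.executeUpdate("UPDATE accounts SET balance = balance + 10 WHERE id = 2")
        }
        conn.commit()
    }
}
```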

So what about NoSQL databases? We have to deal with the CAP theorem, but that is what BASE is for, so they all provide eventual consistency, right? Turns out that is also not the case, but most people just don't know it. You have to configure your database properly to ensure eventual consistency. One quite common problem is that a database is often chosen because it is hyped or sounds interesting, but not because it suits the problem. Especially with NoSQL this is very important, since almost every NoSQL database was designed with a special purpose in mind. Cassandra, for example, was designed to be as available as possible. As long as one node is alive you can write data into it. But consistency? You wish...

Check out the slides. The research Uwe has done is impressive and I can't repeat half of it here.

Another problem with NoSQL is, since there is no ACID but instead BASE, our programs have to take that into account. So the programming models are not as simple anymore as they were with SQL databases. 

So are we stuck with either ACID that does not work as well as we thought or BASE that makes programming much more difficult? There are a lot of things between ACID and BASE, like Causal Consistency or Monotonic Atomic View. There is still a lot of research going on in this area, but in the end our applications will have to do their part to make sure our data is safe.

I really recommend checking out the slides as this is a very interesting and also very important topic.

Sven Schürmann & Jens Schumacher - Container Orchestration with Rancher

A hands-on report by two guys who have spent the last year working with Rancher. To keep it short: it was a good overview of some key features of Rancher, which I will try to summarize below.

But first: Why do you need Rancher? Can't you do orchestration with Kubernetes? Yes, you can, but there is more to a container platform than plain orchestration. You still need things like access control, handling deployment on different stages, defining how exactly a cluster should look, and so on and so forth. And this is where tools like Rancher come into play. For the plain orchestration you can use Kubernetes within Rancher, or other tools like Docker Swarm or Rancher's own Cattle. As of 2.0 Rancher will use Kubernetes by default.

Rancher Features:
  • Machine Driver: With this you can create new hosts dynamically. It supports various cloud providers but can also include custom scripts.
  • Rancher Catalog: Used to define templates for each service. A template consists of a docker-compose.yaml and a rancher-compose.yaml, where rancher-compose follows the docker-compose mechanics. You can integrate it with a VCS so you can always roll back to older templates. You can use variables in the docker-compose file (like Spring profiles for an environment) and define the allowed values in rancher-compose, which can then be selected when applying the template.
  • Infrastructure Services: 
    •  Healthcheck: Knows all containers running on a host and monitors them closely
    • Scheduler: Works together with Healthcheck. When a node is down the scheduler notices that there are fewer containers running than configured and starts up a new one.
    • Network: You can define different network types for your containers depending on where you are running, e.g. whether you need IPsec or not. Contains simple load balancing and service discovery
    • Storage: You can configure the storage type and Rancher will apply the appropriate plugins. So far you can choose between NFS, NetApp SAN and Amazon EBS
This was only a small insight into Rancher and its features, but it got me interested and it shows that for larger landscapes you need more than "just" Kubernetes.

Dr. Jonas Trüstedt - Docker: Lego for DevOps

The last talk of the conference. Once more containers ;-) Quite a lot of this talk had already been covered by others, so I will not repeat that. What I took from this talk was a short comparison between Rancher and OpenShift:
  • Rancher is easier for a first test setup, whereas OpenShift requires quite a bit more configuration to get started
  • OpenShift has a higher integration with development tools, e.g. you can just tell it to checkout a git repo with ruby code in it, then OpenShift will create a new image based on a ruby image and push that new image to your registry
  • OpenShift uses Ansible, so you have to install Ansible and at least get acquainted with it
  • According to Jonas OpenShift is more powerful

Summary

Thank you for reading this far, or at least for scrolling down. I hope you found at least some useful information in this post. Please let me know if you spotted an error or any misunderstandings on my part. After three days and a lot of different topics it is quite possible that I messed something up. And as always, any comments are welcome :-)
