Automatic deployment

Hey guys,

We want to share our deployment process with you.

For our project we are using Jenkins.
After each push to any branch, our Jenkins integration builds the project and executes all the tests we have written.

If you can’t believe this, check it out here.

Furthermore, Jenkins sends a notification to one of our Slack channels after each (more or less successful) build, so we are always aware whether a build is stable or not.

Slack notification:


For a more detailed overview of our project, we have integrated SonarQube, which gives us better feedback on our code quality. If you want more information about that, read our earlier blog post: Testing 2.

Link to our SonarQube.

Testing 2

Hey Guys!

During this week we wrapped up our previously begun testing efforts. Our SonarQube looks pretty neat, with >50% test coverage and good grades in the security, maintainability and reliability measures. SonarQube is updated by our Jenkins CI server whenever the development branch is built.

Furthermore, we stepped up our GitHub badge game. If you’re still missing something there, please tell us.

At first there was no test plan because the template linked in the grading criteria was inaccessible. Apparently it is the same document as last time, though, so we updated it to include an entry for unit testing and decided to include basic load testing.

Anyone willing to take us up for the installation test? We’ve got a docker container and instructions which almost work.

We probably did more fun stuff this week, but at the time of writing I couldn’t tell you about it because JIRA was dead… which is actually something we learned this week: never rely on other people’s infrastructure. JIRA has since risen from the dead. While we worked hard on the web and Android apps this week, there are no new things to show. Sorry. But you can expect an Android APK in one or two weeks.


This week we integrated a new component into our test workflow: SonarQube, which calculates code coverage and some fancy metrics for us. [take this link if you don’t believe us]

For our homework we took a look at one of our classes called “TmdbRetriever”.

As you can see, there were 13 code smells, and after our refactoring there is only one left (a TODO comment, which actually counts as a code smell).

And we reduced the Duplicated Lines from 3.4% to 2.9%. [only for nonbelievers]
Here is one of the snippets we changed, as SonarQube suggested:
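The actual snippet was a screenshot, so here is an illustrative sketch of the kind of change SonarQube suggests for duplicated lines (rule S1192: don’t repeat string literals). The URLs and method names below are assumptions for illustration, not our real TmdbRetriever code:

```java
// Before: the same string literal appears in two methods, which SonarQube
// counts both as a code smell and as duplicated lines.
class PosterUrlsBefore {
    String posterUrl(String path)   { return "https://image.tmdb.org/t/p/w500" + path; }
    String backdropUrl(String path) { return "https://image.tmdb.org/t/p/w500" + path; }
}

// After: the duplicated literal is extracted into a constant, which
// removes the smell and shrinks the duplicated-lines percentage.
class PosterUrlsAfter {
    private static final String IMAGE_BASE = "https://image.tmdb.org/t/p/w500";

    String posterUrl(String path)   { return IMAGE_BASE + path; }
    String backdropUrl(String path) { return IMAGE_BASE + path; }
}
```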




But take care! Not every piece of advice from SonarQube is wise or helpful…









…we knew you wouldn’t believe us without proof.
The following piece of SonarQube advice is pretty useless, because there is no “hard-coded” password – but it is marked as a vulnerability issue anyway. We won’t “fix” it – there is nothing to fix.
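To make the false positive concrete, here is an illustrative sketch (the class and names are hypothetical, not our actual code) of the pattern that trips SonarQube’s hard-coded-credentials rule:

```java
// SonarQube's hard-coded-credentials rule flags variables whose names
// contain "password", even when the value is only a parameter key and
// no secret is stored anywhere.
class QueryBuilder {
    // flagged as a vulnerability, although this is just a field name
    private static final String PASSWORD_PARAM = "password";

    // builds a query template; the real password is entered at runtime
    static String loginQuery(String user) {
        return "user=" + user + "&" + PASSWORD_PARAM + "=<entered at runtime>";
    }
}
```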

You can see all the metrics we are using so far on SonarQube’s Measures page. [you should start to trust us]

Two Examples:

  • Maintainability includes concepts of modularity, understandability, changeability, testability, reusability, and transferability from one development team to another. Our Technical Debt Ratio of 4.0% seems to be good enough for Sonarqube to reward us with an “A” (the best category).
  • Security is an indicator of how well your code is protected against attacks like SQL injection or cross-site scripting. In this case we were punished with the worst category, “E”. That’s okay for the moment, because nearly all of the responsible issues point to an internal API which cannot be accessed from outside.

Well, that was pretty much it for this week, but we hope you will come back for further updates!

Team SmartCinema

PS: Of course, we added information about using SonarQube to our test plan. [so sad that you need this link]


This week, we refactored a piece of code to conform to a well-known design pattern.

We chose to refactor our data retrieval infrastructure to make use of the template method pattern.

Applying the pattern

Background: Our data is retrieved by scraping the websites of local cinemas. Each cinema we scrape has its own scraping code which used to implement basically the same algorithm (with some common steps extracted into methods on the superclass).

This is a relevant excerpt from our class diagram before the refactoring:

We then refactored this by putting the basic algorithm into the AbstractCinemaScraper class, basically inverting the control flow. Now our subclasses only contain the code that is necessarily different for each cinema, and the class diagram looks like this:
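The structure after the refactoring can be sketched like this. Only AbstractCinemaScraper appears in our class diagram; the step names and the example subclass are illustrative assumptions:

```java
import java.util.List;

// Template method pattern: the superclass owns the fixed scraping
// algorithm; subclasses only fill in the cinema-specific steps.
abstract class AbstractCinemaScraper {
    // The template method: final, so subclasses cannot change the flow.
    public final List<String> scrape() {
        String html = fetchPage(showtimesUrl()); // common step
        return parseShowtimes(html);             // cinema-specific step
    }

    protected String fetchPage(String url) {
        // in the real code this performs an HTTP GET against the cinema site
        return "<html>program of " + url + "</html>";
    }

    // Hooks: only these differ between cinemas.
    protected abstract String showtimesUrl();
    protected abstract List<String> parseShowtimes(String html);
}

class ExampleCinemaScraper extends AbstractCinemaScraper {
    @Override protected String showtimesUrl() {
        return "https://example-cinema.invalid/program";
    }

    @Override protected List<String> parseShowtimes(String html) {
        // real code would parse the HTML; this stub returns a fixed list
        return List.of("Movie A, 20:00");
    }
}
```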

How useful is it?

We’re not so sure. The code duplication before was really minuscule, and getting rid of it didn’t make the code any prettier. You could argue that the code is actually harder to understand now because the control flow is more complicated.


Closing Notes

Did you like this blog post? Give it a thumbs up, subscribe, and leave your thoughts in the comments below!


Hey there,

Nice to see you again! You’re really starting to love our project, aren’t you?

Sadly we have no project-related news for you this week, because we were all busy with some refactoring according to Martin Fowler’s “Refactoring” (that’s no ref link, we promise). If you are into software engineering, it is absolutely worth reading!

We worked through chapter one and ended with the polymorphism part, which sparked a little discussion. We agreed that the “Price” naming for the subclasses is badly chosen, because we also calculate “FrequentRenterPoints” inside them. But we had different opinions on whether we should rename the subclasses to something like “State” which every Movie holds (Simon’s approach), or whether the polymorphism should be done with the Movie objects themselves (Marco’s approach). Do you have an opinion on that?
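For readers without the book at hand, here is a minimal sketch of the classes in question. The charges and renter points follow the video-store example from chapter one of “Refactoring”; the rename from Fowler’s “Price” to “PriceCategory” is our own idea from the discussion:

```java
// Fowler's "Price" hierarchy, renamed to reflect what it actually does.
abstract class PriceCategory {
    abstract double charge(int daysRented);

    // this method is why the "Price" name feels wrong: renter points
    // have nothing to do with the price
    int frequentRenterPoints(int daysRented) {
        return 1;
    }
}

class RegularPrice extends PriceCategory {
    @Override double charge(int daysRented) {
        double result = 2;
        if (daysRented > 2) {
            result += (daysRented - 2) * 1.5;
        }
        return result;
    }
}

class NewReleasePrice extends PriceCategory {
    @Override double charge(int daysRented) {
        return daysRented * 3;
    }

    @Override int frequentRenterPoints(int daysRented) {
        // bonus point for renting a new release for more than one day
        return daysRented > 1 ? 2 : 1;
    }
}
```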

Anyway – here you can find our homework:

Return next week for some project news!

Yours sincerely
Team SmartCinema


Dear interested reader,

This week it’s all about testing – or rather a bunch of links, like every week.

First of all, we want to give you the link to our test code on GitHub. [click me]

Here comes the link to our file. [I want to be clicked, too]

We are not quite sure what unit tests could cover that isn’t already covered by our awesome Cucumber tests. But we have a REST API that provides data for the web and Android apps, and we wrote some tests for it. The following picture proves that we can run the tests inside our IDE.
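To give a flavour of those tests beyond a screenshot, here is a trimmed-down, illustrative sketch. MovieResource and its toJson method are hypothetical stand-ins, not our real endpoint classes:

```java
// A hypothetical slice of the REST API layer: the piece of logic that
// serializes a movie for the web and Android clients.
class MovieResource {
    static String toJson(String title, int year) {
        return "{\"title\":\"" + title + "\",\"year\":" + year + "}";
    }
}

// A unit test for that slice: the clients depend on stable JSON output.
class MovieResourceTest {
    static void run() {
        String json = MovieResource.toJson("Arrival", 2016);
        if (!json.equals("{\"title\":\"Arrival\",\"year\":2016}")) {
            throw new AssertionError("unexpected JSON: " + json);
        }
    }
}
```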


For continuous integration we set up a Jenkins server that creates a build for every branch in the corresponding Git repository.
Every time a commit is pushed to the server, Jenkins executes the defined tests and displays the results.
For example:

And last but not least, the link to our RUP Test Plan. [I feel so unclicked]

That was it for today. Cya next week!

Team SmartCinema


Function Points

Hello there!

We want to share our newest project-management improvement: function points.

Function points are used for estimating the time that will be spent on a certain use case (UC). They are calculated from External Inputs, External Outputs, and External Inquiries, as well as Internal and External Logical Files, all from the user’s view; therefore they do not depend on the technology used.

To estimate how long it will take to implement a UC, the function points of UCs that have already been implemented are put in relation to the time we spent on them.
We took the calculation for our function points from “TINY TOOL”.
For the calculation of the function points with the help of TINY TOOL, you have to fill out two single-choice documents.
The big one is for the whole project. Here you have to answer some simple questions about your project.

In our case it looks like this:


The second one is individual for every UC.

Here is an example of our manage favorites-UC (58.08 Function Points):


Below you can see the other completed Use Cases…

…and the new Use Cases.

With the calculated function points we generated a diagram which shows the relation between function points and person-hours.


The green squares are the completed UCs and the red ones are the UCs we want to implement soon.

Furthermore, we updated our Time-per-UC document. We added the function points and new columns for the kind of transaction, the complexity, and the number of RETs, DETs and FTRs. If you want to know what exactly these are, you can click here.

As you can see there is no outlier in our project, so hopefully we can plan our new UCs with the help of TINY TOOL and get a good approximation.
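The estimation behind the diagram boils down to simple rate arithmetic. A sketch, with invented hour values – only the 58.08 FP figure for the manage favorites UC comes from this post:

```java
// Sketch of the estimation arithmetic: completed UCs give us a
// person-hours-per-FP rate, which we apply to newly counted UCs.
// All hour values in the test below are invented for illustration.
class FunctionPointEstimator {
    /** person-hours per function point, derived from completed UCs */
    static double hoursPerFp(double completedFp, double hoursSpent) {
        return hoursSpent / completedFp;
    }

    /** estimated effort for a new UC, given its counted function points */
    static double estimateHours(double newUcFp, double hoursPerFp) {
        return newUcFp * hoursPerFp;
    }
}
```

For example, if 200 FP of completed use cases took 100 person-hours, the rate is 0.5 h/FP, and a 58.08 FP use case would be estimated at about 29 hours.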

Kind regards,