Good morning,
My name is Nikita Kurpas, and today I will be presenting the results of my professional practice at Tieto Czech, where I worked as a Software Developer intern.
During my presentation I will briefly describe my job, all the projects I have worked on, and my roles in them. Then I will give a detailed description of one project that I think is worth highlighting: its goals, its architecture, the technologies used, and the results after I finished working on it. Finally, I will share some notes and my personal opinion about this practice.
Let us begin.
I started at Tieto Czech in September 2014. At first I applied for the System Engineer Internship position, because back then I wanted to be a system administrator. I worked in that position for a month and a half but did not receive the kind of work I was hoping for: all I got were some code fixes in Java applications, some scripting, and writing frontend templates in the Velocity language on the XWiki platform. That is all I will say about that position, because after those six weeks I switched to the Software Engineer Internship. In that role my primary focus was Java development, but I also did a lot of JavaScript coding and some UI engineering. During that time I had the opportunity to work on both internal projects funded by Tieto and live projects funded by Tieto's clients.
Over almost two years I worked on four projects, which I will discuss today. The first project I was assigned to as a Software Developer intern was an internal project code-named "Karellen": a monitoring system that I will discuss later in much more detail. This project was essentially started by me and two of my colleagues, also interns, under the close supervision of our mentor and project manager. I worked mainly as a backend developer and later also did some frontend coding. I was responsible for a few backend modules of this project as well as the GUI module.
After working on project "Karellen" for almost six months, I had proven my skills and was offered the chance to move to one of the largest Java projects in the whole of Tieto, and I accepted the offer. Project "NEO" is a backend service for a large Scandinavian telecommunications provider, and it is split by country of operation; I was assigned to "NEO" Denmark. At first I did minor bug fixes and refactoring of non-critical components. Then I proposed the idea, and the technologies, to rewrite the GUI of one component, and after approval I began working on the frontend part of that component while also doing some refactoring and additional coding on the backend.
After eight months on "NEO" Denmark, I was again asked to change projects to help one of my colleagues. Together we tried to integrate React JS into Liferay applications on project "NEO" Norway. We were pioneers in that respect, as there was no material on the Internet about how to do it. It was quite interesting, but unfortunately I had to move on again because of funding issues on that project.
I was offered a move to project TRIP, because it was the only project looking for Java developers at the time. TRIP is a document search and indexing database, and the team at Tieto was developing a web GUI for it. On this project I mainly fixed bugs on the backend and implemented new features on both the backend and the frontend.
That sums up the brief overview of the projects, so let's go into more detail now.
Let’s start with project “Karellen”.
This project started from the idea that Tieto should implement its own monitoring system, one that would be highly modular. Its goal was to monitor all sorts of software and hardware: web applications, web servers, web sites, databases, JVM instances - anything the client wanted. It should be able to store the history of gathered data and present it to the user in an organized and concise way. If something went wrong, the user should be notified by email, text message, or some other system, such as TONE. The user should also be able to export selected data to different file formats. Reporting and analysis should be available so the user can perform analytics and review historical data. And, of course, the system should be highly configurable by the user.
Before continuing any further, I would like to define some terms that I will use later on. An Entity is anything that can be monitored: a web server, a database, an application server, etc. A Snapshot is the state of an Entity at some point in time. A Snapshot is decomposed into two objects: the main Snapshot object holds the valuable data, while the additional SnapshotData object holds data that can be deleted in the future in order to save space in the database and to reduce traffic when transferring Snapshots over the network.
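To make this split concrete, a minimal sketch in Java might look like the following (the class and field names here are my own illustrative assumptions, not the actual project code):

```java
import java.time.Instant;

// Bulky payload that can be purged later to save space and bandwidth.
class SnapshotData {
    final String rawOutput; // e.g. a full response body or log excerpt
    SnapshotData(String rawOutput) { this.rawOutput = rawOutput; }
}

// Core state of a monitored Entity at one point in time.
class Snapshot {
    final String entityId;
    final Instant takenAt;
    final boolean reachable;
    final SnapshotData data; // nullable: may be stripped before transfer

    Snapshot(String entityId, Instant takenAt, boolean reachable, SnapshotData data) {
        this.entityId = entityId;
        this.takenAt = takenAt;
        this.reachable = reachable;
        this.data = data;
    }

    // Return a lightweight copy without the bulky payload.
    Snapshot withoutData() {
        return new Snapshot(entityId, takenAt, reachable, null);
    }
}
```

The point of the split is that a Snapshot stays meaningful even after its SnapshotData has been dropped.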
Because this project was meant to be highly modular, we decided to use a microservices architecture at the highest level of abstraction. The system was decoupled into quite a few modules, which communicated via REST-ish APIs over HTTP:
• The store module was responsible for storing snapshots in a database and providing an API layer for other services to access those snapshots in a unified way.
• The configurator module was responsible for storing the configuration of the whole system.
• The status engine was responsible for retrieving snapshots from the store, computing their statuses, and passing those augmented snapshots on to the requester.
• The dashboard was responsible for presenting the current status of the user's monitored entities.
• The collectors were responsible for collecting data from different sources and passing it to the store module. Collectors were designed so that the collecting logic was extracted into a separate submodule, called a Sniffer, while the logic for sending snapshots to the store, and for temporarily storing them in case of network failure, remained the same across all collectors.
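To make the collector design concrete, here is a minimal sketch of how such a split might look in Java (the interface and class names are illustrative assumptions, not the project's actual code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A Sniffer knows only how to gather one snapshot from its source.
interface Sniffer {
    String collect(); // returns raw snapshot data, e.g. JSON
}

// The common collector logic: buffering and sending stay the same
// regardless of which Sniffer is plugged in.
class Collector {
    private final Sniffer sniffer;
    private final Deque<String> buffer = new ArrayDeque<>();

    Collector(Sniffer sniffer) { this.sniffer = sniffer; }

    // Gather a snapshot; if the store is unreachable, keep it buffered.
    void gather(boolean storeReachable) {
        buffer.add(sniffer.collect());
        if (storeReachable) {
            flush();
        }
    }

    void flush() {
        while (!buffer.isEmpty()) {
            send(buffer.poll());
        }
    }

    int buffered() { return buffer.size(); }

    private void send(String snapshot) {
        // In the real system this would be an HTTP POST to the store module.
    }
}
```

A new collector then only needs a new Sniffer implementation; the buffering and delivery logic is reused unchanged.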
Almost every module had a 3-tier architecture: the presentation tier, the logic tier, and the data tier. For example, the sniffers mentioned above were the data tier of the collectors, and the presentation tier was a few endpoints for accessing snapshots stored in the collectors or for requesting a gathering on demand. I will talk about the architecture of some modules in more detail a bit later.
The technologies chosen to implement the modules were all Java-based. Most of the modules used Java 8, and all of them used the Spring Framework in one way or another. Spring Boot was used to speed up module development and configuration, because it provides boilerplate and reasonable defaults for the initial configuration of a Spring application, as well as the ability to run applications standalone, that is, without an application server such as WildFly or Tomcat. Spring Dependency Injection was used for bean management, since it is part of the core of the Spring Framework itself. Spring MVC was used extensively for endpoint definitions and for handling most of the presentation logic. The configurator and store modules needed databases, so we decided to use JPA, with Hibernate as the JPA provider and Spring Data as an accompanying framework. Spring Data provides some very useful features, such as automated generation of Data Access Objects and the generation of queries from method names. The dashboard module needed a frontend framework, so we went with JSF, with PrimeFaces as the JSF framework. This sums up the technologies that we used. Now let's look at some modules.
As mentioned earlier, the store module was used to store snapshots and provide a REST-ish API for other modules to access them. The architecture was kept as simple as possible while remaining highly extensible. The module was decoupled into three tiers: presentation, logic, and data. Our mentor insisted that we use a separate model for each tier, even though they were almost identical. The presentation tier was responsible for providing REST APIs for the other modules; since there were not many endpoints, a single class was enough to accommodate them. The presentation tier had a single dependency on the logic tier, which did not do much and in most cases simply passed calls through from the presentation tier to the data tier. Only one method contained any real logic: the snapshot query method, which had an extensive mechanism for querying snapshots built with the JPA Criteria API. The data tier was implemented as a single interface defining all the methods we needed, with the implementation generated by Spring at application runtime. So, in practice, we never used Hibernate directly in the data tier; it was used only by Spring under the hood. Also, as you can see in this diagram, the Snapshot and SnapshotData objects are split only in the data tier, to be stored separately. All other tiers see the Snapshot as a whole, with some properties nullable. A mapping framework called Dozer was used to automatically map models during inter-tier communication.
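To show what the separate-model-per-tier decision looks like in practice, here is a minimal, hand-written version of the kind of field-by-field mapping Dozer automated for us (the class names are illustrative assumptions):

```java
// Data-tier model: mirrors the database, where the payload lives in a
// separate SnapshotData table.
class SnapshotEntity {
    String entityId;
    boolean reachable;
    String rawData;
}

// Logic-tier model: the snapshot seen as a whole, with rawData nullable.
class SnapshotDto {
    String entityId;
    boolean reachable;
    String rawData;
}

// Hand-written equivalent of what Dozer did by matching field names.
class SnapshotMapper {
    static SnapshotDto toDto(SnapshotEntity e) {
        SnapshotDto dto = new SnapshotDto();
        dto.entityId = e.entityId;
        dto.reachable = e.reachable;
        dto.rawData = e.rawData; // may be null if SnapshotData was purged
        return dto;
    }
}
```

Dozer removed the need to write such mappers by hand: with near-identical models, it matched fields by name automatically.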
The dashboard module was used to present data to the user. Its architecture was similar to that of the store module, though more complex. The presentation tier consisted of JSF views and managed beans. The logic tier consisted of services that retrieved configuration and transformed snapshots based on it. The data tier consisted of connectors to the status engine and the configurator module. Here, in the top image, you can see the status of all monitored entities; the yellow ones are in the warning state, meaning something might be wrong with them. In the bottom two images you can see detailed historical information about an entity.
The status engine module was used to compute the status of entities and to rate snapshots on demand. It too had an architecture similar to the store module's. The presentation tier had endpoints for requesting snapshots; the logic tier had services that computed statuses and ranked snapshots, plus mappers that mapped data objects between tiers; and the data tier consisted of connectors to the store and the configurator.
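A minimal sketch of the kind of status computation such an engine performs (the threshold rule, names, and values are my own illustrative assumptions; the real rules came from the configurator):

```java
enum Status { OK, WARNING, ERROR }

// Toy status rule: classify a snapshot by reachability and response time.
class StatusEngine {
    private final long warnMillis;
    private final long errorMillis;

    StatusEngine(long warnMillis, long errorMillis) {
        this.warnMillis = warnMillis;
        this.errorMillis = errorMillis;
    }

    Status compute(boolean reachable, long responseMillis) {
        if (!reachable || responseMillis >= errorMillis) {
            return Status.ERROR;
        }
        if (responseMillis >= warnMillis) {
            return Status.WARNING; // shown yellow on the dashboard
        }
        return Status.OK;
    }
}
```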
These were the modules that I was responsible for.
And now the results of this project and of my six months' work on it. After I left, the team continued to develop the project for another eight months and finished at approximately 75% of the planned functionality. They decided not to use Java for the GUI, because it did not suit their needs, so they rewrote the GUI using modern JavaScript technologies and Google's Material Design. After several presentations, the client declined to buy the project; after all, it was built not out of client demand but from an idea within Tieto. The project is currently used to monitor internal tools and services inside Tieto, so it is a successful project after all.
To sum up, I have prepared some final notes.
For my part, I tried to do my work thoroughly, in the best way I was capable of. Whenever I did not know a technology I had to use, I quickly learned it and read material on how to write good code with it; this applies to both Java and JavaScript. This is evidenced by my promotion to two higher-grade projects, NEO Denmark and Norway, one of the biggest Java projects in Tieto.
Personally, I have advanced a lot in Java SE and EE, learned how to use the Spring Framework and many of its benefits, improved my skills in JavaScript and learned the new JavaScript standard, ECMAScript 6, and deepened my knowledge of HTML, CSS, and software design. I had extensive training in Java and JavaScript during my time at Tieto. I gained experience with old, bad, and obsolete code; I also rewrote a project from scratch and performed complex refactorings. And, of course, I communicated daily in English.
So, to finish, I would rank the projects by their impact on me and on my knowledge.