Most existing Big Data technologies focus on managing large amounts of static data (e.g., Hadoop, Hive, Pig). In contrast, more recent approaches target real-time processing of dynamic data (e.g., Storm, S4). Batch processing of massive static datasets yields strong results because it can take more information into account and, for example, train predictive models more thoroughly. But batch processing takes time and is not feasible in domains where response time is critical. Real-time processing solves the latency issue, but with a weaker approach: the information analyzed is limited in order to keep latency low. Many domains require the benefits of both batch and real-time processing. Building a software architecture that tailors suitable technologies, software layers, data sources, data storage solutions, smart algorithms and so on into a scalable solution is far from trivial. That is where Lambdoop comes in. Lambdoop is a software framework for easing the development of Big Data applications by combining real-time and batch processing approaches. It implements a Lambda-based architecture that provides an abstraction layer to developers, who do not have to deal with different technologies, configurations, or data formats: they use the Lambdoop framework as the only API they need. Lambdoop also includes extra tools such as input/output drivers, visualization tools, cluster management tools and widely accepted AI algorithms. To evaluate the effectiveness of Lambdoop we have applied the framework to different real scenarios: 1) analysis and prediction of air quality data; 2) identification of emergent situations from social networks; and 3) quantum chemistry molecular dynamics simulations. Conclusions from these evaluations provide useful feedback for improving the framework.
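The combination of batch and real-time views described above is the core of the Lambda architecture. The following is a minimal, self-contained Java sketch of that idea only; all class and method names are hypothetical illustrations, not the actual Lambdoop API. It simulates a batch view (complete but periodically recomputed) and a speed view (incremental, low-latency) merged behind a single query method:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the Lambda-architecture principle: a stale but
// complete batch view plus a fresh real-time view, merged at query time.
// None of these names come from the Lambdoop framework itself.
public class LambdaSketch {
    // Batch layer: counts recomputed over the full (master) dataset.
    static final Map<String, Long> batchView = new HashMap<>();
    // Speed layer: counts for events arrived since the last batch run.
    static final Map<String, Long> realtimeView = new HashMap<>();

    // Ingest an event into the speed layer; in a real system the raw
    // event would also be appended to the immutable master dataset.
    static void ingest(String key) {
        realtimeView.merge(key, 1L, Long::sum);
    }

    // Simulate a batch recomputation: fold recent events into the
    // batch view and reset the speed layer.
    static void runBatch() {
        realtimeView.forEach((k, v) -> batchView.merge(k, v, Long::sum));
        realtimeView.clear();
    }

    // Serving layer: a query merges both views, so results are complete
    // up to the last batch run yet still reflect the newest events.
    static long query(String key) {
        return batchView.getOrDefault(key, 0L)
             + realtimeView.getOrDefault(key, 0L);
    }

    public static void main(String[] args) {
        ingest("pm10"); ingest("pm10"); ingest("no2");
        runBatch();           // history now lives in the batch view
        ingest("pm10");       // a new event, visible immediately
        System.out.println(query("pm10")); // prints 3
        System.out.println(query("no2"));  // prints 1
    }
}
```

The point of the abstraction is that the caller only ever sees `query`; whether an answer came from the batch layer, the speed layer, or both is hidden, which is the role an abstraction layer like Lambdoop's plays over the underlying technologies.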