Software architectures are changing. Monolithic applications are losing ground to decoupled, lightweight services that offer operational and cost efficiencies over their bulky and inefficient cousins.
To fully leverage the power of a BI & Analytics tool in these new architectures, it must be compatible with the way modern software is developed and delivered. This is particularly important in scenarios where BI & Analytics functionality is embedded into other applications.
Join our session to learn about TIBCO JasperReports® IO, what’s coming and what is available now, and how it will address the need to run BI in today’s software architectures.
JasperReports IO: Reporting and data visualization in a world of cloud, microservices, and DevOps
1. A New Breed of Reporting and Data Visualization Engine
Reporting and data visualization in a world of cloud,
microservices, and DevOps
Introducing JasperReports IO
2.
Hello from Jaspersoft!
Shane Swiderek
Product Marketing Manager
San Francisco, CA
Jan Schiffman
VP Engineering, Jaspersoft &
TIBCO Data Science
San Francisco, CA
Teodor Danciu
Jaspersoft Founder / Architect
Bucharest, Romania
6.
The shift to embedded BI
What is Embedded BI?
• Embedded BI lives within the application as a
fully integrated element of user experience
• Right place - insight is delivered in the context
of the application
• Right time - presented at the time of action
• Integrates seamlessly into the user experience
7.
The shift to embedded BI
How is BI embedded in applications?
• Modern web-based applications embed BI using
HTML and JavaScript
• JavaScript and web service APIs allow for seamless
integration between the BI service and application
• Technologies like Visualize.js promote a shorter time to market (TTM)
• App developers focus on the application
• Information workers focus on embedded information
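As a sketch of that embedding pattern, the snippet below only builds a configuration object for a Visualize.js-style call; the container selector, resource path, and helper name are illustrative assumptions, not part of any documented API:

```javascript
// Hypothetical helper: collects everything a Visualize.js-style embed
// call would need into one configuration object.
function buildReportEmbed(containerSelector, resourcePath) {
  return {
    container: containerSelector,
    report: {
      resource: resourcePath,
      // Called by the rendering library if the report fails to load.
      error: function (err) { console.error(err.message); }
    }
  };
}

// In a real page, this object would be handed to visualize(), e.g.:
//   visualize({ auth: { name: "user", password: "pass" } }, function (v) {
//     v(cfg.container).report(cfg.report);
//   });
var cfg = buildReportEmbed("#salesReport", "/public/samples/RevenueReport");
```

The point of the split is that the app developer owns the container and credentials, while the report resource itself is authored and maintained separately by information workers.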
8.
The shift to embedded BI
• Gamification creates incentive for students
• Easy-to-understand indicators visualize
segmentation by student engagement
• Relevant detail reports are immediately
available
9.
The shift to embedded BI
Based on performance information provided,
relevant actions can be immediately taken
10.
Trends in Software Development
Cloud Platforms
Microservices
Orchestration
DevOps
Containerization
11.
Containerization
Consistency
• Consistent, reproducible environment
Isolation
• Multiple containers on the same VM are isolated
Portability
• Ability to run anywhere: Linux, macOS, Windows
Scalability
• Orchestration enables dynamic scaling and life cycle
management
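The consistency benefit above comes from describing the runtime environment in an image definition. A minimal sketch, assuming a Java service packaged as an executable jar (the base image tag, artifact path, and port are illustrative, not values from this deck):

```dockerfile
# Hypothetical image definition: pins the Java runtime and bundles the
# application artifact so every environment runs the same stack.
FROM eclipse-temurin:11-jre
COPY target/my-service.jar /opt/app/my-service.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/app/my-service.jar"]
```

Because the image pins the runtime and dependencies, the same artifact behaves identically on a developer laptop, a VM, or an orchestrated cluster.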
13.
The Rise of Microservices
• Independently implemented, deployed, and scaled
• Communicate via APIs
• Can be written using different languages and different runtimes
• Node.js
• Java
• Python
• Ruby
• Integrates into CI/CD processes and pipelines
14.
What is Orchestration?
• Built on top of container technology
• Vendor neutrality
• Provides deployment and management
• Scheduling
• Service deployment and exposition
• Durability and scaling
• Management APIs
16.
BI in the Cloud
Traditional BI platforms (and vendors in general) don’t
integrate well into modern cloud architectures…
• Built for end users and integrators
• Generally “cloud compatible”
• High resource requirements
• Don’t scale with demand
18.
Trends on a collision path?
BI increasingly
embedded into apps
App architecture
moves to decoupled,
lightweight services
Small specialized BI
services in apps?
19.
Introducing JasperReports IO
• Designed for the cloud
• Embed report and visualizations via web service and
JavaScript APIs
• Lightweight service
• Designed for containerized cloud architectures
• Integrates into DevOps processes and pipelines
20.
Introducing JasperReports IO
Capabilities:
• Production reporting
• Scheduled document production
• Interactive reports
• Report bursting
• Embedding reports and visualizations
• Easily add data visualizations to any web application
• Integrate into your existing UX
23.
Introducing JasperReports IO
Availability & packaging
• Downloadable archive with Dockerfile for production deployment*
• Amazon Machine Image (AMI)**
* Free trial through the end of the year; starting Jan 1st, it will be sold as a subscription for $299 / year
** Starting at 19 cents / hour
24.
Statement of Direction (6-12 months) –
JasperReports IO Microservice
• Production Reporting and Embedding at scale
• Dynamic Scaling
• High availability
• Resilience
• Full benefits of orchestration
Audience poll: How do you use Jaspersoft?
• Community (library only)
• Community (library + server)
• Commercial server
• Not using Jaspersoft
Audience poll: Do you run containers?
• Yes, we run containers in our production deployments
• Yes, but we are using them for testing / not currently deployed in production environments
• No, but we're looking to start leveraging them soon
• No plans
What is container technology?
The traditional BI server generally provided reports, dashboards, and sometimes OLAP cubes.
BI has moved from the separate server into the context of the application. No longer do we switch between apps to consume data and to take action. By embedding BI and analytic output in applications at the time of action, users get a more cohesive, integrated, less error-prone, and ultimately more meaningful experience.
The origins of cloud computing can be traced back to the 1960s, to ARPANET and network-based computing. More recently, Salesforce, Google, and Amazon applied the concepts of cloud computing in the early 2000s to build their service platforms. But it wasn't until 2006 that Amazon launched AWS and the concept of Infrastructure as a Service.
DevOps promotes automation and monitoring at all steps of software construction, from integration, testing, releasing to deployment and infrastructure management. DevOps enables shorter development cycles, increased deployment frequency, and more dependable releases.
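As a concrete illustration of that automation, here is a minimal CI pipeline sketch in GitHub Actions syntax; the workflow name, image name, and test script are assumptions, not part of any product described here:

```yaml
# Hypothetical pipeline: build the container image, run the test suite
# inside it, then publish it. All names are illustrative.
name: build-and-ship
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t my-service:${{ github.sha }} .
      - run: docker run --rm my-service:${{ github.sha }} ./run-tests.sh
      # Publishing would normally require a registry login step first.
      - run: docker push my-service:${{ github.sha }}
```

Every push produces an image that has already passed its tests, which is what enables the shorter cycles and more dependable releases mentioned above.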
Containers were a logical extension of the concept of “chroot jails” and were first implemented as “containers” by Sun Microsystems as part of Solaris 10 in 2005. This was followed by Google’s process containers, which led to LXC, the direct ancestor of Docker, the currently dominant container technology many of us are familiar with.
Containers allow you to package your application and its dependencies together into one succinct manifest that can be version controlled, allowing for easy replication of your application across developers on your team and machines in your cluster.
Just as how software libraries package bits of code together, allowing developers to abstract away logic like user authentication and session management, containers allow your application as a whole to be packaged, abstracting away the operating system, the machine, and even the code itself. Combined with a service-based architecture, the entire unit that developers are asked to reason about becomes much smaller, leading to greater agility and productivity. All this eases development, testing, deployment, and overall management of your applications.
Consistent Environment
Containers give developers the ability to create predictable environments that are isolated from other applications. Containers can also include software dependencies needed by the application, such as specific versions of programming language runtimes and other software libraries. From the developer’s perspective, all this is guaranteed to be consistent no matter where the application is ultimately deployed. All this translates to productivity: developers and IT Ops teams spend less time debugging and diagnosing differences in environments, and more time shipping new functionality for users. And it means fewer bugs since developers can now make assumptions in dev and test environments they can be sure will hold true in production.
Run Anywhere
Containers are able to run virtually anywhere, greatly easing development and deployment: on Linux, Windows, and Mac operating systems; on virtual machines or bare metal; on a developer’s machine or in data centers on-premises; and of course, in the public cloud. The widespread popularity of the Docker image format for containers further helps with portability. Wherever you want to run your software, you can use containers.
Isolation
Containers virtualize CPU, memory, storage, and network resources at the OS-level, providing developers with a sandboxed view of the OS logically isolated from other applications.
Microservices is a software development architecture that structures an application as a collection of loosely coupled services. A Microservices approach allows the individual services to be deployed and scaled independently (typically via containers), worked on in parallel by different teams, built in different programming languages, and have their own continuous delivery and deployment stream.
Microservices - also known as the microservice architecture - is an architectural style that structures an application as a collection of loosely coupled services, which implement business capabilities. The microservice architecture enables the continuous delivery/deployment of large, complex applications. It also enables an organization to evolve its technology stack.
The microservice architectural pattern is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.
Kubernetes is a cluster and container management tool. It lets you deploy containers to clusters, that is, networks of virtual machines. It works with different container runtimes, not just Docker.
Kubernetes Basics
The basic idea of Kubernetes is to further abstract machines, storage, and networks away from their physical implementation, giving you a single interface to deploy containers to all kinds of clouds, virtual machines, and physical machines.
Here are a few Kubernetes concepts to help you understand what it does.
Node
A node is a physical or virtual machine. It is not created by Kubernetes: you create nodes with a cloud operating system like OpenStack or Amazon EC2, or install them manually. So you need to lay down your basic infrastructure before you use Kubernetes to deploy your apps. From that point on, it can define virtual networks, storage, etc. For example, you could use OpenStack Neutron or Romana to define networks and push those out from Kubernetes.
Pods
A pod is one or more containers that logically go together. Pods run on nodes, and the containers in a pod run together as a logical unit: they share the same IP address but can reach each other via localhost, and they can share storage. All the containers in a pod are scheduled onto the same node, and one node can run multiple pods.
Pods are cloud-aware. For example you could spin up two Nginx instances and assign them a public IP address on the Google Compute Engine (GCE). To do that you would start the Kubernetes cluster, configure the connection to GCE, and then type something like:
kubectl expose deployment my-nginx --port=80 --type=LoadBalancer
Deployment
A deployment manages a set of pods. It ensures that a sufficient number of pod replicas are running at one time to service the app and shuts down pods that are not needed. Paired with an autoscaler, it can do this by looking at, for example, CPU utilization.
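Continuing the my-nginx example above, a deployment might be sketched as follows; the labels and image tag are illustrative:

```yaml
# Hypothetical manifest: asks Kubernetes to keep three nginx pods
# running, rescheduling them if a node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Applying this file with `kubectl apply -f` is what creates the set of pods that the `kubectl expose` command above then turns into a load-balanced service.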
Vendor Agnostic
Kubernetes works with many cloud and server products, and the list is always growing as many companies contribute to the open source project. Even though it was invented by Google, Google does not dominate its development.
Kubernetes works with Amazon EC2, Azure Container Service, Rackspace, GCE, IBM Software, and other clouds. It also works with bare metal (using something like CoreOS), Docker, and vSphere. And it works with libvirt and KVM, which turn Linux machines into hypervisors (i.e., platforms to run virtual machines).
Use Cases
So why would you use Kubernetes on, for example, Amazon EC2, when it has its own tool for orchestration (CloudFormation)? Because with Kubernetes you can use the same orchestration tool and command-line interfaces for all your different systems. Amazon CloudFormation only works with EC2. So with Kubernetes you could push containers to the Amazon cloud, your in-house virtual and physical machines as well, and other clouds.
Wrapping Up
So we have answered the question: what is Kubernetes? It is an orchestration tool for containers. What are containers? They are lightweight, isolated packages that run ready-to-run applications on top of virtual machines or any host OS. They greatly simplify deploying applications, and they make sure machines are fully utilized. All of this lowers the cost of cloud subscriptions, further abstracts the data center, and simplifies operations and architecture. To get started, install Minikube to run it all on one machine and play around with it.
Monolithic to Service Based Architecture
Containers work best for service-based architectures. As opposed to monolithic architectures, where every piece of the application is intertwined, from IO to data processing to rendering, service-based architectures separate these into distinct components. Separation and division of labor allow your services to continue running even if others are failing, keeping your application as a whole more reliable.
Componentization also allows you to develop faster and more reliably; smaller codebases are easier to maintain and since the services are separate, it's easy to test specific inputs for outputs.
Containers are perfect for service based applications since you can health check each container, limit each service to specific resources and start and stop them independently of each other.
And since containers abstract the code away, they allow you to treat separate services as black boxes, further decreasing the space a developer needs to be concerned with. When developers work on services that depend on one another, they can easily start up a container for a specific service without having to waste time setting up the correct environment and troubleshooting beforehand.
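The health checks, resource limits, and independent lifecycle described above can be sketched as a compose file; the service names, images, and the /health endpoint are assumptions for illustration:

```yaml
# Hypothetical two-service setup: the api service gets a health check
# and a memory limit; the db service runs alongside it independently.
services:
  api:
    image: my-api:latest
    ports:
      - "8080:8080"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      retries: 3
    deploy:
      resources:
        limits:
          memory: 512M
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

Each service can be started, stopped, and scaled on its own, and a failing health check on `api` does not take `db` down with it.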