A rich presentation on object-relational mapping that covers the JPA standard and the Hibernate implementation, integrating them with the Spring IoC framework.
This material presents the Maven build tool and its role in industrializing the software engineering process. Everything you need to know about Maven.
The second part of this course covers putting Maven to work in projects based on JPA, Hibernate, Spring and Struts.
Happy learning to all.
The slides of my talk at Devoxx France 2017.
Introduced in Java 8, the Collector API lives in the shadow of the Stream API, which makes sense since a collector has to be plugged into a stream to do anything. The JDK is organized so that we mostly use off-the-shelf collectors: groupingBy, counting and a few others. These two factors hide not only the data-processing model of collectors, but also its power and its performance.
This presentation covers the collectors that exist and that you should know, those you can create, those you suspect you can create once you understand things a little, and all the others, since the possibilities offered by this API are virtually unlimited.
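As a minimal sketch of that contrast (the word list and the counting collector below are invented for illustration, not taken from the talk):

import java.util.*;
import java.util.stream.*;

public class CollectorDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("stream", "collector", "java", "jpa", "cache");

        // Off-the-shelf collectors: group the words by length and count them
        Map<Integer, Long> byLength = words.stream()
                .collect(Collectors.groupingBy(String::length, Collectors.counting()));
        System.out.println(byLength);

        // A hand-rolled collector that simply counts elements
        Collector<String, int[], Integer> counter = Collector.of(
                () -> new int[1],                      // supplier: mutable accumulator
                (acc, word) -> acc[0]++,               // accumulator: one more element seen
                (a, b) -> { a[0] += b[0]; return a; }, // combiner for parallel streams
                acc -> acc[0]);                        // finisher: unwrap the result
        System.out.println(words.stream().collect(counter));
    }
}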
Here I discuss reactive programming, observables, observers, and the difference between an observable and a promise.
I also discuss some important operators like forkJoin, switchMap, from, debounceTime, distinctUntilChanged, and mergeMap, as well as some of the observable creation functions.
NestJS (https://nestjs.com/) is a Node.js framework for building server-side applications. This slide deck gives you a brief introduction to Nest and shows examples of Services, Middleware, Pipes, etc.
Asynchronous API in Java 8: how to use CompletableFuture, by José Paumard
Slides of my talk at Devoxx 2015. How to set up asynchronous data processing pipelines using the CompletionStage / CompletableFuture API, including how to control threads and how to handle exceptions.
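A hedged sketch of such a pipeline (the executor, values and fallback are invented for the example):

import java.util.concurrent.*;

public class AsyncPipeline {
    public static void main(String[] args) throws Exception {
        // Control the threads by supplying an explicit executor
        ExecutorService pool = Executors.newFixedThreadPool(2);

        CompletableFuture<String> result = CompletableFuture
                .supplyAsync(() -> "user-42", pool)              // produce a value asynchronously
                .thenApplyAsync(id -> "profile of " + id, pool)  // transform it on the same pool
                .exceptionally(ex -> "fallback profile");        // recover if any stage failed

        System.out.println(result.get());
        pool.shutdown();
    }
}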
2016 is going to be the year of Virtual DOM. React.js is one of the most popular implementations of Virtual DOM. But this time we won't focus on React.js. We will be focusing on what the concept of Virtual DOM is, what its benefits are, and how to use it without React.js. All of those concepts will help you understand this DOM manipulation technique and work better with any Virtual DOM implementation such as React.js.
As presented at DevDuck #6 - JavaScript meetup for developers (www.devduck.pl)
----
Looking for a company to build your app? - Check us out at www.brainhub.eu
This course aims to present JDBC (Java Database Connectivity) and how to use JDBC from Java applications to access databases.
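A minimal JDBC sketch in that spirit, assuming a hypothetical hr database and employee table:

import java.sql.*;

public class JdbcExample {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection URL and credentials
        String url = "jdbc:postgresql://localhost:5432/hr";
        try (Connection con = DriverManager.getConnection(url, "hr", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT first_name, last_name FROM employee WHERE id = ?")) {
            ps.setLong(1, 1L);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("first_name") + " " + rs.getString("last_name"));
                }
            }
        }
    }
}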
This session is about maintaining the store on the client side with Redux, with more details about state management and the single-source-of-truth concept.
Performance Anti-Patterns in Hibernate, by Patrycja Wegrzynowicz (Codemotion)
This talk explains common mistakes or omissions related to mappings of complex structures. The focus is on efficient retrieval of collections and their sub-objects along with fetching strategies and efficient queries. The presented real code examples illustrate how the anti-patterns can decrease performance and how to implement the mappings to speed up execution times.
How To Get The Most Out Of Your Hibernate, JBoss EAP 7 Application (Ståle Ped...), Red Hat Developers
The fifth major release of Hibernate contains many internal changes developed in collaboration between the Hibernate team and the Red Hat middleware performance team. Efficient access to databases is crucial to get scalable and responsive applications. Hibernate 5 received much attention in this area. You'll benefit from many of these improvements by merely upgrading. But it's important to understand some of these new, performance-boosting features because you will need to explicitly enable them. We'll explain the development background of all of these powerful new features and the investigation process for performance improvements. Our aim is to provide good guidance so you can make the most of it in your own applications. We'll also peek at other performance improvements made in JBoss EAP 7, like the caching layer, the connection manager, and the web tier. We want to make sure you can all enjoy better-performing applications, which require less power and fewer servers, without compromising on your developers' productivity.
Spring Data is a high level SpringSource project whose purpose is to unify and ease the access to different kinds of persistence stores, both relational database systems and NoSQL data stores.
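For relational stores this typically boils down to a repository interface; a sketch assuming a hypothetical Employee entity with a lastName attribute:

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

// Spring Data derives the query from the method name at runtime
public interface EmployeeRepository extends JpaRepository<Employee, Long> {
    List<Employee> findByLastName(String lastName);
}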
Amazon Web Services for Java Developers - UCI Webinar, by Craig Dickson
Amazon Web Services (AWS) offers IT infrastructure services to businesses in the form of web services - now commonly known as cloud computing. AWS is an ideal platform to develop on and host enterprise Java applications, due to the zero up front costs and virtually infinite scalability of resources. Learn basic AWS concepts and work with many of the available services. Gain an understanding of how existing JavaEE applications can be migrated to the AWS environment and what the advantages are. Discover how to architect a new JavaEE application from the ground up to leverage the AWS environment for maximum benefit.
How to define a candidate's level in an interview?
How to do that quickly and precisely?
How to know one's technical level? How to improve it and build a successful IT career? I try to answer all those questions in my talk.
The Java Persistence API is a collection of classes and methods for persistently storing large amounts of data in a database; the specification is maintained by Oracle Corporation.
Generally, Java developers either write a lot of code or use a proprietary framework to interact with the database, whereas with JPA the burden of interacting with the database is reduced significantly. It forms a bridge between the object model (the Java program) and the relational model (the database).
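A minimal sketch of that bridge, assuming a hypothetical persistence unit named "my-pu" and an Employee entity with an id and a firstName:

import javax.persistence.*;

public class JpaExample {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-pu");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        Employee e = new Employee();   // a mapped @Entity class
        e.setFirstName("John");
        em.persist(e);                 // no hand-written SQL needed
        em.getTransaction().commit();

        Employee loaded = em.find(Employee.class, e.getId());
        System.out.println(loaded.getFirstName());

        em.close();
        emf.close();
    }
}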
A Deep Dive into Query Execution Engine of Spark SQL, by Databricks
Spark SQL enables Spark to perform efficient and fault-tolerant relational query processing with analytics database technologies. The relational queries are compiled to executable physical plans consisting of transformations and actions on RDDs with the generated Java code. The code is compiled to Java bytecode, executed by the JVM, and optimized by the JIT compiler to native machine code at runtime. This talk will take a deep dive into the Spark SQL execution engine. The talk includes pipelined execution, whole-stage code generation, UDF execution, memory management, vectorized readers, and lineage-based RDD transformations and actions.
Oracle Application Express (APEX) is shipped with several JavaScript libraries, jQuery being the best known of them. And on top of these libraries the APEX Development Team created their own. You have probably used a couple of these APIs already, like $s, $v, etc.
But there are many more, and some of them are extremely useful. But first you have to be aware that they exist. And secondly you have to know how to use them properly.
This session will cover the most valuable JavaScript APIs with some real-world examples.
Most developers stick to the standard $s and $v functions, even without knowing there is also a $v2 and that $s can take more parameters.
The focus will be on the namespaced apex APIs, like apex.server.process and apex.event.trigger.
Testing and validating Spark programs - Strata SJ 2016, by Holden Karau
Apache Spark is a fast, general engine for big data processing. As Spark jobs are used for more mission-critical tasks, it is important to have effective tools for testing and validation. Expanding her Strata NYC talk, “Effective Testing of Spark Programs,” Holden Karau details reasonable validation rules for production jobs and best practices for creating effective tests, as well as options for generating test data.
Holden explores best practices for generating complex test data, setting up performance testing, as well as basic unit testing. The validation component will focus on how to create reasonable validation rules given the constraints of Spark’s accumulators.
Unit testing of Spark programs is deceptively simple. Holden looks at how unit testing of Spark itself is accomplished and distills a number of best practices into traits we can use. This includes dealing with local mode cluster creation and tear down during test suites, factoring our functions to increase testability, mock data for RDDs, and mock data for Spark SQL. A number of interesting problems also arise when testing Spark Streaming programs, including handling of starting and stopping the streaming context, providing mock data, and collecting results, and Holden pulls out simple takeaways for dealing with these issues.
Holden also explores Spark’s internal methods for generating random data, as well as options using external libraries to generate effective test datasets (for both small- and large-scale testing). And while acceptance tests are not always thought of as part of testing, they share a number of similarities, so Holden discusses which counters Spark programs generate that we can use for creating acceptance tests, best practices for storing historic values, and some common counters we can easily use to track the success of our job, all while working within the constraints of Spark’s accumulators.
Everyday I'm Shuffling - Tips for Writing Better Spark Programs, Strata San J..., by Databricks
Watch video at: http://youtu.be/Wg2boMqLjCg
Want to learn how to write faster and more efficient programs for Apache Spark? Two Spark experts from Databricks, Vida Ha and Holden Karau, provide performance tuning and testing tips for your Spark applications.
This is the "Deep Dive" talk given at the first Apache Flink Meetup Stockholm. The talk describes three components of the Apache Flink Internals: (a) job life-cycle, (b) the batch optimizer and (c) native iterations.
Test strategies for data processing pipelines, v2.0, by Lars Albertsson
This talk will present recommended patterns and corresponding anti-patterns for testing data processing pipelines. We will suggest technology and architecture to improve testability, both for batch and streaming processing pipelines. We will primarily focus on testing for the purpose of development productivity and product iteration speed, but briefly also cover data quality testing.
The next version of JavaScript, ES6, is starting to arrive. Many of its features are simple enhancements to the language we already have: things like arrow functions, class syntax, and destructuring. But other features will change the way we program JavaScript, fundamentally expanding the capabilities of the language and reshaping our future codebases. In this talk we'll focus on two of these, discovering the myriad possibilities of generators and the many tricks you can pull off with template strings.
GDG Jakarta Meetup - Streaming Analytics With Apache Beam, by Imre Nagi
Google slide version of this slide can be accessed from: https://docs.google.com/presentation/d/1Ws73JxlVH39HiKiYuF3vW903j8wFzxPQihXz4CQ_HZM/edit?usp=sharing
Beyond parallelize and collect - Spark Summit East 2016, by Holden Karau
As Spark jobs are used for more mission critical tasks, beyond exploration, it is important to have effective tools for testing. This talk expands on “Effective Testing For Spark Programs” (not required to have been seen) to discuss how to create large scale test jobs without depending on collect & parallelize which limit the sizes of datasets we can work with. Testing Spark Streaming jobs can be especially challenging, as the normal techniques for loading test data don’t work and additional work must be done to collect the results and stop streaming. We will explore the difficulties with testing Streaming Programs, options for setting up integration testing, beyond just local mode, with Spark, and also examine best practices for acceptance tests.
Standardizing JavaScript Decorators in TC39 (Full Stack Fest 2019), by Igalia
By Daniel Ehrenberg.
JavaScript decorators were created in 2014 as a collaboration among the JavaScript ecosystem, and you've been able to use them in TypeScript and Babel. But they didn't make it into the JavaScript standard yet: not ES6, or any of the later versions, so far. We're working on standardizing decorators in TC39, the JavaScript standards committee, but some changes are required from the initial version.
In this talk, Daniel will explain what TC39 is and how we work. We'll look at some newer language feature proposals, such as Temporal and immutable records and tuples, and follow how decorators have been proceeding through the TC39 process, including why and how they're changing. TC39 could use your help in moving JavaScript forward.
(c) Full Stack Fest 2019
Sitges, Barcelona
September 4—6, 2019
https://2019.fullstackfest.com/
1. Second-Level Cache in JPA Explained
Patrycja Wegrzynowicz
CTO, Yonita, Inc.
JavaOne 2016
2. About Me
• 15+ years of professional experience
• Software engineer, architect, head of software R&D
• Author and speaker
• JavaOne, Devoxx, JavaZone, TheServerSide Java Symposium, Jazoon, OOPSLA, ASE, others
• Top 10 Women in Tech 2016 in Poland
• Founder and CTO of Yonita
• Automated detection and refactoring of software defects
• Trainings and code reviews
• Security, performance, concurrency, databases
• Twitter @yonlabs
4. Agenda
• Why is caching important?
• 1st Level Cache and 2nd Level Cache
• JPA configuration parameters for cache
• JPA API for cache
• Hibernate 2nd Level Cache
• EclipseLink 2nd Level Cache (a bit)
14. Employee Entity
@Entity public class Employee {
    @Id @GeneratedValue
    private Long id;
    private String firstName;
    private String lastName;
    private BigDecimal salary;
    @OneToOne @JoinColumn(name = "address_id")
    private Address address;
    @Temporal(TemporalType.DATE)
    private Date startDate;
    @Temporal(TemporalType.DATE)
    private Date endDate;
    @ManyToOne @JoinColumn(name = "manager_id")
    private Employee manager;
    // …
}
15. Sum of Salaries by Country: Select All (1)
TypedQuery<Employee> query = em.createQuery(
    "SELECT e FROM Employee e", Employee.class);
List<Employee> list = query.getResultList();
// calculate sum of salaries by country
// map: country->sum
Map<String, BigDecimal> results = new HashMap<>();
for (Employee e : list) {
    String country = e.getAddress().getCountry();
    BigDecimal total = results.get(country);
    if (total == null) total = BigDecimal.ZERO;
    total = total.add(e.getSalary());
    results.put(country, total);
}
16. Sum of Salaries by Country: Select Join Fetch (2)
TypedQuery<Employee> query = em.createQuery(
    "SELECT e FROM Employee e JOIN FETCH e.address", Employee.class);
List<Employee> list = query.getResultList();
// calculate sum of salaries by country
// map: country->sum
Map<String, BigDecimal> results = new HashMap<>();
for (Employee e : list) {
    String country = e.getAddress().getCountry();
    BigDecimal total = results.get(country);
    if (total == null) total = BigDecimal.ZERO;
    total = total.add(e.getSalary());
    results.put(country, total);
}
17. Sum of Salaries by Country: Projection (3)
Query query = em.createQuery(
    "SELECT e.salary, e.address.country FROM Employee e");
List<Object[]> list = (List<Object[]>) query.getResultList();
// calculate sum of salaries by country
// map: country->sum
Map<String, BigDecimal> results = new HashMap<>();
for (Object[] e : list) {
    String country = (String) e[1];
    BigDecimal total = results.get(country);
    if (total == null) total = BigDecimal.ZERO;
    total = total.add((BigDecimal) e[0]);
    results.put(country, total);
}
18. Sum of Salaries by Country: Aggregation JPQL (4)
Query query = em.createQuery(
    "SELECT SUM(e.salary), e.address.country FROM Employee e GROUP BY e.address.country");
List<Object[]> list = (List<Object[]>) query.getResultList();
// already calculated!
19.-22. Comparison 1-4 (Hibernate)
100,000 Employees, Different DB Locations
(the result rows are revealed one per slide across slides 19-22)

                        Local DB           North California   EU Frankfurt
                        (ping: ~0.05ms)    (ping: ~38ms)      (ping: ~420ms)
(1) Select All (N+1)    26756ms            2-3 hours          ~1 day
(2) Select Join Fetch   4854ms             18027ms            25096ms
(3) Projection          653ms              2902ms             5006ms
(4) Aggregation JPQL    182ms              353ms              1198ms
23. Performance Tuning: Data
• Get your data in bigger chunks
• Many small queries => many round-trips => huge extra time on transport => high latency
• Move your data closer to the processing place
• Large distance to data => long round-trip => high latency
• Don't ask about the same data many times
• Extra processing time + extra transport time
Cache
26. JPA Spec
• “Persistence providers are not required to support a second-level cache.”
• “Portable applications should not rely on support by persistence providers for a second-level cache.”
28. JPA Caches
• First Level Cache
• Persistence Context
• EntityManager
• Not thread-safe
• Available for many transactions on one entity manager
• Always
• Second Level Cache
• Persistence Unit
• EntityManagerFactory
• Thread-safe, shared
• Available for many entity managers from one entity manager factory
• Provider-specific support
30. Puzzle #1
em.getTransaction().begin();
Employee employee = em.find(Employee.class, 2L);
employee.getAddress().size();
Employee another = em.find(Employee.class, 2L);
em.getTransaction().commit();
(A) No cache used
(B) First Level Cache Used
(C) Second Level Cache Used
(D) None of the above
32. Puzzle #1
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
Employee employee = em.find(Employee.class, 2L);
employee.getAddress().size();
Employee another = em.find(Employee.class, 2L);
em.getTransaction().commit();
(A) No cache used
(B) First Level Cache Used
(C) Second Level Cache Used
(D) None of the above
34. Puzzle #2
EntityManager em1 = emf.createEntityManager();
em1.getTransaction().begin();
Employee employee = em1.find(Employee.class, 2L);
employee.getAddress().size();
em1.getTransaction().commit();
EntityManager em2 = emf.createEntityManager();
em2.getTransaction().begin();
Employee employee = em2.find(Employee.class, 2L);
employee.getAddress().size();
em2.getTransaction().commit();
(A) No cache used
(B) First Level Cache used
(C) Second Level Cache used
(D) None of the above
36. Puzzle #3 (2LC Configured, Hibernate)
EntityManager em1 = emf.createEntityManager();
em1.getTransaction().begin();
Employee employee = em1.find(Employee.class, 2L);
employee.getAddress().size();
em1.getTransaction().commit();
EntityManager em2 = emf.createEntityManager();
em2.getTransaction().begin();
Employee employee = em2.find(Employee.class, 2L);
employee.getAddress().size();
em2.getTransaction().commit();
(A) No cache used
(B) First Level Cache used
(C) Second Level Cache used
(D) None of the above
38. Puzzle #4 (2LC Configured!)
em1.getTransaction().begin();
Employee employee = em1.find(Employee.class, 2L);
employee.getAddress().size();
em1.getTransaction().commit();
em2.getTransaction().begin();
TypedQuery<Employee> q =
    em2.createQuery("SELECT e FROM Employee e WHERE e.id=:id", Employee.class);
q.setParameter("id", 2L);
Employee employee = q.getSingleResult();
employee.getAddress().size();
em2.getTransaction().commit();
(A) No cache used
(B) First Level Cache used
(C) Second Level Cache used
(D) None of the above
42. JPA Cache Modes
• ALL
• All entity data is stored in the second-level cache for this persistence unit.
• NONE
• No data is cached in the persistence unit. The persistence provider must not cache any data.
• ENABLE_SELECTIVE
• Enable caching for entities that have been explicitly set with the @Cacheable annotation.
• DISABLE_SELECTIVE
• Enable caching for all entities except those that have been explicitly set with the @Cacheable(false) annotation.
• UNSPECIFIED
• The caching behavior for the persistence unit is undefined. The persistence provider's default caching behavior will be used.
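As an illustration of ENABLE_SELECTIVE (the persistence-unit name is an assumption, the entity mirrors the one used in this deck): the mode is declared per persistence unit, and individual entities opt in with @Cacheable.

<!-- persistence.xml -->
<persistence-unit name="my-pu">
    <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
</persistence-unit>

// Only entities marked like this end up in the second-level cache
@Entity
@Cacheable
public class Employee {
    @Id @GeneratedValue
    private Long id;
}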
46. Cache Retrieval and Store Modes
• Cache Retrieval Modes
• javax.persistence.CacheRetrieveMode
• USE (default)
• BYPASS
• Cache Store Modes
• javax.persistence.CacheStoreMode
• USE (default)
• the cache data is created or updated when data is read from or committed to the database
• when data is already in the cache, no refresh on read
• REFRESH
• forced refresh on read
• BYPASS
47. Cache Retrieval and Store
EntityManager em = ...;
em.setProperty("javax.persistence.cache.storeMode", "BYPASS");
Map<String, Object> props = new HashMap<String, Object>();
props.put("javax.persistence.cache.retrieveMode", "BYPASS");
Employee employee = em.find(Employee.class, 1L, props);
TypedQuery<Employee> q = em.createQuery(cq);
q.setHint("javax.persistence.cache.storeMode", "REFRESH");
48. Programmatic Access to Cache
EntityManager em = ...;
Cache cache = em.getEntityManagerFactory().getCache();
if (cache.contains(Employee.class, 1L)) {
    // the data is cached
} else {
    // the data is NOT cached
}
Cache interface methods:
boolean contains(Class cls, Object primaryKey)
void evict(Class cls)
void evict(Class cls, Object primaryKey)
void evictAll()
<T> T unwrap(Class<T> cls)
49. Forget that!
• Don't use cache retrieval and store modes
• Don't use programmatic access to 2LC
• Use provider-specific configuration!
51. Hibernate Caches
• First Level Cache
• Second Level Cache
• hydrated or disassembled entities: EntityEntry
• collections
• Query Cache
52. Hibernate Cache Configuration
• hibernate.cache.use_second_level_cache
• Enable or disable second level caching overall.
• Default is true
• hibernate.cache.region.factory_class
• Default region factory is NoCachingRegionFactory
• hibernate.cache.use_query_cache
• Enable or disable second level caching of query results.
• Default is false.
• hibernate.cache.query_cache_factory
53. Hibernate Cache Configuration
• hibernate.cache.use_minimal_puts
• Optimizes second-level cache operations to minimize writes, at the cost of more frequent reads. Providers typically set this appropriately.
• hibernate.cache.default_cache_concurrency_strategy
• In Hibernate second-level caching, all regions can be configured differently, including the concurrency strategy to use when accessing that particular region. This setting allows you to define a default strategy to be used.
• Providers specify this setting!
54. Hibernate Cache Configuration
• hibernate.cache.use_structured_entries
• If true, forces Hibernate to store data in the second-level cache in a more human-friendly format.
• Default: false
• hibernate.cache.auto_evict_collection_cache
• Enables or disables the automatic eviction of a bidirectional association's collection cache entry when the association is changed just from the owning side.
• Default: false
• hibernate.cache.use_reference_entries
• Enable direct storage of entity references into the second level cache for read-only or immutable entities.
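A sketch of how these settings are usually supplied as persistence.xml properties; the region factory shown is the one from the Hibernate 5 Infinispan module used later in this deck, and the exact class name depends on the provider module and Hibernate version:

<property name="hibernate.cache.use_second_level_cache" value="true"/>
<property name="hibernate.cache.use_query_cache" value="true"/>
<property name="hibernate.cache.region.factory_class"
          value="org.hibernate.cache.infinispan.InfinispanRegionFactory"/>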
55. Cache Concurrency Strategy
• Global cache concurrency strategy
• hibernate.cache.default_cache_concurrency_strategy
• Hibernate @Cache annotation on an entity level
• usage: defines the CacheConcurrencyStrategy
• region: defines a cache region where entries will be stored
• include: if lazy properties should be included in the second level cache. Default value is "all", so lazy properties are cacheable. The other possible value is "non-lazy", so lazy properties are not cacheable.
56. Cache Concurrency Strategies
• read-only
• Application read-only data
• Allows deletes
• read-write
• Application updates data
• Consistent access to a single entity, but not a serializable transaction isolation level
• nonstrict-read-write
• Occasional stale reads
• transactional
• Provides serializable transaction isolation level
57. Example – Entity and Collection Cache
@Entity
@Cacheable
@org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Employee {
    @OneToMany(mappedBy = "employee", cascade = CascadeType.ALL)
    @org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
    private Set<Phone> phones = new HashSet<>();
}
// cache used!
Employee person = entityManager.find(Employee.class, 1L);
// cache used!
Employee person = entityManager.find(Employee.class, 1L);
person.getPhones().size();
58. Example - Query Cache
List<Employee> employees = entityManager.createQuery(
    "select e " +
    "from Employee e " +
    "where e.firstName = :firstName", Employee.class)
    .setParameter("firstName", "John")
    .setHint("org.hibernate.cacheable", "true")
    .getResultList();
60. find, no collection caching

           Local DB           North California   EU Frankfurt
           (ping: ~0.05ms)    (ping: ~38ms)      (ping: ~420ms)
No cache   87ms               186ms              1164ms
Cached     62ms               127ms              995ms
61. find, collection caching

           Local DB           North California   EU Frankfurt
           (ping: ~0.05ms)    (ping: ~38ms)      (ping: ~420ms)
No cache   82ms               162ms              1178ms
Cached     3ms                98ms               941ms
64. Infinispan Configuration (Local)
<!-- This configuration is suitable for non-clustered environments, where only single instance accesses the DB -->
<cache-container name="SampleCacheManager" statistics="false" default-cache="the-default-cache" shutdown-hook="DEFAULT">
<jmx duplicate-domains="true"/>
<local-cache-configuration name="the-default-cache" statistics="false" />
<!-- Default configuration is appropriate for entity/collection caching. -->
<local-cache-configuration name="entity" simple-cache="true" statistics="false" statistics-available="false">
<transaction mode="NONE" />
<eviction max-entries="10000" strategy="LRU"/>
<expiration max-idle="100000" interval="5000"/>
</local-cache-configuration>
<!-- A config appropriate for query caching. Does not replicate queries. -->
<local-cache-configuration name="local-query" simple-cache="true" statistics="false" statistics-available="false">
<transaction mode="NONE" />
<eviction max-entries="10000" strategy="LRU"/>
<expiration max-idle="100000" interval="5000"/>
</local-cache-configuration>
<local-cache-configuration name="timestamps" simple-cache="true" statistics="false" statistics-available="false">
<locking concurrency-level="1000" acquire-timeout="15000"/>
<!-- Explicitly non transactional -->
<transaction mode="NONE"/>
<!-- Don't ever evict modification timestamps -->
<eviction strategy="NONE"/>
<expiration interval="0"/>
</local-cache-configuration>
<!-- When providing custom configuration, always make this cache local and non-transactional.
65. Infinispan Configuration (Clustered)
<jgroups>
<stack-file name="hibernate-jgroups" path="${hibernate.cache.infinispan.jgroups_cfg:default-configs/default-jgroups-tcp.xml}"/>
</jgroups>
<cache-container name="SampleCacheManager" statistics="false" default-cache="the-default-cache" shutdown-hook="DEFAULT">
<transport stack="hibernate-jgroups" cluster="infinispan-hibernate-cluster"/>
<jmx duplicate-domains="true"/>
<local-cache-configuration name="the-default-cache" statistics="false" />
<!-- Default configuration is appropriate for entity/collection caching. -->
<invalidation-cache-configuration name="entity" mode="SYNC" remote-timeout="20000" statistics="false" statistics-available="false">
<locking concurrency-level="1000" acquire-timeout="15000"/>
<transaction mode="NONE" />
<eviction max-entries="10000" strategy="LRU"/>
<expiration max-idle="100000" interval="5000"/>
</invalidation-cache-configuration>
<!-- A config appropriate for query caching. Does not replicate queries. -->
<local-cache-configuration name="local-query" statistics="false" statistics-available="false">
<locking concurrency-level="1000" acquire-timeout="15000"/>
<transaction mode="NONE" />
<eviction max-entries="10000" strategy="LRU"/>
<expiration max-idle="100000" interval="5000"/>
</local-cache-configuration>
<!-- A query cache that replicates queries. Replication is asynchronous. -->
<replicated-cache-configuration name="replicated-query" mode="ASYNC" statistics="false" statistics-available="false">
<locking concurrency-level="1000" acquire-timeout="15000"/>
<transaction mode="NONE" />
<eviction max-entries="10000" strategy="LRU"/>
<expiration max-idle="100000" interval="5000"/>
</replicated-cache-configuration>
<!-- Optimized for timestamp caching. A clustered timestamp cache
is required if query caching is used, even if the query cache
itself is configured with CacheMode=LOCAL. -->
66. Guidelines
• Cache as much as you can
• As much RAM as you have
• Do it wisely
• Read-only data
• Almost read-only data
• More reads than writes
• Hit ratio
67. EclipseLink
• 2LC enabled by default
• Collection caching
• Query caching
• @org.eclipse.persistence.annotations.Cache
• type: type of the cache (FULL, WEAK, SOFT, SOFT_WEAK, HARD_WEAK)
• size: number of objects
• isolation: shared, isolated, protected
• expiry
• expiryTimeOfDay
• alwaysRefresh
• refreshOnlyIfNewer
• disableHits
• coordinationType
• databaseChangeNotificationType
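A hedged sketch of the EclipseLink annotation in use; the entity and the values are illustrative assumptions, the attribute names are the ones listed above:

import javax.persistence.Entity;
import javax.persistence.Id;
import org.eclipse.persistence.annotations.Cache;
import org.eclipse.persistence.annotations.CacheType;

@Entity
@Cache(type = CacheType.SOFT,    // reference strength, one of the types listed above
       size = 10000,             // maximum number of cached objects
       expiry = 600000,          // invalidate entries after 10 minutes (milliseconds)
       alwaysRefresh = false)
public class Employee {
    @Id
    private Long id;
}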