3. Disclaimer
I will talk only about:
● Hibernate + Spring
● Postgres DB
● Isolation level = read committed
● You can lock resources in many other ways
● There are other locking mechanisms in SQL not covered here
4. @Lock - how to use in JpaRepository
@Repository
public interface EntityRepository extends JpaRepository<Entity, Long> {

    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Entity findOne(Long id);
}

@Repository
public interface EntityRepository extends JpaRepository<Entity, Long> {

    @Lock(LockModeType.PESSIMISTIC_WRITE) // IT WILL NOT WORK
    default Entity findOneAndLock(Long id) {
        return findOne(id);
    }
}
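The default method fails because @Lock is honored only on methods whose queries Spring Data executes itself; a default method delegates in plain Java, so the annotation on it is ignored. A sketch of a variant that does work (the method name findOneAndLock is my choice, and the caller must run inside a transaction):

```java
import javax.persistence.LockModeType;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Repository;

@Repository
public interface EntityRepository extends JpaRepository<Entity, Long> {

    // Spring Data runs this query itself, so PESSIMISTIC_WRITE is honored
    // and Hibernate appends "for update" to the generated select.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("select e from Entity e where e.id = :id")
    Entity findOneAndLock(@Param("id") Long id);
}
```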
6. LockModeType.NONE
● select entity_.created as created2_2_ from entity entity_ where entity_.id=?
● update entity_without_version set created=?, description=? where id=?
● The row ends up with the values written by transaction 2: the first transaction's update is silently overwritten (a lost update).
7. LockModeType.OPTIMISTIC (synonym READ)
● Add @Version field in your entity. Supported types:
int, Integer, short, Short, long, Long, java.sql.Timestamp
● If the entity's @Version value no longer matches the one in the database,
ObjectOptimisticLockingFailureException is thrown.
● select entitywith0_.id ... from entity entity0_ where entity0_.id=?
● update entity set created=?, description=?, version=? where id=? and
version=?
● The version is incremented:
○ at the end of the transaction
○ every time we call saveAndFlush
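A minimal sketch of what enabling optimistic locking looks like, assuming an entity with the created and description fields seen in the SQL above:

```java
import java.sql.Timestamp;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class EntityWithVersion {

    @Id
    private Long id;

    private Timestamp created;
    private String description;

    // Hibernate increments this on every update and adds "and version=?"
    // to the update's where clause; a mismatch means a concurrent change.
    @Version
    private Long version;
}
```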
10. LockModeType.PESSIMISTIC_READ
● Hibernate: select entity0_.id as id1_2_0_, entity0_.created as created2_2_0_,
entity0_.description as descript3_2_0_ from entity entity0_ where
entity0_.id=? for share
● Other transactions may concurrently read the entity, but cannot concurrently
update it (if both use “select for share”).
● If two transactions hold a PESSIMISTIC_READ lock on the same row and both
try to modify it, a deadlock occurs: the database aborts one of them with
an exception, no matter which one started first.
● You can still read data using classical repository (without any locking
strategy).
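A sketch of a service that can run into this deadlock; the Entity class and its setter are assumptions, and the shared lock is taken through EntityManager.find:

```java
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EntityService {

    private final EntityManager em;

    public EntityService(EntityManager em) {
        this.em = em;
    }

    @Transactional
    public void readThenUpdate(Long id, String newDescription) {
        // Emits "select ... for share"; other transactions may also
        // lock-read the row, but the update below must wait for them.
        Entity entity = em.find(Entity.class, id, LockModeType.PESSIMISTIC_READ);
        // If a second transaction holds the same shared lock and also tries
        // to update, Postgres aborts one of the two with a deadlock error.
        entity.setDescription(newDescription);
    }
}
```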
12. LockModeType.PESSIMISTIC_WRITE
● Other transactions cannot concurrently read or write the entity (if both use
“select for update”).
● You can still read data using classical repository (without any locking
strategy).
● select entity0_.id as id1_2_0_, entity0_.created as created2_2_0_,
entity0_.description as descript3_2_0_ from entity entity0_ where
entity0_.id=? for update
14. MIX - PESSIMISTIC_WRITE + OPTIMISTIC
● You can mix PESSIMISTIC_WRITE with OPTIMISTIC locking
● Add a @Version field to the entity and use PESSIMISTIC_WRITE
15. Concurrent calls case
We wanted:
● Insert an account and the operations associated with it.
● If the account already exists, just insert the operations.
● Not to fail when two parallel requests arrive and the account
does not exist yet.
17. INSERT ON CONFLICT DO NOTHING
INSERT INTO Accounts (id, … ) VALUES(...)
ON CONFLICT (id) DO NOTHING
The query will not fail if a row with account.id already exists.
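From Spring Data the same statement can be issued as a native query; a sketch, with the method name and the single id column chosen for illustration:

```java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface AccountRepository extends JpaRepository<Account, Long> {

    // Inserts the row only if it does not exist yet; never throws
    // a duplicate-key error thanks to ON CONFLICT DO NOTHING.
    @Modifying
    @Query(value = "insert into accounts (id) values (:id) "
                 + "on conflict (id) do nothing",
           nativeQuery = true)
    void insertIfAbsent(@Param("id") Long id);
}
```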
18. Concurrent calls solution
BEGIN TRANSACTION
SELECT * FROM ACCOUNTS WHERE ID = {id} FOR UPDATE
IF (result_of_select == null) {
    INSERT INTO ACCOUNTS (id, ..) VALUES (...) ON CONFLICT DO NOTHING
    COMMIT
    BEGIN TRANSACTION
    SELECT * FROM ACCOUNTS WHERE ID = {id} FOR UPDATE
}
INSERT OPERATIONS
COMMIT
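The same flow sketched as a Spring service. All names (findOneAndLock, insertIfAbsent, OperationRepository) are assumptions; note also that in real code the REQUIRES_NEW method must be called through the Spring proxy (e.g. live in a separate bean), which is glossed over here for brevity:

```java
import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    private final AccountRepository accounts;
    private final OperationRepository operations;

    public AccountService(AccountRepository accounts,
                          OperationRepository operations) {
        this.accounts = accounts;
        this.operations = operations;
    }

    // Separate transaction: its COMMIT makes the account visible before
    // we retry the locking select in the outer transaction.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void ensureAccountExists(Long id) {
        accounts.insertIfAbsent(id); // insert ... on conflict do nothing
    }

    @Transactional
    public void addOperations(Long id, List<Operation> ops) {
        Account account = accounts.findOneAndLock(id); // select ... for update
        if (account == null) {
            ensureAccountExists(id);
            account = accounts.findOneAndLock(id); // lock the now-existing row
        }
        operations.saveAll(ops);
    }
}
```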
19. There was an account in DB
BEGIN TRANSACTION
SELECT * FROM ACCOUNTS WHERE ID = {id} FOR UPDATE
INSERT OPERATIONS
COMMIT
20. There was no account in DB
BEGIN TRANSACTION
SELECT * FROM ACCOUNTS WHERE ID = {id} FOR UPDATE
INSERT INTO ACCOUNTS (id, ..) VALUES (...) ON CONFLICT DO NOTHING
COMMIT
BEGIN TRANSACTION
SELECT * FROM ACCOUNTS WHERE ID = {id} FOR UPDATE
INSERT OPERATIONS
COMMIT
22. Serializable isolation level
● Forces concurrent transactions to behave as if they ran one after another
● If that is not possible, an SQLException (serialization failure) is thrown
and the transaction needs to be repeated.
● The serializable isolation level has other
disadvantages.
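A sketch of the repeat-on-failure loop; Spring translates Postgres serialization failures (SQLSTATE 40001) into its ConcurrencyFailureException hierarchy, so retrying can look roughly like this:

```java
import java.util.concurrent.Callable;

import org.springframework.dao.ConcurrencyFailureException;

public final class SerializableRetry {

    private SerializableRetry() {
    }

    // Runs the transactional work, repeating it when the database aborts
    // it because no serial order could be found.
    public static <T> T withRetries(Callable<T> transactionalWork, int maxAttempts)
            throws Exception {
        ConcurrencyFailureException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return transactionalWork.call();
            } catch (ConcurrencyFailureException e) {
                last = e; // transaction already rolled back; just try again
            }
        }
        throw last;
    }
}
```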
23. Read committed isolation level + locking table
● A table account_lock was created
● At the beginning of the transaction a row was inserted into it
● Other transactions were blocked on that row
● At the end of the transaction the row was deleted
● Vacuuming the constantly inserted and deleted rows used a lot of resources