Long thought to be relegated to the domain of fast, multithreaded desktop applications, race conditions have made their way into web applications. These bugs are difficult to test for, and they are becoming increasingly prevalent as clients get faster while server-side platforms like Node.js and PHP struggle to keep up. Race conditions are no longer just bugs: when they are found in critical components of web applications, they become serious security vulnerabilities. Without the proper checks and defensive measures in place, databases get confused, “one-time-use” becomes a relative term, and “limited” becomes “unlimited”. This talk details specific examples where malicious users could cause damage or profit from a race-condition flaw in a web application. A custom open-source tool is also introduced to help security researchers and developers easily check for this class of vulnerability in web applications.
2. Security Consultant at Security Compass
Professor of Application Security at
Georgian College
Software developer
Former sysadmin
etc.
Aaron Hnatiw
twitter: @insp3ctre
5. OWASP Definition
A race condition is a flaw that produces an unexpected
result when the timing of actions impact other actions.
An example may be seen on a multithreaded application
where actions are being performed on the same data.
Race conditions, by their very nature, are difficult to test
for.
https://www.owasp.org/index.php/Testing_for_Race_Conditions_%28OWASP-AT-010%29
21. Usual whitebox method
1. Identify all shared data
2. Identify where that shared data is accessed across
systems
3. Find where that data access is not synchronized
4. Make a TON of requests
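The payoff of step 4 can be shown with a small, self-contained sketch (the CouponService class and its names are illustrative, not from the talk): a “one-time-use” check-then-act sequence with no synchronization, where a thread barrier stands in for the timing window that a flood of parallel HTTP requests would hit naturally.

```python
import threading

class CouponService:
    """Toy 'one-time-use' coupon with a classic check-then-act race."""
    def __init__(self, n_clients):
        self.used = False
        self.winners = []  # list.append is thread-safe, so counting is reliable
        # The barrier forces every thread past the check before any write,
        # making the normally probabilistic race deterministic for the demo.
        self.barrier = threading.Barrier(n_clients)

    def redeem(self):
        if not self.used:          # 1. check shared state
            self.barrier.wait()    # window where other requests interleave
            self.used = True       # 2. act on a now-stale check
            self.winners.append(1)

N = 10
svc = CouponService(N)
threads = [threading.Thread(target=svc.redeem) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(svc.winners))  # prints 10: every request redeemed the one-time coupon
```

In a real attack the attacker has no barrier; they simply send enough simultaneous requests that some of them land inside the check-then-act window.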
44. PHP
• You could compile PHP with “--enable-sysvsem”...
• Not supported everywhere
• Not useful in a distributed environment
• May not be possible in a shared hosting
environment
• You’re pretty much stuck with implementing this at
the database or file level (more on that later)
• If you know of any other way, please let me know!
47. ACID-Compliant Databases
• Atomicity: All or nothing. A transaction either succeeds or
rolls back.
• Consistency: On the completion of a transaction, the
database is structurally sound. Otherwise, it reverts to
the previous sound state.
• Isolation: Transactions do not interfere with each other.
• This point is KEY.
• Durability: The results of applying a transaction are
permanent, even in the presence of failures.
48. Isolation
• Highest level: serializable.
• Transactions essentially occur serially (one after another), rather
than concurrently.
• Next level: repeatable read.
• Close, but still allows race conditions.
• Be prepared to retry transactions often, because in most cases, these
isolation levels can result in a large number of transaction failures.
• Obvious downside: using higher levels of isolation can slow down
your application.
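Because SERIALIZABLE (and sometimes REPEATABLE READ) resolves conflicts by aborting one of the competing transactions, application code needs a retry loop. A minimal sketch, using a stand-in SerializationFailure exception in place of a real driver error (e.g. PostgreSQL's SQLSTATE 40001):

```python
class SerializationFailure(Exception):
    """Stand-in for a database driver's serialization-failure error."""

def run_in_transaction(work, max_retries=5):
    # Retry the whole transaction on a serialization failure; the database
    # guarantees correctness, the application just has to try again.
    for attempt in range(1, max_retries + 1):
        try:
            return work()
        except SerializationFailure:
            if attempt == max_retries:
                raise

attempts = {"n": 0}
def transfer():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationFailure  # first two tries lose the race
    return "committed"

result = run_in_transaction(transfer)
print(result, attempts["n"])  # committed 3
```

Real code would also roll back and re-read any data inside `work()`, since the failed attempt's reads are no longer valid.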
49. Solution- MySQL
• From the documentation: "MySQL Server (version
3.23-max and all versions 4.0 and above) supports
transactions with the InnoDB transactional storage
engine. InnoDB provides full ACID compliance."
• Use SERIALIZABLE isolation level
• Default is REPEATABLE-READ
• Can be set globally, for a session, or for individual
transactions
More info: https://dev.mysql.com/doc/refman/5.5/en/innodb-transaction-isolation-levels.html
50. MySQL (cont’d)
• System variable:
SET GLOBAL tx_isolation = 'SERIALIZABLE';
• Command-line option at mysqld startup:
--transaction-isolation=SERIALIZABLE
• Option file:
[mysqld]
transaction-isolation = SERIALIZABLE
• Command-line, BEFORE starting a transaction:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
More Info: https://dev.mysql.com/doc/refman/5.5/en/set-transaction.html
51. Solution- PostgreSQL
• Use the SERIALIZABLE transaction isolation level
• Default is READ COMMITTED: all queries see a snapshot of
committed data at the time of the query.
• Command-line:
• START TRANSACTION;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
• START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
• BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
More info: https://www.postgresql.org/docs/9.3/static/sql-set-transaction.html
54. For most use cases:
• Optimize your queries
• Use a single query whenever possible (e.g. UPDATE xTable SET yValue = yValue+1 WHERE id = 'zID')
• Inserts instead of updates
• Use unique indexes
• Use an ORM for “optimistic locking”
• Most ORMs do their own optimizations and locking to prevent race conditions, as
opposed to relying on the database’s strict “pessimistic locking”
• Use the READ COMMITTED isolation level
• Not as strict as SERIALIZABLE, but provides more speed, fewer locking errors, and more
consistency than REPEATABLE READ
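The single-query and optimistic-locking advice can be sketched with SQLite (the table and column names are illustrative): the WHERE clause makes the check and the write one atomic statement, and a version column lets a stale writer detect that it lost the race.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wallet (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)")
conn.execute("INSERT INTO wallet VALUES (1, 100, 0)")

# Atomic single-statement decrement: the balance check and the update
# happen in one indivisible operation, so no interleaving can overdraw.
cur = conn.execute(
    "UPDATE wallet SET balance = balance - 150 WHERE id = 1 AND balance >= 150")
print(cur.rowcount)  # 0: overdraft rejected atomically

# Optimistic locking: the update succeeds only if the row's version
# is still the one this writer originally read.
cur = conn.execute(
    "UPDATE wallet SET balance = balance - 50, version = version + 1 "
    "WHERE id = 1 AND version = 0")
print(cur.rowcount)  # 1: first writer wins

# A concurrent writer still holding stale version 0 fails and must re-read.
cur = conn.execute(
    "UPDATE wallet SET balance = balance - 50, version = version + 1 "
    "WHERE id = 1 AND version = 0")
print(cur.rowcount)  # 0: lost the race, retry with fresh data
```

ORMs that offer optimistic locking generate essentially this version-guarded UPDATE for you and raise an exception when `rowcount` comes back 0.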
55. Solution- MongoDB
• No serialization
• From 2.2 on, uses database-level read and write locks, depending on operation (https://docs.mongodb.com/manual/faq/concurrency/#which-operations-lock-the-database)
• Single document writes are atomic (but not isolated) by default
• $isolated operator
• Acquires an exclusive lock to all documents being written to (only applies when writing to multiple
documents).
• Does not work on sharded clusters.
More info: https://docs.mongodb.com/v3.2/core/write-operations-atomicity/
57. Native Methods
• Windows
• LockFile function of the Windows API: https://msdn.microsoft.com/en-us/library/aa365202.aspx
• Unix
• flock()/lockf(): similar advisory-locking functions
• fcntl(): http://pubs.opengroup.org/onlinepubs/9699919799/functions/fcntl.html
• Some typical “gotchas”: http://0pointer.de/blog/projects/locking.html
• Lock file: create a temporary file (e.g. ~myfile.lck), which exists while a file needs to
be locked. Check for the lock file before accessing its associated file.
• Probably the best way to do this at the file level.
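The lock-file pattern above can be sketched portably in Python (names are illustrative): opening with O_CREAT | O_EXCL fails if the file already exists, so creating the .lck file is itself an atomic test-and-set.

```python
import contextlib
import os
import tempfile
import time

@contextlib.contextmanager
def lock_file(path, timeout=5.0, poll=0.05):
    # O_CREAT | O_EXCL raises FileExistsError if the lock file is already
    # there, so only one process at a time can acquire the lock.
    lock_path = path + ".lck"
    deadline = time.monotonic() + timeout
    while True:
        try:
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break
        except FileExistsError:
            if time.monotonic() >= deadline:
                raise TimeoutError("could not lock " + path)
            time.sleep(poll)  # another holder has it; wait and retry
    try:
        yield
    finally:
        os.close(fd)
        os.unlink(lock_path)  # release: remove the lock file

target = os.path.join(tempfile.mkdtemp(), "data.txt")
with lock_file(target):
    with open(target, "w") as f:
        f.write("safe update")
```

One gotcha worth noting: if the holder crashes before the `finally` block runs, the stale .lck file blocks everyone, which is why real implementations add staleness checks or prefer flock()/fcntl() where available.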
58. Don’t overdo it though!
Avoid locking hell.
Don’t share resources unless you have to.
60. Ensure your database can
keep up
• Often the slowest point in the application logic chain
• Database speed should keep pace with the speed
of users making requests to your web application
• Best bet: host the database on the same network
• Not on the same server though: a tiered
architecture is best
• This provides defence-in-depth; not a panacea
61. Fetch data only right as you
need it
Again, defence-in-depth. This is by no means a
complete solution on its own.
62. CSRF Tokens
• You can’t automate a bunch of requests if they require
a unique token every time
• More of a client-side solution, does not necessarily
address the root cause of a race condition
• Do this even for non-sensitive actions
• Attacker’s perspective: found a CSRF vuln? Try
leveraging that into a race condition as well!
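A minimal sketch of the single-use token idea (the function names and the dict standing in for server-side session storage are illustrative): consuming the token on validation means a scripted flood of identical requests can succeed at most once.

```python
import hmac
import secrets

def issue_token(session):
    # Fresh unpredictable token, stored server-side and embedded in the form.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def validate_token(session, submitted):
    # pop() makes the token single-use, which also breaks automated
    # request floods; compare_digest avoids timing side channels.
    expected = session.pop("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
token = issue_token(session)
first = validate_token(session, token)   # True: fresh token accepted
replay = validate_token(session, token)  # False: already consumed
```

Note that in a real application the pop-and-compare itself must be atomic with respect to concurrent requests (e.g. done in a single store operation), or the token check becomes its own race window.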
63. Further Reading
• https://www.josipfranjkovic.com/blog/race-conditions-on-web
• http://sakurity.com/blog/2015/05/21/starbucks.html
• https://defuse.ca/race-conditions-in-web-applications.htm
• Web Application Hacker’s Handbook, 2nd Ed.; chapter 11, "Example 12:
Racing Against the Login" (page 426)
• http://www.hakim.ws/BHUSA08/speakers/Stender_Vidergar_Concurrency_Attacks/BH_US_08_Stender_Vidergar_Concurrency_Attacks_in_Web_Applications_Presentation.pdf
• https://www.owasp.org/index.php/Reviewing_Code_for_Race_Conditions