Decoding billions of integers per second through vectorization
In many important applications -- such as search engines and relational database systems -- data is stored in the form of arrays of integers. Encoding and, most importantly, decoding of these arrays consumes considerable CPU time. Therefore, substantial effort has been made to reduce costs associated with compression and decompression. In particular, researchers have exploited the superscalar nature of modern processors and SIMD instructions. Nevertheless, we introduce a novel vectorized scheme called SIMD-BP128 that improves over previously proposed vectorized approaches. It is nearly twice as fast as the previously fastest schemes on desktop processors (varint-G8IU and PFOR). At the same time, SIMD-BP128 saves up to 2 bits per integer. For even better compression, we propose another new vectorized scheme (SIMD-FastPFOR) that has a compression rate within 10% of a state-of-the-art scheme (Simple-8b) while being two times faster during decoding.
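The schemes named in the abstract are built on binary packing: each block of integers is stored using only as many bits per value as the largest value in the block requires, which is why tighter bit widths translate directly into fewer bytes decoded per integer. The following is a scalar Python sketch of that idea only; it is not the paper's SIMD-BP128 implementation (which packs 128-integer blocks with SSE instructions), and the `pack`/`unpack` helpers are illustrative names, not an API from the paper.

```python
# Illustrative scalar bit packing -- NOT the paper's vectorized SIMD-BP128,
# just the underlying idea: store each value in a block using b bits, where
# b is the bit width of the largest value in that block.

def pack(values):
    """Pack a block of non-negative integers using the minimal bit width b."""
    b = max(v.bit_length() for v in values) or 1  # bits needed per value
    buf = 0
    for i, v in enumerate(values):
        buf |= v << (i * b)                       # concatenate b-bit fields
    return b, buf.to_bytes((len(values) * b + 7) // 8, "little")

def unpack(b, data, count):
    """Reverse of pack: extract `count` consecutive b-bit values."""
    buf = int.from_bytes(data, "little")
    mask = (1 << b) - 1
    return [(buf >> (i * b)) & mask for i in range(count)]

block = [3, 7, 1, 5, 2, 6, 0, 4]     # every value fits in 3 bits
b, packed = pack(block)
assert b == 3 and len(packed) == 3   # 8 values x 3 bits = 24 bits = 3 bytes
assert unpack(b, packed, len(block)) == block
```

A vectorized decoder such as SIMD-BP128 gains its speed by performing the shift-and-mask extraction above on many values at once with SIMD registers, rather than one value per loop iteration.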
